* [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it.
@ 2021-08-10 22:05 daniel.almeida
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
From: daniel.almeida @ 2021-08-10 22:05 UTC (permalink / raw)
  To: stevecho, shawnku, tzungbi, mcasas, nhebert, abodenha, randy.wu,
	yunfei.dong, gustavo.padovan, andrzej.pietrasiewicz,
	enric.balletbo, ezequiel, nicolas.dufresne, tomeu.vizoso,
	nick.milner, xiaoyong.lu, mchehab, hverkuil-cisco
  Cc: Daniel Almeida, linux-media, linux-kernel, kernel

From: Daniel Almeida <daniel.almeida@collabora.com>

Dear all,

This patchset adds the stateless AV1 uAPI and the VIVPU virtual driver to
showcase it.

Note that this patchset depends on dynamically allocated control arrays, i.e.
[0] and [1], which are part of series [2].

This cover letter will discuss the AV1 OBUs and their relationship with the
V4L2 controls proposed therein. The VIVPU test driver will also be discussed.

Note that I have also written a GStreamer decoder element [3] to interface with
the VIVPU virtual driver through the proposed control interface to ensure that
these three pieces actually work. The MR in gst-plugins-bad is marked as "Draft"
only because the uAPI hasn't been merged yet and there's no real hardware to
test it.

Padding and holes have not been taken into account yet.



Relevant AV1 Open Bitstream Units (OBUs):
-----------------------------------------

AV1 is packetized into syntax elements known as Open Bitstream Units (OBUs).
The AV1 specification defines several OBU types, five of which are of interest
for the purposes of this API:

Sequence Header OBU: Contains information that applies to the entire sequence.
Most importantly, it contains a set of flags that signal which AV1 features are
enabled for the entire video coded sequence. The sequence header OBU also
encodes the sequence profile.

Frame Header OBU: Contains information that applies to an entire frame. Notably,
this OBU will dictate the frame's dimensions, its frame type, quantization,
segmentation and filter parameters as well as the set of reference frames needed
to effect a decoding operation. A set of flags will signal whether some AV1
features are enabled for a particular frame.

Tile Group OBU: Contains tiling information. Tile groups contain the tile data
associated with a frame. Tiles are subdivisions of a picture that can be
independently decoded, optionally in parallel. The entire frame is assembled
from all the tiles after potential loop filtering.

Frame OBU: Shorthand for a frame header OBU plus a tile group OBU but with less
overhead. Frame OBUs are a convenience for the common case in which a frame
header is combined with tiling information.

Tile List OBU: Similar to a tile group OBU, but used in "Large Scale
Tile Decoding Mode". The tiling information contained in this OBU has an
additional header that allows the decoder to process a subset of tiles and
display the corresponding part of the image without having to fully decode all
the tiles for a frame.



AV1 uAPI V4L2 CIDs:
-------------------

V4L2_CID_STATELESS_AV1_SEQUENCE: represents a Sequence Header OBU. This control
should only be set once per Sequence Header OBU. The "flags" member contains a
bitfield with the set of flags for the current video coded sequence as parsed
from the bitstream.

V4L2_CID_STATELESS_AV1_FRAME_HEADER: represents a Frame Header OBU. This control
should be set once per frame.

V4L2_CID_STATELESS_AV1_{TILE_GROUP|TILE_GROUP_ENTRY}: represents a Tile Group
OBU or the tiling information within a Frame OBU. These controls contain an
array of metadata to decode the tiles associated with a frame. Both controls
depend on V4L2_CTRL_FLAG_DYNAMIC_ARRAY and drivers will be able to index into
the array using ctrl->p_cur.p_av1_tile_group and
ctrl->p_cur.p_av1_tile_group_entry as base pointers respectively. Frame OBUs
should be split into their Frame Header OBU and Tile Group OBU constituents
before the array entries can be set and there should be a maximum of 512 tile
group entries as per the AV1 specification. In the event that more than one tile
group is provided, drivers can disambiguate their corresponding entries in the
ctrl->p_cur.p_av1_tile_group_entry array by taking note of the tg_start and
tg_end fields.

V4L2_CID_STATELESS_AV1_{TILE_LIST|TILE_LIST_ENTRY}: represents a Tile List OBU.
These controls contain an array of metadata to decode a list of tiles associated
with a frame when the decoder is operating under "Large Scale Tile Decoding
Mode". Both controls depend on V4L2_CTRL_FLAG_DYNAMIC_ARRAY, and drivers will be
able to index into the array using ctrl->p_cur.p_av1_tile_list and
ctrl->p_cur.p_av1_tile_list_entry as base pointers respectively. In the event
that more than one list is provided, drivers can disambiguate their
corresponding entries in the ctrl->p_cur.p_av1_tile_list_entry array by taking
note of the tile_count_minus_1 field.

V4L2_CID_STATELESS_AV1_PROFILE: this control lets the driver convey the
supported profiles to userspace.

V4L2_CID_STATELESS_AV1_LEVEL: this control lets the driver convey the supported
AV1 levels to userspace.

V4L2_CTRL_AV1_OPERATING_MODE: this control lets the driver convey the supported
operating modes to userspace. Conversely, userspace apps can change the value of
this control to switch between "general decoding" and "large scale tile
decoding". As per the AV1 specification, under *general decoding mode* the
driver should expect the input to be a sequence of OBUs and the output to be a
decoded frame, whereas under *large scale tile decoding mode* the driver should
expect the input to be a tile list OBU plus additional side information and the
output to be a decoded frame.



VIVPU:
------

This virtual driver was written as a way to showcase and test the control
interface for AV1 as well as the GStreamer decoder[3]. This is so we can detect
bugs at an early stage before real hardware is available. VIVPU does not attempt
to decode video at all.

Once VIVPU is loaded, one can run the following GStreamer pipeline successfully:

gst-launch-1.0 filesrc location=<path to some sample av1 file> ! parsebin ! v4l2slav1dec ! fakevideosink

This assumes that the patches in [3] have been applied and that the v4l2codecs
GStreamer plugin has been built.

It is also possible to print the controls' contents to the console by setting
vivpu_debug to 1. This is handy when debugging, even more so when one is
comparing two different userspace implementations because it makes it easier to
diff the controls that were passed to the kernel.
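Assuming vivpu_debug is exposed as a standard module parameter (the sysfs path
below is the conventional module_param() location, not something stated in this
cover letter), it can be toggled like so:

```shell
# Load the virtual driver and enable control dumping.
sudo modprobe vivpu
echo 1 | sudo tee /sys/module/vivpu/parameters/vivpu_debug

# Run the GStreamer pipeline, then inspect the dumped controls:
sudo dmesg | grep -i vivpu | tail
```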

[0] https://patchwork.linuxtv.org/project/linux-media/patch/20210610113615.785359-2-hverkuil-cisco@xs4all.nl/

[1] https://patchwork.linuxtv.org/project/linux-media/patch/20210610113615.785359-3-hverkuil-cisco@xs4all.nl/

[2] https://patchwork.linuxtv.org/project/linux-media/list/?series=5647

[3] https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2305

Daniel Almeida (2):
  media: Add AV1 uAPI
  media: vivpu: add virtual VPU driver

 .../userspace-api/media/v4l/biblio.rst        |   10 +
 .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
 .../media/v4l/pixfmt-compressed.rst           |   21 +
 .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
 .../media/v4l/vidioc-queryctrl.rst            |   54 +
 .../media/videodev2.h.rst.exceptions          |    9 +
 drivers/media/test-drivers/Kconfig            |    1 +
 drivers/media/test-drivers/Makefile           |    1 +
 drivers/media/test-drivers/vivpu/Kconfig      |   16 +
 drivers/media/test-drivers/vivpu/Makefile     |    4 +
 drivers/media/test-drivers/vivpu/vivpu-core.c |  418 ++++++
 drivers/media/test-drivers/vivpu/vivpu-dec.c  |  491 +++++++
 drivers/media/test-drivers/vivpu/vivpu-dec.h  |   61 +
 .../media/test-drivers/vivpu/vivpu-video.c    |  599 ++++++++
 .../media/test-drivers/vivpu/vivpu-video.h    |   46 +
 drivers/media/test-drivers/vivpu/vivpu.h      |  119 ++
 drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
 drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
 drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
 include/media/v4l2-ctrls.h                    |   12 +
 include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
 include/uapi/linux/videodev2.h                |   15 +
 22 files changed, 4342 insertions(+), 1 deletion(-)
 create mode 100644 drivers/media/test-drivers/vivpu/Kconfig
 create mode 100644 drivers/media/test-drivers/vivpu/Makefile
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-core.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.h
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.h
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu.h

-- 
2.32.0



* [RFC PATCH 1/2] media: Add AV1 uAPI
  2021-08-10 22:05 [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it daniel.almeida
@ 2021-08-10 22:05 ` daniel.almeida
  2021-08-11  0:57   ` kernel test robot
  2021-08-10 22:05 ` [RFC PATCH 2/2] media: vivpu: add virtual VPU driver daniel.almeida
  2021-09-02 15:43 ` [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it Hans Verkuil
From: daniel.almeida @ 2021-08-10 22:05 UTC (permalink / raw)
  To: stevecho, shawnku, tzungbi, mcasas, nhebert, abodenha, randy.wu,
	yunfei.dong, gustavo.padovan, andrzej.pietrasiewicz,
	enric.balletbo, ezequiel, nicolas.dufresne, tomeu.vizoso,
	nick.milner, xiaoyong.lu, mchehab, hverkuil-cisco
  Cc: Daniel Almeida, linux-media, linux-kernel, kernel

From: Daniel Almeida <daniel.almeida@collabora.com>

This patch adds the AOMedia Video 1 (AV1) kernel uAPI.

This design is based on currently available AV1 API implementations and
aims to support the development of AV1 stateless video codecs
on Linux.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
---
 .../userspace-api/media/v4l/biblio.rst        |   10 +
 .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
 .../media/v4l/pixfmt-compressed.rst           |   21 +
 .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
 .../media/v4l/vidioc-queryctrl.rst            |   54 +
 .../media/videodev2.h.rst.exceptions          |    9 +
 drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
 drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
 drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
 include/media/v4l2-ctrls.h                    |   12 +
 include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
 include/uapi/linux/videodev2.h                |   15 +
 12 files changed, 2586 insertions(+), 1 deletion(-)

diff --git a/Documentation/userspace-api/media/v4l/biblio.rst b/Documentation/userspace-api/media/v4l/biblio.rst
index 7b8e6738ff9e..7061144d10bb 100644
--- a/Documentation/userspace-api/media/v4l/biblio.rst
+++ b/Documentation/userspace-api/media/v4l/biblio.rst
@@ -417,3 +417,13 @@ VP8
 :title:     RFC 6386: "VP8 Data Format and Decoding Guide"
 
 :author:    J. Bankoski et al.
+
+.. _av1:
+
+AV1
+===
+
+
+:title:     AV1 Bitstream & Decoding Process Specification
+
+:author:    Peter de Rivaz, Argon Design Ltd, Jack Haughton, Argon Design Ltd
diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
index 72f5e85b4f34..960500651e4b 100644
--- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
+++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
@@ -1458,3 +1458,1271 @@ FWHT Flags
 .. raw:: latex
 
     \normalsize
+
+
+.. _v4l2-codec-stateless-av1:
+
+``V4L2_CID_STATELESS_AV1_SEQUENCE (struct)``
+    Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
+    in :ref:`av1` for more details.
+
+.. c:type:: v4l2_ctrl_av1_sequence
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_sequence
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u32
+      - ``flags``
+      - See :ref:`AV1 Sequence Flags <av1_sequence_flags>`.
+    * - __u8
+      - ``seq_profile``
+      - Specifies the features that can be used in the coded video sequence.
+    * - __u8
+      - ``order_hint_bits``
+      - Specifies the number of bits used for the order_hint field at each frame.
+    * - __u8
+      - ``bit_depth``
+      - Specifies the bitdepth to use for the sequence, as described in
+        section 5.5.2 "Color config syntax" in :ref:`av1`.
+
+
+.. _av1_sequence_flags:
+
+``AV1 Sequence Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE``
+      - 0x00000001
+      - If set, specifies that the coded video sequence contains only one coded
+	frame. If not set, specifies that the coded video sequence contains one or
+	more coded frames.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK``
+      - 0x00000002
+      - If set, indicates that superblocks contain 128x128 luma samples.
+	When equal to 0, it indicates that superblocks contain 64x64 luma samples.
+	(The number of contained chroma samples depends on subsampling_x and
+	subsampling_y).
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA``
+      - 0x00000004
+      - If set, specifies that the use_filter_intra syntax element may be
+	present. If not set, specifies that the use_filter_intra syntax element will
+	not be present.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER``
+      - 0x00000008
+      - Specifies whether the intra edge filtering process should be enabled.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND``
+      - 0x00000010
+      - If set, specifies that the mode info for inter blocks may contain the
+	syntax element interintra. If not set, specifies that the syntax element
+	interintra will not be present.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND``
+      - 0x00000020
+      - If set, specifies that the mode info for inter blocks may contain the
+	syntax element compound_type. If not set, specifies that the syntax element
+	compound_type will not be present.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION``
+      - 0x00000040
+      - If set, indicates that the allow_warped_motion syntax element may be
+	present. If not set, indicates that the allow_warped_motion syntax element
+	will not be present.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER``
+      - 0x00000080
+      - If set, indicates that the inter prediction filter type may be specified
+	independently in the horizontal and vertical directions. If the flag is
+	equal to 0, only one filter type may be specified, which is then used in
+	both directions.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT``
+      - 0x00000100
+      - If set, indicates that tools based on the values of order hints may be
+	used. If not set, indicates that tools based on order hints are disabled.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP``
+      - 0x00000200
+      - If set, indicates that the distance weights process may be used for
+	inter prediction.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS``
+      - 0x00000400
+      - If set, indicates that the use_ref_frame_mvs syntax element may be
+	present. If not set, indicates that the use_ref_frame_mvs syntax element
+	will not be present.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES``
+      - 0x00000800
+      - If set, specifies that the use_superres syntax element will be present
+	in the uncompressed header. If not set, specifies that the use_superres
+	syntax element will not be present (instead use_superres will be set to 0
+	in the uncompressed header without being read).
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF``
+      - 0x00001000
+      - If set, specifies that cdef filtering may be enabled. If not set,
+	specifies that cdef filtering is disabled.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION``
+      - 0x00002000
+      - If set, specifies that loop restoration filtering may be enabled. If not
+	set, specifies that loop restoration filtering is disabled.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME``
+      - 0x00004000
+      - If set, indicates that the video does not contain U and V color planes.
+	If not set, indicates that the video contains Y, U, and V color planes.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE``
+      - 0x00008000
+      - If set, signals full swing representation. If not set, signals studio
+	swing representation.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X``
+      - 0x00010000
+      - Specifies the chroma subsampling format.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y``
+      - 0x00020000
+      - Specifies the chroma subsampling format.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT``
+      - 0x00040000
+      - Specifies whether film grain parameters are present in the coded video
+	sequence.
+    * - ``V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q``
+      - 0x00080000
+      - If set, indicates that the U and V planes may have separate delta
+	quantizer values. If not set, indicates that the U and V planes will share
+	the same delta quantizer value.
+
+``V4L2_CID_STATELESS_AV1_TILE_GROUP (struct)``
+    Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
+    v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start + 1
+    instances of struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. See section
+    6.10.1 "General tile group OBU semantics" in :ref:`av1` for more details.
+
+.. c:type:: v4l2_ctrl_av1_tile_group
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_tile_group
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See :ref:`AV1 Tile Group Flags <av1_tile_group_flags>`.
+    * - __u8
+      - ``tg_start``
+      - Specifies the zero-based index of the first tile in the current tile
+        group.
+    * - __u8
+      - ``tg_end``
+      - Specifies the zero-based index of the last tile in the current tile
+        group.
+
+.. _av1_tile_group_flags:
+
+``AV1 Tile Group Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT``
+      - 0x00000001
+      - Specifies whether tg_start and tg_end are present. If tg_start and
+	tg_end are not present, this tile group covers the entire frame.
+
+``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (struct)``
+    Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
+    MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
+    v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
+    tile_col. See section 6.10.1 "General tile group OBU semantics" in
+    :ref:`av1` for more details.
+
+.. c:type:: v4l2_ctrl_av1_tile_group_entry
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_tile_group_entry
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u32
+      - ``tile_offset``
+      - Offset from the OBU data, i.e. where the coded tile data actually starts.
+    * - __u32
+      - ``tile_size``
+      - Specifies the size in bytes of the coded tile. Equivalent to "TileSize"
+        in :ref:`av1`.
+    * - __u32
+      - ``tile_row``
+      - Specifies the row of the current tile. Equivalent to "TileRow" in
+        :ref:`av1`.
+    * - __u32
+      - ``tile_col``
+      - Specifies the column of the current tile. Equivalent to "TileColumn" in
+        :ref:`av1`.
+
+``V4L2_CID_STATELESS_AV1_TILE_LIST (struct)``
+    Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
+    in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
+    most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
+    will refer to "tile_count_minus_1" + 1 instances of struct
+    v4l2_ctrl_av1_tile_list_entry.
+
+.. c:type:: v4l2_ctrl_av1_tile_list
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_tile_list
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``output_frame_width_in_tiles_minus_1``
+      - This field plus one is the width of the output frame, in tile units.
+    * - __u8
+      - ``output_frame_height_in_tiles_minus_1``
+      - This field plus one is the height of the output frame, in tile units.
+    * - __u8
+      - ``tile_count_minus_1``
+      - This field plus one is the number of tile_list_entry in the list.
+
+``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (struct)``
+    Represents a tile list entry as seen in an AV1 Tile List OBU. See section
+    6.11.2. "Tile list entry semantics" of :ref:`av1` for more details.
+
+.. c:type:: v4l2_ctrl_av1_tile_list_entry
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_tile_list_entry
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``anchor_frame_idx``
+      - The index into an array AnchorFrames of the frames that the tile uses
+	for prediction.
+    * - __u8
+      - ``anchor_tile_row``
+      - The row coordinate of the tile in the frame to which it belongs, in
+	tile units.
+    * - __u8
+      - ``anchor_tile_col``
+      - The column coordinate of the tile in the frame to which it belongs, in
+	tile units.
+    * - __u8
+      - ``tile_data_size_minus_1``
+      - This field plus one is the size of the coded tile data in bytes.
+
+.. c:type:: v4l2_av1_film_grain
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_film_grain
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See :ref:`AV1 Film Grain Flags <av1_film_grain_flags>`.
+    * - __u16
+      - ``grain_seed``
+      - Specifies the starting value for the pseudo-random numbers used during
+	film grain synthesis.
+    * - __u8
+      - ``film_grain_params_ref_idx``
+      - Indicates which reference frame contains the film grain parameters to be
+	used for this frame.
+    * - __u8
+      - ``num_y_points``
+      - Specifies the number of points for the piece-wise linear scaling
+	function of the luma component.
+    * - __u8
+      - ``point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS]``
+      - Represents the x (luma value) coordinate for the i-th point
+        of the piecewise linear scaling function for luma component. The values
+        are signaled on the scale of 0..255. (In case of 10 bit video, these
+        values correspond to luma values divided by 4. In case of 12 bit video,
+        these values correspond to luma values divided by 16.).
+    * - __u8
+      - ``point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS]``
+      - Represents the scaling (output) value for the i-th point
+	of the piecewise linear scaling function for luma component.
+    * - __u8
+      - ``num_cb_points``
+      - Specifies the number of points for the piece-wise linear scaling
+        function of the cb component.
+    * - __u8
+      - ``point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS]``
+      - Represents the x coordinate for the i-th point of the
+        piece-wise linear scaling function for cb component. The values are
+        signaled on the scale of 0..255.
+    * - __u8
+      - ``point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS]``
+      - Represents the scaling (output) value for the i-th point of the
+        piecewise linear scaling function for cb component.
+    * - __u8
+      - ``num_cr_points``
+      - Represents the number of points for the piece-wise
+        linear scaling function of the cr component.
+    * - __u8
+      - ``point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS]``
+      - Represents the x coordinate for the i-th point of the
+        piece-wise linear scaling function for cr component. The values are
+        signaled on the scale of 0..255.
+    * - __u8
+      - ``point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS]``
+      - Represents the scaling (output) value for the i-th point of the
+        piecewise linear scaling function for cr component.
+    * - __u8
+      - ``grain_scaling_minus_8``
+      - Represents the shift minus 8 applied to the values of the chroma component.
+        The grain_scaling_minus_8 can take values of 0..3 and determines the
+        range and quantization step of the standard deviation of film grain.
+    * - __u8
+      - ``ar_coeff_lag``
+      - Specifies the number of auto-regressive coefficients for luma and
+	chroma.
+    * - __u8
+      - ``ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
+      - Specifies auto-regressive coefficients used for the Y plane.
+    * - __u8
+      - ``ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
+      - Specifies auto-regressive coefficients used for the U plane.
+    * - __u8
+      - ``ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
+      - Specifies auto-regressive coefficients used for the V plane.
+    * - __u8
+      - ``ar_coeff_shift_minus_6``
+      - Specifies the range of the auto-regressive coefficients. Values of 0,
+        1, 2, and 3 correspond to the ranges for auto-regressive coefficients of
+        [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25, 0.25) respectively.
+    * - __u8
+      - ``grain_scale_shift``
+      - Specifies how much the Gaussian random numbers should be scaled down
+	during the grain synthesis process.
+    * - __u8
+      - ``cb_mult``
+      - Represents a multiplier for the cb component used in derivation of the
+	input index to the cb component scaling function.
+    * - __u8
+      - ``cb_luma_mult``
+      - Represents a multiplier for the average luma component used in
+	derivation of the input index to the cb component scaling function.
+    * - __u16
+      - ``cb_offset``
+      - Represents an offset used in derivation of the input index to the
+	cb component scaling function.
+    * - __u8
+      - ``cr_mult``
+      - Represents a multiplier for the cr component used in derivation of the
+	input index to the cr component scaling function.
+    * - __u8
+      - ``cr_luma_mult``
+      - Represents a multiplier for the average luma component used in
+        derivation of the input index to the cr component scaling function.
+    * - __u16
+      - ``cr_offset``
+      - Represents an offset used in derivation of the input index to the
+        cr component scaling function.
+
+.. _av1_film_grain_flags:
+
+``AV1 Film Grain Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN``
+      - 0x00000001
+      - If set, specifies that film grain should be added to this frame. If not
+	set, specifies that film grain should not be added.
+    * - ``V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN``
+      - 0x00000002
+      - If set, specifies that a new set of parameters should be sent. If not set,
+	specifies that the previous set of parameters should be used.
+    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA``
+      - 0x00000004
+      - If set, specifies that the chroma scaling is inferred from the luma
+	scaling.
+    * - ``V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP``
+      - 0x00000008
+      - If set, indicates that the overlap between film grain blocks shall be
+	applied. If not set, indicates that the overlap between film grain blocks
+	shall not be applied.
+    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE``
+      - 0x00000010
+      - If set, indicates that clipping to the restricted (studio) range shall
+        be applied to the sample values after adding the film grain (see the
+        semantics for color_range for an explanation of studio swing). If not
+        set, indicates that clipping to the full range shall be applied to the
+        sample values after adding the film grain.
+
+.. c:type:: v4l2_av1_warp_model
+
+	AV1 Warp Model as described in section 3 "Symbols and abbreviated terms" of
+	:ref:`av1`.
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_WARP_MODEL_IDENTITY``
+      - 0
+      - Warp model is just an identity transform.
+    * - ``V4L2_AV1_WARP_MODEL_TRANSLATION``
+      - 1
+      - Warp model is a pure translation.
+    * - ``V4L2_AV1_WARP_MODEL_ROTZOOM``
+      - 2
+      - Warp model is a rotation + symmetric zoom + translation.
+    * - ``V4L2_AV1_WARP_MODEL_AFFINE``
+      - 3
+      - Warp model is a general affine transform.
+
+.. c:type:: v4l2_av1_reference_frame
+
+AV1 Reference Frames as described in section 6.10.24. "Ref frames semantics"
+of :ref:`av1`.
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_REF_INTRA_FRAME``
+      - 0
+      - Intra Frame Reference.
+    * - ``V4L2_AV1_REF_LAST_FRAME``
+      - 1
+      - Last Frame Reference.
+    * - ``V4L2_AV1_REF_LAST2_FRAME``
+      - 2
+      - Last2 Frame Reference.
+    * - ``V4L2_AV1_REF_LAST3_FRAME``
+      - 3
+      - Last3 Frame Reference.
+    * - ``V4L2_AV1_REF_GOLDEN_FRAME``
+      - 4
+      - Golden Frame Reference.
+    * - ``V4L2_AV1_REF_BWDREF_FRAME``
+      - 5
+      - BWD Frame Reference.
+    * - ``V4L2_AV1_REF_ALTREF2_FRAME``
+      - 6
+      - ALTREF2 Frame Reference.
+    * - ``V4L2_AV1_REF_ALTREF_FRAME``
+      - 7
+      - ALTREF Frame Reference.
+    * - ``V4L2_AV1_NUM_REF_FRAMES``
+      - 8
+      - Total number of reference frames.
+
+.. c:type:: v4l2_av1_global_motion
+
+AV1 Global Motion parameters as described in section 6.8.17
+"Global motion params semantics" of :ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_global_motion
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
+      - A bitfield containing the flags per reference frame. See
+        :ref:`AV1 Global Motion Flags <av1_global_motion_flags>` for more
+        details.
+    * - enum :c:type:`v4l2_av1_warp_model`
+      - ``type[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
+      - The type of global motion transform used.
+    * - __u32
+      - ``params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6]``
+      - This field has the same meaning as "gm_params" in :ref:`av1`.
+    * - __u8
+      - ``invalid``
+      - Bitfield indicating whether the global motion params are invalid for a
+        given reference frame. See section 7.11.3.6. Setup shear process and the
+        variable "warpValid". Use V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to
+        create a suitable mask.
+
+.. _av1_global_motion_flags:
+
+``AV1 Global Motion Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL``
+      - 0x00000001
+      - Specifies whether global motion parameters are present for a particular
+        reference frame.
+    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM``
+      - 0x00000002
+      - Specifies whether a particular reference frame uses rotation and zoom
+        global motion.
+    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION``
+      - 0x00000004
+      - Specifies whether a particular reference frame uses translation global
+        motion.
+
+.. c:type:: v4l2_av1_frame_restoration_type
+
+AV1 Frame Restoration Type.
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_FRAME_RESTORE_NONE``
+      - 0
+      - No filtering is applied.
+    * - ``V4L2_AV1_FRAME_RESTORE_WIENER``
+      - 1
+      - Wiener filter process is invoked.
+    * - ``V4L2_AV1_FRAME_RESTORE_SGRPROJ``
+      - 2
+      - Self guided filter process is invoked.
+    * - ``V4L2_AV1_FRAME_RESTORE_SWITCHABLE``
+      - 3
+      - Restoration filter is switchable.
+
+.. c:type:: v4l2_av1_loop_restoration
+
+AV1 Loop Restoration as described in section 6.10.15 "Loop restoration params
+semantics" of :ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_loop_restoration
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - :c:type:`v4l2_av1_frame_restoration_type`
+      - ``frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX]``
+      - Specifies the type of restoration used for each plane.
+    * - __u8
+      - ``lr_unit_shift``
+      - Specifies if the luma restoration size should be halved.
+    * - __u8
+      - ``lr_uv_shift``
+      - Specifies if the chroma size should be half the luma size.
+    * - __u8
+      - ``loop_restoration_size[V4L2_AV1_NUM_PLANES_MAX]``
+      - Specifies the size of loop restoration units in units of samples in the
+        current plane.
+
+.. c:type:: v4l2_av1_cdef
+
+AV1 CDEF params semantics as described in section 6.10.14. "CDEF params
+semantics" of :ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_cdef
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``damping_minus_3``
+      - Controls the amount of damping in the deringing filter. Add 3 to get
+        the actual damping value.
+    * - __u8
+      - ``bits``
+      - Specifies the number of bits needed to specify which CDEF filter to
+        apply.
+    * - __u8
+      - ``y_pri_strength[V4L2_AV1_CDEF_MAX]``
+      - Specifies the strength of the primary filter for the luma plane.
+    * - __u8
+      - ``y_sec_strength[V4L2_AV1_CDEF_MAX]``
+      - Specifies the strength of the secondary filter for the luma plane.
+    * - __u8
+      - ``uv_pri_strength[V4L2_AV1_CDEF_MAX]``
+      - Specifies the strength of the primary filter for the chroma planes.
+    * - __u8
+      - ``uv_secondary_strength[V4L2_AV1_CDEF_MAX]``
+      - Specifies the strength of the secondary filter for the chroma planes.
+
+.. c:type:: v4l2_av1_segment_feature
+
+AV1 segment features as described in section 3 "Symbols and abbreviated terms"
+of :ref:`av1`.
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_SEG_LVL_ALT_Q``
+      - 0
+      - Index for quantizer segment feature.
+    * - ``V4L2_AV1_SEG_LVL_ALT_LF_Y_V``
+      - 1
+      - Index for vertical luma loop filter segment feature.
+    * - ``V4L2_AV1_SEG_LVL_REF_FRAME``
+      - 5
+      - Index for reference frame segment feature.
+    * - ``V4L2_AV1_SEG_LVL_REF_SKIP``
+      - 6
+      - Index for skip segment feature.
+    * - ``V4L2_AV1_SEG_LVL_REF_GLOBALMV``
+      - 7
+      - Index for global mv feature.
+    * - ``V4L2_AV1_SEG_LVL_MAX``
+      - 8
+      - Number of segment features.
+
+.. c:type:: v4l2_av1_segmentation
+
+AV1 Segmentation params as defined in section 6.8.13. "Segmentation params
+semantics" of :ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_segmentation
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See :ref:`AV1 Segmentation Flags <av1_segmentation_flags>`
+    * - __u8
+      - ``feature_enabled[V4L2_AV1_MAX_SEGMENTS]``
+      - Bitmask defining which features are enabled in each segment. Use
+        V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
+    * - __u16
+      - ``feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX]``
+      - Data attached to each feature. Data entry is only valid if the feature
+        is enabled.
+    * - __u8
+      - ``last_active_seg_id``
+      - Indicates the highest numbered segment id that has some enabled
+        feature. This is used when decoding the segment id to only decode
+        choices corresponding to used segments.
+
+.. _av1_segmentation_flags:
+
+``AV1 Segmentation Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_SEGMENTATION_FLAG_ENABLED``
+      - 0x00000001
+      - If set, indicates that this frame makes use of the segmentation tool. If
+        not set, indicates that the frame does not use segmentation.
+    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP``
+      - 0x00000002
+      - If set, indicates that the segmentation map is updated during the
+        decoding of this frame. If not set, indicates that the segmentation map
+        from the previous frame is used.
+    * - ``V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE``
+      - 0x00000004
+      - If set, indicates that the updates to the segmentation map are coded
+        relative to the existing segmentation map. If not set, indicates that
+        the new segmentation map is coded without reference to the existing
+        segmentation map.
+    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA``
+      - 0x00000008
+      - If set, indicates that new parameters are specified for each segment in
+        this frame. If not set, indicates that the segmentation parameters
+        should keep their existing values.
+    * - ``V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP``
+      - 0x00000010
+      - If set, indicates that the segment id will be read before the skip
+        syntax element. If not set, indicates that the skip syntax element will
+        be read first.
+
+.. c:type:: v4l2_av1_loop_filter
+
+AV1 Loop filter params as defined in section 6.8.10. "Loop filter semantics" of
+:ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_loop_filter
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See
+        :ref:`AV1 Loop Filter flags <av1_loop_filter_flags>` for more details.
+    * - __u8
+      - ``level[4]``
+      - An array containing loop filter strength values. Different loop
+        filter strength values from the array are used depending on the image
+        plane being filtered, and the edge direction (vertical or horizontal)
+        being filtered.
+    * - __u8
+      - ``sharpness``
+      - Indicates the sharpness level. The loop_filter_level and
+        loop_filter_sharpness together determine when a block edge is filtered,
+        and by how much the filtering can change the sample values. The loop
+        filter process is described in section 7.14 of :ref:`av1`.
+    * - __u8
+      - ``ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
+      - Contains the adjustment needed for the filter level based on the
+        chosen reference frame. If this syntax element is not present, it
+        maintains its previous value.
+    * - __u8
+      - ``mode_deltas[2]``
+      - Contains the adjustment needed for the filter level based on
+        the chosen mode. If this syntax element is not present, it maintains its
+        previous value.
+    * - __u8
+      - ``delta_lf_res``
+      - Specifies the left shift which should be applied to decoded loop filter
+        delta values.
+    * - __u8
+      - ``delta_lf_multi``
+      - A value equal to 1 specifies that separate loop filter deltas are sent
+        for horizontal luma edges, vertical luma edges, the U edges, and the V
+        edges. A value of delta_lf_multi equal to 0 specifies that the same
+        loop filter delta is used for all edges.
+
+.. _av1_loop_filter_flags:
+
+``AV1 Loop Filter Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED``
+      - 0x00000001
+      - If set, means that the filter level depends on the mode and reference
+        frame used to predict a block. If not set, means that the filter level
+        does not depend on the mode and reference frame.
+    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE``
+      - 0x00000002
+      - If set, means that additional syntax elements are present that specify
+        which mode and reference frame deltas are to be updated. If not set,
+        means that these syntax elements are not present.
+    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT``
+      - 0x00000004
+      - Specifies whether loop filter delta values are present
+
+.. c:type:: v4l2_av1_quantization
+
+AV1 Quantization params as defined in section 6.8.11 "Quantization params
+semantics" of :ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_quantization
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See
+        :ref:`AV1 Quantization Flags <av1_quantization_flags>` for more
+        details.
+    * - __u8
+      - ``base_q_idx``
+      - Indicates the base frame qindex. This is used for Y AC coefficients and
+        as the base value for the other quantizers.
+    * - __u8
+      - ``delta_q_y_dc``
+      - Indicates the Y DC quantizer relative to base_q_idx.
+    * - __u8
+      - ``delta_q_u_dc``
+      - Indicates the U DC quantizer relative to base_q_idx.
+    * - __u8
+      - ``delta_q_u_ac``
+      - Indicates the U AC quantizer relative to base_q_idx.
+    * - __u8
+      - ``delta_q_v_dc``
+      - Indicates the V DC quantizer relative to base_q_idx.
+    * - __u8
+      - ``delta_q_v_ac``
+      - Indicates the V AC quantizer relative to base_q_idx.
+    * - __u8
+      - ``qm_y``
+      - Specifies the level in the quantizer matrix that should be used for
+        luma plane decoding.
+    * - __u8
+      - ``qm_u``
+      - Specifies the level in the quantizer matrix that should be used for
+        chroma U plane decoding.
+    * - __u8
+      - ``qm_v``
+      - Specifies the level in the quantizer matrix that should be used for
+        chroma V plane decoding.
+    * - __u8
+      - ``delta_q_res``
+      - Specifies the left shift which should be applied to decoded quantizer
+        index delta values.
+
+.. _av1_quantization_flags:
+
+``AV1 Quantization Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA``
+      - 0x00000001
+      - If set, indicates that the U and V delta quantizer values are coded
+        separately. If not set, indicates that the U and V delta quantizer
+        values share a common value.
+    * - ``V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX``
+      - 0x00000002
+      - If set, specifies that the quantizer matrix will be used to compute
+        quantizers.
+    * - ``V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT``
+      - 0x00000004
+      - Specifies whether quantizer index delta values are present.
+
+.. c:type:: v4l2_av1_tile_info
+
+AV1 Tile info as defined in section 6.8.14. "Tile info semantics" of
+:ref:`av1`.
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
+
+.. flat-table:: struct v4l2_av1_tile_info
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - __u8
+      - ``flags``
+      - See
+        :ref:`AV1 Tile Info flags <av1_tile_info_flags>` for more details.
+    * - __u32
+      - ``mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1]``
+      - An array specifying the start column (in units of 4x4 luma
+        samples) for each tile across the image.
+    * - __u32
+      - ``mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1]``
+      - An array specifying the start row (in units of 4x4 luma
+        samples) for each tile across the image.
+    * - __u32
+      - ``width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS]``
+      - Specifies the width of a tile minus 1 in units of superblocks.
+    * - __u32
+      - ``height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS]``
+      - Specifies the height of a tile minus 1 in units of superblocks.
+    * - __u8
+      - ``tile_size_bytes``
+      - Specifies the number of bytes needed to code each tile size.
+    * - __u8
+      - ``context_update_tile_id``
+      - Specifies which tile to use for the CDF update.
+    * - __u8
+      - ``tile_rows``
+      - Specifies the number of tiles down the frame.
+    * - __u8
+      - ``tile_cols``
+      - Specifies the number of tiles across the frame.
+
+.. _av1_tile_info_flags:
+
+``AV1 Tile Info Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING``
+      - 0x00000001
+      - If set, means that the tiles are uniformly spaced across the frame. (In
+        other words, all tiles are the same size except for the ones at the
+        right and bottom edge which can be smaller). If not set, means that
+        the tile sizes are coded.
+
+.. c:type:: v4l2_av1_frame_type
+
+AV1 Frame Type
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_KEY_FRAME``
+      - 0
+      - Key frame.
+    * - ``V4L2_AV1_INTER_FRAME``
+      - 1
+      - Inter frame.
+    * - ``V4L2_AV1_INTRA_ONLY_FRAME``
+      - 2
+      - Intra-only frame.
+    * - ``V4L2_AV1_SWITCH_FRAME``
+      - 3
+      - Switch frame.
+
+.. c:type:: v4l2_av1_interpolation_filter
+
+AV1 Interpolation Filter
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP``
+      - 0
+      - Eight tap filter.
+    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH``
+      - 1
+      - Eight tap smooth filter.
+    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP``
+      - 2
+      - Eight tap sharp filter.
+    * - ``V4L2_AV1_INTERPOLATION_FILTER_BILINEAR``
+      - 3
+      - Bilinear filter.
+    * - ``V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE``
+      - 4
+      - Filter selection is signaled at the block level.
+
+.. c:type:: v4l2_av1_tx_mode
+
+AV1 Tx mode as described in section 6.8.21 "TX mode semantics" of :ref:`av1`.
+
+.. raw:: latex
+
+    \scriptsize
+
+.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_TX_MODE_ONLY_4X4``
+      - 0
+      -  The inverse transform will use only 4x4 transforms.
+    * - ``V4L2_AV1_TX_MODE_LARGEST``
+      - 1
+      - The inverse transform will use the largest transform size that fits
+        inside the block.
+    * - ``V4L2_AV1_TX_MODE_SELECT``
+      - 2
+      - The choice of transform size is specified explicitly for each block.
+
+``V4L2_CID_STATELESS_AV1_FRAME_HEADER (struct)``
+    Represents an AV1 Frame Header OBU. See section 6.8 "Frame header OBU
+    semantics" of :ref:`av1` for more details.
+
+.. c:type:: v4l2_ctrl_av1_frame_header
+
+.. cssclass:: longtable
+
+.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
+
+.. flat-table:: struct v4l2_ctrl_av1_frame_header
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - struct :c:type:`v4l2_av1_tile_info`
+      - ``tile_info``
+      - Tile info
+    * - struct :c:type:`v4l2_av1_quantization`
+      - ``quantization``
+      - Quantization params
+    * - struct :c:type:`v4l2_av1_segmentation`
+      - ``segmentation``
+      - Segmentation params
+    * - struct :c:type:`v4l2_av1_loop_filter`
+      - ``loop_filter``
+      - Loop filter params
+    * - struct :c:type:`v4l2_av1_cdef`
+      - ``cdef``
+      - CDEF params
+    * - struct :c:type:`v4l2_av1_loop_restoration`
+      - ``loop_restoration``
+      - Loop restoration params
+    * - struct :c:type:`v4l2_av1_global_motion`
+      - ``global_motion``
+      - Global motion params
+    * - struct :c:type:`v4l2_av1_film_grain`
+      - ``film_grain``
+      - Film grain params
+    * - __u32
+      - ``flags``
+      - See
+        :ref:`AV1 Frame Header Flags <av1_frame_header_flags>` for more
+        details.
+    * - enum :c:type:`v4l2_av1_frame_type`
+      - ``frame_type``
+      - Specifies the AV1 frame type
+    * - __u32
+      - ``order_hint``
+      - Specifies OrderHintBits least significant bits of the expected output
+        order for this frame.
+    * - __u8
+      - ``superres_denom``
+      - The denominator for the upscaling ratio.
+    * - __u32
+      - ``upscaled_width``
+      - The upscaled width.
+    * - enum :c:type:`v4l2_av1_interpolation_filter`
+      - ``interpolation_filter``
+      - Specifies the filter selection used for performing inter prediction.
+    * - enum :c:type:`v4l2_av1_tx_mode`
+      - ``tx_mode``
+      - Specifies how the transform size is determined.
+    * - __u32
+      - ``frame_width_minus_1``
+      - Add 1 to get the frame's width.
+    * - __u32
+      - ``frame_height_minus_1``
+      - Add 1 to get the frame's height.
+    * - __u16
+      - ``render_width_minus_1``
+      - Add 1 to get the render width of the frame in luma samples.
+    * - __u16
+      - ``render_height_minus_1``
+      - Add 1 to get the render height of the frame in luma samples.
+    * - __u32
+      - ``current_frame_id``
+      - Specifies the frame id number for the current frame. Frame
+        id numbers are additional information that do not affect the decoding
+        process, but provide decoders with a way of detecting missing reference
+        frames so that appropriate action can be taken.
+    * - __u8
+      - ``primary_ref_frame``
+      - Specifies which reference frame contains the CDF values and other state
+        that should be loaded at the start of the frame.
+    * - __u8
+      - ``buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS]``
+      - Specifies the frame removal time in units of DecCT clock ticks counted
+        from the removal time of the last random access point for operating point
+        opNum.
+    * - __u8
+      - ``refresh_frame_flags[V4L2_AV1_MAX_OPERATING_POINTS]``
+      - Contains a bitmask that specifies which reference frame slots will be
+        updated with the current frame after it is decoded.
+    * - __u32
+      - ``ref_order_hint[V4L2_AV1_NUM_REF_FRAMES]``
+      - Specifies the expected output order hint for each reference frame.
+    * - __s8
+      - ``last_frame_idx``
+      - Specifies the reference frame to use for LAST_FRAME.
+    * - __s8
+      - ``gold_frame_idx``
+      - Specifies the reference frame to use for GOLDEN_FRAME.
+    * - __u64
+      - ``reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES]``
+      - the V4L2 timestamp for each of the reference frames enumerated in
+        enum :c:type:`v4l2_av1_reference_frame`. The timestamp refers to the
+        ``timestamp`` field in struct :c:type:`v4l2_buffer`. Use the
+        :c:func:`v4l2_timeval_to_ns()` function to convert the struct
+        :c:type:`timeval` in struct :c:type:`v4l2_buffer` to a __u64.
+    * - __u8
+      - ``skip_mode_frame[2]``
+      - Specifies the frames to use for compound prediction when skip_mode is
+        equal to 1.
+
+.. _av1_frame_header_flags:
+
+``AV1 Frame Header Flags``
+
+.. cssclass:: longtable
+
+.. flat-table::
+    :header-rows:  0
+    :stub-columns: 0
+    :widths:       1 1 2
+
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME``
+      - 0x00000001
+      - If set, specifies that this frame should be immediately output once
+        decoded. If not set, specifies that this frame should not be immediately
+        output. (It may be output later if a later uncompressed header uses
+        show_existing_frame equal to 1).
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME``
+      - 0x00000002
+      - If set, specifies that the frame may be output using the
+        show_existing_frame mechanism. If not set, specifies that this frame
+        will not be output using the show_existing_frame mechanism.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE``
+      - 0x00000004
+      - Specifies whether error resilient mode is enabled.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE``
+      - 0x00000008
+      - Specifies whether the CDF update in the symbol decoding process should
+        be disabled.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS``
+      - 0x00000010
+      - If set, indicates that intra blocks may use palette encoding. If not
+        set, indicates that palette encoding is never used.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV``
+      - 0x00000020
+      - If set, specifies that motion vectors will always be integers. If not
+        set, specifies that motion vectors can contain fractional bits.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC``
+      - 0x00000040
+      - If set, indicates that intra block copy may be used in this frame. If
+        not set, indicates that intra block copy is not allowed in this frame.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES``
+      - 0x00000080
+      - If set, indicates that upscaling is needed.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV``
+      - 0x00000100
+      - If set, specifies that motion vectors are specified to eighth pel
+        precision. If not set, specifies that motion vectors are specified to
+        quarter pel precision.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE``
+      - 0x00000200
+      - If not set, specifies that only the SIMPLE motion mode will be used.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS``
+      - 0x00000400
+      - If set specifies that motion vector information from a previous frame
+        can be used when decoding the current frame. If not set, specifies that
+        this information will not be used.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF``
+      - 0x00000800
+      - If set, indicates that the end of frame CDF update is disabled. If not
+        set, indicates that the end of frame CDF update is enabled.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING``
+      - 0x00001000
+      - If set, means that the tiles are uniformly spaced across the frame. (In
+        other words, all tiles are the same size except for the ones at the
+        right and bottom edge which can be smaller). If not set, means that the
+        tile sizes are coded.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION``
+      - 0x00002000
+      - If set, indicates that the syntax element motion_mode may be present.
+        If not set, indicates that the syntax element motion_mode will not be
+        present.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT``
+      - 0x00004000
+      - If set, specifies that the mode info for inter blocks contains the
+        syntax element comp_mode that indicates whether to use single or
+        compound reference prediction. If not set, specifies that all inter
+        blocks will use single prediction.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET``
+      - 0x00008000
+      - If set, specifies that the frame is restricted to a reduced subset of
+        the full set of transform types.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT``
+      - 0x00010000
+      - If set, specifies that the syntax element skip_mode will be present.
+        If not set, specifies that skip_mode will not be used for this frame.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE``
+      - 0x00020000
+      - If set, specifies that the frame size will either be specified as the
+        size of one of the reference frames, or computed from the
+        frame_width_minus_1 and frame_height_minus_1 syntax elements. If not
+        set, specifies that the frame size is equal to the size in the sequence
+        header.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT``
+      - 0x00040000
+      - If set, specifies that buffer_removal_time is present. If not set,
+        specifies that buffer_removal_time is not present.
+    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING``
+      - 0x00080000
+      - If set, indicates that only two reference frames are explicitly
+        signaled. If not set, indicates that all reference frames are explicitly
+        signaled.
diff --git a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
index 0ede39907ee2..c1951e890d6f 100644
--- a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
+++ b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
@@ -223,6 +223,27 @@ Compressed Formats
         through the ``V4L2_CID_STATELESS_FWHT_PARAMS`` control.
 	See the :ref:`associated Codec Control ID <codec-stateless-fwht>`.
 
+    * .. _V4L2-PIX-FMT-AV1-FRAME:
+
+      - ``V4L2_PIX_FMT_AV1_FRAME``
+      - 'AV1F'
+      - AV1 parsed frame, including the frame header, as extracted from the container.
+        This format is adapted for stateless video decoders that implement an
+        AV1
+        pipeline with the :ref:`stateless_decoder`. Metadata associated with the
+        frame to decode is required to be passed through the
+        ``V4L2_CID_STATELESS_AV1_SEQUENCE``,
+        ``V4L2_CID_STATELESS_AV1_FRAME_HEADER``,
+        ``V4L2_CID_STATELESS_AV1_TILE_GROUP`` and
+        ``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY`` controls.
+        ``V4L2_CID_STATELESS_AV1_TILE_LIST`` and
+        ``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY`` controls should be used if
+        the decoder supports large scale tile decoding mode as signaled by the
+        ``V4L2_CID_STATELESS_AV1_OPERATING_MODE`` control.
+        See the :ref:`associated Codec Control IDs <v4l2-codec-stateless-av1>`.
+        Exactly one output and one capture buffer must be provided for use with
+        this pixel format. The output buffer must contain the appropriate number
+        of macroblocks to decode a full corresponding frame to the matching
+        capture buffer.
+
 .. raw:: latex
 
     \normalsize
diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
index 2d6bc8d94380..50d4ed714123 100644
--- a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
+++ b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
@@ -233,6 +233,42 @@ still cause this situation.
       - ``p_mpeg2_quantisation``
       - A pointer to a struct :c:type:`v4l2_ctrl_mpeg2_quantisation`. Valid if this control is
         of type ``V4L2_CTRL_TYPE_MPEG2_QUANTISATION``.
+    * - struct :c:type:`v4l2_ctrl_av1_sequence` *
+      - ``p_av1_sequence``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_sequence`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_SEQUENCE``.
+    * - struct :c:type:`v4l2_ctrl_av1_tile_group` *
+      - ``p_av1_tile_group``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``.
+    * - struct :c:type:`v4l2_ctrl_av1_tile_group_entry` *
+      - ``p_av1_tile_group_entry``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``.
+    * - struct :c:type:`v4l2_ctrl_av1_tile_list` *
+      - ``p_av1_tile_list``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_TILE_LIST``.
+    * - struct :c:type:`v4l2_ctrl_av1_tile_list_entry` *
+      - ``p_av1_tile_list_entry``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list_entry`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``.
+    * - struct :c:type:`v4l2_ctrl_av1_frame_header` *
+      - ``p_av1_frame_header``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_frame_header`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``.
+    * - struct :c:type:`v4l2_ctrl_av1_profile` *
+      - ``p_av1_profile``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_profile`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_PROFILE``.
+    * - struct :c:type:`v4l2_ctrl_av1_level` *
+      - ``p_av1_level``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_level`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_LEVEL``.
+    * - struct :c:type:`v4l2_ctrl_av1_operating_mode` *
+      - ``p_av1_operating_mode``
+      - A pointer to a struct :c:type:`v4l2_ctrl_av1_operating_mode`. Valid if this control is
+        of type ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``.
     * - struct :c:type:`v4l2_ctrl_hdr10_cll_info` *
       - ``p_hdr10_cll``
       - A pointer to a struct :c:type:`v4l2_ctrl_hdr10_cll_info`. Valid if this control is
diff --git a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
index 819a70a26e18..73ff5311b7ae 100644
--- a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
+++ b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
@@ -507,6 +507,60 @@ See also the examples in :ref:`control`.
       - n/a
       - A struct :c:type:`v4l2_ctrl_hevc_decode_params`, containing HEVC
 	decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_SEQUENCE``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_sequence`, containing AV1 Sequence OBU
+	decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_tile_group`, containing AV1 Tile Group
+	OBU decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_tile_group_entry`, containing AV1 Tile
+	Group entry decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_tile_list`, containing AV1 Tile List
+	OBU decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_tile_list_entry`, containing AV1 Tile List
+	OBU decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``
+      - n/a
+      - n/a
+      - n/a
+      - A struct :c:type:`v4l2_ctrl_av1_frame_header`, containing AV1 Frame/Frame
+	Header OBU decoding parameters for stateless video decoders.
+    * - ``V4L2_CTRL_TYPE_AV1_PROFILE``
+      - n/a
+      - n/a
+      - n/a
+      - An enum :c:type:`v4l2_ctrl_av1_profile`, indicating what AV1 profiles
+	an AV1 stateless decoder might support.
+    * - ``V4L2_CTRL_TYPE_AV1_LEVEL``
+      - n/a
+      - n/a
+      - n/a
+      - An enum :c:type:`v4l2_ctrl_av1_level`, indicating what AV1 levels
+	an AV1 stateless decoder might support.
+    * - ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``
+      - n/a
+      - n/a
+      - n/a
+      - An enum :c:type:`v4l2_ctrl_av1_operating_mode`, indicating what AV1
+	operating modes an AV1 stateless decoder might support.
 
 .. raw:: latex
 
diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
index 2217b56c2686..088d4014e4c5 100644
--- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
+++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
@@ -146,6 +146,15 @@ replace symbol V4L2_CTRL_TYPE_H264_DECODE_PARAMS :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_HEVC_SPS :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_HEVC_PPS :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_HEVC_SLICE_PARAMS :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_SEQUENCE :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_FRAME_HEADER :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_PROFILE :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_LEVEL :c:type:`v4l2_ctrl_type`
+replace symbol V4L2_CTRL_TYPE_AV1_OPERATING_MODE :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_AREA :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_FWHT_PARAMS :c:type:`v4l2_ctrl_type`
 replace symbol V4L2_CTRL_TYPE_VP8_FRAME :c:type:`v4l2_ctrl_type`
diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
index e7ef2d16745e..3f0e425278b3 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
@@ -283,6 +283,25 @@ static void std_log(const struct v4l2_ctrl *ctrl)
 	case V4L2_CTRL_TYPE_MPEG2_PICTURE:
 		pr_cont("MPEG2_PICTURE");
 		break;
+	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
+		pr_cont("AV1_SEQUENCE");
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
+		pr_cont("AV1_TILE_GROUP");
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
+		pr_cont("AV1_TILE_GROUP_ENTRY");
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
+		pr_cont("AV1_TILE_LIST");
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
+		pr_cont("AV1_TILE_LIST_ENTRY");
+		break;
+	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
+		pr_cont("AV1_FRAME_HEADER");
+		break;
+
 	default:
 		pr_cont("unknown type %d", ctrl->type);
 		break;
@@ -317,6 +336,244 @@ static void std_log(const struct v4l2_ctrl *ctrl)
 #define zero_reserved(s) \
 	memset(&(s).reserved, 0, sizeof((s).reserved))
 
+static int validate_av1_quantization(struct v4l2_av1_quantization *q)
+{
+	if (q->flags > GENMASK(2, 0))
+		return -EINVAL;
+
+	if (q->delta_q_y_dc < -63 || q->delta_q_y_dc > 63 ||
+	    q->delta_q_u_dc < -63 || q->delta_q_u_dc > 63 ||
+	    q->delta_q_v_dc < -63 || q->delta_q_v_dc > 63 ||
+	    q->delta_q_u_ac < -63 || q->delta_q_u_ac > 63 ||
+	    q->delta_q_v_ac < -63 || q->delta_q_v_ac > 63 ||
+	    q->delta_q_res > GENMASK(1, 0))
+		return -EINVAL;
+
+	if (q->qm_y > GENMASK(3, 0) ||
+	    q->qm_u > GENMASK(3, 0) ||
+	    q->qm_v > GENMASK(3, 0))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int validate_av1_segmentation(struct v4l2_av1_segmentation *s)
+{
+	u32 i;
+	u32 j;
+	s32 limit;
+
+	if (s->flags > GENMASK(3, 0))
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(s->feature_data); i++) {
+		const int segmentation_feature_signed[] = { 1, 1, 1, 1, 1, 0, 0, 0 };
+		const int segmentation_feature_max[] = { 255, 63, 63, 63, 63, 7, 0, 0};
+
+		for (j = 0; j < ARRAY_SIZE(s->feature_data[i]); j++) {
+			limit = segmentation_feature_max[j];
+
+			if (segmentation_feature_signed[j]) {
+				if (s->feature_data[i][j] < -limit ||
+				    s->feature_data[i][j] > limit)
+					return -EINVAL;
+			} else {
+				if (s->feature_data[i][j] > limit)
+					return -EINVAL;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int validate_av1_loop_filter(struct v4l2_av1_loop_filter *lf)
+{
+	u32 i;
+
+	if (lf->flags > GENMASK(2, 0))
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(lf->level); i++) {
+		if (lf->level[i] > GENMASK(5, 0))
+			return -EINVAL;
+	}
+
+	if (lf->sharpness > GENMASK(2, 0))
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++) {
+		if (lf->ref_deltas[i] < -63 || lf->ref_deltas[i] > 63)
+			return -EINVAL;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++) {
+		if (lf->mode_deltas[i] < -63 || lf->mode_deltas[i] > 63)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int validate_av1_cdef(struct v4l2_av1_cdef *cdef)
+{
+	u32 i;
+
+	if (cdef->damping_minus_3 > GENMASK(1, 0) ||
+	    cdef->bits > GENMASK(1, 0))
+		return -EINVAL;
+
+	for (i = 0; i < 1 << cdef->bits; i++) {
+		if (cdef->y_pri_strength[i] > GENMASK(3, 0) ||
+		    cdef->y_sec_strength[i] > 4 ||
+		    cdef->uv_pri_strength[i] > GENMASK(3, 0) ||
+		    cdef->uv_sec_strength[i] > 4)
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int validate_av1_loop_restoration(struct v4l2_av1_loop_restoration *lr)
+{
+	if (lr->lr_unit_shift > 3 || lr->lr_uv_shift > 1)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int validate_av1_film_grain(struct v4l2_av1_film_grain *fg)
+{
+	u32 i;
+
+	if (fg->flags > GENMASK(4, 0))
+		return -EINVAL;
+
+	if (fg->film_grain_params_ref_idx > GENMASK(2, 0) ||
+	    fg->num_y_points > 14 ||
+	    fg->num_cb_points > 10 ||
+	    fg->num_cr_points > 10 ||
+	    fg->grain_scaling_minus_8 > GENMASK(1, 0) ||
+	    fg->ar_coeff_lag > GENMASK(1, 0) ||
+	    fg->ar_coeff_shift_minus_6 > GENMASK(1, 0) ||
+	    fg->grain_scale_shift > GENMASK(1, 0))
+		return -EINVAL;
+
+	if (!(fg->flags & V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN))
+		return 0;
+
+	for (i = 1; i < fg->num_y_points; i++)
+		if (fg->point_y_value[i] <= fg->point_y_value[i - 1])
+			return -EINVAL;
+
+	for (i = 1; i < fg->num_cb_points; i++)
+		if (fg->point_cb_value[i] <= fg->point_cb_value[i - 1])
+			return -EINVAL;
+
+	for (i = 1; i < fg->num_cr_points; i++)
+		if (fg->point_cr_value[i] <= fg->point_cr_value[i - 1])
+			return -EINVAL;
+
+	return 0;
+}
+
+static int validate_av1_frame_header(struct v4l2_ctrl_av1_frame_header *f)
+{
+	int ret;
+
+	ret = validate_av1_quantization(&f->quantization);
+	if (ret)
+		return ret;
+	ret = validate_av1_segmentation(&f->segmentation);
+	if (ret)
+		return ret;
+	ret = validate_av1_loop_filter(&f->loop_filter);
+	if (ret)
+		return ret;
+	ret = validate_av1_cdef(&f->cdef);
+	if (ret)
+		return ret;
+	ret = validate_av1_loop_restoration(&f->loop_restoration);
+	if (ret)
+		return ret;
+	ret = validate_av1_film_grain(&f->film_grain);
+	if (ret)
+		return ret;
+
+	if (f->flags &
+	~(V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME |
+	  V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME |
+	  V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE |
+	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE |
+	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS |
+	  V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV |
+	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC |
+	  V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES |
+	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV |
+	  V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE |
+	  V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS |
+	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF |
+	  V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING |
+	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION |
+	  V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT |
+	  V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET |
+	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE |
+	  V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT |
+	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING))
+		return -EINVAL;
+
+	if (f->superres_denom > GENMASK(2, 0) + 9)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int validate_av1_sequence(struct v4l2_ctrl_av1_sequence *s)
+{
+	if (s->flags &
+	~(V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE |
+	 V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF |
+	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION |
+	 V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME |
+	 V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE |
+	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X |
+	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y |
+	 V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT |
+	 V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q))
+		return -EINVAL;
+
+	if (s->seq_profile == 1 && s->flags & V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME)
+		return -EINVAL;
+
+	/* reserved */
+	if (s->seq_profile > 2)
+		return -EINVAL;
+
+	/* TODO: PROFILES */
+	return 0;
+}
+
+static int validate_av1_tile_group(struct v4l2_ctrl_av1_tile_group *t)
+{
+	if (t->flags & ~(V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT))
+		return -EINVAL;
+	if (t->tg_start > t->tg_end)
+		return -EINVAL;
+
+	return 0;
+}
+
 /*
  * Compound controls validation requires setting unused fields/flags to zero
  * in order to properly detect unchanged controls with std_equal's memcmp.
@@ -573,7 +830,16 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
 		zero_padding(p_vp8_frame->entropy);
 		zero_padding(p_vp8_frame->coder_state);
 		break;
-
+	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
+		return validate_av1_frame_header(p);
+	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
+		return validate_av1_sequence(p);
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
+		return validate_av1_tile_group(p);
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
+		break;
 	case V4L2_CTRL_TYPE_HEVC_SPS:
 		p_hevc_sps = p;
 
@@ -1313,6 +1579,24 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
 	case V4L2_CTRL_TYPE_VP8_FRAME:
 		elem_size = sizeof(struct v4l2_ctrl_vp8_frame);
 		break;
+	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
+		elem_size = sizeof(struct v4l2_ctrl_av1_sequence);
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
+		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group);
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
+		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group_entry);
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
+		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list);
+		break;
+	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
+		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list_entry);
+		break;
+	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
+		elem_size = sizeof(struct v4l2_ctrl_av1_frame_header);
+		break;
 	case V4L2_CTRL_TYPE_HEVC_SPS:
 		elem_size = sizeof(struct v4l2_ctrl_hevc_sps);
 		break;
diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
index 421300e13a41..6f9b53f180cc 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
@@ -499,6 +499,45 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
 		NULL,
 	};
 
+	static const char * const av1_profile[] = {
+		"Main",
+		"High",
+		"Professional",
+		NULL,
+	};
+	static const char * const av1_level[] = {
+		"2.0",
+		"2.1",
+		"2.2",
+		"2.3",
+		"3.0",
+		"3.1",
+		"3.2",
+		"3.3",
+		"4.0",
+		"4.1",
+		"4.2",
+		"4.3",
+		"5.0",
+		"5.1",
+		"5.2",
+		"5.3",
+		"6.0",
+		"6.1",
+		"6.2",
+		"6.3",
+		"7.0",
+		"7.1",
+		"7.2",
+		"7.3",
+		NULL,
+	};
+	static const char * const av1_operating_mode[] = {
+		"General decoding",
+		"Large scale tile decoding",
+		NULL,
+	};
+
 	static const char * const hevc_profile[] = {
 		"Main",
 		"Main Still Picture",
@@ -685,6 +724,12 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
 		return dv_it_content_type;
 	case V4L2_CID_DETECT_MD_MODE:
 		return detect_md_mode;
+	case V4L2_CID_STATELESS_AV1_PROFILE:
+		return av1_profile;
+	case V4L2_CID_STATELESS_AV1_LEVEL:
+		return av1_level;
+	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
+		return av1_operating_mode;
 	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
 		return hevc_profile;
 	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
@@ -1175,6 +1220,15 @@ const char *v4l2_ctrl_get_name(u32 id)
 	case V4L2_CID_STATELESS_MPEG2_SEQUENCE:			return "MPEG-2 Sequence Header";
 	case V4L2_CID_STATELESS_MPEG2_PICTURE:			return "MPEG-2 Picture Header";
 	case V4L2_CID_STATELESS_MPEG2_QUANTISATION:		return "MPEG-2 Quantisation Matrices";
+	case V4L2_CID_STATELESS_AV1_SEQUENCE:			return "AV1 Sequence parameters";
+	case V4L2_CID_STATELESS_AV1_TILE_GROUP:		        return "AV1 Tile Group";
+	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:	        return "AV1 Tile Group Entry";
+	case V4L2_CID_STATELESS_AV1_TILE_LIST:		        return "AV1 Tile List";
+	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:		return "AV1 Tile List Entry";
+	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:		return "AV1 Frame Header parameters";
+	case V4L2_CID_STATELESS_AV1_PROFILE:			return "AV1 Profile";
+	case V4L2_CID_STATELESS_AV1_LEVEL:			return "AV1 Level";
+	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:		return "AV1 Operating Mode";
 
 	/* Colorimetry controls */
 	/* Keep the order of the 'case's the same as in v4l2-controls.h! */
@@ -1343,6 +1397,9 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
 	case V4L2_CID_MPEG_VIDEO_VP9_PROFILE:
 	case V4L2_CID_MPEG_VIDEO_VP9_LEVEL:
 	case V4L2_CID_DETECT_MD_MODE:
+	case V4L2_CID_STATELESS_AV1_PROFILE:
+	case V4L2_CID_STATELESS_AV1_LEVEL:
+	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
 	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
 	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
 	case V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE:
@@ -1481,6 +1538,28 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
 	case V4L2_CID_STATELESS_VP8_FRAME:
 		*type = V4L2_CTRL_TYPE_VP8_FRAME;
 		break;
+	case V4L2_CID_STATELESS_AV1_SEQUENCE:
+		*type = V4L2_CTRL_TYPE_AV1_SEQUENCE;
+		break;
+	case V4L2_CID_STATELESS_AV1_TILE_GROUP:
+		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP;
+		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
+		break;
+	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:
+		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY;
+		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
+		break;
+	case V4L2_CID_STATELESS_AV1_TILE_LIST:
+		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST;
+		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
+		break;
+	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:
+		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY;
+		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
+		break;
+	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:
+		*type = V4L2_CTRL_TYPE_AV1_FRAME_HEADER;
+		break;
 	case V4L2_CID_MPEG_VIDEO_HEVC_SPS:
 		*type = V4L2_CTRL_TYPE_HEVC_SPS;
 		break;
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 05d5db3d85e5..135474c43b65 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1416,6 +1416,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
 		case V4L2_PIX_FMT_S5C_UYVY_JPG:	descr = "S5C73MX interleaved UYVY/JPEG"; break;
 		case V4L2_PIX_FMT_MT21C:	descr = "Mediatek Compressed Format"; break;
 		case V4L2_PIX_FMT_SUNXI_TILED_NV12: descr = "Sunxi Tiled NV12 Format"; break;
+		case V4L2_PIX_FMT_AV1_FRAME: descr = "AV1 Frame"; break;
 		default:
 			if (fmt->description[0])
 				return;
diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
index ebd9cef13309..5f8ba4fac92e 100644
--- a/include/media/v4l2-ctrls.h
+++ b/include/media/v4l2-ctrls.h
@@ -56,6 +56,12 @@ struct video_device;
  * @p_hdr10_cll:		Pointer to an HDR10 Content Light Level structure.
  * @p_hdr10_mastering:		Pointer to an HDR10 Mastering Display structure.
  * @p_area:			Pointer to an area.
+ * @p_av1_sequence:		Pointer to an AV1 sequence.
+ * @p_av1_tile_group:		Pointer to an AV1 tile group.
+ * @p_av1_tile_group_entry:	Pointer to an AV1 tile group entry.
+ * @p_av1_tile_list:		Pointer to an AV1 tile list.
+ * @p_av1_tile_list_entry:	Pointer to an AV1 tile list entry.
+ * @p_av1_frame_header:		Pointer to an AV1 frame header.
  * @p:				Pointer to a compound value.
  * @p_const:			Pointer to a constant compound value.
  */
@@ -83,6 +89,12 @@ union v4l2_ctrl_ptr {
 	struct v4l2_ctrl_hdr10_cll_info *p_hdr10_cll;
 	struct v4l2_ctrl_hdr10_mastering_display *p_hdr10_mastering;
 	struct v4l2_area *p_area;
+	struct v4l2_ctrl_av1_sequence *p_av1_sequence;
+	struct v4l2_ctrl_av1_tile_group *p_av1_tile_group;
+	struct v4l2_ctrl_av1_tile_group_entry *p_av1_tile_group_entry;
+	struct v4l2_ctrl_av1_tile_list *p_av1_tile_list;
+	struct v4l2_ctrl_av1_tile_list_entry *p_av1_tile_list_entry;
+	struct v4l2_ctrl_av1_frame_header *p_av1_frame_header;
 	void *p;
 	const void *p_const;
 };
diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
index 5532b5f68493..0378fe8e1967 100644
--- a/include/uapi/linux/v4l2-controls.h
+++ b/include/uapi/linux/v4l2-controls.h
@@ -1976,6 +1976,802 @@ struct v4l2_ctrl_mpeg2_quantisation {
 	__u8	chroma_non_intra_quantiser_matrix[64];
 };
 
+/* Stateless AV1 controls */
+
+#define V4L2_AV1_TOTAL_REFS_PER_FRAME	8
+#define V4L2_AV1_CDEF_MAX		8
+#define V4L2_AV1_NUM_PLANES_MAX		3 /* 1 if monochrome, 3 otherwise */
+#define V4L2_AV1_MAX_SEGMENTS		8
+#define V4L2_AV1_MAX_OPERATING_POINTS	(1 << 5) /* 5 bits to encode */
+#define V4L2_AV1_REFS_PER_FRAME		7
+#define V4L2_AV1_MAX_NUM_Y_POINTS	(1 << 4) /* 4 bits to encode */
+#define V4L2_AV1_MAX_NUM_CB_POINTS	(1 << 4) /* 4 bits to encode */
+#define V4L2_AV1_MAX_NUM_CR_POINTS	(1 << 4) /* 4 bits to encode */
+#define V4L2_AV1_MAX_NUM_POS_LUMA	25 /* (2 * 3 * (3 + 1)) + 1 */
+#define V4L2_AV1_MAX_NUM_PLANES		3
+#define V4L2_AV1_MAX_TILE_COLS		64
+#define V4L2_AV1_MAX_TILE_ROWS		64
+#define V4L2_AV1_MAX_TILE_COUNT		512
+
+#define V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE		  BIT(0)
+#define V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK	  BIT(1)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA	  BIT(2)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER   BIT(3)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND BIT(4)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND	  BIT(5)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION	  BIT(6)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER	  BIT(7)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT	  BIT(8)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP		  BIT(9)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS	  BIT(10)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES		  BIT(11)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF		  BIT(12)
+#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION	  BIT(13)
+#define V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME		  BIT(14)
+#define V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE		  BIT(15)
+#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X		  BIT(16)
+#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y		  BIT(17)
+#define V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT  BIT(18)
+#define V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q	  BIT(19)
+
+#define V4L2_CID_STATELESS_AV1_SEQUENCE (V4L2_CID_CODEC_STATELESS_BASE + 401)
+/**
+ * struct v4l2_ctrl_av1_sequence - AV1 Sequence
+ *
+ * Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
+ * for more details.
+ *
+ * @flags: See V4L2_AV1_SEQUENCE_FLAG_{}.
+ * @seq_profile: specifies the features that can be used in the coded video
+ * sequence.
+ * @order_hint_bits: specifies the number of bits used for the order_hint field
+ * at each frame.
+ * @bit_depth: the bitdepth to use for the sequence as described in section
+ * 5.5.2 "Color config syntax".
+ */
+struct v4l2_ctrl_av1_sequence {
+	__u32 flags;
+	__u8 seq_profile;
+	__u8 order_hint_bits;
+	__u8 bit_depth;
+};
+
+#define V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT BIT(0)
+
+#define V4L2_CID_STATELESS_AV1_TILE_GROUP (V4L2_CID_CODEC_STATELESS_BASE + 402)
+/**
+ * struct v4l2_ctrl_av1_tile_group - AV1 Tile Group header.
+ *
+ * Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
+ * v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start + 1
+ * instances of v4l2_ctrl_av1_tile_group_entry. See section 6.10.1 "General
+ * tile group OBU semantics" for more details.
+ *
+ * @flags: see V4L2_AV1_TILE_GROUP_FLAG_{}.
+ * @tg_start: specifies the zero-based index of the first tile in the current
+ * tile group.
+ * @tg_end: specifies the zero-based index of the last tile in the current tile
+ * group.
+ */
+struct v4l2_ctrl_av1_tile_group {
+	__u8 flags;
+	__u32 tg_start;
+	__u32 tg_end;
+};
+
+#define V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 403)
+/**
+ * struct v4l2_ctrl_av1_tile_group_entry - AV1 Tile Group entry
+ *
+ * Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
+ * MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
+ * v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
+ * tile_col. See section 6.10.1 "General tile group OBU semantics" for more
+ * details.
+ *
+ * @tile_offset: offset from the OBU data, i.e. where the coded tile data
+ * actually starts.
+ * @tile_size: specifies the size in bytes of the coded tile. Equivalent to
+ * "TileSize" in the AV1 Specification.
+ * @tile_row: specifies the row of the current tile. Equivalent to "TileRow" in
+ * the AV1 Specification.
+ * @tile_col: specifies the column of the current tile. Equivalent to "TileCol"
+ * in the AV1 Specification.
+ */
+struct v4l2_ctrl_av1_tile_group_entry {
+	__u32 tile_offset;
+	__u32 tile_size;
+	__u32 tile_row;
+	__u32 tile_col;
+};
+
+#define V4L2_CID_STATELESS_AV1_TILE_LIST (V4L2_CID_CODEC_STATELESS_BASE + 404)
+/**
+ * struct v4l2_ctrl_av1_tile_list - AV1 Tile List header.
+ *
+ * Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
+ * in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
+ * most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
+ * will refer to "tile_count_minus_1" + 1 instances of struct
+ * v4l2_ctrl_av1_tile_list_entry.
+ *
+ * Each rendered frame may require at most two tile list OBUs to be decoded. See
+ * section "6.11.1. General tile list OBU semantics" for more details.
+ *
+ * @output_frame_width_in_tiles_minus_1: this field plus one is the width of the
+ * output frame, in tile units.
+ * @output_frame_height_in_tiles_minus_1: this field plus one is the height of
+ * the output frame, in tile units.
+ * @tile_count_minus_1: this field plus one is the number of tile_list_entry in
+ * the list.
+ */
+struct v4l2_ctrl_av1_tile_list {
+	__u8 output_frame_width_in_tiles_minus_1;
+	__u8 output_frame_height_in_tiles_minus_1;
+	__u8 tile_count_minus_1;
+};
+
+#define V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 405)
+
+/**
+ * struct v4l2_ctrl_av1_tile_list_entry - AV1 Tile List entry
+ *
+ * Represents a tile list entry as seen in an AV1 Tile List OBU. See section
+ * 6.11.2. "Tile list entry semantics" of the AV1 Specification for more
+ * details.
+ *
+ * @anchor_frame_idx: the index into an array AnchorFrames of the frames that
+ * the tile uses for prediction.
+ * @anchor_tile_row: the row coordinate of the tile in the frame that it
+ * belongs, in tile units.
+ * @anchor_tile_col: the column coordinate of the tile in the frame that it
+ * belongs, in tile units.
+ * @tile_data_size_minus_1: this field plus one is the size of the coded tile
+ * data in bytes.
+ */
+struct v4l2_ctrl_av1_tile_list_entry {
+	__u8 anchor_frame_idx;
+	__u8 anchor_tile_row;
+	__u8 anchor_tile_col;
+	__u8 tile_data_size_minus_1;
+};
+
+#define V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN BIT(0)
+#define V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN BIT(1)
+#define V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA BIT(2)
+#define V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP BIT(3)
+#define V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE BIT(4)
+
+/**
+ * struct v4l2_av1_film_grain - AV1 Film Grain parameters.
+ *
+ * Film grain parameters as specified by section 6.8.20 of the AV1
+ * Specification.
+ *
+ * @flags: see V4L2_AV1_FILM_GRAIN_{}.
+ * @grain_seed: specifies the starting value for the pseudo-random numbers used
+ * during film grain synthesis.
+ * @film_grain_params_ref_idx: indicates which reference frame contains the
+ * film grain parameters to be used for this frame.
+ * @num_y_points: specifies the number of points for the piece-wise linear
+ * scaling function of the luma component.
+ * @point_y_value: represents the x (luma value) coordinate for the i-th point
+ * of the piecewise linear scaling function for luma component. The values are
+ * signaled on the scale of 0..255. (In case of 10 bit video, these values
+ * correspond to luma values divided by 4. In case of 12 bit video, these values
+ * correspond to luma values divided by 16.).
+ * @point_y_scaling:  represents the scaling (output) value for the i-th point
+ * of the piecewise linear scaling function for luma component.
+ * @num_cb_points: specifies the number of points for the piece-wise linear
+ * scaling function of the cb component.
+ * @point_cb_value: represents the x coordinate for the i-th point of the
+ * piece-wise linear scaling function for cb component. The values are signaled
+ * on the scale of 0..255.
+ * @point_cb_scaling: represents the scaling (output) value for the i-th point
+ * of the piecewise linear scaling function for cb component.
+ * @num_cr_points: specifies the number of points for the piece-wise linear
+ * scaling function of the cr component.
+ * @point_cr_value:  represents the x coordinate for the i-th point of the
+ * piece-wise linear scaling function for cr component. The values are signaled
+ * on the scale of 0..255.
+ * @point_cr_scaling:  represents the scaling (output) value for the i-th point
+ * of the piecewise linear scaling function for cr component.
+ * @grain_scaling_minus_8: represents the shift - 8 applied to the values of the
+ * chroma component. The grain_scaling_minus_8 can take values of 0..3 and
+ * determines the range and quantization step of the standard deviation of film
+ * grain.
+ * @ar_coeff_lag: specifies the number of auto-regressive coefficients for luma
+ * and chroma.
+ * @ar_coeffs_y_plus_128: specifies auto-regressive coefficients used for the Y
+ * plane.
+ * @ar_coeffs_cb_plus_128: specifies auto-regressive coefficients used for the U
+ * plane.
+ * @ar_coeffs_cr_plus_128: specifies auto-regressive coefficients used for the V
+ * plane.
+ * @ar_coeff_shift_minus_6: specifies the range of the auto-regressive
+ * coefficients. Values of 0, 1, 2, and 3 correspond to the ranges for
+ * auto-regressive coefficients of [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25,
+ * 0.25) respectively.
+ * @grain_scale_shift: specifies how much the Gaussian random numbers should be
+ * scaled down during the grain synthesis process.
+ * @cb_mult: represents a multiplier for the cb component used in derivation of
+ * the input index to the cb component scaling function.
+ * @cb_luma_mult: represents a multiplier for the average luma component used in
+ * derivation of the input index to the cb component scaling function.
+ * @cb_offset: represents an offset used in derivation of the input index to the
+ * cb component scaling function.
+ * @cr_mult: represents a multiplier for the cr component used in derivation of
+ * the input index to the cr component scaling function.
+ * @cr_luma_mult: represents a multiplier for the average luma component used in
+ * derivation of the input index to the cr component scaling function.
+ * @cr_offset: represents an offset used in derivation of the input index to the
+ * cr component scaling function.
+ */
+struct v4l2_av1_film_grain {
+	__u8 flags;
+	__u16 grain_seed;
+	__u8 film_grain_params_ref_idx;
+	__u8 num_y_points;
+	__u8 point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS];
+	__u8 point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS];
+	__u8 num_cb_points;
+	__u8 point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS];
+	__u8 point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS];
+	__u8 num_cr_points;
+	__u8 point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS];
+	__u8 point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS];
+	__u8 grain_scaling_minus_8;
+	__u8 ar_coeff_lag;
+	__u8 ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
+	__u8 ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
+	__u8 ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
+	__u8 ar_coeff_shift_minus_6;
+	__u8 grain_scale_shift;
+	__u8 cb_mult;
+	__u8 cb_luma_mult;
+	__u16 cb_offset;
+	__u8 cr_mult;
+	__u8 cr_luma_mult;
+	__u16 cr_offset;
+};
+
+/**
+ * enum v4l2_av1_warp_model - AV1 Warp Model as described in section 3
+ * "Symbols and abbreviated terms" of the AV1 Specification.
+ *
+ * @V4L2_AV1_WARP_MODEL_IDENTITY: Warp model is just an identity transform.
+ * @V4L2_AV1_WARP_MODEL_TRANSLATION: Warp model is a pure translation.
+ * @V4L2_AV1_WARP_MODEL_ROTZOOM: Warp model is a rotation + symmetric zoom +
+ * translation.
+ * @V4L2_AV1_WARP_MODEL_AFFINE: Warp model is a general affine transform.
+ */
+enum v4l2_av1_warp_model {
+	V4L2_AV1_WARP_MODEL_IDENTITY = 0,
+	V4L2_AV1_WARP_MODEL_TRANSLATION = 1,
+	V4L2_AV1_WARP_MODEL_ROTZOOM = 2,
+	V4L2_AV1_WARP_MODEL_AFFINE = 3,
+};
+
+/**
+ * enum v4l2_av1_reference_frame - AV1 reference frames
+ *
+ * @V4L2_AV1_REF_INTRA_FRAME: Intra Frame Reference
+ * @V4L2_AV1_REF_LAST_FRAME: Last Reference Frame
+ * @V4L2_AV1_REF_LAST2_FRAME: Last2 Reference Frame
+ * @V4L2_AV1_REF_LAST3_FRAME: Last3 Reference Frame
+ * @V4L2_AV1_REF_GOLDEN_FRAME: Golden Reference Frame
+ * @V4L2_AV1_REF_BWDREF_FRAME: BWD Reference Frame
+ * @V4L2_AV1_REF_ALTREF2_FRAME: Alternative2 Reference Frame
+ * @V4L2_AV1_REF_ALTREF_FRAME: Alternative Reference Frame
+ * @V4L2_AV1_NUM_REF_FRAMES: Total Reference Frame Number
+ */
+enum v4l2_av1_reference_frame {
+	V4L2_AV1_REF_INTRA_FRAME = 0,
+	V4L2_AV1_REF_LAST_FRAME = 1,
+	V4L2_AV1_REF_LAST2_FRAME = 2,
+	V4L2_AV1_REF_LAST3_FRAME = 3,
+	V4L2_AV1_REF_GOLDEN_FRAME = 4,
+	V4L2_AV1_REF_BWDREF_FRAME = 5,
+	V4L2_AV1_REF_ALTREF2_FRAME = 6,
+	V4L2_AV1_REF_ALTREF_FRAME = 7,
+	V4L2_AV1_NUM_REF_FRAMES,
+};
+
+#define V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) (1 << (ref))
+
+#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL	   BIT(0)
+#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM	   BIT(1)
+#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION BIT(2)
+/**
+ * struct v4l2_av1_global_motion - AV1 Global Motion parameters as described in
+ * section 6.8.17 "Global motion params semantics" of the AV1 specification.
+ *
+ * @flags: A bitfield containing the flags per reference frame. See
+ * V4L2_AV1_GLOBAL_MOTION_FLAG_{}
+ * @type: The type of global motion transform used.
+ * @params: this field has the same meaning as "gm_params" in the AV1
+ * specification.
+ * @invalid: bitfield indicating whether the global motion params are invalid
+ * for a given reference frame. See section 7.11.3.6. Setup shear process and
+ * the variable "warpValid". Use V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to
+ * create a suitable mask.
+ */
+
+struct v4l2_av1_global_motion {
+	__u8 flags[V4L2_AV1_TOTAL_REFS_PER_FRAME];
+	enum v4l2_av1_warp_model type[V4L2_AV1_TOTAL_REFS_PER_FRAME];
+	__u32 params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6];
+	__u8 invalid;
+};
+
+/**
+ * enum v4l2_av1_frame_restoration_type - AV1 Frame Restoration Type
+ * @V4L2_AV1_FRAME_RESTORE_NONE: no filtering is applied.
+ * @V4L2_AV1_FRAME_RESTORE_WIENER: Wiener filter process is invoked.
+ * @V4L2_AV1_FRAME_RESTORE_SGRPROJ: self guided filter process is invoked.
+ * @V4L2_AV1_FRAME_RESTORE_SWITCHABLE: restoration filter is switchable.
+ */
+enum v4l2_av1_frame_restoration_type {
+	V4L2_AV1_FRAME_RESTORE_NONE = 0,
+	V4L2_AV1_FRAME_RESTORE_WIENER = 1,
+	V4L2_AV1_FRAME_RESTORE_SGRPROJ = 2,
+	V4L2_AV1_FRAME_RESTORE_SWITCHABLE = 3,
+};
+
+/**
+ * struct v4l2_av1_loop_restoration - AV1 Loop Restoration as described in
+ * section 6.10.15 "Loop restoration params semantics" of the AV1 specification.
+ *
+ * @frame_restoration_type: specifies the type of restoration used for each
+ * plane. See enum v4l2_av1_frame_restoration_type.
+ * @lr_unit_shift: specifies if the luma restoration size should be halved.
+ * @lr_uv_shift: specifies if the chroma size should be half the luma size.
+ * @loop_restoration_size: specifies the size of loop restoration units in units
+ * of samples in the current plane.
+ */
+struct v4l2_av1_loop_restoration {
+	enum v4l2_av1_frame_restoration_type frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX];
+	__u8 lr_unit_shift;
+	__u8 lr_uv_shift;
+	__u32 loop_restoration_size[V4L2_AV1_NUM_PLANES_MAX];
+};
+
+/**
+ * struct v4l2_av1_cdef - AV1 CDEF params semantics as described in section
+ * 6.10.14. "CDEF params semantics" of the AV1 specification
+ *
+ * @damping_minus_3: controls the amount of damping in the deringing filter.
+ * @bits: specifies the number of bits needed to specify which CDEF filter to
+ * apply.
+ * @y_pri_strength: specifies the strength of the primary filter.
+ * @y_sec_strength: specifies the strength of the secondary filter.
+ * @uv_pri_strength: specifies the strength of the primary filter for the
+ * chroma planes.
+ * @uv_sec_strength: specifies the strength of the secondary filter for the
+ * chroma planes.
+ */
+struct v4l2_av1_cdef {
+	__u8 damping_minus_3;
+	__u8 bits;
+	__u8 y_pri_strength[V4L2_AV1_CDEF_MAX];
+	__u8 y_sec_strength[V4L2_AV1_CDEF_MAX];
+	__u8 uv_pri_strength[V4L2_AV1_CDEF_MAX];
+	__u8 uv_sec_strength[V4L2_AV1_CDEF_MAX];
+};
+
+#define V4L2_AV1_SEGMENTATION_FLAG_ENABLED	   BIT(0)
+#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP	   BIT(1)
+#define V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE BIT(2)
+#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA	   BIT(3)
+#define V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP	BIT(4)
+
+/**
+ * enum v4l2_av1_segment_feature - AV1 segment features as described in section
+ * 3 "Symbols and abbreviated terms" of the AV1 specification.
+ *
+ * @V4L2_AV1_SEG_LVL_ALT_Q: Index for quantizer segment feature.
+ * @V4L2_AV1_SEG_LVL_ALT_LF_Y_V: Index for vertical luma loop filter segment
+ * feature.
+ * @V4L2_AV1_SEG_LVL_REF_FRAME: Index for reference frame segment feature.
+ * @V4L2_AV1_SEG_LVL_SKIP: Index for skip segment feature.
+ * @V4L2_AV1_SEG_LVL_GLOBALMV: Index for global mv feature.
+ * @V4L2_AV1_SEG_LVL_MAX: Number of segment features.
+ */
+enum v4l2_av1_segment_feature {
+	V4L2_AV1_SEG_LVL_ALT_Q = 0,
+	V4L2_AV1_SEG_LVL_ALT_LF_Y_V = 1,
+	V4L2_AV1_SEG_LVL_REF_FRAME = 5,
+	V4L2_AV1_SEG_LVL_SKIP = 6,
+	V4L2_AV1_SEG_LVL_GLOBALMV = 7,
+	V4L2_AV1_SEG_LVL_MAX = 8
+};
+
+#define V4L2_AV1_SEGMENT_FEATURE_ENABLED(id)	(1 << (id))
+
+/**
+ * struct v4l2_av1_segmentation - AV1 Segmentation params as defined in section
+ * 6.8.13. "Segmentation params semantics" of the AV1 specification.
+ *
+ * @flags: see V4L2_AV1_SEGMENTATION_FLAG_{}.
+ * @feature_enabled: bitmask defining which features are enabled in each segment.
+ * Use V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
+ * @feature_data: data attached to each feature. Data entry is only valid if the
+ * feature is enabled.
+ * @last_active_seg_id: indicates the highest numbered segment id that has some
+ * enabled feature. This is used when decoding the segment id to only decode
+ * choices corresponding to used segments.
+ */
+struct v4l2_av1_segmentation {
+	__u8 flags;
+	__u8 feature_enabled[V4L2_AV1_MAX_SEGMENTS];
+	__s16 feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX];
+	__u8 last_active_seg_id;
+};
+
+#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED    BIT(0)
+#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE     BIT(1)
+#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT BIT(2)
+
+/**
+ * struct v4l2_av1_loop_filter - AV1 Loop filter params as defined in section
+ * 6.8.10. "Loop filter semantics" of the AV1 specification.
+ *
+ * @flags: see V4L2_AV1_LOOP_FILTER_FLAG_{}
+ * @level: an array containing loop filter strength values. Different loop
+ * filter strength values from the array are used depending on the image plane
+ * being filtered, and the edge direction (vertical or horizontal) being
+ * filtered.
+ * @sharpness: indicates the sharpness level. The loop_filter_level and
+ * loop_filter_sharpness together determine when a block edge is filtered, and
+ * by how much the filtering can change the sample values. The loop filter
+ * process is described in section 7.14 of the AV1 specification.
+ * @ref_deltas: contains the adjustment needed for the filter level based on the
+ * chosen reference frame. If this syntax element is not present, it maintains
+ * its previous value.
+ * @mode_deltas: contains the adjustment needed for the filter level based on
+ * the chosen mode. If this syntax element is not present, it maintains its
+ * previous value.
+ * @delta_lf_res: specifies the left shift which should be applied to decoded
+ * loop filter delta values.
+ * @delta_lf_multi: a value equal to 1 specifies that separate loop filter
+ * deltas are sent for horizontal luma edges, vertical luma edges,
+ * the U edges, and the V edges. A value of delta_lf_multi equal to 0 specifies
+ * that the same loop filter delta is used for all edges.
+ */
+struct v4l2_av1_loop_filter {
+	__u8 flags;
+	__u8 level[4];
+	__u8 sharpness;
+	__u8 ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME];
+	__u8 mode_deltas[2];
+	__u8 delta_lf_res;
+	__u8 delta_lf_multi;
+};
+
+#define V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA   BIT(0)
+#define V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX   BIT(1)
+#define V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT BIT(2)
+
+/**
+ * struct v4l2_av1_quantization - AV1 Quantization params as defined in section
+ * 6.8.11 "Quantization params semantics" of the AV1 specification.
+ *
+ * @flags: see V4L2_AV1_QUANTIZATION_FLAG_{}
+ * @base_q_idx: indicates the base frame qindex. This is used for Y AC
+ * coefficients and as the base value for the other quantizers.
+ * @delta_q_y_dc: indicates the Y DC quantizer relative to base_q_idx.
+ * @delta_q_u_dc: indicates the U DC quantizer relative to base_q_idx.
+ * @delta_q_u_ac: indicates the U AC quantizer relative to base_q_idx.
+ * @delta_q_v_dc: indicates the V DC quantizer relative to base_q_idx.
+ * @delta_q_v_ac: indicates the V AC quantizer relative to base_q_idx.
+ * @qm_y: specifies the level in the quantizer matrix that should be used for
+ * luma plane decoding.
+ * @qm_u: specifies the level in the quantizer matrix that should be used for
+ * chroma U plane decoding.
+ * @qm_v: specifies the level in the quantizer matrix that should be used for
+ * chroma V plane decoding.
+ * @delta_q_res: specifies the left shift which should be applied to decoded
+ * quantizer index delta values.
+ */
+struct v4l2_av1_quantization {
+	__u8 flags;
+	__u8 base_q_idx;
+	__s8 delta_q_y_dc;
+	__s8 delta_q_u_dc;
+	__s8 delta_q_u_ac;
+	__s8 delta_q_v_dc;
+	__s8 delta_q_v_ac;
+	__u8 qm_y;
+	__u8 qm_u;
+	__u8 qm_v;
+	__u8 delta_q_res;
+};
+
+#define V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING	BIT(0)
+
+/**
+ * struct v4l2_av1_tile_info - AV1 Tile info as defined in section 6.8.14. "Tile
+ * info semantics" of the AV1 specification.
+ *
+ * @flags: see V4L2_AV1_TILE_INFO_FLAG_{}
+ * @mi_col_starts: an array specifying the start column (in units of 4x4 luma
+ * samples) for each tile across the image.
+ * @mi_row_starts: an array specifying the start row (in units of 4x4 luma
+ * samples) for each tile down the image.
+ * @width_in_sbs_minus_1: specifies the width of a tile minus 1 in units of
+ * superblocks.
+ * @height_in_sbs_minus_1: specifies the height of a tile minus 1 in units of
+ * superblocks.
+ * @tile_size_bytes: specifies the number of bytes needed to code each tile
+ * size.
+ * @context_update_tile_id: specifies which tile to use for the CDF update.
+ * @tile_rows: specifies the number of tiles down the frame.
+ * @tile_cols: specifies the number of tiles across the frame.
+ */
+struct v4l2_av1_tile_info {
+	__u8 flags;
+	__u32 mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1];
+	__u32 mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1];
+	__u32 width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS];
+	__u32 height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS];
+	__u8 tile_size_bytes;
+	__u8 context_update_tile_id;
+	__u8 tile_cols;
+	__u8 tile_rows;
+};
+
+/**
+ * enum v4l2_av1_frame_type - AV1 Frame Type
+ *
+ * @V4L2_AV1_KEY_FRAME: Key frame
+ * @V4L2_AV1_INTER_FRAME: Inter frame
+ * @V4L2_AV1_INTRA_ONLY_FRAME: Intra-only frame
+ * @V4L2_AV1_SWITCH_FRAME: Switch frame
+ */
+enum v4l2_av1_frame_type {
+	V4L2_AV1_KEY_FRAME = 0,
+	V4L2_AV1_INTER_FRAME = 1,
+	V4L2_AV1_INTRA_ONLY_FRAME = 2,
+	V4L2_AV1_SWITCH_FRAME = 3
+};
+
+/**
+ * enum v4l2_av1_interpolation_filter - AV1 interpolation filter types
+ *
+ * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP: eight tap filter
+ * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH: eight tap smooth filter
+ * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP: eight tap sharp filter
+ * @V4L2_AV1_INTERPOLATION_FILTER_BILINEAR: bilinear filter
+ * @V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE: filter selection is signaled at
+ * the block level
+ *
+ * See section 6.8.9 "Interpolation filter semantics" of the AV1 specification
+ * for more details.
+ */
+enum v4l2_av1_interpolation_filter {
+	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP = 0,
+	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH = 1,
+	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP = 2,
+	V4L2_AV1_INTERPOLATION_FILTER_BILINEAR = 3,
+	V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE = 4,
+};
+
+/**
+ * enum v4l2_av1_tx_mode - AV1 Tx mode as described in section 6.8.21 "TX mode
+ * semantics" of the AV1 specification.
+ * @V4L2_AV1_TX_MODE_ONLY_4X4: the inverse transform will use only 4x4
+ * transforms
+ * @V4L2_AV1_TX_MODE_LARGEST: the inverse transform will use the largest
+ * transform size that fits inside the block
+ * @V4L2_AV1_TX_MODE_SELECT: the choice of transform size is specified
+ * explicitly for each block.
+ */
+enum v4l2_av1_tx_mode {
+	V4L2_AV1_TX_MODE_ONLY_4X4 = 0,
+	V4L2_AV1_TX_MODE_LARGEST = 1,
+	V4L2_AV1_TX_MODE_SELECT = 2
+};
+
+#define V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME			BIT(0)
+#define V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME		BIT(1)
+#define V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE		BIT(2)
+#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE		BIT(3)
+#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS	BIT(4)
+#define V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV		BIT(5)
+#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC		BIT(6)
+#define V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES			BIT(7)
+#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV	BIT(8)
+#define V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE	BIT(9)
+#define V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS		BIT(10)
+#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF BIT(11)
+#define V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING		BIT(12)
+#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION		BIT(13)
+#define V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT		BIT(14)
+#define V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET		BIT(15)
+#define V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT		BIT(16)
+#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE		BIT(17)
+#define V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT	BIT(18)
+#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING	BIT(19)
+
+#define V4L2_CID_STATELESS_AV1_FRAME_HEADER (V4L2_CID_CODEC_STATELESS_BASE + 406)
+/**
+ * struct v4l2_ctrl_av1_frame_header - Represents an AV1 Frame Header OBU.
+ *
+ * @tile_info: tile info
+ * @quantization: quantization params
+ * @segmentation: segmentation params
+ * @loop_filter: loop filter params
+ * @cdef: cdef params
+ * @loop_restoration: loop restoration params
+ * @global_motion: global motion params
+ * @film_grain: film grain params
+ * @flags: see V4L2_AV1_FRAME_HEADER_FLAG_{}
+ * @frame_type: specifies the AV1 frame type
+ * @order_hint: specifies OrderHintBits least significant bits of the expected
+ * output order for this frame.
+ * @superres_denom: the denominator for the upscaling ratio.
+ * @upscaled_width: the upscaled width.
+ * @interpolation_filter: specifies the filter selection used for performing
+ * inter prediction.
+ * @tx_mode: specifies how the transform size is determined.
+ * @frame_width_minus_1: add 1 to get the frame's width.
+ * @frame_height_minus_1: add 1 to get the frame's height.
+ * @render_width_minus_1: add 1 to get the render width of the frame in luma
+ * samples.
+ * @render_height_minus_1: add 1 to get the render height of the frame in luma
+ * samples.
+ * @current_frame_id: specifies the frame id number for the current frame. Frame
+ * id numbers are additional information that do not affect the decoding
+ * process, but provide decoders with a way of detecting missing reference
+ * frames so that appropriate action can be taken.
+ * @primary_ref_frame: specifies which reference frame contains the CDF values
+ * and other state that should be loaded at the start of the frame.
+ * @buffer_removal_time: specifies the frame removal time in units of DecCT clock
+ * ticks counted from the removal time of the last random access point for
+ * operating point opNum.
+ * @refresh_frame_flags: contains a bitmask that specifies which reference frame
+ * slots will be updated with the current frame after it is decoded.
+ * @ref_order_hint: specifies the expected output order hint for each reference
+ * frame.
+ * @last_frame_idx: specifies the reference frame to use for LAST_FRAME.
+ * @gold_frame_idx: specifies the reference frame to use for GOLDEN_FRAME.
+ * @reference_frame_ts: the V4L2 timestamp for each of the reference frames
+ * enumerated in &v4l2_av1_reference_frame. The timestamp refers to the
+ * timestamp field in struct v4l2_buffer. Use v4l2_timeval_to_ns() to convert
+ * the struct timeval to a __u64.
+ * @skip_mode_frame: specifies the frames to use for compound prediction when
+ * skip_mode is equal to 1.
+ */
+struct v4l2_ctrl_av1_frame_header {
+	struct v4l2_av1_tile_info tile_info;
+	struct v4l2_av1_quantization quantization;
+	struct v4l2_av1_segmentation segmentation;
+	struct v4l2_av1_loop_filter loop_filter;
+	struct v4l2_av1_cdef cdef;
+	struct v4l2_av1_loop_restoration loop_restoration;
+	struct v4l2_av1_global_motion global_motion;
+	struct v4l2_av1_film_grain film_grain;
+	__u32 flags;
+	enum v4l2_av1_frame_type frame_type;
+	__u32 order_hint;
+	__u8 superres_denom;
+	__u32 upscaled_width;
+	enum v4l2_av1_interpolation_filter interpolation_filter;
+	enum v4l2_av1_tx_mode tx_mode;
+	__u32 frame_width_minus_1;
+	__u32 frame_height_minus_1;
+	__u16 render_width_minus_1;
+	__u16 render_height_minus_1;
+
+	__u32 current_frame_id;
+	__u8 primary_ref_frame;
+	__u8 buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS];
+	__u8 refresh_frame_flags;
+	__u32 ref_order_hint[V4L2_AV1_NUM_REF_FRAMES];
+	__s8 last_frame_idx;
+	__s8 gold_frame_idx;
+	__u64 reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES];
+	__u8 skip_mode_frame[2];
+};
+
+/**
+ * enum v4l2_stateless_av1_profile - AV1 profiles
+ *
+ * @V4L2_STATELESS_AV1_PROFILE_MAIN: compliant decoders must be able to decode
+ * streams with seq_profile equal to 0.
+ * @V4L2_STATELESS_AV1_PROFILE_HIGH: compliant decoders must be able to decode
+ * streams with seq_profile equal to 1.
+ * @V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL: compliant decoders must be able to
+ * decode streams with seq_profile equal to 2.
+ *
+ * Conveys the highest profile a decoder can work with.
+ */
+#define V4L2_CID_STATELESS_AV1_PROFILE (V4L2_CID_CODEC_STATELESS_BASE + 407)
+enum v4l2_stateless_av1_profile {
+	V4L2_STATELESS_AV1_PROFILE_MAIN = 0,
+	V4L2_STATELESS_AV1_PROFILE_HIGH = 1,
+	V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL = 2,
+};
+
+/**
+ * enum v4l2_stateless_av1_level - AV1 levels
+ *
+ * @V4L2_STATELESS_AV1_LEVEL_2_0: Level 2.0.
+ * @V4L2_STATELESS_AV1_LEVEL_2_1: Level 2.1.
+ * @V4L2_STATELESS_AV1_LEVEL_2_2: Level 2.2.
+ * @V4L2_STATELESS_AV1_LEVEL_2_3: Level 2.3.
+ * @V4L2_STATELESS_AV1_LEVEL_3_0: Level 3.0.
+ * @V4L2_STATELESS_AV1_LEVEL_3_1: Level 3.1.
+ * @V4L2_STATELESS_AV1_LEVEL_3_2: Level 3.2.
+ * @V4L2_STATELESS_AV1_LEVEL_3_3: Level 3.3.
+ * @V4L2_STATELESS_AV1_LEVEL_4_0: Level 4.0.
+ * @V4L2_STATELESS_AV1_LEVEL_4_1: Level 4.1.
+ * @V4L2_STATELESS_AV1_LEVEL_4_2: Level 4.2.
+ * @V4L2_STATELESS_AV1_LEVEL_4_3: Level 4.3.
+ * @V4L2_STATELESS_AV1_LEVEL_5_0: Level 5.0.
+ * @V4L2_STATELESS_AV1_LEVEL_5_1: Level 5.1.
+ * @V4L2_STATELESS_AV1_LEVEL_5_2: Level 5.2.
+ * @V4L2_STATELESS_AV1_LEVEL_5_3: Level 5.3.
+ * @V4L2_STATELESS_AV1_LEVEL_6_0: Level 6.0.
+ * @V4L2_STATELESS_AV1_LEVEL_6_1: Level 6.1.
+ * @V4L2_STATELESS_AV1_LEVEL_6_2: Level 6.2.
+ * @V4L2_STATELESS_AV1_LEVEL_6_3: Level 6.3.
+ * @V4L2_STATELESS_AV1_LEVEL_7_0: Level 7.0.
+ * @V4L2_STATELESS_AV1_LEVEL_7_1: Level 7.1.
+ * @V4L2_STATELESS_AV1_LEVEL_7_2: Level 7.2.
+ * @V4L2_STATELESS_AV1_LEVEL_7_3: Level 7.3.
+ *
+ * Conveys the highest level a decoder can work with.
+ */
+#define V4L2_CID_STATELESS_AV1_LEVEL (V4L2_CID_CODEC_STATELESS_BASE + 408)
+enum v4l2_stateless_av1_level {
+	V4L2_STATELESS_AV1_LEVEL_2_0 = 0,
+	V4L2_STATELESS_AV1_LEVEL_2_1 = 1,
+	V4L2_STATELESS_AV1_LEVEL_2_2 = 2,
+	V4L2_STATELESS_AV1_LEVEL_2_3 = 3,
+
+	V4L2_STATELESS_AV1_LEVEL_3_0 = 4,
+	V4L2_STATELESS_AV1_LEVEL_3_1 = 5,
+	V4L2_STATELESS_AV1_LEVEL_3_2 = 6,
+	V4L2_STATELESS_AV1_LEVEL_3_3 = 7,
+
+	V4L2_STATELESS_AV1_LEVEL_4_0 = 8,
+	V4L2_STATELESS_AV1_LEVEL_4_1 = 9,
+	V4L2_STATELESS_AV1_LEVEL_4_2 = 10,
+	V4L2_STATELESS_AV1_LEVEL_4_3 = 11,
+
+	V4L2_STATELESS_AV1_LEVEL_5_0 = 12,
+	V4L2_STATELESS_AV1_LEVEL_5_1 = 13,
+	V4L2_STATELESS_AV1_LEVEL_5_2 = 14,
+	V4L2_STATELESS_AV1_LEVEL_5_3 = 15,
+
+	V4L2_STATELESS_AV1_LEVEL_6_0 = 16,
+	V4L2_STATELESS_AV1_LEVEL_6_1 = 17,
+	V4L2_STATELESS_AV1_LEVEL_6_2 = 18,
+	V4L2_STATELESS_AV1_LEVEL_6_3 = 19,
+
+	V4L2_STATELESS_AV1_LEVEL_7_0 = 20,
+	V4L2_STATELESS_AV1_LEVEL_7_1 = 21,
+	V4L2_STATELESS_AV1_LEVEL_7_2 = 22,
+	V4L2_STATELESS_AV1_LEVEL_7_3 = 23
+};
+
+/**
+ * enum v4l2_stateless_av1_operating_mode - AV1 operating mode
+ *
+ * @V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING: General decoding
+ * (input is a sequence of OBUs, output is decoded frames)
+ * @V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING: Large scale
+ * tile decoding (input is a tile list OBU plus additional side information,
+ * output is a decoded frame)
+ *
+ * Conveys the decoding mode the decoder is operating with. The two AV1 decoding
+ * modes are specified in section 7 "Decoding process" of the AV1 specification.
+ */
+#define V4L2_CID_STATELESS_AV1_OPERATING_MODE (V4L2_CID_CODEC_STATELESS_BASE + 409)
+enum v4l2_stateless_av1_operating_mode {
+	V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING = 0,
+	V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING = 1,
+};
+
 #define V4L2_CID_COLORIMETRY_CLASS_BASE	(V4L2_CTRL_CLASS_COLORIMETRY | 0x900)
 #define V4L2_CID_COLORIMETRY_CLASS	(V4L2_CTRL_CLASS_COLORIMETRY | 1)
 
diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
index 7222fc855d6b..d20f8505f980 100644
--- a/include/uapi/linux/videodev2.h
+++ b/include/uapi/linux/videodev2.h
@@ -701,6 +701,7 @@ struct v4l2_pix_format {
 #define V4L2_PIX_FMT_FWHT     v4l2_fourcc('F', 'W', 'H', 'T') /* Fast Walsh Hadamard Transform (vicodec) */
 #define V4L2_PIX_FMT_FWHT_STATELESS     v4l2_fourcc('S', 'F', 'W', 'H') /* Stateless FWHT (vicodec) */
 #define V4L2_PIX_FMT_H264_SLICE v4l2_fourcc('S', '2', '6', '4') /* H264 parsed slices */
+#define V4L2_PIX_FMT_AV1_FRAME	v4l2_fourcc('A', 'V', '1', 'F') /* AV1 parsed frame */
 
 /*  Vendor-specific formats   */
 #define V4L2_PIX_FMT_CPIA1    v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
@@ -1750,6 +1751,13 @@ struct v4l2_ext_control {
 		struct v4l2_ctrl_mpeg2_sequence __user *p_mpeg2_sequence;
 		struct v4l2_ctrl_mpeg2_picture __user *p_mpeg2_picture;
 		struct v4l2_ctrl_mpeg2_quantisation __user *p_mpeg2_quantisation;
+
+		struct v4l2_ctrl_av1_sequence __user *p_av1_sequence;
+		struct v4l2_ctrl_av1_tile_group __user *p_av1_tile_group;
+		struct v4l2_ctrl_av1_tile_group_entry __user *p_av1_tile_group_entry;
+		struct v4l2_ctrl_av1_tile_list __user *p_av1_tile_list;
+		struct v4l2_ctrl_av1_tile_list_entry __user *p_av1_tile_list_entry;
+		struct v4l2_ctrl_av1_frame_header __user *p_av1_frame_header;
 		void __user *ptr;
 	};
 } __attribute__ ((packed));
@@ -1814,6 +1822,13 @@ enum v4l2_ctrl_type {
 	V4L2_CTRL_TYPE_MPEG2_QUANTISATION   = 0x0250,
 	V4L2_CTRL_TYPE_MPEG2_SEQUENCE       = 0x0251,
 	V4L2_CTRL_TYPE_MPEG2_PICTURE        = 0x0252,
+
+	V4L2_CTRL_TYPE_AV1_SEQUENCE	    = 0x280,
+	V4L2_CTRL_TYPE_AV1_TILE_GROUP	    = 0x281,
+	V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY = 0x282,
+	V4L2_CTRL_TYPE_AV1_TILE_LIST	    = 0x283,
+	V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY  = 0x284,
+	V4L2_CTRL_TYPE_AV1_FRAME_HEADER	    = 0x285,
 };
 
 /*  Used in the VIDIOC_QUERYCTRL ioctl for querying controls */
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH 2/2] media: vivpu: add virtual VPU driver
  2021-08-10 22:05 [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it daniel.almeida
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
@ 2021-08-10 22:05 ` daniel.almeida
  2021-09-02 16:05   ` Hans Verkuil
  2022-06-06 21:26   ` [RFC PATCH v2] media: visl: add virtual stateless driver daniel.almeida
  2021-09-02 15:43 ` [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it Hans Verkuil
  2 siblings, 2 replies; 14+ messages in thread
From: daniel.almeida @ 2021-08-10 22:05 UTC (permalink / raw)
  To: stevecho, shawnku, tzungbi, mcasas, nhebert, abodenha, randy.wu,
	yunfei.dong, gustavo.padovan, andrzej.pietrasiewicz,
	enric.balletbo, ezequiel, nicolas.dufresne, tomeu.vizoso,
	nick.milner, xiaoyong.lu, mchehab, hverkuil-cisco
  Cc: Daniel Almeida, linux-media, linux-kernel, kernel

From: Daniel Almeida <daniel.almeida@collabora.com>

Add a virtual VPU driver to aid userspace in testing stateless uAPI
implementations for which no hardware is currently available.

A userspace implementation can use vivpu to run a decoding loop even
when no hardware is available or when the kernel uAPI for the codec
has not been upstreamed yet. This can reveal bugs at an early stage.

Also makes it possible to work on the kernel uAPI for a codec and
a corresponding userspace implementation at the same time.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
---
 drivers/media/test-drivers/Kconfig            |   1 +
 drivers/media/test-drivers/Makefile           |   1 +
 drivers/media/test-drivers/vivpu/Kconfig      |  16 +
 drivers/media/test-drivers/vivpu/Makefile     |   4 +
 drivers/media/test-drivers/vivpu/vivpu-core.c | 418 ++++++++++++
 drivers/media/test-drivers/vivpu/vivpu-dec.c  | 491 ++++++++++++++
 drivers/media/test-drivers/vivpu/vivpu-dec.h  |  61 ++
 .../media/test-drivers/vivpu/vivpu-video.c    | 599 ++++++++++++++++++
 .../media/test-drivers/vivpu/vivpu-video.h    |  46 ++
 drivers/media/test-drivers/vivpu/vivpu.h      | 119 ++++
 10 files changed, 1756 insertions(+)
 create mode 100644 drivers/media/test-drivers/vivpu/Kconfig
 create mode 100644 drivers/media/test-drivers/vivpu/Makefile
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-core.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.h
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.c
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.h
 create mode 100644 drivers/media/test-drivers/vivpu/vivpu.h

diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
index e27d6602545d..b426cef7fc88 100644
--- a/drivers/media/test-drivers/Kconfig
+++ b/drivers/media/test-drivers/Kconfig
@@ -22,6 +22,7 @@ config VIDEO_VIM2M
 	  framework.
 
 source "drivers/media/test-drivers/vicodec/Kconfig"
+source "drivers/media/test-drivers/vivpu/Kconfig"
 
 endif #V4L_TEST_DRIVERS
 
diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
index 9f0e4ebb2efe..a4fadccc4b95 100644
--- a/drivers/media/test-drivers/Makefile
+++ b/drivers/media/test-drivers/Makefile
@@ -7,4 +7,5 @@ obj-$(CONFIG_VIDEO_VIMC)		+= vimc/
 obj-$(CONFIG_VIDEO_VIVID)		+= vivid/
 obj-$(CONFIG_VIDEO_VIM2M)		+= vim2m.o
 obj-$(CONFIG_VIDEO_VICODEC)		+= vicodec/
+obj-$(CONFIG_VIDEO_VIVPU)		+= vivpu/
 obj-$(CONFIG_DVB_VIDTV)			+= vidtv/
diff --git a/drivers/media/test-drivers/vivpu/Kconfig b/drivers/media/test-drivers/vivpu/Kconfig
new file mode 100644
index 000000000000..1e6267418d19
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/Kconfig
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0-only
+config VIDEO_VIVPU
+	tristate "Virtual VPU Driver (vivpu)"
+	depends on VIDEO_DEV && VIDEO_V4L2
+	select VIDEOBUF2_VMALLOC
+	select V4L2_MEM2MEM_DEV
+	select MEDIA_CONTROLLER
+	select MEDIA_CONTROLLER_REQUEST_API
+	help
+	  A virtual stateless VPU example device for uAPI development purposes.
+
+	  A userspace implementation can use vivpu to run a decoding loop even
+	  when no hardware is available or when the kernel uAPI for the codec
+	  has not been upstreamed yet. This can reveal bugs at an early stage.
+
+	  When in doubt, say N.
diff --git a/drivers/media/test-drivers/vivpu/Makefile b/drivers/media/test-drivers/vivpu/Makefile
new file mode 100644
index 000000000000..d20a1dbbd9e5
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+vivpu-y := vivpu-core.o vivpu-video.o vivpu-dec.o
+
+obj-$(CONFIG_VIDEO_VIVPU) += vivpu.o
diff --git a/drivers/media/test-drivers/vivpu/vivpu-core.c b/drivers/media/test-drivers/vivpu/vivpu-core.c
new file mode 100644
index 000000000000..1eb1ce33bab1
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu-core.c
@@ -0,0 +1,418 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/font.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "vivpu.h"
+#include "vivpu-dec.h"
+#include "vivpu-video.h"
+
+unsigned int vivpu_debug;
+module_param(vivpu_debug, uint, 0644);
+MODULE_PARM_DESC(vivpu_debug, " activates debug info");
+
+const unsigned int vivpu_src_default_w = 640;
+const unsigned int vivpu_src_default_h = 480;
+const unsigned int vivpu_src_default_depth = 8;
+
+unsigned int vivpu_transtime;
+module_param(vivpu_transtime, uint, 0644);
+MODULE_PARM_DESC(vivpu_transtime, " simulated process time.");
+
+struct v4l2_ctrl *vivpu_find_control(struct vivpu_ctx *ctx, u32 id)
+{
+	unsigned int i;
+
+	for (i = 0; ctx->ctrls[i]; i++)
+		if (ctx->ctrls[i]->id == id)
+			return ctx->ctrls[i];
+
+	return NULL;
+}
+
+void *vivpu_find_control_data(struct vivpu_ctx *ctx, u32 id)
+{
+	struct v4l2_ctrl *ctrl;
+
+	ctrl = vivpu_find_control(ctx, id);
+	if (ctrl)
+		return ctrl->p_cur.p;
+
+	return NULL;
+}
+
+u32 vivpu_control_num_elems(struct vivpu_ctx *ctx, u32 id)
+{
+	struct v4l2_ctrl *ctrl;
+
+	ctrl = vivpu_find_control(ctx, id);
+	if (ctrl)
+		return ctrl->elems;
+
+	return 0;
+}
+
+static void vivpu_device_release(struct video_device *vdev)
+{
+	struct vivpu_dev *dev = container_of(vdev, struct vivpu_dev, vfd);
+
+	v4l2_device_unregister(&dev->v4l2_dev);
+	v4l2_m2m_release(dev->m2m_dev);
+	media_device_cleanup(&dev->mdev);
+	kfree(dev);
+}
+
+static const struct vivpu_control vivpu_controls[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_FRAME_HEADER,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_SEQUENCE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_GROUP,
+		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY,
+		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_LIST,
+		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY,
+		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_PROFILE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_LEVEL,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_AV1_OPERATING_MODE,
+	},
+};
+
+#define VIVPU_CONTROLS_COUNT	ARRAY_SIZE(vivpu_controls)
+
+static int vivpu_init_ctrls(struct vivpu_ctx *ctx)
+{
+	struct vivpu_dev *dev = ctx->dev;
+	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
+	struct v4l2_ctrl *ctrl;
+	unsigned int ctrl_size;
+	unsigned int i;
+
+	v4l2_ctrl_handler_init(hdl, VIVPU_CONTROLS_COUNT);
+	if (hdl->error) {
+		v4l2_err(&dev->v4l2_dev,
+			 "Failed to initialize control handler\n");
+		return hdl->error;
+	}
+
+	ctrl_size = sizeof(ctrl) * (VIVPU_CONTROLS_COUNT + 1);
+
+	ctx->ctrls = kzalloc(ctrl_size, GFP_KERNEL);
+	if (!ctx->ctrls)
+		return -ENOMEM;
+
+	for (i = 0; i < VIVPU_CONTROLS_COUNT; i++) {
+		ctrl = v4l2_ctrl_new_custom(hdl, &vivpu_controls[i].cfg,
+					    NULL);
+		if (hdl->error) {
+			v4l2_err(&dev->v4l2_dev,
+				 "Failed to create new custom control, errno: %d\n",
+				 hdl->error);
+
+			return hdl->error;
+		}
+
+		ctx->ctrls[i] = ctrl;
+	}
+
+	ctx->fh.ctrl_handler = hdl;
+	v4l2_ctrl_handler_setup(hdl);
+
+	return 0;
+}
+
+static void vivpu_free_ctrls(struct vivpu_ctx *ctx)
+{
+	kfree(ctx->ctrls);
+	v4l2_ctrl_handler_free(&ctx->hdl);
+}
+
+static int vivpu_open(struct file *file)
+{
+	struct vivpu_dev *dev = video_drvdata(file);
+	struct vivpu_ctx *ctx = NULL;
+	int rc = 0;
+
+	if (mutex_lock_interruptible(&dev->dev_mutex))
+		return -ERESTARTSYS;
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx) {
+		rc = -ENOMEM;
+		goto unlock;
+	}
+
+	v4l2_fh_init(&ctx->fh, video_devdata(file));
+	file->private_data = &ctx->fh;
+	ctx->dev = dev;
+
+	rc = vivpu_init_ctrls(ctx);
+	if (rc)
+		goto free_ctx;
+
+	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &vivpu_queue_init);
+
+	mutex_init(&ctx->vb_mutex);
+
+	if (IS_ERR(ctx->fh.m2m_ctx)) {
+		rc = PTR_ERR(ctx->fh.m2m_ctx);
+		goto free_hdl;
+	}
+
+	vivpu_set_default_format(ctx);
+
+	v4l2_fh_add(&ctx->fh);
+
+	dprintk(dev, "Created instance: %p, m2m_ctx: %p\n",
+		ctx, ctx->fh.m2m_ctx);
+
+	mutex_unlock(&dev->dev_mutex);
+	return rc;
+
+free_hdl:
+	vivpu_free_ctrls(ctx);
+	v4l2_fh_exit(&ctx->fh);
+free_ctx:
+	kfree(ctx);
+unlock:
+	mutex_unlock(&dev->dev_mutex);
+	return rc;
+}
+
+static int vivpu_release(struct file *file)
+{
+	struct vivpu_dev *dev = video_drvdata(file);
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+
+	dprintk(dev, "Releasing instance %p\n", ctx);
+
+	v4l2_fh_del(&ctx->fh);
+	v4l2_fh_exit(&ctx->fh);
+	vivpu_free_ctrls(ctx);
+	mutex_lock(&dev->dev_mutex);
+	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+	mutex_unlock(&dev->dev_mutex);
+	kfree(ctx);
+
+	return 0;
+}
+
+static const struct v4l2_file_operations vivpu_fops = {
+	.owner		= THIS_MODULE,
+	.open		= vivpu_open,
+	.release	= vivpu_release,
+	.poll		= v4l2_m2m_fop_poll,
+	.unlocked_ioctl	= video_ioctl2,
+	.mmap		= v4l2_m2m_fop_mmap,
+};
+
+static const struct video_device vivpu_videodev = {
+	.name		= VIVPU_NAME,
+	.vfl_dir	= VFL_DIR_M2M,
+	.fops		= &vivpu_fops,
+	.ioctl_ops	= &vivpu_ioctl_ops,
+	.minor		= -1,
+	.release	= vivpu_device_release,
+	.device_caps	= V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING,
+};
+
+static const struct v4l2_m2m_ops vivpu_m2m_ops = {
+	.device_run	= vivpu_device_run,
+};
+
+static const struct media_device_ops vivpu_m2m_media_ops = {
+	.req_validate	= vivpu_request_validate,
+	.req_queue	= v4l2_m2m_request_queue,
+};
+
+static int vivpu_probe(struct platform_device *pdev)
+{
+	struct vivpu_dev *dev;
+	struct video_device *vfd;
+	int ret;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+
+	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+	if (ret)
+		goto error_vivpu_dev;
+
+	mutex_init(&dev->dev_mutex);
+
+	dev->vfd = vivpu_videodev;
+	vfd = &dev->vfd;
+	vfd->lock = &dev->dev_mutex;
+	vfd->v4l2_dev = &dev->v4l2_dev;
+
+	video_set_drvdata(vfd, dev);
+	platform_set_drvdata(pdev, dev);
+
+	dev->m2m_dev = v4l2_m2m_init(&vivpu_m2m_ops);
+	if (IS_ERR(dev->m2m_dev)) {
+		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+		ret = PTR_ERR(dev->m2m_dev);
+		dev->m2m_dev = NULL;
+		goto error_dev;
+	}
+
+	dev->mdev.dev = &pdev->dev;
+	strscpy(dev->mdev.model, "vivpu", sizeof(dev->mdev.model));
+	strscpy(dev->mdev.bus_info, "platform:vivpu",
+		sizeof(dev->mdev.bus_info));
+	media_device_init(&dev->mdev);
+	dev->mdev.ops = &vivpu_m2m_media_ops;
+	dev->v4l2_dev.mdev = &dev->mdev;
+
+	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+		goto error_m2m;
+	}
+
+	v4l2_info(&dev->v4l2_dev,
+		  "Device registered as /dev/video%d\n", vfd->num);
+
+	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
+						 MEDIA_ENT_F_PROC_VIDEO_DECODER);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+		goto error_v4l2;
+	}
+
+	ret = media_device_register(&dev->mdev);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
+		goto error_m2m_mc;
+	}
+
+	return 0;
+
+error_m2m_mc:
+	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+error_v4l2:
+	video_unregister_device(&dev->vfd);
+	/* vivpu_device_release called by video_unregister_device to release various objects */
+	return ret;
+error_m2m:
+	v4l2_m2m_release(dev->m2m_dev);
+error_dev:
+	v4l2_device_unregister(&dev->v4l2_dev);
+error_vivpu_dev:
+	kfree(dev);
+
+	return ret;
+}
+
+static int vivpu_remove(struct platform_device *pdev)
+{
+	struct vivpu_dev *dev = platform_get_drvdata(pdev);
+
+	v4l2_info(&dev->v4l2_dev, "Removing " VIVPU_NAME "\n");
+
+#ifdef CONFIG_MEDIA_CONTROLLER
+	if (media_devnode_is_registered(dev->mdev.devnode)) {
+		media_device_unregister(&dev->mdev);
+		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+	}
+#endif
+	video_unregister_device(&dev->vfd);
+
+	return 0;
+}
+
+static struct platform_driver vivpu_pdrv = {
+	.probe		= vivpu_probe,
+	.remove		= vivpu_remove,
+	.driver		= {
+		.name	= VIVPU_NAME,
+	},
+};
+
+static void vivpu_dev_release(struct device *dev) {}
+
+static struct platform_device vivpu_pdev = {
+	.name		= VIVPU_NAME,
+	.dev.release	= vivpu_dev_release,
+};
+
+static void __exit vivpu_exit(void)
+{
+	platform_driver_unregister(&vivpu_pdrv);
+	platform_device_unregister(&vivpu_pdev);
+}
+
+static int __init vivpu_init(void)
+{
+	int ret;
+
+	ret = platform_device_register(&vivpu_pdev);
+	if (ret)
+		return ret;
+
+	ret = platform_driver_register(&vivpu_pdrv);
+	if (ret)
+		platform_device_unregister(&vivpu_pdev);
+
+	return ret;
+}
+
+MODULE_DESCRIPTION("Virtual VPU device");
+MODULE_AUTHOR("Daniel Almeida <daniel.almeida@collabora.com>");
+MODULE_LICENSE("GPL v2");
+
+module_init(vivpu_init);
+module_exit(vivpu_exit);
diff --git a/drivers/media/test-drivers/vivpu/vivpu-dec.c b/drivers/media/test-drivers/vivpu/vivpu-dec.c
new file mode 100644
index 000000000000..f928768aff77
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu-dec.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#include "vivpu.h"
+#include "vivpu-dec.h"
+
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/tpg/v4l2-tpg.h>
+
+static void
+vivpu_av1_check_reference_frames(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	u32 i;
+	int idx;
+	const struct v4l2_ctrl_av1_frame_header *f;
+	struct vb2_queue *capture_queue;
+
+	f = run->av1.frame_header;
+	capture_queue = &ctx->fh.m2m_ctx->cap_q_ctx.q;
+
+	/*
+	 * For every reference frame timestamp, make sure we can actually find
+	 * the buffer in the CAPTURE queue.
+	 */
+	for (i = 0; i < V4L2_AV1_NUM_REF_FRAMES; i++) {
+		idx = vb2_find_timestamp(capture_queue, f->reference_frame_ts[i], 0);
+		if (idx < 0)
+			v4l2_err(&ctx->dev->v4l2_dev,
+				 "no capture buffer found for reference_frame_ts[%d] %llu\n",
+				 i, f->reference_frame_ts[i]);
+		else
+			dprintk(ctx->dev,
+				"found capture buffer %d for reference_frame_ts[%d] %llu\n",
+				idx, i, f->reference_frame_ts[i]);
+	}
+}
+
+static void vivpu_dump_av1_seq(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_sequence *seq = run->av1.sequence;
+
+	dprintk(ctx->dev, "AV1 Sequence\n");
+	dprintk(ctx->dev, "flags %d\n", seq->flags);
+	dprintk(ctx->dev, "profile %d\n", seq->seq_profile);
+	dprintk(ctx->dev, "order_hint_bits %d\n", seq->order_hint_bits);
+	dprintk(ctx->dev, "bit_depth %d\n", seq->bit_depth);
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_tile_group(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_tile_group *tg;
+	u32 n;
+	u32 i;
+
+	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP);
+	for (i = 0; i < n; i++) {
+		tg = &run->av1.tile_group[i];
+		dprintk(ctx->dev, "AV1 Tile Group\n");
+		dprintk(ctx->dev, "flags %d\n", tg->flags);
+		dprintk(ctx->dev, "tg_start %d\n", tg->tg_start);
+		dprintk(ctx->dev, "tg_end %d\n", tg->tg_end);
+		dprintk(ctx->dev, "\n");
+	}
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_tile_group_entry(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_tile_group_entry *tge;
+	u32 n;
+	u32 i;
+
+	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY);
+	for (i = 0; i < n; i++) {
+		tge = &run->av1.tg_entries[i];
+		dprintk(ctx->dev, "AV1 Tile Group Entry\n");
+		dprintk(ctx->dev, "tile_offset %d\n", tge->tile_offset);
+		dprintk(ctx->dev, "tile_size %d\n", tge->tile_size);
+		dprintk(ctx->dev, "tile_row %d\n", tge->tile_row);
+		dprintk(ctx->dev, "tile_col %d\n", tge->tile_col);
+
+		dprintk(ctx->dev, "\n");
+	}
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_tile_list(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_tile_list *tl;
+	u32 n;
+	u32 i;
+
+	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST);
+	for (i = 0; i < n; i++) {
+		tl = &run->av1.tile_list[i];
+		dprintk(ctx->dev, "AV1 Tile List\n");
+		dprintk(ctx->dev, "output_frame_width_in_tiles_minus_1 %d\n",
+			tl->output_frame_width_in_tiles_minus_1);
+		dprintk(ctx->dev, "output_frame_height_in_tiles_minus_1 %d\n",
+			tl->output_frame_height_in_tiles_minus_1);
+		dprintk(ctx->dev, "tile_count_minus_1 %d\n",
+			tl->tile_count_minus_1);
+		dprintk(ctx->dev, "\n");
+	}
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_tile_list_entry(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_tile_list_entry *tle;
+	u32 n;
+	u32 i;
+
+	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY);
+	for (i = 0; i < n; i++) {
+		tle = &run->av1.tl_entries[i];
+		dprintk(ctx->dev, "AV1 Tile List Entry\n");
+		dprintk(ctx->dev, "anchor_frame_idx %d\n", tle->anchor_frame_idx);
+		dprintk(ctx->dev, "anchor_tile_row %d\n", tle->anchor_tile_row);
+		dprintk(ctx->dev, "anchor_tile_col %d\n", tle->anchor_tile_col);
+		dprintk(ctx->dev, "tile_data_size_minus_1 %d\n",
+			tle->tile_data_size_minus_1);
+		dprintk(ctx->dev, "\n");
+	}
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_quantization(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_quantization *q = &run->av1.frame_header->quantization;
+
+	dprintk(ctx->dev, "AV1 Quantization\n");
+	dprintk(ctx->dev, "flags %d\n", q->flags);
+	dprintk(ctx->dev, "base_q_idx %d\n", q->base_q_idx);
+	dprintk(ctx->dev, "delta_q_y_dc %d\n", q->delta_q_y_dc);
+	dprintk(ctx->dev, "delta_q_u_dc %d\n", q->delta_q_u_dc);
+	dprintk(ctx->dev, "delta_q_u_ac %d\n", q->delta_q_u_ac);
+	dprintk(ctx->dev, "delta_q_v_dc %d\n", q->delta_q_v_dc);
+	dprintk(ctx->dev, "delta_q_v_ac %d\n", q->delta_q_v_ac);
+	dprintk(ctx->dev, "qm_y %d\n", q->qm_y);
+	dprintk(ctx->dev, "qm_u %d\n", q->qm_u);
+	dprintk(ctx->dev, "qm_v %d\n", q->qm_v);
+	dprintk(ctx->dev, "delta_q_res %d\n", q->delta_q_res);
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_segmentation(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_segmentation *s = &run->av1.frame_header->segmentation;
+	u32 i;
+	u32 j;
+
+	dprintk(ctx->dev, "AV1 Segmentation\n");
+	dprintk(ctx->dev, "flags %d\n", s->flags);
+
+	for (i = 0; i < ARRAY_SIZE(s->feature_enabled); i++)
+		dprintk(ctx->dev,
+			"feature_enabled[%d] %d\n",
+			i, s->feature_enabled[i]);
+
+	for (i = 0; i < V4L2_AV1_MAX_SEGMENTS; i++)
+		for (j = 0; j < V4L2_AV1_SEG_LVL_MAX; j++)
+			dprintk(ctx->dev,
+				"feature_data[%d][%d] %d\n",
+				i, j, s->feature_data[i][j]);
+
+	dprintk(ctx->dev, "last_active_seg_id %d\n", s->last_active_seg_id);
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_loop_filter(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_loop_filter *lf = &run->av1.frame_header->loop_filter;
+	u32 i;
+
+	dprintk(ctx->dev, "AV1 Loop Filter\n");
+	dprintk(ctx->dev, "flags %d\n", lf->flags);
+
+	for (i = 0; i < ARRAY_SIZE(lf->level); i++)
+		dprintk(ctx->dev, "level[%d] %d\n", i, lf->level[i]);
+
+	dprintk(ctx->dev, "sharpness %d\n", lf->sharpness);
+
+	for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++)
+		dprintk(ctx->dev, "ref_deltas[%d] %d\n", i, lf->ref_deltas[i]);
+
+	for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++)
+		dprintk(ctx->dev, "mode_deltas[%d] %d\n", i, lf->mode_deltas[i]);
+
+	dprintk(ctx->dev, "delta_lf_res %d\n", lf->delta_lf_res);
+	dprintk(ctx->dev, "delta_lf_multi %d\n", lf->delta_lf_multi);
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_loop_restoration(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_loop_restoration *lr;
+	u32 i;
+
+	lr = &run->av1.frame_header->loop_restoration;
+	dprintk(ctx->dev, "AV1 Loop Restoration\n");
+
+	for (i = 0; i < ARRAY_SIZE(lr->frame_restoration_type); i++)
+		dprintk(ctx->dev, "frame_restoration_type[%d] %d\n", i,
+			lr->frame_restoration_type[i]);
+
+	dprintk(ctx->dev, "lr_unit_shift %d\n", lr->lr_unit_shift);
+	dprintk(ctx->dev, "lr_uv_shift %d\n", lr->lr_uv_shift);
+
+	for (i = 0; i < ARRAY_SIZE(lr->loop_restoration_size); i++)
+		dprintk(ctx->dev, "loop_restoration_size[%d] %d\n",
+			i, lr->loop_restoration_size[i]);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_cdef(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_cdef *cdef = &run->av1.frame_header->cdef;
+	u32 i;
+
+	dprintk(ctx->dev, "AV1 CDEF\n");
+	dprintk(ctx->dev, "damping_minus_3 %d\n", cdef->damping_minus_3);
+	dprintk(ctx->dev, "bits %d\n", cdef->bits);
+
+	for (i = 0; i < ARRAY_SIZE(cdef->y_pri_strength); i++)
+		dprintk(ctx->dev,
+			"y_pri_strength[%d] %d\n", i, cdef->y_pri_strength[i]);
+	for (i = 0; i < ARRAY_SIZE(cdef->y_sec_strength); i++)
+		dprintk(ctx->dev,
+			"y_sec_strength[%d] %d\n", i, cdef->y_sec_strength[i]);
+	for (i = 0; i < ARRAY_SIZE(cdef->uv_pri_strength); i++)
+		dprintk(ctx->dev,
+			"uv_pri_strength[%d] %d\n", i, cdef->uv_pri_strength[i]);
+	for (i = 0; i < ARRAY_SIZE(cdef->uv_sec_strength); i++)
+		dprintk(ctx->dev,
+			"uv_sec_strength[%d] %d\n", i, cdef->uv_sec_strength[i]);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_global_motion(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_global_motion *gm;
+	u32 i;
+	u32 j;
+
+	gm = &run->av1.frame_header->global_motion;
+
+	dprintk(ctx->dev, "AV1 Global Motion\n");
+
+	for (i = 0; i < ARRAY_SIZE(gm->flags); i++)
+		dprintk(ctx->dev, "flags[%d] %d\n", i, gm->flags[i]);
+	for (i = 0; i < ARRAY_SIZE(gm->type); i++)
+		dprintk(ctx->dev, "type[%d] %d\n", i, gm->type[i]);
+
+	for (i = 0; i < V4L2_AV1_TOTAL_REFS_PER_FRAME; i++)
+		for (j = 0; j < 6; j++)
+			dprintk(ctx->dev, "params[%d][%d] %d\n",
+				i, j, gm->params[i][j]);
+
+	dprintk(ctx->dev, "invalid %d\n", gm->invalid);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_film_grain(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_film_grain *fg;
+	u32 i;
+
+	fg = &run->av1.frame_header->film_grain;
+
+	dprintk(ctx->dev, "AV1 Film Grain\n");
+	dprintk(ctx->dev, "flags %d\n", fg->flags);
+	dprintk(ctx->dev, "grain_seed %d\n", fg->grain_seed);
+	dprintk(ctx->dev, "film_grain_params_ref_idx %d\n",
+		fg->film_grain_params_ref_idx);
+	dprintk(ctx->dev, "num_y_points %d\n", fg->num_y_points);
+
+	for (i = 0; i < ARRAY_SIZE(fg->point_y_value); i++)
+		dprintk(ctx->dev, "point_y_value[%d] %d\n",
+			i, fg->point_y_value[i]);
+
+	for (i = 0; i < ARRAY_SIZE(fg->point_y_scaling); i++)
+		dprintk(ctx->dev, "point_y_scaling[%d] %d\n",
+			i, fg->point_y_scaling[i]);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void
+vivpu_dump_av1_tile_info(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_av1_tile_info *ti;
+	u32 i;
+
+	ti = &run->av1.frame_header->tile_info;
+
+	dprintk(ctx->dev, "AV1 Tile Info\n");
+
+	dprintk(ctx->dev, "flags %d\n", ti->flags);
+
+	for (i = 0; i < ARRAY_SIZE(ti->mi_col_starts); i++)
+		dprintk(ctx->dev, "mi_col_starts[%d] %d\n",
+			i, ti->mi_col_starts[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ti->mi_row_starts); i++)
+		dprintk(ctx->dev, "mi_row_starts[%d] %d\n",
+			i, ti->mi_row_starts[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ti->width_in_sbs_minus_1); i++)
+		dprintk(ctx->dev, "width_in_sbs_minus_1[%d] %d\n",
+			i, ti->width_in_sbs_minus_1[i]);
+
+	for (i = 0; i < ARRAY_SIZE(ti->height_in_sbs_minus_1); i++)
+		dprintk(ctx->dev, "height_in_sbs_minus_1[%d] %d\n",
+			i, ti->height_in_sbs_minus_1[i]);
+
+	dprintk(ctx->dev, "tile_size_bytes %d\n", ti->tile_size_bytes);
+	dprintk(ctx->dev, "context_update_tile_id %d\n", ti->context_update_tile_id);
+	dprintk(ctx->dev, "tile_cols %d\n", ti->tile_cols);
+	dprintk(ctx->dev, "tile_rows %d\n", ti->tile_rows);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void vivpu_dump_av1_frame(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	const struct v4l2_ctrl_av1_frame_header *f = run->av1.frame_header;
+	u32 i;
+
+	dprintk(ctx->dev, "AV1 Frame Header\n");
+	dprintk(ctx->dev, "flags %d\n", f->flags);
+	dprintk(ctx->dev, "frame_type %d\n", f->frame_type);
+	dprintk(ctx->dev, "order_hint %d\n", f->order_hint);
+	dprintk(ctx->dev, "superres_denom %d\n", f->superres_denom);
+	dprintk(ctx->dev, "upscaled_width %d\n", f->upscaled_width);
+	dprintk(ctx->dev, "interpolation_filter %d\n", f->interpolation_filter);
+	dprintk(ctx->dev, "tx_mode %d\n", f->tx_mode);
+	dprintk(ctx->dev, "frame_width_minus_1 %d\n", f->frame_width_minus_1);
+	dprintk(ctx->dev, "frame_height_minus_1 %d\n", f->frame_height_minus_1);
+	dprintk(ctx->dev, "render_width_minus_1 %d\n", f->render_width_minus_1);
+	dprintk(ctx->dev, "render_height_minus_1 %d\n", f->render_height_minus_1);
+	dprintk(ctx->dev, "current_frame_id %d\n", f->current_frame_id);
+	dprintk(ctx->dev, "primary_ref_frame %d\n", f->primary_ref_frame);
+
+	for (i = 0; i < V4L2_AV1_MAX_OPERATING_POINTS; i++) {
+		dprintk(ctx->dev, "buffer_removal_time[%d] %d\n",
+			i, f->buffer_removal_time[i]);
+	}
+
+	dprintk(ctx->dev, "refresh_frame_flags %d\n", f->refresh_frame_flags);
+	dprintk(ctx->dev, "last_frame_idx %d\n", f->last_frame_idx);
+	dprintk(ctx->dev, "gold_frame_idx %d\n", f->gold_frame_idx);
+
+	for (i = 0; i < ARRAY_SIZE(f->reference_frame_ts); i++)
+		dprintk(ctx->dev, "reference_frame_ts[%d] %llu\n", i,
+			f->reference_frame_ts[i]);
+
+	vivpu_dump_av1_tile_info(ctx, run);
+	vivpu_dump_av1_quantization(ctx, run);
+	vivpu_dump_av1_segmentation(ctx, run);
+	vivpu_dump_av1_loop_filter(ctx, run);
+	vivpu_dump_av1_cdef(ctx, run);
+	vivpu_dump_av1_loop_restoration(ctx, run);
+	vivpu_dump_av1_global_motion(ctx, run);
+	vivpu_dump_av1_film_grain(ctx, run);
+
+	for (i = 0; i < ARRAY_SIZE(f->skip_mode_frame); i++)
+		dprintk(ctx->dev, "skip_mode_frame[%d] %d\n",
+			i, f->skip_mode_frame[i]);
+
+	dprintk(ctx->dev, "\n");
+}
+
+static void vivpu_dump_av1_ctrls(struct vivpu_ctx *ctx, struct vivpu_run *run)
+{
+	vivpu_dump_av1_seq(ctx, run);
+	vivpu_dump_av1_frame(ctx, run);
+	vivpu_dump_av1_tile_group(ctx, run);
+	vivpu_dump_av1_tile_group_entry(ctx, run);
+	vivpu_dump_av1_tile_list(ctx, run);
+	vivpu_dump_av1_tile_list_entry(ctx, run);
+}
+
+void vivpu_device_run(void *priv)
+{
+	struct vivpu_ctx *ctx = priv;
+	struct vivpu_run run = {};
+	struct media_request *src_req;
+
+	run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+	run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+	/* Apply request(s) controls if needed. */
+	src_req = run.src->vb2_buf.req_obj.req;
+
+	if (src_req)
+		v4l2_ctrl_request_setup(src_req, &ctx->hdl);
+
+	switch (ctx->current_codec) {
+	case VIVPU_CODEC_AV1:
+		run.av1.sequence =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_SEQUENCE);
+		run.av1.frame_header =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_FRAME_HEADER);
+		run.av1.tile_group =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP);
+		run.av1.tg_entries =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY);
+		run.av1.tile_list =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST);
+		run.av1.tl_entries =
+			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY);
+
+		vivpu_dump_av1_ctrls(ctx, &run);
+		vivpu_av1_check_reference_frames(ctx, &run);
+		break;
+	default:
+		break;
+	}
+
+	v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
+	run.dst->sequence = ctx->q_data[V4L2_M2M_DST].sequence++;
+	run.src->sequence = ctx->q_data[V4L2_M2M_SRC].sequence++;
+	run.dst->field = ctx->dst_fmt.fmt.pix.field;
+
+	dprintk(ctx->dev, "Got src buffer %p, sequence %d, timestamp %llu\n",
+		run.src, run.src->sequence, run.src->vb2_buf.timestamp);
+
+	dprintk(ctx->dev, "Got dst buffer %p, sequence %d, timestamp %llu\n",
+		run.dst, run.dst->sequence, run.dst->vb2_buf.timestamp);
+
+	/* Complete request(s) controls if needed. */
+	if (src_req)
+		v4l2_ctrl_request_complete(src_req, &ctx->hdl);
+
+	if (vivpu_transtime)
+		usleep_range(vivpu_transtime, vivpu_transtime * 2);
+
+	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev,
+					 ctx->fh.m2m_ctx, VB2_BUF_STATE_DONE);
+}
diff --git a/drivers/media/test-drivers/vivpu/vivpu-dec.h b/drivers/media/test-drivers/vivpu/vivpu-dec.h
new file mode 100644
index 000000000000..4a3ca5952e43
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu-dec.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#ifndef _VIVPU_DEC_H_
+#define _VIVPU_DEC_H_
+
+#include "vivpu.h"
+
+struct vivpu_av1_run {
+	const struct v4l2_ctrl_av1_sequence *sequence;
+	const struct v4l2_ctrl_av1_frame_header *frame_header;
+	const struct v4l2_ctrl_av1_tile_group *tile_group;
+	const struct v4l2_ctrl_av1_tile_group_entry *tg_entries;
+	const struct v4l2_ctrl_av1_tile_list *tile_list;
+	const struct v4l2_ctrl_av1_tile_list_entry *tl_entries;
+};
+
+struct vivpu_run {
+	struct vb2_v4l2_buffer	*src;
+	struct vb2_v4l2_buffer	*dst;
+
+	union {
+		struct vivpu_av1_run	av1;
+	};
+};
+
+int vivpu_dec_start(struct vivpu_ctx *ctx);
+int vivpu_dec_stop(struct vivpu_ctx *ctx);
+int vivpu_job_ready(void *priv);
+void vivpu_device_run(void *priv);
+
+#endif /* _VIVPU_DEC_H_ */
diff --git a/drivers/media/test-drivers/vivpu/vivpu-video.c b/drivers/media/test-drivers/vivpu/vivpu-video.c
new file mode 100644
index 000000000000..a3018b0a4da3
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu-video.c
@@ -0,0 +1,599 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-vmalloc.h>
+
+#include "vivpu-video.h"
+#include "vivpu.h"
+
+static const u32 vivpu_decoded_formats[] = {
+	V4L2_PIX_FMT_NV12,
+};
+
+static const struct vivpu_coded_format_desc coded_formats[] = {
+	{
+		.pixelformat = V4L2_PIX_FMT_AV1_FRAME,
+		/* simulated frame sizes for AV1 */
+		.frmsize = {
+			.min_width = 48,
+			.max_width = 4096,
+			.step_width = 16,
+			.min_height = 48,
+			.max_height = 2304,
+			.step_height = 16,
+		},
+		.num_decoded_fmts = ARRAY_SIZE(vivpu_decoded_formats),
+		/* simulate that the AV1 coded format decodes to raw NV12 */
+		.decoded_fmts = vivpu_decoded_formats,
+	},
+};
+
+static const struct vivpu_coded_format_desc *
+vivpu_find_coded_fmt_desc(u32 fourcc)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(coded_formats); i++) {
+		if (coded_formats[i].pixelformat == fourcc)
+			return &coded_formats[i];
+	}
+
+	return NULL;
+}
+
+void vivpu_set_default_format(struct vivpu_ctx *ctx)
+{
+	struct v4l2_format src_fmt = {
+		.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE,
+		.fmt.pix_mp = {
+			.width = vivpu_src_default_w,
+			.height = vivpu_src_default_h,
+			.num_planes = 1,
+			/* Zero bytes per line for encoded source. */
+			.plane_fmt[0].bytesperline = 0,
+			/* Choose some minimum size since this can't be 0 */
+			.plane_fmt[0].sizeimage = SZ_1K,
+		},
+	};
+
+	ctx->coded_format_desc = &coded_formats[0];
+	ctx->src_fmt = src_fmt;
+
+	ctx->dst_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+	v4l2_fill_pixfmt_mp(&ctx->dst_fmt.fmt.pix_mp,
+			    V4L2_PIX_FMT_NV12,
+			    vivpu_src_default_w, vivpu_src_default_h);
+
+	/* Always apply the frmsize constraint of the coded end. */
+	v4l2_apply_frmsize_constraints(&ctx->dst_fmt.fmt.pix_mp.width,
+				       &ctx->dst_fmt.fmt.pix_mp.height,
+				       &ctx->coded_format_desc->frmsize);
+}
+
+static const char *q_name(enum v4l2_buf_type type)
+{
+	switch (type) {
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+		return "Output";
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+		return "Capture";
+	default:
+		return "Invalid";
+	}
+}
+
+static struct vivpu_q_data *get_q_data(struct vivpu_ctx *ctx,
+				       enum v4l2_buf_type type)
+{
+	switch (type) {
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+		return &ctx->q_data[V4L2_M2M_SRC];
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+		return &ctx->q_data[V4L2_M2M_DST];
+	default:
+		break;
+	}
+	return NULL;
+}
+
+static int vivpu_querycap(struct file *file, void *priv,
+			  struct v4l2_capability *cap)
+{
+	strscpy(cap->driver, VIVPU_NAME, sizeof(cap->driver));
+	strscpy(cap->card, VIVPU_NAME, sizeof(cap->card));
+	snprintf(cap->bus_info, sizeof(cap->bus_info),
+		 "platform:%s", VIVPU_NAME);
+
+	return 0;
+}
+
+static int vivpu_enum_fmt_vid_cap(struct file *file, void *priv,
+				  struct v4l2_fmtdesc *f)
+{
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+
+	if (f->index >= ctx->coded_format_desc->num_decoded_fmts)
+		return -EINVAL;
+
+	f->pixelformat = ctx->coded_format_desc->decoded_fmts[f->index];
+	return 0;
+}
+
+static int vivpu_enum_fmt_vid_out(struct file *file, void *priv,
+				  struct v4l2_fmtdesc *f)
+{
+	if (f->index >= ARRAY_SIZE(coded_formats))
+		return -EINVAL;
+
+	f->pixelformat = coded_formats[f->index].pixelformat;
+	return 0;
+}
+
+static int vivpu_g_fmt_vid_cap(struct file *file, void *priv,
+			       struct v4l2_format *f)
+{
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+
+	*f = ctx->dst_fmt;
+
+	return 0;
+}
+
+static int vivpu_g_fmt_vid_out(struct file *file, void *priv,
+			       struct v4l2_format *f)
+{
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+
+	*f = ctx->src_fmt;
+	return 0;
+}
+
+static int vivpu_try_fmt_vid_cap(struct file *file, void *priv,
+				 struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+	const struct vivpu_coded_format_desc *coded_desc;
+	unsigned int i;
+
+	coded_desc = ctx->coded_format_desc;
+	if (WARN_ON(!coded_desc))
+		return -EINVAL;
+
+	for (i = 0; i < coded_desc->num_decoded_fmts; i++) {
+		if (coded_desc->decoded_fmts[i] == pix_mp->pixelformat)
+			break;
+	}
+
+	if (i == coded_desc->num_decoded_fmts)
+		return -EINVAL;
+
+	v4l2_apply_frmsize_constraints(&pix_mp->width,
+				       &pix_mp->height,
+				       &coded_desc->frmsize);
+
+	pix_mp->field = V4L2_FIELD_NONE;
+
+	return 0;
+}
+
+static int vivpu_try_fmt_vid_out(struct file *file, void *priv,
+				 struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+	const struct vivpu_coded_format_desc *coded_desc;
+
+	coded_desc = vivpu_find_coded_fmt_desc(pix_mp->pixelformat);
+	if (!coded_desc)
+		return -EINVAL;
+
+	/* apply the (simulated) hw constraints */
+	v4l2_apply_frmsize_constraints(&pix_mp->width,
+				       &pix_mp->height,
+				       &coded_desc->frmsize);
+
+	pix_mp->field = V4L2_FIELD_NONE;
+	/* All coded formats are considered single planar for now. */
+	pix_mp->num_planes = 1;
+
+	return 0;
+}
+
+static int vivpu_s_fmt_vid_out(struct file *file, void *priv,
+			       struct v4l2_format *f)
+{
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+	struct v4l2_format *cap_fmt = &ctx->dst_fmt;
+	const struct vivpu_coded_format_desc *desc;
+	struct vb2_queue *peer_vq;
+	int ret;
+
+	peer_vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+	if (vb2_is_busy(peer_vq))
+		return -EBUSY;
+
+	dprintk(ctx->dev,
+		"Current OUTPUT queue format: width %d, height %d, pixfmt %d\n",
+		ctx->src_fmt.fmt.pix_mp.width, ctx->src_fmt.fmt.pix_mp.height,
+		ctx->src_fmt.fmt.pix_mp.pixelformat);
+
+	dprintk(ctx->dev,
+		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
+		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
+		ctx->dst_fmt.fmt.pix_mp.pixelformat);
+
+	ret = vivpu_try_fmt_vid_out(file, priv, f);
+	if (ret) {
+		dprintk(ctx->dev,
+			"Unsupported format for the OUTPUT queue: %d\n",
+			f->fmt.pix_mp.pixelformat);
+		return ret;
+	}
+
+	desc = vivpu_find_coded_fmt_desc(f->fmt.pix_mp.pixelformat);
+	if (!desc) {
+		dprintk(ctx->dev,
+			"Unsupported format for the OUTPUT queue: %d\n",
+			f->fmt.pix_mp.pixelformat);
+		return -EINVAL;
+	}
+
+	ctx->coded_format_desc = desc;
+
+	ctx->src_fmt = *f;
+
+	v4l2_fill_pixfmt_mp(&ctx->dst_fmt.fmt.pix_mp,
+			    ctx->coded_format_desc->decoded_fmts[0],
+			    ctx->src_fmt.fmt.pix_mp.width,
+			    ctx->src_fmt.fmt.pix_mp.height);
+	cap_fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace;
+	cap_fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func;
+	cap_fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
+	cap_fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
+
+	dprintk(ctx->dev,
+		"Current OUTPUT queue format: width %d, height %d, pixfmt %d\n",
+		ctx->src_fmt.fmt.pix_mp.width, ctx->src_fmt.fmt.pix_mp.height,
+		ctx->src_fmt.fmt.pix_mp.pixelformat);
+
+	dprintk(ctx->dev,
+		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
+		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
+		ctx->dst_fmt.fmt.pix_mp.pixelformat);
+
+	return 0;
+}
+
+static int vivpu_s_fmt_vid_cap(struct file *file, void *priv,
+			       struct v4l2_format *f)
+{
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+	int ret;
+
+	dprintk(ctx->dev,
+		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
+		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
+		ctx->dst_fmt.fmt.pix_mp.pixelformat);
+
+	ret = vivpu_try_fmt_vid_cap(file, priv, f);
+	if (ret)
+		return ret;
+
+	ctx->dst_fmt = *f;
+
+	dprintk(ctx->dev,
+		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
+		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
+		ctx->dst_fmt.fmt.pix_mp.pixelformat);
+
+	return 0;
+}
+
+static int vivpu_enum_framesizes(struct file *file, void *priv,
+				 struct v4l2_frmsizeenum *fsize)
+{
+	const struct vivpu_coded_format_desc *fmt;
+	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
+
+	if (fsize->index != 0)
+		return -EINVAL;
+
+	fmt = vivpu_find_coded_fmt_desc(fsize->pixel_format);
+	if (!fmt) {
+		dprintk(ctx->dev,
+			"Unsupported format for the OUTPUT queue: %d\n",
+			fsize->pixel_format);
+
+		return -EINVAL;
+	}
+
+	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+	fsize->stepwise = fmt->frmsize;
+	return 0;
+}
+
+const struct v4l2_ioctl_ops vivpu_ioctl_ops = {
+	.vidioc_querycap		= vivpu_querycap,
+	.vidioc_enum_framesizes		= vivpu_enum_framesizes,
+
+	.vidioc_enum_fmt_vid_cap	= vivpu_enum_fmt_vid_cap,
+	.vidioc_g_fmt_vid_cap		= vivpu_g_fmt_vid_cap,
+	.vidioc_try_fmt_vid_cap		= vivpu_try_fmt_vid_cap,
+	.vidioc_s_fmt_vid_cap		= vivpu_s_fmt_vid_cap,
+
+	.vidioc_enum_fmt_vid_out	= vivpu_enum_fmt_vid_out,
+	.vidioc_g_fmt_vid_out		= vivpu_g_fmt_vid_out,
+	.vidioc_try_fmt_vid_out		= vivpu_try_fmt_vid_out,
+	.vidioc_s_fmt_vid_out		= vivpu_s_fmt_vid_out,
+
+	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
+	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
+	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
+	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
+	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
+	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
+	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
+
+	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
+	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
+
+	.vidioc_try_decoder_cmd		= v4l2_m2m_ioctl_stateless_try_decoder_cmd,
+	.vidioc_decoder_cmd		= v4l2_m2m_ioctl_stateless_decoder_cmd,
+
+	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
+	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
+};
+
+static int vivpu_queue_setup(struct vb2_queue *vq,
+			     unsigned int *nbuffers,
+			     unsigned int *nplanes,
+			     unsigned int sizes[],
+			     struct device *alloc_devs[])
+{
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
+	struct v4l2_pix_format *pix_fmt;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type))
+		pix_fmt = &ctx->src_fmt.fmt.pix;
+	else
+		pix_fmt = &ctx->dst_fmt.fmt.pix;
+
+	if (*nplanes) {
+		if (sizes[0] < pix_fmt->sizeimage) {
+			v4l2_err(&ctx->dev->v4l2_dev, "sizes[0] is %d, sizeimage is %d\n",
+				 sizes[0], pix_fmt->sizeimage);
+			return -EINVAL;
+		}
+	} else {
+		sizes[0] = pix_fmt->sizeimage;
+		*nplanes = 1;
+	}
+
+	dprintk(ctx->dev, "%s: get %d buffer(s) of size %d each.\n",
+		q_name(vq->type), *nbuffers, sizes[0]);
+
+	return 0;
+}
+
+static void vivpu_queue_cleanup(struct vb2_queue *vq, u32 state)
+{
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
+	struct vb2_v4l2_buffer *vbuf;
+
+	dprintk(ctx->dev, "Cleaning up queues\n");
+	for (;;) {
+		if (V4L2_TYPE_IS_OUTPUT(vq->type))
+			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+		else
+			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+		if (!vbuf)
+			break;
+
+		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
+					   &ctx->hdl);
+		dprintk(ctx->dev, "Marked request %p as complete\n",
+			vbuf->vb2_buf.req_obj.req);
+
+		v4l2_m2m_buf_done(vbuf, state);
+		dprintk(ctx->dev,
+			"Marked buffer %llu as done, state is %d\n",
+			vbuf->vb2_buf.timestamp,
+			state);
+	}
+}
+
+static int vivpu_buf_out_validate(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+
+	vbuf->field = V4L2_FIELD_NONE;
+	return 0;
+}
+
+static int vivpu_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
+	u32 plane_sz = vb2_plane_size(vb, 0);
+	struct v4l2_pix_format *pix_fmt;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type))
+		pix_fmt = &ctx->src_fmt.fmt.pix;
+	else
+		pix_fmt = &ctx->dst_fmt.fmt.pix;
+
+	if (plane_sz < pix_fmt->sizeimage) {
+		v4l2_err(&ctx->dev->v4l2_dev, "plane[0] size is %d, sizeimage is %d\n",
+			 plane_sz, pix_fmt->sizeimage);
+		return -EINVAL;
+	}
+
+	vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
+
+	return 0;
+}
+
+static int vivpu_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
+	struct vivpu_q_data *q_data = get_q_data(ctx, vq->type);
+	int ret = 0;
+
+	if (!q_data)
+		return -EINVAL;
+
+	q_data->sequence = 0;
+
+	switch (ctx->src_fmt.fmt.pix.pixelformat) {
+	case V4L2_PIX_FMT_AV1_FRAME:
+		dprintk(ctx->dev, "Pixfmt is AV1F\n");
+		ctx->current_codec = VIVPU_CODEC_AV1;
+		break;
+	default:
+		v4l2_err(&ctx->dev->v4l2_dev, "Unsupported src format %d\n",
+			 ctx->src_fmt.fmt.pix.pixelformat);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	return 0;
+
+err:
+	vivpu_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
+	return ret;
+}
+
+static void vivpu_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
+
+	dprintk(ctx->dev, "Stop streaming\n");
+	vivpu_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
+}
+
+static void vivpu_buf_queue(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+}
+
+static void vivpu_buf_request_complete(struct vb2_buffer *vb)
+{
+	struct vivpu_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
+}
+
+const struct vb2_ops vivpu_qops = {
+	.queue_setup          = vivpu_queue_setup,
+	.buf_out_validate     = vivpu_buf_out_validate,
+	.buf_prepare          = vivpu_buf_prepare,
+	.buf_queue            = vivpu_buf_queue,
+	.start_streaming      = vivpu_start_streaming,
+	.stop_streaming       = vivpu_stop_streaming,
+	.wait_prepare         = vb2_ops_wait_prepare,
+	.wait_finish          = vb2_ops_wait_finish,
+	.buf_request_complete = vivpu_buf_request_complete,
+};
+
+int vivpu_queue_init(void *priv, struct vb2_queue *src_vq,
+		     struct vb2_queue *dst_vq)
+{
+	struct vivpu_ctx *ctx = priv;
+	int ret;
+
+	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
+	src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+	src_vq->drv_priv = ctx;
+	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	src_vq->ops = &vivpu_qops;
+	src_vq->mem_ops = &vb2_vmalloc_memops;
+	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	src_vq->lock = &ctx->vb_mutex;
+	src_vq->supports_requests = true;
+
+	ret = vb2_queue_init(src_vq);
+	if (ret)
+		return ret;
+
+	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+	dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+	dst_vq->drv_priv = ctx;
+	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	dst_vq->ops = &vivpu_qops;
+	dst_vq->mem_ops = &vb2_vmalloc_memops;
+	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	dst_vq->lock = &ctx->vb_mutex;
+
+	return vb2_queue_init(dst_vq);
+}
+
+int vivpu_request_validate(struct media_request *req)
+{
+	struct media_request_object *obj;
+	struct vivpu_ctx *ctx = NULL;
+	unsigned int count;
+
+	list_for_each_entry(obj, &req->objects, list) {
+		struct vb2_buffer *vb;
+
+		if (vb2_request_object_is_buffer(obj)) {
+			vb = container_of(obj, struct vb2_buffer, req_obj);
+			ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+			break;
+		}
+	}
+
+	if (!ctx)
+		return -ENOENT;
+
+	count = vb2_request_buffer_cnt(req);
+	if (!count) {
+		v4l2_err(&ctx->dev->v4l2_dev,
+			 "No buffer was provided with the request\n");
+		return -ENOENT;
+	} else if (count > 1) {
+		v4l2_err(&ctx->dev->v4l2_dev,
+			 "More than one buffer was provided with the request\n");
+		return -EINVAL;
+	}
+
+	return vb2_request_validate(req);
+}
diff --git a/drivers/media/test-drivers/vivpu/vivpu-video.h b/drivers/media/test-drivers/vivpu/vivpu-video.h
new file mode 100644
index 000000000000..6cf8c1570887
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu-video.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#ifndef _VIVPU_VIDEO_H_
+#define _VIVPU_VIDEO_H_
+#include <media/v4l2-mem2mem.h>
+
+#include "vivpu.h"
+
+extern const struct v4l2_ioctl_ops vivpu_ioctl_ops;
+int vivpu_queue_init(void *priv, struct vb2_queue *src_vq,
+		     struct vb2_queue *dst_vq);
+
+void vivpu_set_default_format(struct vivpu_ctx *ctx);
+int vivpu_request_validate(struct media_request *req);
+
+#endif /* _VIVPU_VIDEO_H_ */
diff --git a/drivers/media/test-drivers/vivpu/vivpu.h b/drivers/media/test-drivers/vivpu/vivpu.h
new file mode 100644
index 000000000000..89b993c460c1
--- /dev/null
+++ b/drivers/media/test-drivers/vivpu/vivpu.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless VPU example device for uAPI development purposes.
+ *
+ * A userspace implementation can use vivpu to run a decoding loop even
+ * when no hardware is available or when the kernel uAPI for the codec
+ * has not been upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version
+ */
+
+#ifndef _VIVPU_H_
+#define _VIVPU_H_
+
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/tpg/v4l2-tpg.h>
+
+#define VIVPU_NAME		"vivpu"
+#define VIVPU_M2M_NQUEUES	2
+
+extern const unsigned int vivpu_src_default_w;
+extern const unsigned int vivpu_src_default_h;
+extern const unsigned int vivpu_src_default_depth;
+extern unsigned int vivpu_transtime;
+
+struct vivpu_coded_format_desc {
+	u32 pixelformat;
+	struct v4l2_frmsize_stepwise frmsize;
+	unsigned int num_decoded_fmts;
+	const u32 *decoded_fmts;
+};
+
+enum {
+	V4L2_M2M_SRC = 0,
+	V4L2_M2M_DST = 1,
+};
+
+extern unsigned int vivpu_debug;
+#define dprintk(dev, fmt, arg...) \
+	v4l2_dbg(1, vivpu_debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
+
+struct vivpu_q_data {
+	unsigned int		sequence;
+};
+
+struct vivpu_dev {
+	struct v4l2_device	v4l2_dev;
+	struct video_device	vfd;
+#ifdef CONFIG_MEDIA_CONTROLLER
+	struct media_device	mdev;
+#endif
+
+	struct mutex		dev_mutex;
+
+	struct v4l2_m2m_dev	*m2m_dev;
+};
+
+enum vivpu_codec {
+	VIVPU_CODEC_AV1,
+};
+
+struct vivpu_ctx {
+	struct v4l2_fh		fh;
+	struct vivpu_dev	*dev;
+	struct v4l2_ctrl_handler hdl;
+	struct v4l2_ctrl	**ctrls;
+
+	struct mutex		vb_mutex;
+
+	struct vivpu_q_data	q_data[VIVPU_M2M_NQUEUES];
+	enum   vivpu_codec	current_codec;
+
+	const struct vivpu_coded_format_desc *coded_format_desc;
+
+	struct v4l2_format	src_fmt;
+	struct v4l2_format	dst_fmt;
+};
+
+struct vivpu_control {
+	struct v4l2_ctrl_config cfg;
+};
+
+static inline struct vivpu_ctx *vivpu_file_to_ctx(struct file *file)
+{
+	return container_of(file->private_data, struct vivpu_ctx, fh);
+}
+
+static inline struct vivpu_ctx *vivpu_v4l2fh_to_ctx(struct v4l2_fh *v4l2_fh)
+{
+	return container_of(v4l2_fh, struct vivpu_ctx, fh);
+}
+
+void *vivpu_find_control_data(struct vivpu_ctx *ctx, u32 id);
+struct v4l2_ctrl *vivpu_find_control(struct vivpu_ctx *ctx, u32 id);
+u32 vivpu_control_num_elems(struct vivpu_ctx *ctx, u32 id);
+
+#endif /* _VIVPU_H_ */
-- 
2.32.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/2] media: Add AV1 uAPI
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
@ 2021-08-11  0:57   ` kernel test robot
  2021-09-02 15:10   ` Hans Verkuil
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: kernel test robot @ 2021-08-11  0:57 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 14605 bytes --]

Hi,

[FYI, it's a private test report for your RFC patch.]
[auto build test ERROR on linuxtv-media/master]
[also build test ERROR on v5.14-rc5]
[cannot apply to next-20210810]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/daniel-almeida-collabora-com/Add-the-stateless-AV1-uAPI-and-the-VIVPU-virtual-driver-to-showcase-it/20210811-060839
base:   git://linuxtv.org/media_tree.git master
config: i386-buildonly-randconfig-r006-20210810 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/4c180524ea26d4d368792055c8d9e3b438913fb0
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review daniel-almeida-collabora-com/Add-the-stateless-AV1-uAPI-and-the-VIVPU-virtual-driver-to-showcase-it/20210811-060839
        git checkout 4c180524ea26d4d368792055c8d9e3b438913fb0
        # save the attached .config to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/media/v4l2-core/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   drivers/media/v4l2-core/v4l2-ctrls-defs.c: In function 'v4l2_ctrl_fill':
>> drivers/media/v4l2-core/v4l2-ctrls-defs.c:1546:13: error: 'V4L2_CTRL_FLAG_DYNAMIC_ARRAY' undeclared (first use in this function); did you mean 'V4L2_CTRL_FLAG_READ_ONLY'?
    1546 |   *flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
         |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
         |             V4L2_CTRL_FLAG_READ_ONLY
   drivers/media/v4l2-core/v4l2-ctrls-defs.c:1546:13: note: each undeclared identifier is reported only once for each function it appears in


vim +1546 drivers/media/v4l2-core/v4l2-ctrls-defs.c

  1243	
  1244	void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
  1245			    s64 *min, s64 *max, u64 *step, s64 *def, u32 *flags)
  1246	{
  1247		*name = v4l2_ctrl_get_name(id);
  1248		*flags = 0;
  1249	
  1250		switch (id) {
  1251		case V4L2_CID_AUDIO_MUTE:
  1252		case V4L2_CID_AUDIO_LOUDNESS:
  1253		case V4L2_CID_AUTO_WHITE_BALANCE:
  1254		case V4L2_CID_AUTOGAIN:
  1255		case V4L2_CID_HFLIP:
  1256		case V4L2_CID_VFLIP:
  1257		case V4L2_CID_HUE_AUTO:
  1258		case V4L2_CID_CHROMA_AGC:
  1259		case V4L2_CID_COLOR_KILLER:
  1260		case V4L2_CID_AUTOBRIGHTNESS:
  1261		case V4L2_CID_MPEG_AUDIO_MUTE:
  1262		case V4L2_CID_MPEG_VIDEO_MUTE:
  1263		case V4L2_CID_MPEG_VIDEO_GOP_CLOSURE:
  1264		case V4L2_CID_MPEG_VIDEO_PULLDOWN:
  1265		case V4L2_CID_EXPOSURE_AUTO_PRIORITY:
  1266		case V4L2_CID_FOCUS_AUTO:
  1267		case V4L2_CID_PRIVACY:
  1268		case V4L2_CID_AUDIO_LIMITER_ENABLED:
  1269		case V4L2_CID_AUDIO_COMPRESSION_ENABLED:
  1270		case V4L2_CID_PILOT_TONE_ENABLED:
  1271		case V4L2_CID_ILLUMINATORS_1:
  1272		case V4L2_CID_ILLUMINATORS_2:
  1273		case V4L2_CID_FLASH_STROBE_STATUS:
  1274		case V4L2_CID_FLASH_CHARGE:
  1275		case V4L2_CID_FLASH_READY:
  1276		case V4L2_CID_MPEG_VIDEO_DECODER_MPEG4_DEBLOCK_FILTER:
  1277		case V4L2_CID_MPEG_VIDEO_DECODER_SLICE_INTERFACE:
  1278		case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY_ENABLE:
  1279		case V4L2_CID_MPEG_VIDEO_FRAME_RC_ENABLE:
  1280		case V4L2_CID_MPEG_VIDEO_MB_RC_ENABLE:
  1281		case V4L2_CID_MPEG_VIDEO_H264_8X8_TRANSFORM:
  1282		case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_ENABLE:
  1283		case V4L2_CID_MPEG_VIDEO_MPEG4_QPEL:
  1284		case V4L2_CID_MPEG_VIDEO_REPEAT_SEQ_HEADER:
  1285		case V4L2_CID_MPEG_VIDEO_AU_DELIMITER:
  1286		case V4L2_CID_WIDE_DYNAMIC_RANGE:
  1287		case V4L2_CID_IMAGE_STABILIZATION:
  1288		case V4L2_CID_RDS_RECEPTION:
  1289		case V4L2_CID_RF_TUNER_LNA_GAIN_AUTO:
  1290		case V4L2_CID_RF_TUNER_MIXER_GAIN_AUTO:
  1291		case V4L2_CID_RF_TUNER_IF_GAIN_AUTO:
  1292		case V4L2_CID_RF_TUNER_BANDWIDTH_AUTO:
  1293		case V4L2_CID_RF_TUNER_PLL_LOCK:
  1294		case V4L2_CID_RDS_TX_MONO_STEREO:
  1295		case V4L2_CID_RDS_TX_ARTIFICIAL_HEAD:
  1296		case V4L2_CID_RDS_TX_COMPRESSED:
  1297		case V4L2_CID_RDS_TX_DYNAMIC_PTY:
  1298		case V4L2_CID_RDS_TX_TRAFFIC_ANNOUNCEMENT:
  1299		case V4L2_CID_RDS_TX_TRAFFIC_PROGRAM:
  1300		case V4L2_CID_RDS_TX_MUSIC_SPEECH:
  1301		case V4L2_CID_RDS_TX_ALT_FREQS_ENABLE:
  1302		case V4L2_CID_RDS_RX_TRAFFIC_ANNOUNCEMENT:
  1303		case V4L2_CID_RDS_RX_TRAFFIC_PROGRAM:
  1304		case V4L2_CID_RDS_RX_MUSIC_SPEECH:
  1305			*type = V4L2_CTRL_TYPE_BOOLEAN;
  1306			*min = 0;
  1307			*max = *step = 1;
  1308			break;
  1309		case V4L2_CID_ROTATE:
  1310			*type = V4L2_CTRL_TYPE_INTEGER;
  1311			*flags |= V4L2_CTRL_FLAG_MODIFY_LAYOUT;
  1312			break;
  1313		case V4L2_CID_MPEG_VIDEO_MV_H_SEARCH_RANGE:
  1314		case V4L2_CID_MPEG_VIDEO_MV_V_SEARCH_RANGE:
  1315		case V4L2_CID_MPEG_VIDEO_DEC_DISPLAY_DELAY:
  1316		case V4L2_CID_MPEG_VIDEO_INTRA_REFRESH_PERIOD:
  1317			*type = V4L2_CTRL_TYPE_INTEGER;
  1318			break;
  1319		case V4L2_CID_MPEG_VIDEO_LTR_COUNT:
  1320			*type = V4L2_CTRL_TYPE_INTEGER;
  1321			break;
  1322		case V4L2_CID_MPEG_VIDEO_FRAME_LTR_INDEX:
  1323			*type = V4L2_CTRL_TYPE_INTEGER;
  1324			*flags |= V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
  1325			break;
  1326		case V4L2_CID_MPEG_VIDEO_USE_LTR_FRAMES:
  1327			*type = V4L2_CTRL_TYPE_BITMASK;
  1328			*flags |= V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
  1329			break;
  1330		case V4L2_CID_MPEG_VIDEO_FORCE_KEY_FRAME:
  1331		case V4L2_CID_PAN_RESET:
  1332		case V4L2_CID_TILT_RESET:
  1333		case V4L2_CID_FLASH_STROBE:
  1334		case V4L2_CID_FLASH_STROBE_STOP:
  1335		case V4L2_CID_AUTO_FOCUS_START:
  1336		case V4L2_CID_AUTO_FOCUS_STOP:
  1337		case V4L2_CID_DO_WHITE_BALANCE:
  1338			*type = V4L2_CTRL_TYPE_BUTTON;
  1339			*flags |= V4L2_CTRL_FLAG_WRITE_ONLY |
  1340				  V4L2_CTRL_FLAG_EXECUTE_ON_WRITE;
  1341			*min = *max = *step = *def = 0;
  1342			break;
  1343		case V4L2_CID_POWER_LINE_FREQUENCY:
  1344		case V4L2_CID_MPEG_AUDIO_SAMPLING_FREQ:
  1345		case V4L2_CID_MPEG_AUDIO_ENCODING:
  1346		case V4L2_CID_MPEG_AUDIO_L1_BITRATE:
  1347		case V4L2_CID_MPEG_AUDIO_L2_BITRATE:
  1348		case V4L2_CID_MPEG_AUDIO_L3_BITRATE:
  1349		case V4L2_CID_MPEG_AUDIO_AC3_BITRATE:
  1350		case V4L2_CID_MPEG_AUDIO_MODE:
  1351		case V4L2_CID_MPEG_AUDIO_MODE_EXTENSION:
  1352		case V4L2_CID_MPEG_AUDIO_EMPHASIS:
  1353		case V4L2_CID_MPEG_AUDIO_CRC:
  1354		case V4L2_CID_MPEG_AUDIO_DEC_PLAYBACK:
  1355		case V4L2_CID_MPEG_AUDIO_DEC_MULTILINGUAL_PLAYBACK:
  1356		case V4L2_CID_MPEG_VIDEO_ENCODING:
  1357		case V4L2_CID_MPEG_VIDEO_ASPECT:
  1358		case V4L2_CID_MPEG_VIDEO_BITRATE_MODE:
  1359		case V4L2_CID_MPEG_STREAM_TYPE:
  1360		case V4L2_CID_MPEG_STREAM_VBI_FMT:
  1361		case V4L2_CID_EXPOSURE_AUTO:
  1362		case V4L2_CID_AUTO_FOCUS_RANGE:
  1363		case V4L2_CID_COLORFX:
  1364		case V4L2_CID_AUTO_N_PRESET_WHITE_BALANCE:
  1365		case V4L2_CID_TUNE_PREEMPHASIS:
  1366		case V4L2_CID_FLASH_LED_MODE:
  1367		case V4L2_CID_FLASH_STROBE_SOURCE:
  1368		case V4L2_CID_MPEG_VIDEO_HEADER_MODE:
  1369		case V4L2_CID_MPEG_VIDEO_FRAME_SKIP_MODE:
  1370		case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE:
  1371		case V4L2_CID_MPEG_VIDEO_H264_ENTROPY_MODE:
  1372		case V4L2_CID_MPEG_VIDEO_H264_LEVEL:
  1373		case V4L2_CID_MPEG_VIDEO_H264_LOOP_FILTER_MODE:
  1374		case V4L2_CID_MPEG_VIDEO_H264_PROFILE:
  1375		case V4L2_CID_MPEG_VIDEO_H264_VUI_SAR_IDC:
  1376		case V4L2_CID_MPEG_VIDEO_H264_SEI_FP_ARRANGEMENT_TYPE:
  1377		case V4L2_CID_MPEG_VIDEO_H264_FMO_MAP_TYPE:
  1378		case V4L2_CID_MPEG_VIDEO_H264_HIERARCHICAL_CODING_TYPE:
  1379		case V4L2_CID_MPEG_VIDEO_MPEG2_LEVEL:
  1380		case V4L2_CID_MPEG_VIDEO_MPEG2_PROFILE:
  1381		case V4L2_CID_MPEG_VIDEO_MPEG4_LEVEL:
  1382		case V4L2_CID_MPEG_VIDEO_MPEG4_PROFILE:
  1383		case V4L2_CID_JPEG_CHROMA_SUBSAMPLING:
  1384		case V4L2_CID_ISO_SENSITIVITY_AUTO:
  1385		case V4L2_CID_EXPOSURE_METERING:
  1386		case V4L2_CID_SCENE_MODE:
  1387		case V4L2_CID_DV_TX_MODE:
  1388		case V4L2_CID_DV_TX_RGB_RANGE:
  1389		case V4L2_CID_DV_TX_IT_CONTENT_TYPE:
  1390		case V4L2_CID_DV_RX_RGB_RANGE:
  1391		case V4L2_CID_DV_RX_IT_CONTENT_TYPE:
  1392		case V4L2_CID_TEST_PATTERN:
  1393		case V4L2_CID_DEINTERLACING_MODE:
  1394		case V4L2_CID_TUNE_DEEMPHASIS:
  1395		case V4L2_CID_MPEG_VIDEO_VPX_GOLDEN_FRAME_SEL:
  1396		case V4L2_CID_MPEG_VIDEO_VP8_PROFILE:
  1397		case V4L2_CID_MPEG_VIDEO_VP9_PROFILE:
  1398		case V4L2_CID_MPEG_VIDEO_VP9_LEVEL:
  1399		case V4L2_CID_DETECT_MD_MODE:
  1400		case V4L2_CID_STATELESS_AV1_PROFILE:
  1401		case V4L2_CID_STATELESS_AV1_LEVEL:
  1402		case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
  1403		case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
  1404		case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
  1405		case V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE:
  1406		case V4L2_CID_MPEG_VIDEO_HEVC_REFRESH_TYPE:
  1407		case V4L2_CID_MPEG_VIDEO_HEVC_SIZE_OF_LENGTH_FIELD:
  1408		case V4L2_CID_MPEG_VIDEO_HEVC_TIER:
  1409		case V4L2_CID_MPEG_VIDEO_HEVC_LOOP_FILTER_MODE:
  1410		case V4L2_CID_MPEG_VIDEO_HEVC_DECODE_MODE:
  1411		case V4L2_CID_MPEG_VIDEO_HEVC_START_CODE:
  1412		case V4L2_CID_STATELESS_H264_DECODE_MODE:
  1413		case V4L2_CID_STATELESS_H264_START_CODE:
  1414		case V4L2_CID_CAMERA_ORIENTATION:
  1415			*type = V4L2_CTRL_TYPE_MENU;
  1416			break;
  1417		case V4L2_CID_LINK_FREQ:
  1418			*type = V4L2_CTRL_TYPE_INTEGER_MENU;
  1419			break;
  1420		case V4L2_CID_RDS_TX_PS_NAME:
  1421		case V4L2_CID_RDS_TX_RADIO_TEXT:
  1422		case V4L2_CID_RDS_RX_PS_NAME:
  1423		case V4L2_CID_RDS_RX_RADIO_TEXT:
  1424			*type = V4L2_CTRL_TYPE_STRING;
  1425			break;
  1426		case V4L2_CID_ISO_SENSITIVITY:
  1427		case V4L2_CID_AUTO_EXPOSURE_BIAS:
  1428		case V4L2_CID_MPEG_VIDEO_VPX_NUM_PARTITIONS:
  1429		case V4L2_CID_MPEG_VIDEO_VPX_NUM_REF_FRAMES:
  1430			*type = V4L2_CTRL_TYPE_INTEGER_MENU;
  1431			break;
  1432		case V4L2_CID_USER_CLASS:
  1433		case V4L2_CID_CAMERA_CLASS:
  1434		case V4L2_CID_CODEC_CLASS:
  1435		case V4L2_CID_FM_TX_CLASS:
  1436		case V4L2_CID_FLASH_CLASS:
  1437		case V4L2_CID_JPEG_CLASS:
  1438		case V4L2_CID_IMAGE_SOURCE_CLASS:
  1439		case V4L2_CID_IMAGE_PROC_CLASS:
  1440		case V4L2_CID_DV_CLASS:
  1441		case V4L2_CID_FM_RX_CLASS:
  1442		case V4L2_CID_RF_TUNER_CLASS:
  1443		case V4L2_CID_DETECT_CLASS:
  1444		case V4L2_CID_CODEC_STATELESS_CLASS:
  1445		case V4L2_CID_COLORIMETRY_CLASS:
  1446			*type = V4L2_CTRL_TYPE_CTRL_CLASS;
  1447			/* You can neither read nor write these */
  1448			*flags |= V4L2_CTRL_FLAG_READ_ONLY | V4L2_CTRL_FLAG_WRITE_ONLY;
  1449			*min = *max = *step = *def = 0;
  1450			break;
  1451		case V4L2_CID_BG_COLOR:
  1452			*type = V4L2_CTRL_TYPE_INTEGER;
  1453			*step = 1;
  1454			*min = 0;
  1455			/* Max is calculated as RGB888 that is 2^24 */
  1456			*max = 0xFFFFFF;
  1457			break;
  1458		case V4L2_CID_FLASH_FAULT:
  1459		case V4L2_CID_JPEG_ACTIVE_MARKER:
  1460		case V4L2_CID_3A_LOCK:
  1461		case V4L2_CID_AUTO_FOCUS_STATUS:
  1462		case V4L2_CID_DV_TX_HOTPLUG:
  1463		case V4L2_CID_DV_TX_RXSENSE:
  1464		case V4L2_CID_DV_TX_EDID_PRESENT:
  1465		case V4L2_CID_DV_RX_POWER_PRESENT:
  1466			*type = V4L2_CTRL_TYPE_BITMASK;
  1467			break;
  1468		case V4L2_CID_MIN_BUFFERS_FOR_CAPTURE:
  1469		case V4L2_CID_MIN_BUFFERS_FOR_OUTPUT:
  1470			*type = V4L2_CTRL_TYPE_INTEGER;
  1471			*flags |= V4L2_CTRL_FLAG_READ_ONLY;
  1472			break;
  1473		case V4L2_CID_MPEG_VIDEO_DEC_PTS:
  1474			*type = V4L2_CTRL_TYPE_INTEGER64;
  1475			*flags |= V4L2_CTRL_FLAG_VOLATILE | V4L2_CTRL_FLAG_READ_ONLY;
  1476			*min = *def = 0;
  1477			*max = 0x1ffffffffLL;
  1478			*step = 1;
  1479			break;
  1480		case V4L2_CID_MPEG_VIDEO_DEC_FRAME:
  1481			*type = V4L2_CTRL_TYPE_INTEGER64;
  1482			*flags |= V4L2_CTRL_FLAG_VOLATILE | V4L2_CTRL_FLAG_READ_ONLY;
  1483			*min = *def = 0;
  1484			*max = 0x7fffffffffffffffLL;
  1485			*step = 1;
  1486			break;
  1487		case V4L2_CID_MPEG_VIDEO_DEC_CONCEAL_COLOR:
  1488			*type = V4L2_CTRL_TYPE_INTEGER64;
  1489			*min = 0;
  1490			/* default for 8 bit black, luma is 16, chroma is 128 */
  1491			*def = 0x8000800010LL;
  1492			*max = 0xffffffffffffLL;
  1493			*step = 1;
  1494			break;
  1495		case V4L2_CID_PIXEL_RATE:
  1496			*type = V4L2_CTRL_TYPE_INTEGER64;
  1497			*flags |= V4L2_CTRL_FLAG_READ_ONLY;
  1498			break;
  1499		case V4L2_CID_DETECT_MD_REGION_GRID:
  1500			*type = V4L2_CTRL_TYPE_U8;
  1501			break;
  1502		case V4L2_CID_DETECT_MD_THRESHOLD_GRID:
  1503			*type = V4L2_CTRL_TYPE_U16;
  1504			break;
  1505		case V4L2_CID_RDS_TX_ALT_FREQS:
  1506			*type = V4L2_CTRL_TYPE_U32;
  1507			break;
  1508		case V4L2_CID_STATELESS_MPEG2_SEQUENCE:
  1509			*type = V4L2_CTRL_TYPE_MPEG2_SEQUENCE;
  1510			break;
  1511		case V4L2_CID_STATELESS_MPEG2_PICTURE:
  1512			*type = V4L2_CTRL_TYPE_MPEG2_PICTURE;
  1513			break;
  1514		case V4L2_CID_STATELESS_MPEG2_QUANTISATION:
  1515			*type = V4L2_CTRL_TYPE_MPEG2_QUANTISATION;
  1516			break;
  1517		case V4L2_CID_STATELESS_FWHT_PARAMS:
  1518			*type = V4L2_CTRL_TYPE_FWHT_PARAMS;
  1519			break;
  1520		case V4L2_CID_STATELESS_H264_SPS:
  1521			*type = V4L2_CTRL_TYPE_H264_SPS;
  1522			break;
  1523		case V4L2_CID_STATELESS_H264_PPS:
  1524			*type = V4L2_CTRL_TYPE_H264_PPS;
  1525			break;
  1526		case V4L2_CID_STATELESS_H264_SCALING_MATRIX:
  1527			*type = V4L2_CTRL_TYPE_H264_SCALING_MATRIX;
  1528			break;
  1529		case V4L2_CID_STATELESS_H264_SLICE_PARAMS:
  1530			*type = V4L2_CTRL_TYPE_H264_SLICE_PARAMS;
  1531			break;
  1532		case V4L2_CID_STATELESS_H264_DECODE_PARAMS:
  1533			*type = V4L2_CTRL_TYPE_H264_DECODE_PARAMS;
  1534			break;
  1535		case V4L2_CID_STATELESS_H264_PRED_WEIGHTS:
  1536			*type = V4L2_CTRL_TYPE_H264_PRED_WEIGHTS;
  1537			break;
  1538		case V4L2_CID_STATELESS_VP8_FRAME:
  1539			*type = V4L2_CTRL_TYPE_VP8_FRAME;
  1540			break;
  1541		case V4L2_CID_STATELESS_AV1_SEQUENCE:
  1542			*type = V4L2_CTRL_TYPE_AV1_SEQUENCE;
  1543			break;
  1544		case V4L2_CID_STATELESS_AV1_TILE_GROUP:
  1545			*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP;
> 1546			*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 37070 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/2] media: Add AV1 uAPI
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
  2021-08-11  0:57   ` kernel test robot
@ 2021-09-02 15:10   ` Hans Verkuil
  2022-01-28 15:45   ` Nicolas Dufresne
  2022-02-02 15:13   ` Nicolas Dufresne
  3 siblings, 0 replies; 14+ messages in thread
From: Hans Verkuil @ 2021-09-02 15:10 UTC (permalink / raw)
  To: daniel.almeida, stevecho, shawnku, tzungbi, mcasas, nhebert,
	abodenha, randy.wu, yunfei.dong, gustavo.padovan,
	andrzej.pietrasiewicz, enric.balletbo, ezequiel,
	nicolas.dufresne, tomeu.vizoso, nick.milner, xiaoyong.lu,
	mchehab
  Cc: linux-media, linux-kernel, kernel

Hi Daniel!

Two small comments below:

On 11/08/2021 00:05, daniel.almeida@collabora.com wrote:
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> This patch adds the  AOMedia Video 1 (AV1) kernel uAPI.
> 
> This design is based on currently available AV1 API implementations and
> aims to support the development of AV1 stateless video codecs
> on Linux.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
>  .../userspace-api/media/v4l/biblio.rst        |   10 +
>  .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
>  .../media/v4l/pixfmt-compressed.rst           |   21 +
>  .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
>  .../media/v4l/vidioc-queryctrl.rst            |   54 +
>  .../media/videodev2.h.rst.exceptions          |    9 +
>  drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
>  drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
>  drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
>  include/media/v4l2-ctrls.h                    |   12 +
>  include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
>  include/uapi/linux/videodev2.h                |   15 +
>  12 files changed, 2586 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/userspace-api/media/v4l/biblio.rst b/Documentation/userspace-api/media/v4l/biblio.rst
> index 7b8e6738ff9e..7061144d10bb 100644
> --- a/Documentation/userspace-api/media/v4l/biblio.rst
> +++ b/Documentation/userspace-api/media/v4l/biblio.rst
> @@ -417,3 +417,13 @@ VP8
>  :title:     RFC 6386: "VP8 Data Format and Decoding Guide"
>  
>  :author:    J. Bankoski et al.
> +
> +.. _av1:
> +
> +AV1
> +===
> +
> +
> +:title:     AV1 Bitstream & Decoding Process Specification
> +
> +:author:    Peter de Rivaz, Argon Design Ltd, Jack Haughton, Argon Design Ltd

<snip>

> diff --git a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> index 819a70a26e18..73ff5311b7ae 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> @@ -507,6 +507,60 @@ See also the examples in :ref:`control`.
>        - n/a
>        - A struct :c:type:`v4l2_ctrl_hevc_decode_params`, containing HEVC
>  	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_SEQUENCE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_sequence`, containing AV1 Sequence OBU
> +	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group`, containing AV1 Tile Group
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group`, containing AV1 Tile Group

I guess this should be:

Tile Group -> Tile Group Entry

> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list`, containing AV1 Tile List
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list_entry`, containing AV1 Tile List

Also missing 'Entry'

> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_frame_header`, containing AV1 Frame/Frame
> +	Header OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_PROFILE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_profile`, indicating what AV1 profiles
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_LEVEL``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_level`, indicating what AV1 levels
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_operating_mode`, indicating what AV1
> +	operating modes an AV1 stateless decoder might support.
>  
>  .. raw:: latex
>  

Regards,

	Hans

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it.
  2021-08-10 22:05 [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it daniel.almeida
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
  2021-08-10 22:05 ` [RFC PATCH 2/2] media: vivpu: add virtual VPU driver daniel.almeida
@ 2021-09-02 15:43 ` Hans Verkuil
  2 siblings, 0 replies; 14+ messages in thread
From: Hans Verkuil @ 2021-09-02 15:43 UTC (permalink / raw)
  To: daniel.almeida, stevecho, shawnku, tzungbi, mcasas, nhebert,
	abodenha, randy.wu, yunfei.dong, gustavo.padovan,
	andrzej.pietrasiewicz, enric.balletbo, ezequiel,
	nicolas.dufresne, tomeu.vizoso, nick.milner, xiaoyong.lu,
	mchehab
  Cc: linux-media, linux-kernel, kernel

On 11/08/2021 00:05, daniel.almeida@collabora.com wrote:
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> Dear all,
> 
> This patchset adds the stateless AV1 uAPI and the VIVPU virtual driver to
> showcase it.
> 
> Note that this patch depends on dynamically allocated control arrays, i.e. [0]
> and [1], which are part of the following series[2].
> 
> This cover letter will discuss the AV1 OBUs and their relationship with the
> V4L2 controls proposed therein. The VIVPU test driver will also be discussed.
> 
> Note that I have also written a GStreamer decoder element [3] to interface with
> the VIVPU virtual driver through the proposed control interface to ensure that
> these three pieces actually work. The MR in gst-plugins-bad is marked as "Draft"
> only because the uAPI hasn't been merged yet and there's no real hardware to
> test it.
> 
> Padding and holes have not been taken into account yet.
> 
> 
> 
> Relevant AV1 Open Bitstream Units (OBUs):
> -----------------------------------------
> 
> AV1 is packetized into syntax elements known as OBUs (Open Bitstream Units).
> There are seven different types of OBUs defined in the AV1 specification, of
> which five are of interest for the purposes of this API:
> 
> Sequence Header OBU: Contains information that applies to the entire sequence.
> Most importantly, it contains a set of flags that signal which AV1 features are
> enabled for the entire video coded sequence. The sequence header OBU also
> encodes the sequence profile.
> 
> Frame Header OBU: Contains information that applies to an entire frame. Notably,
> this OBU will dictate the frame's dimensions, its frame type, quantization,
> segmentation and filter parameters as well as the set of reference frames needed
> to effect a decoding operation. A set of flags will signal whether some AV1
> features are enabled for a particular frame.
> 
> Tile Group OBU: Contains tiling information. Tile groups contain the tile data
> associated with a frame. Tiles are subdivisions of a picture that can be
> independently decoded, optionally in parallel. The entire frame is assembled
> from all the tiles after potential loop filtering.
> 
> Frame OBU: Shorthand for a frame header OBU plus a tile group OBU but with less
> overhead. Frame OBUs are a convenience for the common case in which a frame
> header is combined with tiling information.
> 
> Tile List OBU: Similar to a tile group OBU, but used in "Large Scale
> Tile Decoding Mode". The tiling information contained in this OBU has an
> additional header that allows the decoder to process a subset of tiles and
> display the corresponding part of the image without having to fully decode all
> the tiles for a frame.
> 
> 
> 
> AV1 uAPI V4L2 CIDs:
> -------------------
> 
> V4L2_CID_STATELESS_AV1_SEQUENCE: represents a Sequence Header OBU. This control
> should only be set once per Sequence Header OBU. The "flags" member contains a
> bitfield with the set of flags for the current video coded sequence as parsed
> from the bitstream.
> 
> V4L2_CID_STATELESS_AV1_FRAME_HEADER: represents a Frame Header OBU. This control
> should be set once per frame.
> 
> V4L2_CID_STATELESS_AV1_{TILE_GROUP|TILE_GROUP_ENTRY}: represents a Tile Group
> OBU or the tiling information within a Frame OBU. These controls contain an
> array of metadata to decode the tiles associated with a frame. Both controls
> depend on V4L2_CTRL_FLAG_DYNAMIC_ARRAY and drivers will be able to index into
> the array using ctrl->p_cur.p_av1_tile_group and
> ctrl->p_cur.p_av1_tile_group_entry as base pointers respectively. Frame OBUs
> should be split into their Frame Header OBU and Tile Group OBU constituents
> before the array entries can be set and there should be a maximum of 512 tile
> group entries as per the AV1 specification. In the event that more than one tile
> group is provided, drivers can disambiguate their corresponding entries in the
> ctrl->p_cur.p_av1_tile_group_entry array by taking note of the tg_start and
> tg_end fields.
> 
> V4L2_CID_STATELESS_AV1_{TILE_LIST|TILE_LIST_ENTRY}: represents a Tile List OBU.
> These controls contain an array of metadata to decode a list of tiles associated
> with a frame when the decoder is operating under "Large Scale Tile Decoding
> Mode". Both controls depend on V4L2_CTRL_FLAG_DYNAMIC_ARRAY, and drivers will
> be able to index into the array using ctrl->p_cur.p_av1_tile_list and
> ctrl->p_cur.p_av1_tile_list_entry as base pointers respectively. In the event
> that more than one list is provided, drivers can disambiguate their
> corresponding entries in the ctrl->p_cur.p_av1_tile_list_entry array by taking
> note of the tile_count_minus_1 field.

It's a bit hard to tell for non-AV1 experts how these tile controls relate to
one another.

This is my understanding:

Without tiling only a V4L2_CID_STATELESS_AV1_FRAME_HEADER is needed.

With tiling you need a V4L2_CID_STATELESS_AV1_FRAME_HEADER and
V4L2_CID_STATELESS_AV1_{TILE_GROUP|TILE_GROUP_ENTRY} arrays.

With 'Large Scale Tile Decoding' you need a V4L2_CID_STATELESS_AV1_FRAME_HEADER
and V4L2_CID_STATELESS_AV1_{TILE_LIST|TILE_LIST_ENTRY} arrays (I think). It's
not clear to me if these TILE_LISTs replace TILE_GROUPs or add to them. If it
is the latter, then the V4L2_CID_STATELESS_AV1_{TILE_GROUP|TILE_GROUP_ENTRY}
arrays also need to be set for each frame.

In any case, this should probably be clarified in the documentation as well.

Regards,

	Hans

> 
> V4L2_CID_STATELESS_AV1_PROFILE: this control lets the driver convey the
> supported profiles to userspace.
> 
> V4L2_CID_STATELESS_AV1_LEVEL: this control lets the driver convey the supported
> AV1 levels to userspace.
> 
> V4L2_CID_STATELESS_AV1_OPERATING_MODE: this control lets the driver convey the supported
> operating modes to userspace. Conversely, userspace apps can change the value of
> this control to switch between "general decoding" and "large scale tile
> decoding". As per the AV1 specification, under *general decoding mode* the
> driver should expect the input to be a sequence of OBUs and the output to be a
> decoded frame, whereas under *large scale tile decoding mode* the driver should
> expect the input to be a tile list OBU plus additional side information and the
> output to be a decoded frame.
> 
> 
> 
> VIVPU:
> ------
> 
> This virtual driver was written as a way to showcase and test the control
> interface for AV1 as well as the GStreamer decoder[3]. This is so we can detect
> bugs at an early stage before real hardware is available. VIVPU does not attempt
> to decode video at all.
> 
> Once VIVPU is loaded, one can run the following GStreamer pipeline successfully:
> 
> gst-launch-1.0 filesrc location=<path to some sample av1 file> ! parsebin ! v4l2slav1dec ! fakevideosink
> 
> This is provided that the patches in [3] have been applied and the v4l2codecs
> gstreamer plugin is compiled.
> 
> It is also possible to print the controls' contents to the console by setting
> vivpu_debug to 1. This is handy when debugging, even more so when one is
> comparing two different userspace implementations because it makes it easier to
> diff the controls that were passed to the kernel.
> 
> [0] https://patchwork.linuxtv.org/project/linux-media/patch/20210610113615.785359-2-hverkuil-cisco@xs4all.nl/
> 
> [1] https://patchwork.linuxtv.org/project/linux-media/patch/20210610113615.785359-3-hverkuil-cisco@xs4all.nl/
> 
> [2] https://patchwork.linuxtv.org/project/linux-media/list/?series=5647
> 
> [3] https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad/-/merge_requests/2305
> 
> Daniel Almeida (2):
>   media: Add AV1 uAPI
>   media: vivpu: add virtual VPU driver
> 
>  .../userspace-api/media/v4l/biblio.rst        |   10 +
>  .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
>  .../media/v4l/pixfmt-compressed.rst           |   21 +
>  .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
>  .../media/v4l/vidioc-queryctrl.rst            |   54 +
>  .../media/videodev2.h.rst.exceptions          |    9 +
>  drivers/media/test-drivers/Kconfig            |    1 +
>  drivers/media/test-drivers/Makefile           |    1 +
>  drivers/media/test-drivers/vivpu/Kconfig      |   16 +
>  drivers/media/test-drivers/vivpu/Makefile     |    4 +
>  drivers/media/test-drivers/vivpu/vivpu-core.c |  418 ++++++
>  drivers/media/test-drivers/vivpu/vivpu-dec.c  |  491 +++++++
>  drivers/media/test-drivers/vivpu/vivpu-dec.h  |   61 +
>  .../media/test-drivers/vivpu/vivpu-video.c    |  599 ++++++++
>  .../media/test-drivers/vivpu/vivpu-video.h    |   46 +
>  drivers/media/test-drivers/vivpu/vivpu.h      |  119 ++
>  drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
>  drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
>  drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
>  include/media/v4l2-ctrls.h                    |   12 +
>  include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
>  include/uapi/linux/videodev2.h                |   15 +
>  22 files changed, 4342 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/media/test-drivers/vivpu/Kconfig
>  create mode 100644 drivers/media/test-drivers/vivpu/Makefile
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-core.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.h
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.h
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu.h
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 2/2] media: vivpu: add virtual VPU driver
  2021-08-10 22:05 ` [RFC PATCH 2/2] media: vivpu: add virtual VPU driver daniel.almeida
@ 2021-09-02 16:05   ` Hans Verkuil
  2022-06-06 21:26   ` [RFC PATCH v2] media: visl: add virtual stateless driver daniel.almeida
  1 sibling, 0 replies; 14+ messages in thread
From: Hans Verkuil @ 2021-09-02 16:05 UTC (permalink / raw)
  To: daniel.almeida, stevecho, shawnku, tzungbi, mcasas, nhebert,
	abodenha, randy.wu, yunfei.dong, gustavo.padovan,
	andrzej.pietrasiewicz, enric.balletbo, ezequiel,
	nicolas.dufresne, tomeu.vizoso, nick.milner, xiaoyong.lu,
	mchehab
  Cc: linux-media, linux-kernel, kernel

On 11/08/2021 00:05, daniel.almeida@collabora.com wrote:
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> Add a virtual VPU driver to aid userspace in testing stateless uAPI
> implementations for which no hardware is currently available.
> 
> A userspace implementation can use vivpu to run a decoding loop even
> when no hardware is available or when the kernel uAPI for the codec
> has not been upstreamed yet. This can reveal bugs at an early stage.
> 
> This also makes it possible to work on the kernel uAPI for a codec and a
> corresponding userspace implementation at the same time.

The commit log should mention that it currently only supports AV1.

But for a first version I would actually like to see FWHT (from vicodec)
support as well. It should be trivial to add, but it will be nice to have
in place.

> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
>  drivers/media/test-drivers/Kconfig            |   1 +
>  drivers/media/test-drivers/Makefile           |   1 +
>  drivers/media/test-drivers/vivpu/Kconfig      |  16 +

Even though it is a bit long, wouldn't vistateless be a better name?
'vpu' is a bit too generic, this driver is specifically for testing
stateless decoders (and potentially stateless encoders).

>  drivers/media/test-drivers/vivpu/Makefile     |   4 +
>  drivers/media/test-drivers/vivpu/vivpu-core.c | 418 ++++++++++++
>  drivers/media/test-drivers/vivpu/vivpu-dec.c  | 491 ++++++++++++++
>  drivers/media/test-drivers/vivpu/vivpu-dec.h  |  61 ++
>  .../media/test-drivers/vivpu/vivpu-video.c    | 599 ++++++++++++++++++
>  .../media/test-drivers/vivpu/vivpu-video.h    |  46 ++
>  drivers/media/test-drivers/vivpu/vivpu.h      | 119 ++++
>  10 files changed, 1756 insertions(+)
>  create mode 100644 drivers/media/test-drivers/vivpu/Kconfig
>  create mode 100644 drivers/media/test-drivers/vivpu/Makefile
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-core.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-dec.h
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.c
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu-video.h
>  create mode 100644 drivers/media/test-drivers/vivpu/vivpu.h
> 
> diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
> index e27d6602545d..b426cef7fc88 100644
> --- a/drivers/media/test-drivers/Kconfig
> +++ b/drivers/media/test-drivers/Kconfig
> @@ -22,6 +22,7 @@ config VIDEO_VIM2M
>  	  framework.
>  
>  source "drivers/media/test-drivers/vicodec/Kconfig"
> +source "drivers/media/test-drivers/vivpu/Kconfig"
>  
>  endif #V4L_TEST_DRIVERS
>  
> diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
> index 9f0e4ebb2efe..a4fadccc4b95 100644
> --- a/drivers/media/test-drivers/Makefile
> +++ b/drivers/media/test-drivers/Makefile
> @@ -7,4 +7,5 @@ obj-$(CONFIG_VIDEO_VIMC)		+= vimc/
>  obj-$(CONFIG_VIDEO_VIVID)		+= vivid/
>  obj-$(CONFIG_VIDEO_VIM2M)		+= vim2m.o
>  obj-$(CONFIG_VIDEO_VICODEC)		+= vicodec/
> +obj-$(CONFIG_VIDEO_VIVPU)		+= vivpu/
>  obj-$(CONFIG_DVB_VIDTV)			+= vidtv/
> diff --git a/drivers/media/test-drivers/vivpu/Kconfig b/drivers/media/test-drivers/vivpu/Kconfig
> new file mode 100644
> index 000000000000..1e6267418d19
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/Kconfig
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +config VIDEO_VIVPU
> +	tristate "Virtual VPU Driver (vivpu)"
> +	depends on VIDEO_DEV && VIDEO_V4L2
> +	select VIDEOBUF2_VMALLOC
> +	select V4L2_MEM2MEM_DEV
> +	select MEDIA_CONTROLLER
> +	select MEDIA_CONTROLLER_REQUEST_API
> +	help
> +	  A virtual stateless VPU example device for uAPI development purposes.
> +
> +	  A userspace implementation can use vivpu to run a decoding loop even
> +	  when no hardware is available or when the kernel uAPI for the codec
> +	  has not been upstreamed yet. This can reveal bugs at an early stage.
> +
> +	  When in doubt, say N.
> diff --git a/drivers/media/test-drivers/vivpu/Makefile b/drivers/media/test-drivers/vivpu/Makefile
> new file mode 100644
> index 000000000000..d20a1dbbd9e5
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +vivpu-y := vivpu-core.o vivpu-video.o vivpu-dec.o
> +
> +obj-$(CONFIG_VIDEO_VIVPU) += vivpu.o
> diff --git a/drivers/media/test-drivers/vivpu/vivpu-core.c b/drivers/media/test-drivers/vivpu/vivpu-core.c
> new file mode 100644
> index 000000000000..1eb1ce33bab1
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu-core.c
> @@ -0,0 +1,418 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version

This last paragraph can be dropped since you already have the SPDX tag. Ditto
for the other sources.

> + */
> +
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <linux/font.h>
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/v4l2-mem2mem.h>
> +
> +#include "vivpu.h"
> +#include "vivpu-dec.h"
> +#include "vivpu-video.h"
> +
> +unsigned int vivpu_debug;
> +module_param(vivpu_debug, uint, 0644);
> +MODULE_PARM_DESC(vivpu_debug, " activates debug info");
> +
> +const unsigned int vivpu_src_default_w = 640;
> +const unsigned int vivpu_src_default_h = 480;
> +const unsigned int vivpu_src_default_depth = 8;
> +
> +unsigned int vivpu_transtime;
> +module_param(vivpu_transtime, uint, 0644);
> +MODULE_PARM_DESC(vivpu_transtime, " simulated process time.");

In what unit? Second? Millisecond? Should be mentioned in the description.

> +
> +struct v4l2_ctrl *vivpu_find_control(struct vivpu_ctx *ctx, u32 id)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; ctx->ctrls[i]; i++)
> +		if (ctx->ctrls[i]->id == id)
> +			return ctx->ctrls[i];
> +
> +	return NULL;
> +}
> +
> +void *vivpu_find_control_data(struct vivpu_ctx *ctx, u32 id)
> +{
> +	struct v4l2_ctrl *ctrl;
> +
> +	ctrl = vivpu_find_control(ctx, id);
> +	if (ctrl)
> +		return ctrl->p_cur.p;
> +
> +	return NULL;
> +}
> +
> +u32 vivpu_control_num_elems(struct vivpu_ctx *ctx, u32 id)
> +{
> +	struct v4l2_ctrl *ctrl;
> +
> +	ctrl = vivpu_find_control(ctx, id);
> +	if (ctrl)
> +		return ctrl->elems;
> +
> +	return 0;
> +}
> +
> +static void vivpu_device_release(struct video_device *vdev)
> +{
> +	struct vivpu_dev *dev = container_of(vdev, struct vivpu_dev, vfd);
> +
> +	v4l2_device_unregister(&dev->v4l2_dev);
> +	v4l2_m2m_release(dev->m2m_dev);
> +	media_device_cleanup(&dev->mdev);
> +	kfree(dev);
> +}
> +
> +static const struct vivpu_control vivpu_controls[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_FRAME_HEADER,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_SEQUENCE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_GROUP,
> +		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY,
> +		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_LIST,
> +		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY,
> +		.cfg.dims = { V4L2_AV1_MAX_TILE_COUNT },
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_PROFILE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_LEVEL,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_AV1_OPERATING_MODE,
> +	},
> +};
> +
> +#define VIVPU_CONTROLS_COUNT	ARRAY_SIZE(vivpu_controls)
> +
> +static int vivpu_init_ctrls(struct vivpu_ctx *ctx)
> +{
> +	struct vivpu_dev *dev = ctx->dev;
> +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> +	struct v4l2_ctrl *ctrl;
> +	unsigned int ctrl_size;
> +	unsigned int i;
> +
> +	v4l2_ctrl_handler_init(hdl, VIVPU_CONTROLS_COUNT);
> +	if (hdl->error) {
> +		v4l2_err(&dev->v4l2_dev,
> +			 "Failed to initialize control handler\n");
> +		return hdl->error;
> +	}
> +
> +	ctrl_size = sizeof(ctrl) * (VIVPU_CONTROLS_COUNT + 1);
> +
> +	ctx->ctrls = kzalloc(ctrl_size, GFP_KERNEL);
> +	if (!ctx->ctrls)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < VIVPU_CONTROLS_COUNT; i++) {
> +		ctrl = v4l2_ctrl_new_custom(hdl, &vivpu_controls[i].cfg,
> +					    NULL);
> +		if (hdl->error) {
> +			v4l2_err(&dev->v4l2_dev,
> +				 "Failed to create new custom control, errno: %d\n",
> +				 hdl->error);
> +
> +			return hdl->error;
> +		}
> +
> +		ctx->ctrls[i] = ctrl;
> +	}
> +
> +	ctx->fh.ctrl_handler = hdl;
> +	v4l2_ctrl_handler_setup(hdl);
> +
> +	return 0;
> +}
> +
> +static void vivpu_free_ctrls(struct vivpu_ctx *ctx)
> +{
> +	kfree(ctx->ctrls);
> +	v4l2_ctrl_handler_free(&ctx->hdl);
> +}
> +
> +static int vivpu_open(struct file *file)
> +{
> +	struct vivpu_dev *dev = video_drvdata(file);
> +	struct vivpu_ctx *ctx = NULL;
> +	int rc = 0;
> +
> +	if (mutex_lock_interruptible(&dev->dev_mutex))
> +		return -ERESTARTSYS;
> +	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> +	if (!ctx) {
> +		rc = -ENOMEM;
> +		goto unlock;
> +	}
> +
> +	v4l2_fh_init(&ctx->fh, video_devdata(file));
> +	file->private_data = &ctx->fh;
> +	ctx->dev = dev;
> +
> +	rc = vivpu_init_ctrls(ctx);
> +	if (rc)
> +		goto free_ctx;
> +
> +	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &vivpu_queue_init);
> +
> +	mutex_init(&ctx->vb_mutex);
> +
> +	if (IS_ERR(ctx->fh.m2m_ctx)) {
> +		rc = PTR_ERR(ctx->fh.m2m_ctx);
> +		goto free_hdl;
> +	}
> +
> +	vivpu_set_default_format(ctx);
> +
> +	v4l2_fh_add(&ctx->fh);
> +
> +	dprintk(dev, "Created instance: %p, m2m_ctx: %p\n",
> +		ctx, ctx->fh.m2m_ctx);
> +
> +	mutex_unlock(&dev->dev_mutex);
> +	return rc;
> +
> +free_hdl:
> +	vivpu_free_ctrls(ctx);
> +	v4l2_fh_exit(&ctx->fh);
> +free_ctx:
> +	kfree(ctx);
> +unlock:
> +	mutex_unlock(&dev->dev_mutex);
> +	return rc;
> +}
> +
> +static int vivpu_release(struct file *file)
> +{
> +	struct vivpu_dev *dev = video_drvdata(file);
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +
> +	dprintk(dev, "Releasing instance %p\n", ctx);
> +
> +	v4l2_fh_del(&ctx->fh);
> +	v4l2_fh_exit(&ctx->fh);
> +	vivpu_free_ctrls(ctx);
> +	mutex_lock(&dev->dev_mutex);
> +	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
> +	mutex_unlock(&dev->dev_mutex);
> +	kfree(ctx);
> +
> +	return 0;
> +}
> +
> +static const struct v4l2_file_operations vivpu_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= vivpu_open,
> +	.release	= vivpu_release,
> +	.poll		= v4l2_m2m_fop_poll,
> +	.unlocked_ioctl	= video_ioctl2,
> +	.mmap		= v4l2_m2m_fop_mmap,
> +};
> +
> +static const struct video_device vivpu_videodev = {
> +	.name		= VIVPU_NAME,
> +	.vfl_dir	= VFL_DIR_M2M,
> +	.fops		= &vivpu_fops,
> +	.ioctl_ops	= &vivpu_ioctl_ops,
> +	.minor		= -1,
> +	.release	= vivpu_device_release,
> +	.device_caps	= V4L2_CAP_VIDEO_M2M | V4L2_CAP_STREAMING,
> +};
> +
> +static const struct v4l2_m2m_ops vivpu_m2m_ops = {
> +	.device_run	= vivpu_device_run,
> +};
> +
> +static const struct media_device_ops vivpu_m2m_media_ops = {
> +	.req_validate	= vivpu_request_validate,
> +	.req_queue	= v4l2_m2m_request_queue,
> +};
> +
> +static int vivpu_probe(struct platform_device *pdev)
> +{
> +	struct vivpu_dev *dev;
> +	struct video_device *vfd;
> +	int ret;
> +
> +	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> +	if (!dev)
> +		return -ENOMEM;
> +
> +	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
> +	if (ret)
> +		goto error_vivpu_dev;
> +
> +	mutex_init(&dev->dev_mutex);
> +
> +	dev->vfd = vivpu_videodev;
> +	vfd = &dev->vfd;
> +	vfd->lock = &dev->dev_mutex;
> +	vfd->v4l2_dev = &dev->v4l2_dev;
> +
> +	video_set_drvdata(vfd, dev);
> +	v4l2_info(&dev->v4l2_dev,
> +		  "Device registered as /dev/video%d\n", vfd->num);
> +
> +	platform_set_drvdata(pdev, dev);
> +
> +	dev->m2m_dev = v4l2_m2m_init(&vivpu_m2m_ops);
> +	if (IS_ERR(dev->m2m_dev)) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
> +		ret = PTR_ERR(dev->m2m_dev);
> +		dev->m2m_dev = NULL;
> +		goto error_dev;
> +	}
> +
> +	dev->mdev.dev = &pdev->dev;
> +	strscpy(dev->mdev.model, "vivpu", sizeof(dev->mdev.model));
> +	strscpy(dev->mdev.bus_info, "platform:vivpu",
> +		sizeof(dev->mdev.bus_info));
> +	media_device_init(&dev->mdev);
> +	dev->mdev.ops = &vivpu_m2m_media_ops;
> +	dev->v4l2_dev.mdev = &dev->mdev;
> +
> +	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
> +		goto error_m2m;
> +	}
> +
> +	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
> +						 MEDIA_ENT_F_PROC_VIDEO_DECODER);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
> +		goto error_v4l2;
> +	}
> +
> +	ret = media_device_register(&dev->mdev);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
> +		goto error_m2m_mc;
> +	}
> +
> +	return 0;
> +
> +error_m2m_mc:
> +	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> +error_v4l2:
> +	video_unregister_device(&dev->vfd);
> +	/* vivpu_device_release called by video_unregister_device to release various objects */
> +	return ret;
> +error_m2m:
> +	v4l2_m2m_release(dev->m2m_dev);
> +error_dev:
> +	v4l2_device_unregister(&dev->v4l2_dev);
> +error_vivpu_dev:
> +	kfree(dev);
> +
> +	return ret;
> +}
> +
> +static int vivpu_remove(struct platform_device *pdev)
> +{
> +	struct vivpu_dev *dev = platform_get_drvdata(pdev);
> +
> +	v4l2_info(&dev->v4l2_dev, "Removing " VIVPU_NAME);
> +
> +#ifdef CONFIG_MEDIA_CONTROLLER
> +	if (media_devnode_is_registered(dev->mdev.devnode)) {
> +		media_device_unregister(&dev->mdev);
> +		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> +	}
> +#endif
> +	video_unregister_device(&dev->vfd);
> +
> +	return 0;
> +}
> +
> +static struct platform_driver vivpu_pdrv = {
> +	.probe		= vivpu_probe,
> +	.remove		= vivpu_remove,
> +	.driver		= {
> +		.name	= VIVPU_NAME,
> +	},
> +};
> +
> +static void vivpu_dev_release(struct device *dev) {}
> +
> +static struct platform_device vivpu_pdev = {
> +	.name		= VIVPU_NAME,
> +	.dev.release	= vivpu_dev_release,
> +};
> +
> +static void __exit vivpu_exit(void)
> +{
> +	platform_driver_unregister(&vivpu_pdrv);
> +	platform_device_unregister(&vivpu_pdev);
> +}
> +
> +static int __init vivpu_init(void)
> +{
> +	int ret;
> +
> +	ret = platform_device_register(&vivpu_pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = platform_driver_register(&vivpu_pdrv);
> +	if (ret)
> +		platform_device_unregister(&vivpu_pdev);
> +
> +	return ret;
> +}
> +
> +MODULE_DESCRIPTION("Virtual VPU device");
> +MODULE_AUTHOR("Daniel Almeida <daniel.almeida@collabora.com>");
> +MODULE_LICENSE("GPL v2");
> +
> +module_init(vivpu_init);
> +module_exit(vivpu_exit);
> diff --git a/drivers/media/test-drivers/vivpu/vivpu-dec.c b/drivers/media/test-drivers/vivpu/vivpu-dec.c
> new file mode 100644
> index 000000000000..f928768aff77
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu-dec.c
> @@ -0,0 +1,491 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#include "vivpu.h"
> +#include "vivpu-dec.h"
> +
> +#include <linux/delay.h>
> +#include <linux/workqueue.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/tpg/v4l2-tpg.h>
> +
> +static void
> +vivpu_av1_check_reference_frames(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	u32 i;
> +	int idx;
> +	const struct v4l2_ctrl_av1_frame_header *f;
> +	struct vb2_queue *capture_queue;
> +
> +	f = run->av1.frame_header;
> +	capture_queue = &ctx->fh.m2m_ctx->cap_q_ctx.q;
> +
> +	/*
> +	 * For every reference frame timestamp, make sure we can actually find
> +	 * the buffer in the CAPTURE queue.
> +	 */
> +	for (i = 0; i < V4L2_AV1_NUM_REF_FRAMES; i++) {
> +		idx = vb2_find_timestamp(capture_queue, f->reference_frame_ts[i], 0);
> +		if (idx < 0)
> +			v4l2_err(&ctx->dev->v4l2_dev,
> +				 "no capture buffer found for reference_frame_ts[%d] %llu",
> +				 i, f->reference_frame_ts[i]);
> +		else
> +			dprintk(ctx->dev,
> +				"found capture buffer %d for reference_frame_ts[%d] %llu\n",
> +				idx, i, f->reference_frame_ts[i]);
> +	}
> +}
> +
> +static void vivpu_dump_av1_seq(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_sequence *seq = run->av1.sequence;
> +
> +	dprintk(ctx->dev, "AV1 Sequence\n");
> +	dprintk(ctx->dev, "flags %d\n", seq->flags);
> +	dprintk(ctx->dev, "profile %d\n", seq->seq_profile);
> +	dprintk(ctx->dev, "order_hint_bits %d\n", seq->order_hint_bits);
> +	dprintk(ctx->dev, "bit_depth %d\n", seq->bit_depth);
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_tile_group(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_tile_group *tg;
> +	u32 n;
> +	u32 i;
> +
> +	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP);
> +	for (i = 0; i < n; i++) {
> +		tg = &run->av1.tile_group[i];
> +		dprintk(ctx->dev, "AV1 Tile Group\n");
> +		dprintk(ctx->dev, "flags %d\n", tg->flags);
> +		dprintk(ctx->dev, "tg_start %d\n", tg->tg_start);
> +		dprintk(ctx->dev, "tg_end %d\n", tg->tg_end);
> +		dprintk(ctx->dev, "\n");
> +	}
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_tile_group_entry(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_tile_group_entry *tge;
> +	u32 n;
> +	u32 i;
> +
> +	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY);
> +	for (i = 0; i < n; i++) {
> +		tge = &run->av1.tg_entries[i];
> +		dprintk(ctx->dev, "AV1 Tile Group Entry\n");
> +		dprintk(ctx->dev, "tile_offset %d\n", tge->tile_offset);
> +		dprintk(ctx->dev, "tile_size %d\n", tge->tile_size);
> +		dprintk(ctx->dev, "tile_row %d\n", tge->tile_row);
> +		dprintk(ctx->dev, "tile_col %d\n", tge->tile_col);
> +
> +		dprintk(ctx->dev, "\n");
> +	}
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_tile_list(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_tile_list *tl;
> +	u32 n;
> +	u32 i;
> +
> +	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST);
> +	for (i = 0; i < n; i++) {
> +		tl = &run->av1.tile_list[i];
> +		dprintk(ctx->dev, "AV1 Tile List\n");
> +		dprintk(ctx->dev, "output_frame_width_in_tiles_minus_1 %d\n",
> +			tl->output_frame_width_in_tiles_minus_1);
> +		dprintk(ctx->dev, "output_frame_height_in_tiles_minus_1 %d\n",
> +			tl->output_frame_height_in_tiles_minus_1);
> +		dprintk(ctx->dev, "tile_count_minus_1 %d\n",
> +			tl->tile_count_minus_1);
> +		dprintk(ctx->dev, "\n");
> +	}
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_tile_list_entry(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_tile_list_entry *tle;
> +	u32 n;
> +	u32 i;
> +
> +	n = vivpu_control_num_elems(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY);
> +	for (i = 0; i < n; i++) {
> +		tle = &run->av1.tl_entries[i];
> +		dprintk(ctx->dev, "AV1 Tile List Entry\n");
> +		dprintk(ctx->dev, "anchor_frame_idx %d\n", tle->anchor_frame_idx);
> +		dprintk(ctx->dev, "anchor_tile_row %d\n", tle->anchor_tile_row);
> +		dprintk(ctx->dev, "anchor_tile_col %d\n", tle->anchor_tile_col);
> +		dprintk(ctx->dev, "tile_data_size_minus_1 %d\n",
> +			tle->tile_data_size_minus_1);
> +		dprintk(ctx->dev, "\n");
> +	}
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_quantization(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_quantization *q = &run->av1.frame_header->quantization;
> +
> +	dprintk(ctx->dev, "AV1 Quantization\n");
> +	dprintk(ctx->dev, "flags %d\n", q->flags);
> +	dprintk(ctx->dev, "base_q_idx %d\n", q->base_q_idx);
> +	dprintk(ctx->dev, "delta_q_y_dc %d\n", q->delta_q_y_dc);
> +	dprintk(ctx->dev, "delta_q_u_dc %d\n", q->delta_q_u_dc);
> +	dprintk(ctx->dev, "delta_q_u_ac %d\n", q->delta_q_u_ac);
> +	dprintk(ctx->dev, "delta_q_v_dc %d\n", q->delta_q_v_dc);
> +	dprintk(ctx->dev, "delta_q_v_ac %d\n", q->delta_q_v_ac);
> +	dprintk(ctx->dev, "qm_y %d\n", q->qm_y);
> +	dprintk(ctx->dev, "qm_u %d\n", q->qm_u);
> +	dprintk(ctx->dev, "qm_v %d\n", q->qm_v);
> +	dprintk(ctx->dev, "delta_q_res %d\n", q->delta_q_res);
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_segmentation(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_segmentation *s = &run->av1.frame_header->segmentation;
> +	u32 i;
> +	u32 j;
> +
> +	dprintk(ctx->dev, "AV1 Segmentation\n");
> +	dprintk(ctx->dev, "flags %d\n", s->flags);
> +
> +	for (i = 0; i < ARRAY_SIZE(s->feature_enabled); i++)
> +		dprintk(ctx->dev,
> +			"feature_enabled[%d] %d\n",
> +			i, s->feature_enabled[i]);
> +
> +	for (i = 0; i < V4L2_AV1_MAX_SEGMENTS; i++)
> +		for (j = 0; j < V4L2_AV1_SEG_LVL_MAX; j++)
> +			dprintk(ctx->dev,
> +				"feature_data[%d][%d] %d\n",
> +				i, j, s->feature_data[i][j]);
> +
> +	dprintk(ctx->dev, "last_active_seg_id %d\n", s->last_active_seg_id);
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_loop_filter(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_loop_filter *lf = &run->av1.frame_header->loop_filter;
> +	u32 i;
> +
> +	dprintk(ctx->dev, "AV1 Loop Filter\n");
> +	dprintk(ctx->dev, "flags %d\n", lf->flags);
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->level); i++)
> +		dprintk(ctx->dev, "level[%d] %d\n", i, lf->level[i]);
> +
> +	dprintk(ctx->dev, "sharpness %d\n", lf->sharpness);
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++)
> +		dprintk(ctx->dev, "ref_deltas[%d] %d\n", i, lf->ref_deltas[i]);
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++)
> +		dprintk(ctx->dev, "mode_deltas[%d] %d\n", i, lf->mode_deltas[i]);
> +
> +	dprintk(ctx->dev, "delta_lf_res %d\n", lf->delta_lf_res);
> +	dprintk(ctx->dev, "delta_lf_multi %d\n", lf->delta_lf_multi);
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_loop_restoration(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_loop_restoration *lr;
> +	u32 i;
> +
> +	lr = &run->av1.frame_header->loop_restoration;
> +	dprintk(ctx->dev, "AV1 Loop Restoration\n");
> +
> +	for (i = 0; i < ARRAY_SIZE(lr->frame_restoration_type); i++)
> +		dprintk(ctx->dev, "frame_restoration_type[%d] %d\n", i,
> +			lr->frame_restoration_type[i]);
> +
> +	dprintk(ctx->dev, "lr_unit_shift %d\n", lr->lr_unit_shift);
> +	dprintk(ctx->dev, "lr_uv_shift %d\n", lr->lr_uv_shift);
> +
> +	for (i = 0; i < ARRAY_SIZE(lr->loop_restoration_size); i++)
> +		dprintk(ctx->dev, "loop_restoration_size[%d] %d\n",
> +			i, lr->loop_restoration_size[i]);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_cdef(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_cdef *cdef = &run->av1.frame_header->cdef;
> +	u32 i;
> +
> +	dprintk(ctx->dev, "AV1 CDEF\n");
> +	dprintk(ctx->dev, "damping_minus_3 %d\n", cdef->damping_minus_3);
> +	dprintk(ctx->dev, "bits %d\n", cdef->bits);
> +
> +	for (i = 0; i < ARRAY_SIZE(cdef->y_pri_strength); i++)
> +		dprintk(ctx->dev,
> +			"y_pri_strength[%d] %d\n", i, cdef->y_pri_strength[i]);
> +	for (i = 0; i < ARRAY_SIZE(cdef->y_sec_strength); i++)
> +		dprintk(ctx->dev,
> +			"y_sec_strength[%d] %d\n", i, cdef->y_sec_strength[i]);
> +	for (i = 0; i < ARRAY_SIZE(cdef->uv_pri_strength); i++)
> +		dprintk(ctx->dev,
> +			"uv_pri_strength[%d] %d\n", i, cdef->uv_pri_strength[i]);
> +	for (i = 0; i < ARRAY_SIZE(cdef->uv_sec_strength); i++)
> +		dprintk(ctx->dev,
> +			"uv_sec_strength[%d] %d\n", i, cdef->uv_sec_strength[i]);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_global_motion(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_global_motion *gm;
> +	u32 i;
> +	u32 j;
> +
> +	gm = &run->av1.frame_header->global_motion;
> +
> +	dprintk(ctx->dev, "AV1 Global Motion\n");
> +
> +	for (i = 0; i < ARRAY_SIZE(gm->flags); i++)
> +		dprintk(ctx->dev, "flags[%d] %d\n", i, gm->flags[i]);
> +	for (i = 0; i < ARRAY_SIZE(gm->type); i++)
> +		dprintk(ctx->dev, "type[%d] %d\n", i, gm->type[i]);
> +
> +	for (i = 0; i < V4L2_AV1_TOTAL_REFS_PER_FRAME; i++)
> +		for (j = 0; j < 6; j++)
> +			dprintk(ctx->dev, "params[%d][%d] %d\n",
> +				i, j, gm->params[i][j]);
> +
> +	dprintk(ctx->dev, "invalid %d\n", gm->invalid);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_film_grain(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_film_grain *fg;
> +	u32 i;
> +
> +	fg = &run->av1.frame_header->film_grain;
> +
> +	dprintk(ctx->dev, "AV1 Film Grain\n");
> +	dprintk(ctx->dev, "flags %d\n", fg->flags);
> +	dprintk(ctx->dev, "grain_seed %d\n", fg->grain_seed);
> +	dprintk(ctx->dev, "film_grain_params_ref_idx %d\n",
> +		fg->film_grain_params_ref_idx);
> +	dprintk(ctx->dev, "num_y_points %d\n", fg->num_y_points);
> +
> +	for (i = 0; i < ARRAY_SIZE(fg->point_y_value); i++)
> +		dprintk(ctx->dev, "point_y_value[%d] %d\n",
> +			i, fg->point_y_value[i]);
> +
> +	for (i = 0; i < ARRAY_SIZE(fg->point_y_scaling); i++)
> +		dprintk(ctx->dev, "point_y_scaling[%d] %d\n",
> +			i, fg->point_y_scaling[i]);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void
> +vivpu_dump_av1_tile_info(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_av1_tile_info *ti;
> +	u32 i;
> +
> +	ti = &run->av1.frame_header->tile_info;
> +
> +	dprintk(ctx->dev, "AV1 Tile Info\n");
> +
> +	dprintk(ctx->dev, "flags %d\n", ti->flags);
> +
> +	for (i = 0; i < ARRAY_SIZE(ti->mi_col_starts); i++)
> +		dprintk(ctx->dev, "mi_col_starts[%d] %d\n",
> +			i, ti->mi_col_starts[i]);
> +
> +	for (i = 0; i < ARRAY_SIZE(ti->mi_row_starts); i++)
> +		dprintk(ctx->dev, "mi_row_starts[%d] %d\n",
> +			i, ti->mi_row_starts[i]);
> +
> +	for (i = 0; i < ARRAY_SIZE(ti->width_in_sbs_minus_1); i++)
> +		dprintk(ctx->dev, "width_in_sbs_minus_1[%d] %d\n",
> +			i, ti->width_in_sbs_minus_1[i]);
> +
> +	for (i = 0; i < ARRAY_SIZE(ti->height_in_sbs_minus_1); i++)
> +		dprintk(ctx->dev, "height_in_sbs_minus_1[%d] %d\n",
> +			i, ti->height_in_sbs_minus_1[i]);
> +
> +	dprintk(ctx->dev, "tile_size_bytes %d\n", ti->tile_size_bytes);
> +	dprintk(ctx->dev, "context_update_tile_id %d\n", ti->context_update_tile_id);
> +	dprintk(ctx->dev, "tile_cols %d\n", ti->tile_cols);
> +	dprintk(ctx->dev, "tile_rows %d\n", ti->tile_rows);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void vivpu_dump_av1_frame(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	const struct v4l2_ctrl_av1_frame_header *f = run->av1.frame_header;
> +	u32 i;
> +
> +	dprintk(ctx->dev, "AV1 Frame Header\n");
> +	dprintk(ctx->dev, "flags %d\n", f->flags);
> +	dprintk(ctx->dev, "frame_type %d\n", f->frame_type);
> +	dprintk(ctx->dev, "order_hint %d\n", f->order_hint);
> +	dprintk(ctx->dev, "superres_denom %d\n", f->superres_denom);
> +	dprintk(ctx->dev, "upscaled_width %d\n", f->upscaled_width);
> +	dprintk(ctx->dev, "interpolation_filter %d\n", f->interpolation_filter);
> +	dprintk(ctx->dev, "tx_mode %d\n", f->tx_mode);
> +	dprintk(ctx->dev, "frame_width_minus_1 %d\n", f->frame_width_minus_1);
> +	dprintk(ctx->dev, "frame_height_minus_1 %d\n", f->frame_height_minus_1);
> +	dprintk(ctx->dev, "render_width_minus_1 %d\n", f->render_width_minus_1);
> +	dprintk(ctx->dev, "render_height_minus_1 %d\n", f->render_height_minus_1);
> +	dprintk(ctx->dev, "current_frame_id %d\n", f->current_frame_id);
> +	dprintk(ctx->dev, "primary_ref_frame %d\n", f->primary_ref_frame);
> +
> +	for (i = 0; i < V4L2_AV1_MAX_OPERATING_POINTS; i++) {
> +		dprintk(ctx->dev, "buffer_removal_time[%d] %d\n",
> +			i, f->buffer_removal_time[i]);
> +	}
> +
> +	dprintk(ctx->dev, "refresh_frame_flags %d\n", f->refresh_frame_flags);
> +	dprintk(ctx->dev, "last_frame_idx %d\n", f->last_frame_idx);
> +	dprintk(ctx->dev, "gold_frame_idx %d\n", f->gold_frame_idx);
> +
> +	for (i = 0; i < ARRAY_SIZE(f->reference_frame_ts); i++)
> +		dprintk(ctx->dev, "reference_frame_ts[%d] %llu\n", i,
> +			f->reference_frame_ts[i]);
> +
> +	vivpu_dump_av1_tile_info(ctx, run);
> +	vivpu_dump_av1_quantization(ctx, run);
> +	vivpu_dump_av1_segmentation(ctx, run);
> +	vivpu_dump_av1_loop_filter(ctx, run);
> +	vivpu_dump_av1_cdef(ctx, run);
> +	vivpu_dump_av1_loop_restoration(ctx, run);
> +	vivpu_dump_av1_global_motion(ctx, run);
> +	vivpu_dump_av1_film_grain(ctx, run);
> +
> +	for (i = 0; i < ARRAY_SIZE(f->skip_mode_frame); i++)
> +		dprintk(ctx->dev, "skip_mode_frame[%d] %d\n",
> +			i, f->skip_mode_frame[i]);
> +
> +	dprintk(ctx->dev, "\n");
> +}
> +
> +static void vivpu_dump_av1_ctrls(struct vivpu_ctx *ctx, struct vivpu_run *run)
> +{
> +	vivpu_dump_av1_seq(ctx, run);
> +	vivpu_dump_av1_frame(ctx, run);
> +	vivpu_dump_av1_tile_group(ctx, run);
> +	vivpu_dump_av1_tile_group_entry(ctx, run);
> +	vivpu_dump_av1_tile_list(ctx, run);
> +	vivpu_dump_av1_tile_list_entry(ctx, run);
> +}
> +
> +void vivpu_device_run(void *priv)
> +{
> +	struct vivpu_ctx *ctx = priv;
> +	struct vivpu_run run = {};
> +	struct media_request *src_req;
> +
> +	run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> +	run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
> +
> +	/* Apply request(s) controls if needed. */
> +	src_req = run.src->vb2_buf.req_obj.req;
> +
> +	if (src_req)
> +		v4l2_ctrl_request_setup(src_req, &ctx->hdl);
> +
> +	switch (ctx->current_codec) {
> +	case VIVPU_CODEC_AV1:
> +		run.av1.sequence =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_SEQUENCE);
> +		run.av1.frame_header =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_FRAME_HEADER);
> +		run.av1.tile_group =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP);
> +		run.av1.tg_entries =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY);
> +		run.av1.tile_list =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST);
> +		run.av1.tl_entries =
> +			vivpu_find_control_data(ctx, V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY);
> +
> +		vivpu_dump_av1_ctrls(ctx, &run);
> +		vivpu_av1_check_reference_frames(ctx, &run);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
> +	run.dst->sequence = ctx->q_data[V4L2_M2M_DST].sequence++;
> +	run.src->sequence = ctx->q_data[V4L2_M2M_SRC].sequence++;
> +	run.dst->field = ctx->dst_fmt.fmt.pix.field;
> +
> +	dprintk(ctx->dev, "Got src buffer %p, sequence %d, timestamp %llu\n",
> +		run.src, run.src->sequence, run.src->vb2_buf.timestamp);
> +
> +	dprintk(ctx->dev, "Got dst buffer %p, sequence %d, timestamp %llu\n",
> +		run.dst, run.dst->sequence, run.dst->vb2_buf.timestamp);
> +
> +	/* Complete request(s) controls if needed. */
> +	if (src_req)
> +		v4l2_ctrl_request_complete(src_req, &ctx->hdl);
> +
> +	if (vivpu_transtime)
> +		usleep_range(vivpu_transtime, vivpu_transtime * 2);
> +
> +	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev,
> +					 ctx->fh.m2m_ctx, VB2_BUF_STATE_DONE);
> +}
> diff --git a/drivers/media/test-drivers/vivpu/vivpu-dec.h b/drivers/media/test-drivers/vivpu/vivpu-dec.h
> new file mode 100644
> index 000000000000..4a3ca5952e43
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu-dec.h
> @@ -0,0 +1,61 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#ifndef _VIVPU_DEC_H_
> +#define _VIVPU_DEC_H_
> +
> +#include "vivpu.h"
> +
> +struct vivpu_av1_run {
> +	const struct v4l2_ctrl_av1_sequence *sequence;
> +	const struct v4l2_ctrl_av1_frame_header *frame_header;
> +	const struct v4l2_ctrl_av1_tile_group *tile_group;
> +	const struct v4l2_ctrl_av1_tile_group_entry *tg_entries;
> +	const struct v4l2_ctrl_av1_tile_list *tile_list;
> +	const struct v4l2_ctrl_av1_tile_list_entry *tl_entries;
> +};
> +
> +struct vivpu_run {
> +	struct vb2_v4l2_buffer	*src;
> +	struct vb2_v4l2_buffer	*dst;
> +
> +	union {
> +		struct vivpu_av1_run	av1;
> +	};
> +};
> +
> +int vivpu_dec_start(struct vivpu_ctx *ctx);
> +int vivpu_dec_stop(struct vivpu_ctx *ctx);
> +int vivpu_job_ready(void *priv);
> +void vivpu_device_run(void *priv);
> +
> +#endif /* _VIVPU_DEC_H_ */
> diff --git a/drivers/media/test-drivers/vivpu/vivpu-video.c b/drivers/media/test-drivers/vivpu/vivpu-video.c
> new file mode 100644
> index 000000000000..a3018b0a4da3
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu-video.c
> @@ -0,0 +1,599 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-vmalloc.h>
> +
> +#include "vivpu-video.h"
> +#include "vivpu.h"
> +
> +static const u32 vivpu_decoded_formats[] = {
> +	V4L2_PIX_FMT_NV12,
> +};
> +
> +static const struct vivpu_coded_format_desc coded_formats[] = {
> +	{
> +	.pixelformat = V4L2_PIX_FMT_AV1_FRAME,
> +	/* simulated frame sizes for AV1 */
> +	.frmsize = {
> +		.min_width = 48,
> +		.max_width = 4096,
> +		.step_width = 16,
> +		.min_height = 48,
> +		.max_height = 2304,
> +		.step_height = 16,
> +	},
> +	.num_decoded_fmts = ARRAY_SIZE(vivpu_decoded_formats),
> +	/* simulate that the AV1 coded format decodes to raw NV12 */
> +	.decoded_fmts = vivpu_decoded_formats,
> +	}
> +};
> +
> +static const struct vivpu_coded_format_desc*
> +vivpu_find_coded_fmt_desc(u32 fourcc)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(coded_formats); i++) {
> +		if (coded_formats[i].pixelformat == fourcc)
> +			return &coded_formats[i];
> +	}
> +
> +	return NULL;
> +}
> +
> +void vivpu_set_default_format(struct vivpu_ctx *ctx)
> +{
> +	struct v4l2_format src_fmt = {
> +		.fmt.pix = {
> +			.width = vivpu_src_default_w,
> +			.height = vivpu_src_default_h,
> +			/* Zero bytes per line for encoded source. */
> +			.bytesperline = 0,
> +			/* Choose some minimum size since this can't be 0 */
> +			.sizeimage = SZ_1K,
> +		},
> +	};
> +
> +	ctx->coded_format_desc = &coded_formats[0];
> +	ctx->src_fmt = src_fmt;
> +
> +	v4l2_fill_pixfmt_mp(&ctx->dst_fmt.fmt.pix_mp,
> +			    V4L2_PIX_FMT_NV12,
> +			    vivpu_src_default_w, vivpu_src_default_h);
> +
> +	/* Always apply the frmsize constraint of the coded end. */
> +	v4l2_apply_frmsize_constraints(&ctx->dst_fmt.fmt.pix_mp.width,
> +				       &ctx->dst_fmt.fmt.pix_mp.height,
> +				       &ctx->coded_format_desc->frmsize);
> +
> +	ctx->src_fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
> +	ctx->dst_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> +}
> +
> +static const char *q_name(enum v4l2_buf_type type)
> +{
> +	switch (type) {
> +	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
> +		return "Output";
> +	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
> +		return "Capture";
> +	default:
> +		return "Invalid";
> +	}
> +}
> +
> +static struct vivpu_q_data *get_q_data(struct vivpu_ctx *ctx,
> +				       enum v4l2_buf_type type)
> +{
> +	switch (type) {
> +	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
> +	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
> +		return &ctx->q_data[V4L2_M2M_SRC];
> +	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
> +	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
> +		return &ctx->q_data[V4L2_M2M_DST];
> +	default:
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +static int vivpu_querycap(struct file *file, void *priv,
> +			  struct v4l2_capability *cap)
> +{
> +	strscpy(cap->driver, VIVPU_NAME, sizeof(cap->driver));
> +	strscpy(cap->card, VIVPU_NAME, sizeof(cap->card));
> +	snprintf(cap->bus_info, sizeof(cap->bus_info),
> +		 "platform:%s", VIVPU_NAME);
> +
> +	return 0;
> +}
> +
> +static int vivpu_enum_fmt_vid_cap(struct file *file, void *priv,
> +				  struct v4l2_fmtdesc *f)
> +{
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +
> +	if (f->index >= ctx->coded_format_desc->num_decoded_fmts)
> +		return -EINVAL;
> +
> +	f->pixelformat = ctx->coded_format_desc->decoded_fmts[f->index];
> +	return 0;
> +}
> +
> +static int vivpu_enum_fmt_vid_out(struct file *file, void *priv,
> +				  struct v4l2_fmtdesc *f)
> +{
> +	if (f->index >= ARRAY_SIZE(coded_formats))
> +		return -EINVAL;
> +
> +	f->pixelformat = coded_formats[f->index].pixelformat;
> +	return 0;
> +}
> +
> +static int vivpu_g_fmt_vid_cap(struct file *file, void *priv,
> +			       struct v4l2_format *f)
> +{
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +	*f = ctx->dst_fmt;
> +
> +	return 0;
> +}
> +
> +static int vivpu_g_fmt_vid_out(struct file *file, void *priv,
> +			       struct v4l2_format *f)
> +{
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +
> +	*f = ctx->src_fmt;
> +	return 0;
> +}
> +
> +static int vivpu_try_fmt_vid_cap(struct file *file, void *priv,
> +				 struct v4l2_format *f)
> +{
> +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +	const struct vivpu_coded_format_desc *coded_desc;
> +	unsigned int i;
> +
> +	coded_desc = ctx->coded_format_desc;
> +	if (WARN_ON(!coded_desc))
> +		return -EINVAL;
> +
> +	for (i = 0; i < coded_desc->num_decoded_fmts; i++) {
> +		if (coded_desc->decoded_fmts[i] == pix_mp->pixelformat)
> +			break;
> +	}
> +
> +	if (i == coded_desc->num_decoded_fmts)
> +		return -EINVAL;
> +
> +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> +				       &pix_mp->height,
> +				       &coded_desc->frmsize);
> +
> +	pix_mp->field = V4L2_FIELD_NONE;
> +
> +	return 0;
> +}
> +
> +static int vivpu_try_fmt_vid_out(struct file *file, void *priv,
> +				 struct v4l2_format *f)
> +{
> +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +	const struct vivpu_coded_format_desc *coded_desc;
> +
> +	coded_desc = vivpu_find_coded_fmt_desc(pix_mp->pixelformat);
> +	if (!coded_desc)
> +		return -EINVAL;
> +
> +	/* apply the (simulated) hw constraints */
> +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> +				       &pix_mp->height,
> +				       &coded_desc->frmsize);
> +
> +	pix_mp->field = V4L2_FIELD_NONE;
> +	/* All coded formats are considered single planar for now. */
> +	pix_mp->num_planes = 1;
> +
> +	return 0;
> +}
> +
> +static int vivpu_s_fmt_vid_out(struct file *file, void *priv,
> +			       struct v4l2_format *f)
> +{
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
> +	struct v4l2_format *cap_fmt = &ctx->dst_fmt;
> +	const struct vivpu_coded_format_desc *desc;
> +	struct vb2_queue *peer_vq;
> +	int ret;
> +
> +	peer_vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
> +	if (vb2_is_busy(peer_vq))
> +		return -EBUSY;
> +
> +	dprintk(ctx->dev,
> +		"Current OUTPUT queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->src_fmt.fmt.pix_mp.width, ctx->src_fmt.fmt.pix_mp.height,
> +		ctx->src_fmt.fmt.pix_mp.pixelformat);
> +
> +	dprintk(ctx->dev,
> +		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
> +		ctx->dst_fmt.fmt.pix_mp.pixelformat);
> +
> +	ret = vivpu_try_fmt_vid_out(file, priv, f);
> +	if (ret) {
> +		dprintk(ctx->dev,
> +			"Unsupported format for the OUTPUT queue: %d\n",
> +			f->fmt.pix_mp.pixelformat);
> +		return ret;
> +	}
> +
> +	desc = vivpu_find_coded_fmt_desc(f->fmt.pix_mp.pixelformat);
> +	if (!desc) {
> +		dprintk(ctx->dev,
> +			"Unsupported format for the OUTPUT queue: %d\n",
> +			f->fmt.pix_mp.pixelformat);
> +		return -EINVAL;
> +	}
> +
> +	ctx->coded_format_desc = desc;
> +
> +	ctx->src_fmt = *f;
> +
> +	v4l2_fill_pixfmt_mp(&ctx->dst_fmt.fmt.pix_mp,
> +			    ctx->coded_format_desc->decoded_fmts[0],
> +			    ctx->src_fmt.fmt.pix_mp.width,
> +			    ctx->src_fmt.fmt.pix_mp.height);
> +	cap_fmt->fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace;
> +	cap_fmt->fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func;
> +	cap_fmt->fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
> +	cap_fmt->fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
> +
> +	dprintk(ctx->dev,
> +		"Current OUTPUT queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->src_fmt.fmt.pix_mp.width, ctx->src_fmt.fmt.pix_mp.height,
> +		ctx->src_fmt.fmt.pix_mp.pixelformat);
> +
> +	dprintk(ctx->dev,
> +		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
> +		ctx->dst_fmt.fmt.pix_mp.pixelformat);
> +
> +	return 0;
> +}
> +
> +static int vivpu_s_fmt_vid_cap(struct file *file, void *priv,
> +			       struct v4l2_format *f)
> +{
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +	int ret;
> +
> +	dprintk(ctx->dev,
> +		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
> +		ctx->dst_fmt.fmt.pix_mp.pixelformat);
> +
> +	ret = vivpu_try_fmt_vid_cap(file, priv, f);
> +	if (ret)
> +		return ret;
> +
> +	ctx->dst_fmt = *f;
> +
> +	dprintk(ctx->dev,
> +		"Current CAPTURE queue format: width %d, height %d, pixfmt %d\n",
> +		ctx->dst_fmt.fmt.pix_mp.width, ctx->dst_fmt.fmt.pix_mp.height,
> +		ctx->dst_fmt.fmt.pix_mp.pixelformat);
> +
> +	return 0;
> +}
> +
> +static int vivpu_enum_framesizes(struct file *file, void *priv,
> +				 struct v4l2_frmsizeenum *fsize)
> +{
> +	const struct vivpu_coded_format_desc *fmt;
> +	struct vivpu_ctx *ctx = vivpu_file_to_ctx(file);
> +
> +	if (fsize->index != 0)
> +		return -EINVAL;
> +
> +	fmt = vivpu_find_coded_fmt_desc(fsize->pixel_format);
> +	if (!fmt) {
> +		dprintk(ctx->dev,
> +			"Unsupported format for the OUTPUT queue: %d\n",
> +			fsize->pixel_format);
> +
> +		return -EINVAL;
> +	}
> +
> +	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
> +	fsize->stepwise = fmt->frmsize;
> +	return 0;
> +}
> +
> +const struct v4l2_ioctl_ops vivpu_ioctl_ops = {
> +	.vidioc_querycap		= vivpu_querycap,
> +	.vidioc_enum_framesizes		= vivpu_enum_framesizes,
> +
> +	.vidioc_enum_fmt_vid_cap	= vivpu_enum_fmt_vid_cap,
> +	.vidioc_g_fmt_vid_cap		= vivpu_g_fmt_vid_cap,
> +	.vidioc_try_fmt_vid_cap		= vivpu_try_fmt_vid_cap,
> +	.vidioc_s_fmt_vid_cap		= vivpu_s_fmt_vid_cap,
> +
> +	.vidioc_enum_fmt_vid_out	= vivpu_enum_fmt_vid_out,
> +	.vidioc_g_fmt_vid_out		= vivpu_g_fmt_vid_out,
> +	.vidioc_try_fmt_vid_out		= vivpu_try_fmt_vid_out,
> +	.vidioc_s_fmt_vid_out		= vivpu_s_fmt_vid_out,
> +
> +	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
> +	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
> +	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
> +	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
> +	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
> +	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
> +	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
> +
> +	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
> +	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
> +
> +	.vidioc_try_decoder_cmd		= v4l2_m2m_ioctl_stateless_try_decoder_cmd,
> +	.vidioc_decoder_cmd		= v4l2_m2m_ioctl_stateless_decoder_cmd,
> +
> +	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
> +	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
> +};
> +
> +static int vivpu_queue_setup(struct vb2_queue *vq,
> +			     unsigned int *nbuffers,
> +			     unsigned int *nplanes,
> +			     unsigned int sizes[],
> +			     struct device *alloc_devs[])
> +{
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct v4l2_pix_format *pix_fmt;
> +
> +	if (V4L2_TYPE_IS_OUTPUT(vq->type))
> +		pix_fmt = &ctx->src_fmt.fmt.pix;
> +	else
> +		pix_fmt = &ctx->dst_fmt.fmt.pix;
> +
> +	if (*nplanes) {
> +		if (sizes[0] < pix_fmt->sizeimage) {
> +			v4l2_err(&ctx->dev->v4l2_dev, "sizes[0] is %d, sizeimage is %d\n",
> +				 sizes[0], pix_fmt->sizeimage);
> +			return -EINVAL;
> +		}
> +	} else {
> +		sizes[0] = pix_fmt->sizeimage;
> +		*nplanes = 1;
> +	}
> +
> +	dprintk(ctx->dev, "%s: get %d buffer(s) of size %d each.\n",
> +		q_name(vq->type), *nbuffers, sizes[0]);
> +
> +	return 0;
> +}
> +
> +static void vivpu_queue_cleanup(struct vb2_queue *vq, u32 state)
> +{
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct vb2_v4l2_buffer *vbuf;
> +
> +	dprintk(ctx->dev, "Cleaning up queues\n");
> +	for (;;) {
> +		if (V4L2_TYPE_IS_OUTPUT(vq->type))
> +			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> +		else
> +			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
> +
> +		if (!vbuf)
> +			break;
> +
> +		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
> +					   &ctx->hdl);
> +		dprintk(ctx->dev, "Marked request %p as complete\n",
> +			vbuf->vb2_buf.req_obj.req);
> +
> +		v4l2_m2m_buf_done(vbuf, state);
> +		dprintk(ctx->dev,
> +			"Marked buffer %llu as done, state is %d\n",
> +			vbuf->vb2_buf.timestamp,
> +			state);
> +	}
> +}
> +
> +static int vivpu_buf_out_validate(struct vb2_buffer *vb)
> +{
> +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> +
> +	vbuf->field = V4L2_FIELD_NONE;
> +	return 0;
> +}
> +
> +static int vivpu_buf_prepare(struct vb2_buffer *vb)
> +{
> +	struct vb2_queue *vq = vb->vb2_queue;
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
> +	u32 plane_sz = vb2_plane_size(vb, 0);
> +	struct v4l2_pix_format *pix_fmt;
> +
> +	if (V4L2_TYPE_IS_OUTPUT(vq->type))
> +		pix_fmt = &ctx->src_fmt.fmt.pix;
> +	else
> +		pix_fmt = &ctx->dst_fmt.fmt.pix;
> +
> +	if (plane_sz < pix_fmt->sizeimage) {
> +		v4l2_err(&ctx->dev->v4l2_dev, "plane[0] size is %d, sizeimage is %d\n",
> +			 plane_sz, pix_fmt->sizeimage);
> +		return -EINVAL;
> +	}
> +
> +	vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
> +
> +	return 0;
> +}
> +
> +static int vivpu_start_streaming(struct vb2_queue *vq, unsigned int count)
> +{
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct vivpu_q_data *q_data = get_q_data(ctx, vq->type);
> +	int ret = 0;
> +
> +	if (!q_data)
> +		return -EINVAL;
> +
> +	q_data->sequence = 0;
> +
> +	switch (ctx->src_fmt.fmt.pix.pixelformat) {
> +	case V4L2_PIX_FMT_AV1_FRAME:
> +		dprintk(ctx->dev, "Pixfmt is AV1F\n");
> +		ctx->current_codec = VIVPU_CODEC_AV1;
> +		break;
> +	default:
> +		v4l2_err(&ctx->dev->v4l2_dev, "Unsupported src format %d\n",
> +			 ctx->src_fmt.fmt.pix.pixelformat);
> +		ret = -EINVAL;
> +		goto err;
> +	}
> +
> +	return 0;
> +
> +err:
> +	vivpu_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
> +	return ret;
> +}
> +
> +static void vivpu_stop_streaming(struct vb2_queue *vq)
> +{
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vq);
> +
> +	dprintk(ctx->dev, "Stop streaming\n");
> +	vivpu_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
> +}
> +
> +static void vivpu_buf_queue(struct vb2_buffer *vb)
> +{
> +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
> +}
> +
> +static void vivpu_buf_request_complete(struct vb2_buffer *vb)
> +{
> +	struct vivpu_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
> +}
> +
> +const struct vb2_ops vivpu_qops = {
> +	.queue_setup          = vivpu_queue_setup,
> +	.buf_out_validate     = vivpu_buf_out_validate,
> +	.buf_prepare          = vivpu_buf_prepare,
> +	.buf_queue            = vivpu_buf_queue,
> +	.start_streaming      = vivpu_start_streaming,
> +	.stop_streaming       = vivpu_stop_streaming,
> +	.wait_prepare         = vb2_ops_wait_prepare,
> +	.wait_finish          = vb2_ops_wait_finish,
> +	.buf_request_complete = vivpu_buf_request_complete,
> +};
> +
> +int vivpu_queue_init(void *priv, struct vb2_queue *src_vq,
> +		     struct vb2_queue *dst_vq)
> +{
> +	struct vivpu_ctx *ctx = priv;
> +	int ret;
> +
> +	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
> +	src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> +	src_vq->drv_priv = ctx;
> +	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> +	src_vq->ops = &vivpu_qops;
> +	src_vq->mem_ops = &vb2_vmalloc_memops;
> +	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> +	src_vq->lock = &ctx->vb_mutex;
> +	src_vq->supports_requests = true;
> +
> +	ret = vb2_queue_init(src_vq);
> +	if (ret)
> +		return ret;
> +
> +	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
> +	dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> +	dst_vq->drv_priv = ctx;
> +	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> +	dst_vq->ops = &vivpu_qops;
> +	dst_vq->mem_ops = &vb2_vmalloc_memops;
> +	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> +	dst_vq->lock = &ctx->vb_mutex;
> +
> +	return vb2_queue_init(dst_vq);
> +}
> +
> +int vivpu_request_validate(struct media_request *req)
> +{
> +	struct media_request_object *obj;
> +	struct vivpu_ctx *ctx = NULL;
> +	unsigned int count;
> +
> +	list_for_each_entry(obj, &req->objects, list) {
> +		struct vb2_buffer *vb;
> +
> +		if (vb2_request_object_is_buffer(obj)) {
> +			vb = container_of(obj, struct vb2_buffer, req_obj);
> +			ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +			break;
> +		}
> +	}
> +
> +	if (!ctx)
> +		return -ENOENT;
> +
> +	count = vb2_request_buffer_cnt(req);
> +	if (!count) {
> +		v4l2_err(&ctx->dev->v4l2_dev,
> +			 "No buffer was provided with the request\n");
> +		return -ENOENT;
> +	} else if (count > 1) {
> +		v4l2_err(&ctx->dev->v4l2_dev,
> +			 "More than one buffer was provided with the request\n");
> +		return -EINVAL;
> +	}
> +
> +	return vb2_request_validate(req);
> +}
> diff --git a/drivers/media/test-drivers/vivpu/vivpu-video.h b/drivers/media/test-drivers/vivpu/vivpu-video.h
> new file mode 100644
> index 000000000000..6cf8c1570887
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu-video.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#ifndef _VIVPU_VIDEO_H_
> +#define _VIVPU_VIDEO_H_
> +#include <media/v4l2-mem2mem.h>
> +
> +#include "vivpu.h"
> +
> +extern const struct v4l2_ioctl_ops vivpu_ioctl_ops;
> +int vivpu_queue_init(void *priv, struct vb2_queue *src_vq,
> +		     struct vb2_queue *dst_vq);
> +
> +void vivpu_set_default_format(struct vivpu_ctx *ctx);
> +int vivpu_request_validate(struct media_request *req);
> +
> +#endif /* _VIVPU_VIDEO_H_ */
> diff --git a/drivers/media/test-drivers/vivpu/vivpu.h b/drivers/media/test-drivers/vivpu/vivpu.h
> new file mode 100644
> index 000000000000..89b993c460c1
> --- /dev/null
> +++ b/drivers/media/test-drivers/vivpu/vivpu.h
> @@ -0,0 +1,119 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless VPU example device for uAPI development purposes.
> + *
> + * A userspace implementation can use vivpu to run a decoding loop even
> + * when no hardware is available or when the kernel uAPI for the codec
> + * has not been upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by the
> + * Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version
> + */
> +
> +#ifndef _VIVPU_H_
> +#define _VIVPU_H_
> +
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-device.h>
> +#include <media/tpg/v4l2-tpg.h>
> +
> +#define VIVPU_NAME		"vivpu"
> +#define VIVPU_M2M_NQUEUES	2
> +
> +extern const unsigned int vivpu_src_default_w;
> +extern const unsigned int vivpu_src_default_h;
> +extern const unsigned int vivpu_src_default_depth;
> +extern unsigned int vivpu_transtime;
> +
> +struct vivpu_coded_format_desc {
> +	u32 pixelformat;
> +	struct v4l2_frmsize_stepwise frmsize;
> +	unsigned int num_decoded_fmts;
> +	const u32 *decoded_fmts;
> +};
> +
> +enum {
> +	V4L2_M2M_SRC = 0,
> +	V4L2_M2M_DST = 1,
> +};
> +
> +extern unsigned int vivpu_debug;
> +#define dprintk(dev, fmt, arg...) \
> +	v4l2_dbg(1, vivpu_debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
> +
> +struct vivpu_q_data {
> +	unsigned int		sequence;
> +};
> +
> +struct vivpu_dev {
> +	struct v4l2_device	v4l2_dev;
> +	struct video_device	vfd;
> +#ifdef CONFIG_MEDIA_CONTROLLER
> +	struct media_device	mdev;
> +#endif
> +
> +	struct mutex		dev_mutex;
> +
> +	struct v4l2_m2m_dev	*m2m_dev;
> +};
> +
> +enum vivpu_codec {
> +	VIVPU_CODEC_AV1,
> +};
> +
> +struct vivpu_ctx {
> +	struct v4l2_fh		fh;
> +	struct vivpu_dev	*dev;
> +	struct v4l2_ctrl_handler hdl;
> +	struct v4l2_ctrl	**ctrls;
> +
> +	struct mutex		vb_mutex;
> +
> +	struct vivpu_q_data	q_data[VIVPU_M2M_NQUEUES];
> +	enum   vivpu_codec	current_codec;
> +
> +	const struct vivpu_coded_format_desc *coded_format_desc;
> +
> +	struct v4l2_format	src_fmt;
> +	struct v4l2_format	dst_fmt;
> +};
> +
> +struct vivpu_control {
> +	struct v4l2_ctrl_config cfg;
> +};
> +
> +static inline struct vivpu_ctx *vivpu_file_to_ctx(struct file *file)
> +{
> +	return container_of(file->private_data, struct vivpu_ctx, fh);
> +}
> +
> +static inline struct vivpu_ctx *vivpu_v4l2fh_to_ctx(struct v4l2_fh *v4l2_fh)
> +{
> +	return container_of(v4l2_fh, struct vivpu_ctx, fh);
> +}
> +
> +void *vivpu_find_control_data(struct vivpu_ctx *ctx, u32 id);
> +struct v4l2_ctrl *vivpu_find_control(struct vivpu_ctx *ctx, u32 id);
> +u32 vivpu_control_num_elems(struct vivpu_ctx *ctx, u32 id);
> +
> +#endif /* _VIVPU_H_ */
> 

This looks quite interesting. I did wonder if it would make sense to improve
the validation of the tile information: is everything within range, no strange
values, etc. The more you can validate here, the more useful this driver
becomes for testing userspace decoders.

Another idea would be to add the test pattern generator to create 'real' images.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 1/2] media: Add AV1 uAPI
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
  2021-08-11  0:57   ` kernel test robot
  2021-09-02 15:10   ` Hans Verkuil
@ 2022-01-28 15:45   ` Nicolas Dufresne
  2022-02-02 15:13   ` Nicolas Dufresne
  3 siblings, 0 replies; 14+ messages in thread
From: Nicolas Dufresne @ 2022-01-28 15:45 UTC (permalink / raw)
  To: daniel.almeida, stevecho, shawnku, tzungbi, mcasas, nhebert,
	abodenha, randy.wu, yunfei.dong, gustavo.padovan,
	andrzej.pietrasiewicz, tomeu.vizoso, nick.milner, xiaoyong.lu,
	mchehab, hverkuil-cisco
  Cc: linux-media, linux-kernel, kernel

Le mardi 10 août 2021 à 19:05 -0300, daniel.almeida@collabora.com a écrit :
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> This patch adds the  AOMedia Video 1 (AV1) kernel uAPI.
> 
> This design is based on currently available AV1 API implementations and
> aims to support the development of AV1 stateless video codecs
> on Linux.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
>  .../userspace-api/media/v4l/biblio.rst        |   10 +
>  .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
>  .../media/v4l/pixfmt-compressed.rst           |   21 +
>  .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
>  .../media/v4l/vidioc-queryctrl.rst            |   54 +
>  .../media/videodev2.h.rst.exceptions          |    9 +
>  drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
>  drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
>  drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
>  include/media/v4l2-ctrls.h                    |   12 +
>  include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
>  include/uapi/linux/videodev2.h                |   15 +
>  12 files changed, 2586 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/userspace-api/media/v4l/biblio.rst b/Documentation/userspace-api/media/v4l/biblio.rst
> index 7b8e6738ff9e..7061144d10bb 100644
> --- a/Documentation/userspace-api/media/v4l/biblio.rst
> +++ b/Documentation/userspace-api/media/v4l/biblio.rst
> @@ -417,3 +417,13 @@ VP8
>  :title:     RFC 6386: "VP8 Data Format and Decoding Guide"
>  
>  :author:    J. Bankoski et al.
> +
> +.. _av1:
> +
> +AV1
> +===
> +
> +
> +:title:     AV1 Bitstream & Decoding Process Specification
> +
> +:author:    Peter de Rivaz, Argon Design Ltd, Jack Haughton, Argon Design Ltd
> diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> index 72f5e85b4f34..960500651e4b 100644
> --- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> +++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> @@ -1458,3 +1458,1271 @@ FWHT Flags
>  .. raw:: latex
>  
>      \normalsize
> +
> +
> +.. _v4l2-codec-stateless-av1:
> +
> +``V4L2_CID_STATELESS_AV1_SEQUENCE (struct)``
> +    Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
> +    in :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_sequence
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_sequence
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u32
> +      - ``flags``
> +      - See :ref:`AV1 Sequence Flags <av1_sequence_flags>`.
> +    * - __u8
> +      - ``seq_profile``
> +      - Specifies the features that can be used in the coded video sequence.
> +    * - __u8
> +      - ``order_hint_bits``
> +      - Specifies the number of bits used for the order_hint field at each frame.
> +    * - __u8
> +      - ``bit_depth``
> +      - The bit depth to use for the sequence, as described in section 5.5.2
> +        "Color config syntax" in :ref:`av1`.
> +
> +
> +.. _av1_sequence_flags:
> +
> +``AV1 Sequence Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE``
> +      - 0x00000001
> +      - If set, specifies that the coded video sequence contains only one coded
> +	frame. If not set, specifies that the coded video sequence contains one or
> +	more coded frames.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK``
> +      - 0x00000002
> +      - If set, indicates that superblocks contain 128x128 luma samples.
> +	When equal to 0, it indicates that superblocks contain 64x64 luma samples.
> +	(The number of contained chroma samples depends on subsampling_x and
> +	subsampling_y).
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA``
> +      - 0x00000004
> +      - If set, specifies that the use_filter_intra syntax element may be
> +	present. If not set, specifies that the use_filter_intra syntax element will
> +	not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER``
> +      - 0x00000008
> +      - Specifies whether the intra edge filtering process should be enabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND``
> +      - 0x00000010
> +      - If set, specifies that the mode info for inter blocks may contain the
> +	syntax element interintra. If not set, specifies that the syntax element
> +	interintra will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND``
> +      - 0x00000020
> +      - If set, specifies that the mode info for inter blocks may contain the
> +	syntax element compound_type. If not set, specifies that the syntax element
> +	compound_type will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION``
> +      - 0x00000040
> +      - If set, indicates that the allow_warped_motion syntax element may be
> +	present. If not set, indicates that the allow_warped_motion syntax element
> +	will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER``
> +      - 0x00000080
> +      - If set, indicates that the inter prediction filter type may be specified
> +	independently in the horizontal and vertical directions. If the flag is
> +	equal to 0, only one filter type may be specified, which is then used in
> +	both directions.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT``
> +      - 0x00000100
> +      - If set, indicates that tools based on the values of order hints may be
> +	used. If not set, indicates that tools based on order hints are disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP``
> +      - 0x00000200
> +      - If set, indicates that the distance weights process may be used for
> +	inter prediction.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS``
> +      - 0x00000400
> +      - If set, indicates that the use_ref_frame_mvs syntax element may be
> +	present. If not set, indicates that the use_ref_frame_mvs syntax element
> +	will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES``
> +      - 0x00000800
> +      - If set, specifies that the use_superres syntax element will be present
> +	in the uncompressed header. If not set, specifies that the use_superres
> +	syntax element will not be present (instead use_superres will be set to 0
> +	in the uncompressed header without being read).
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF``
> +      - 0x00001000
> +      - If set, specifies that cdef filtering may be enabled. If not set,
> +	specifies that cdef filtering is disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION``
> +      - 0x00002000
> +      - If set, specifies that loop restoration filtering may be enabled. If not
> +	set, specifies that loop restoration filtering is disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME``
> +      - 0x00004000
> +      - If set, indicates that the video does not contain U and V color planes.
> +	If not set, indicates that the video contains Y, U, and V color planes.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE``
> +      - 0x00008000
> +      - If set, signals full swing representation. If not set, signals studio
> +	swing representation.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X``
> +      - 0x00010000
> +      - Specify the chroma subsampling format.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y``
> +      - 0x00020000
> +      - Specify the chroma subsampling format.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT``
> +      - 0x00040000
> +      - Specifies whether film grain parameters are present in the coded video
> +	sequence.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q``
> +      - 0x00080000
> +      - If set, indicates that the U and V planes may have separate delta
> +	quantizer values. If not set, indicates that the U and V planes will share
> +	the same delta quantizer value.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_GROUP (struct)``
> +    Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
> +    v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start + 1
> +    instances of struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. See
> +    section 6.10.1 "General tile group OBU semantics" in :ref:`av1` for more
> +    details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_group
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_group
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Tile Group Flags <av1_tile_group_flags>`.
> +    * - __u8
> +      - ``tg_start``
> +      - Specifies the zero-based index of the first tile in the current tile
> +        group.
> +    * - __u8
> +      - ``tg_end``
> +      - Specifies the zero-based index of the last tile in the current tile
> +        group.
> +
> +.. _av1_tile_group_flags:
> +
> +``AV1 Tile Group Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT``
> +      - 0x00000001
> +      - Specifies whether tg_start and tg_end are present. If tg_start and
> +	tg_end are not present, this tile group covers the entire frame.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (struct)``
> +    Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
> +    MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
> +    v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
> +    tile_col. See section 6.10.1 "General tile group OBU semantics" in
> +    :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_group_entry
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_group_entry
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u32
> +      - ``tile_offset``
> +      - Offset from the OBU data, i.e. where the coded tile data actually starts.
> +    * - __u32
> +      - ``tile_size``
> +      - Specifies the size in bytes of the coded tile. Equivalent to "TileSize"
> +        in :ref:`av1`.
> +    * - __u32
> +      - ``tile_row``
> +      - Specifies the row of the current tile. Equivalent to "TileRow" in
> +        :ref:`av1`.
> +    * - __u32
> +      - ``tile_col``
> +      - Specifies the column of the current tile. Equivalent to "TileColumn" in
> +        :ref:`av1`.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_LIST (struct)``
> +    Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
> +    in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
> +    most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
> +    will refer to "tile_count_minus_1" + 1 instances of struct
> +    v4l2_ctrl_av1_tile_list_entry.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_list
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_list
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``output_frame_width_in_tiles_minus_1``
> +      - This field plus one is the width of the output frame, in tile units.
> +    * - __u8
> +      - ``output_frame_height_in_tiles_minus_1``
> +      - This field plus one is the height of the output frame, in tile units.
> +    * - __u8
> +      - ``tile_count_minus_1``
> +      - This field plus one is the number of tile_list_entry in the list.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (struct)``
> +    Represents a tile list entry as seen in an AV1 Tile List OBU. See section
> +    6.11.2. "Tile list entry semantics" of :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_list_entry
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_list_entry
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``anchor_frame_idx``
> +      - The index into an array AnchorFrames of the frames that the tile uses
> +	for prediction.
> +    * - __u8
> +      - ``anchor_tile_row``
> +      - The row coordinate of the tile in the frame to which it belongs, in
> +        tile units.
> +    * - __u8
> +      - ``anchor_tile_col``
> +      - The column coordinate of the tile in the frame to which it belongs, in
> +        tile units.
> +    * - __u8
> +      - ``tile_data_size_minus_1``
> +      - This field plus one is the size of the coded tile data in bytes.
> +
> +.. c:type:: v4l2_av1_film_grain
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_film_grain
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Film Grain Flags <av1_film_grain_flags>`.
> +    * - __u16
> +      - ``grain_seed``
> +      - Specifies the starting value for the pseudo-random numbers used during
> +	film grain synthesis.
> +    * - __u8
> +      - ``film_grain_params_ref_idx``
> +      - Indicates which reference frame contains the film grain parameters to be
> +	used for this frame.
> +    * - __u8
> +      - ``num_y_points``
> +      - Specifies the number of points for the piece-wise linear scaling
> +	function of the luma component.
> +    * - __u8
> +      - ``point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS]``
> +      - Represents the x (luma value) coordinate for the i-th point
> +        of the piecewise linear scaling function for luma component. The values
> +        are signaled on the scale of 0..255. (In case of 10 bit video, these
> +        values correspond to luma values divided by 4. In case of 12 bit video,
> +        these values correspond to luma values divided by 16).
> +    * - __u8
> +      - ``point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS]``
> +      - Represents the scaling (output) value for the i-th point
> +	of the piecewise linear scaling function for luma component.
> +    * - __u8
> +      - ``num_cb_points``
> +      - Specifies the number of points for the piece-wise linear scaling
> +        function of the cb component.
> +    * - __u8
> +      - ``point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS]``
> +      - Represents the x coordinate for the i-th point of the
> +        piece-wise linear scaling function for cb component. The values are
> +        signaled on the scale of 0..255.
> +    * - __u8
> +      - ``point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS]``
> +      - Represents the scaling (output) value for the i-th point of the
> +        piecewise linear scaling function for cb component.
> +    * - __u8
> +      - ``num_cr_points``
> +      - Represents the number of points for the piece-wise
> +        linear scaling function of the cr component.
> +    * - __u8
> +      - ``point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS]``
> +      - Represents the x coordinate for the i-th point of the
> +        piece-wise linear scaling function for cr component. The values are
> +        signaled on the scale of 0..255.
> +    * - __u8
> +      - ``point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS]``
> +      - Represents the scaling (output) value for the i-th point of the
> +        piecewise linear scaling function for cr component.
> +    * - __u8
> +      - ``grain_scaling_minus_8``
> +      - Represents the shift – 8 applied to the values of the chroma component.
> +        The grain_scaling_minus_8 can take values of 0..3 and determines the
> +        range and quantization step of the standard deviation of film grain.
> +    * - __u8
> +      - ``ar_coeff_lag``
> +      - Specifies the number of auto-regressive coefficients for luma and
> +	chroma.
> +    * - __u8
> +      - ``ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the Y plane.
> +    * - __u8
> +      - ``ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the U plane.
> +    * - __u8
> +      - ``ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the V plane.
> +    * - __u8
> +      - ``ar_coeff_shift_minus_6``
> +      - Specifies the range of the auto-regressive coefficients. Values of 0,
> +        1, 2, and 3 correspond to the ranges for auto-regressive coefficients of
> +        [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25, 0.25) respectively.
> +    * - __u8
> +      - ``grain_scale_shift``
> +      - Specifies how much the Gaussian random numbers should be scaled down
> +	during the grain synthesis process.
> +    * - __u8
> +      - ``cb_mult``
> +      - Represents a multiplier for the cb component used in derivation of the
> +	input index to the cb component scaling function.
> +    * - __u8
> +      - ``cb_luma_mult``
> +      - Represents a multiplier for the average luma component used in
> +	derivation of the input index to the cb component scaling function.
> +    * - __u16
> +      - ``cb_offset``
> +      - Represents an offset used in derivation of the input index to the
> +	cb component scaling function.
> +    * - __u8
> +      - ``cr_mult``
> +      - Represents a multiplier for the cr component used in derivation of the
> +        input index to the cr component scaling function.
> +    * - __u8
> +      - ``cr_luma_mult``
> +      - Represents a multiplier for the average luma component used in
> +        derivation of the input index to the cr component scaling function.
> +    * - __u16
> +      - ``cr_offset``
> +      - Represents an offset used in derivation of the input index to the
> +        cr component scaling function.
> +
> +.. _av1_film_grain_flags:
> +
> +``AV1 Film Grain Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN``
> +      - 0x00000001
> +      - If set, specifies that film grain should be added to this frame. If not
> +	set, specifies that film grain should not be added.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN``
> +      - 0x00000002
> +      - If set, means that a new set of parameters should be sent. If not set,
> +	specifies that the previous set of parameters should be used.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA``
> +      - 0x00000004
> +      - If set, specifies that the chroma scaling is inferred from the luma
> +	scaling.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP``
> +      - 0x00000008
> +      - If set, indicates that the overlap between film grain blocks shall be
> +	applied. If not set, indicates that the overlap between film grain blocks
> +	shall not be applied.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE``
> +      - 0x00000010
> +      - If set, indicates that clipping to the restricted (studio) range shall
> +        be applied to the sample values after adding the film grain (see the
> +        semantics for color_range for an explanation of studio swing). If not
> +        set, indicates that clipping to the full range shall be applied to the
> +        sample values after adding the film grain.
> +
> +.. c:type:: v4l2_av1_warp_model
> +
> +AV1 Warp Model as described in section 3 "Symbols and abbreviated terms" of
> +:ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_WARP_MODEL_IDENTITY``
> +      - 0
> +      - Warp model is just an identity transform.
> +    * - ``V4L2_AV1_WARP_MODEL_TRANSLATION``
> +      - 1
> +      - Warp model is a pure translation.
> +    * - ``V4L2_AV1_WARP_MODEL_ROTZOOM``
> +      - 2
> +      - Warp model is a rotation + symmetric zoom + translation.
> +    * - ``V4L2_AV1_WARP_MODEL_AFFINE``
> +      - 3
> +      - Warp model is a general affine transform.
> +
> +.. c:type:: v4l2_av1_reference_frame
> +
> +AV1 Reference Frames as described in section 6.10.24. "Ref frames semantics"
> +of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_REF_INTRA_FRAME``
> +      - 0
> +      - Intra Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST_FRAME``
> +      - 1
> +      - Last Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST2_FRAME``
> +      - 2
> +      - Last2 Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST3_FRAME``
> +      - 3
> +      - Last3 Frame Reference.
> +    * - ``V4L2_AV1_REF_GOLDEN_FRAME``
> +      - 4
> +      - Golden Frame Reference.
> +    * - ``V4L2_AV1_REF_BWDREF_FRAME``
> +      - 5
> +      - BWD Frame Reference.
> +    * - ``V4L2_AV1_REF_ALTREF2_FRAME``
> +      - 6
> +      - ALTREF2 Frame Reference.
> +    * - ``V4L2_AV1_REF_ALTREF_FRAME``
> +      - 7
> +      - ALTREF Frame Reference.
> +    * - ``V4L2_AV1_NUM_REF_FRAMES``
> +      - 8
> +      - Total number of reference frames.
> +
> +.. c:type:: v4l2_av1_global_motion
> +
> +AV1 Global Motion parameters as described in section 6.8.17
> +"Global motion params semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_global_motion
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
> +      - A bitfield containing the flags per reference frame. See
> +        :ref:`AV1 Global Motion Flags <av1_global_motion_flags>` for more
> +        details.
> +    * - enum :c:type:`v4l2_av1_warp_model`
> +      - ``type[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
> +      - The type of global motion transform used.
> +    * - __u32
> +      - ``params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6]``
> +      - This field has the same meaning as "gm_params" in :ref:`av1`.
> +    * - __u8
> +      - ``invalid``
> +      - Bitfield indicating whether the global motion params are invalid for a
> +        given reference frame. See section 7.11.3.6 "Setup shear process" and
> +        the variable "warpValid" in :ref:`av1`. Use
> +        V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to create a suitable mask.
> +
> +.. _av1_global_motion_flags:
> +
> +``AV1 Global Motion Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL``
> +      - 0x00000001
> +      - Specifies whether global motion parameters are present for a particular
> +        reference frame.
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM``
> +      - 0x00000002
> +      - Specifies whether a particular reference frame uses rotation and zoom
> +        global motion.
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION``
> +      - 0x00000004
> +      - Specifies whether a particular reference frame uses translation global
> +        motion.
> +
> +.. c:type:: v4l2_av1_frame_restoration_type
> +
> +AV1 Frame Restoration Type.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FRAME_RESTORE_NONE``
> +      - 0
> +      - No filtering is applied.
> +    * - ``V4L2_AV1_FRAME_RESTORE_WIENER``
> +      - 1
> +      - Wiener filter process is invoked.
> +    * - ``V4L2_AV1_FRAME_RESTORE_SGRPROJ``
> +      - 2
> +      - Self guided filter process is invoked.
> +    * - ``V4L2_AV1_FRAME_RESTORE_SWITCHABLE``
> +      - 3
> +      - Restoration filter is switchable.
> +
> +.. c:type:: v4l2_av1_loop_restoration
> +
> +AV1 Loop Restoration as described in section 6.10.15 "Loop restoration params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_loop_restoration
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - :c:type:`v4l2_av1_frame_restoration_type`
> +      - ``frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX]``
> +      - Specifies the type of restoration used for each plane.
> +    * - __u8
> +      - ``lr_unit_shift``
> +      - Specifies if the luma restoration size should be halved.
> +    * - __u8
> +      - ``lr_uv_shift``
> +      - Specifies if the chroma size should be half the luma size.
> +    * - __u8
> +      - ``loop_restoration_size[V4L2_AV1_NUM_PLANES_MAX]``
> +      - Specifies the size of loop restoration units in units of samples in the
> +        current plane.
> +
> +.. c:type:: v4l2_av1_cdef
> +
> +AV1 CDEF params semantics as described in section 6.10.14. "CDEF params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_cdef
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``damping_minus_3``
> +      - Controls the amount of damping in the deringing filter.
> +    * - __u8
> +      - ``bits``
> +      - Specifies the number of bits needed to specify which CDEF filter to
> +        apply.
> +    * - __u8
> +      - ``y_pri_strength[V4L2_AV1_CDEF_MAX]``
> +      -  Specifies the strength of the primary filter.
> +    * - __u8
> +      - ``y_sec_strength[V4L2_AV1_CDEF_MAX]``
> +      -  Specifies the strength of the secondary filter.
> +    * - __u8
> +      - ``uv_pri_strength[V4L2_AV1_CDEF_MAX]``
> +      -  Specifies the strength of the primary filter.
> +    * - __u8
> +      - ``uv_sec_strength[V4L2_AV1_CDEF_MAX]``
> +      -  Specifies the strength of the secondary filter.
> +
> +.. c:type:: v4l2_av1_segment_feature
> +
> +AV1 segment features as described in section 3 "Symbols and abbreviated terms"
> +of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEG_LVL_ALT_Q``
> +      - 0
> +      - Index for quantizer segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_ALT_LF_Y_V``
> +      - 1
> +      - Index for vertical luma loop filter segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_FRAME``
> +      - 5
> +      - Index for reference frame segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_SKIP``
> +      - 6
> +      - Index for skip segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_GLOBALMV``
> +      - 7
> +      - Index for global mv feature.
> +    * - ``V4L2_AV1_SEG_LVL_MAX``
> +      - 8
> +      - Number of segment features.
> +
> +.. c:type:: v4l2_av1_segmentation
> +
> +AV1 Segmentation params as defined in section 6.8.13. "Segmentation params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_segmentation
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Segmentation Flags <av1_segmentation_flags>`
> +    * - __u8
> +      - ``feature_enabled[V4L2_AV1_MAX_SEGMENTS]``
> +      - Bitmask defining which features are enabled in each segment. Use
> +        V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
> +    * - __u16
> +      - ``feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX]``
> +      -  Data attached to each feature. Data entry is only valid if the feature
> +         is enabled.
> +    * - __u8
> +      - ``last_active_seg_id``
> +      -  Indicates the highest numbered segment id that has some
> +         enabled feature. This is used when decoding the segment id to only decode
> +         choices corresponding to used segments.
> +
> +.. _av1_segmentation_flags:
> +
> +``AV1 Segmentation Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_ENABLED``
> +      - 0x00000001
> +      - If set, indicates that this frame makes use of the segmentation tool. If
> +        not set, indicates that the frame does not use segmentation.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP``
> +      - 0x00000002
> +      - If set, indicates that the segmentation map is updated during the
> +        decoding of this frame. If not set, indicates that the segmentation map
> +        from the previous frame is used.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE``
> +      - 0x00000004
> +      - If set, indicates that the updates to the segmentation map are coded
> +        relative to the existing segmentation map. If not set, indicates that
> +        the new segmentation map is coded without reference to the existing
> +        segmentation map.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA``
> +      - 0x00000008
> +      - If set, indicates that new parameters are about to be specified for
> +        each segment. If not set, indicates that the segmentation parameters
> +        should keep their existing values.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP``
> +      - 0x00000010
> +      - If set, indicates that the segment id will be read before the skip
> +        syntax element. If not set, indicates that the skip syntax element will
> +        be read first.
> +
> +.. c:type:: v4l2_av1_loop_filter
> +
> +AV1 Loop filter params as defined in section 6.8.10. "Loop filter semantics" of
> +:ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_loop_filter
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Loop Filter flags <av1_loop_filter_flags>` for more details.
> +    * - __u8
> +      - ``level[4]``
> +      - an array containing loop filter strength values. Different loop
> +        filter strength values from the array are used depending on the image
> +        plane being filtered, and the edge direction (vertical or horizontal)
> +        being filtered.
> +    * - __u8
> +      - ``sharpness``
> +      - indicates the sharpness level. The loop_filter_level and
> +        loop_filter_sharpness together determine when a block edge is filtered,
> +        and by how much the filtering can change the sample values. The loop
> +        filter process is described in section 7.14 of :ref:`av1`.
> +    * - __u8
> +      - ``ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME]``

As per "5.9.11. Loop filter params syntax", this should be signed, so __s8. This
actually triggers a bug, as it fails with the default values during validation:

  ref_deltas[N] < 63
  255 (aka -1) < 63

> +      - contains the adjustment needed for the filter level based on the
> +        chosen reference frame. If this syntax element is not present, it
> +        maintains its previous value.
> +    * - __u8
> +      - ``mode_deltas[2]``

Same here: this should be __s8.

> +      - contains the adjustment needed for the filter level based on
> +        the chosen mode. If this syntax element is not present, it maintains its
> +        previous value.
> +    * - __u8
> +      - ``delta_lf_res``
> +      - specifies the left shift which should be applied to decoded loop filter
> +        delta values.
> +    * - __u8
> +      - ``delta_lf_multi``
> +      - a value equal to 1 specifies that separate loop filter
> +        deltas are sent for horizontal luma edges, vertical luma edges, the U
> +        edges, and the V edges. A value of delta_lf_multi equal to 0 specifies
> +        that the same loop filter delta is used for all edges.
> +
> +.. _av1_loop_filter_flags:
> +
> +``AV1 Loop Filter Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED``
> +      - 0x00000001
> +      - If set, means that the filter level depends on the mode and reference
> +        frame used to predict a block. If not set, means that the filter level
> +        does not depend on the mode and reference frame.
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE``
> +      - 0x00000002
> +      - If set, means that additional syntax elements are present that specify
> +        which mode and reference frame deltas are to be updated. If not set,
> +        means that these syntax elements are not present.
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT``
> +      - 0x00000004
> +      - Specifies whether loop filter delta values are present.
> +
> +.. c:type:: v4l2_av1_quantization
> +
> +AV1 Quantization params as defined in section 6.8.11 "Quantization params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_quantization
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Quantization flags <av1_quantization_flags>` for more details.
> +    * - __u8
> +      - ``base_q_idx``
> +      - Indicates the base frame qindex. This is used for Y AC coefficients and
> +        as the base value for the other quantizers.
> +    * - __u8
> +      - ``delta_q_y_dc``
> +      - Indicates the Y DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_u_dc``
> +      - Indicates the U DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_u_ac``
> +      - Indicates the U AC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_v_dc``
> +      - Indicates the V DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_v_ac``
> +      - Indicates the V AC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``qm_y``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        luma plane decoding.
> +    * - __u8
> +      - ``qm_u``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        chroma U plane decoding.
> +    * - __u8
> +      - ``qm_v``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        chroma V plane decoding.
> +    * - __u8
> +      - ``delta_q_res``
> +      - Specifies the left shift which should be applied to decoded quantizer
> +        index delta values.
> +
> +.. _av1_quantization_flags:
> +
> +``AV1 Quantization Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA``
> +      - 0x00000001
> +      - If set, indicates that the U and V delta quantizer values are coded
> +        separately. If not set, indicates that the U and V delta quantizer
> +        values share a common value.
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX``
> +      - 0x00000002
> +      - If set, specifies that the quantizer matrix will be used to compute
> +        quantizers.
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT``
> +      - 0x00000004
> +      - Specifies whether quantizer index delta values are present.
> +
> +.. c:type:: v4l2_av1_tile_info
> +
> +AV1 Tile info as defined in section 6.8.14. "Tile info semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_tile_info
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Tile Info flags <av1_tile_info_flags>` for more details.
> +    * - __u32
> +      - ``mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1]``
> +      - An array specifying the start column (in units of 4x4 luma
> +        samples) for each tile across the image.
> +    * - __u32
> +      - ``mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1]``
> +      - An array specifying the start row (in units of 4x4 luma
> +        samples) for each tile across the image.
> +    * - __u32
> +      - ``width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS]``
> +      - Specifies the width of a tile minus 1 in units of superblocks.
> +    * - __u32
> +      - ``height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS]``
> +      - Specifies the height of a tile minus 1 in units of superblocks.
> +    * - __u8
> +      - ``tile_size_bytes``
> +      - Specifies the number of bytes needed to code each tile size.
> +    * - __u8
> +      - ``context_update_tile_id``
> +      - Specifies which tile to use for the CDF update.
> +    * - __u8
> +      - ``tile_rows``
> +      - Specifies the number of tiles down the frame.
> +    * - __u8
> +      - ``tile_cols``
> +      - Specifies the number of tiles across the frame.
> +
> +.. _av1_tile_info_flags:
> +
> +``AV1 Tile Info Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING``
> +      - 0x00000001
> +      - If set, means that the tiles are uniformly spaced across the frame. (In
> +        other words, all tiles are the same size except for the ones at the
> +        right and bottom edge which can be smaller). If not set means that the
> +        tile sizes are coded.
> +
> +.. c:type:: v4l2_av1_frame_type
> +
> +AV1 Frame Type
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_KEY_FRAME``
> +      - 0
> +      - Key frame.
> +    * - ``V4L2_AV1_INTER_FRAME``
> +      - 1
> +      - Inter frame.
> +    * - ``V4L2_AV1_INTRA_ONLY_FRAME``
> +      - 2
> +      - Intra-only frame.
> +    * - ``V4L2_AV1_SWITCH_FRAME``
> +      - 3
> +      - Switch frame.
> +
> +.. c:type:: v4l2_av1_interpolation_filter
> +
> +AV1 Interpolation Filter
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP``
> +      - 0
> +      - Eight tap filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH``
> +      - 1
> +      - Eight tap smooth filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP``
> +      - 2
> +      - Eight tap sharp filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_BILINEAR``
> +      - 3
> +      - Bilinear filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE``
> +      - 4
> +      - Filter selection is signaled at the block level.
> +
> +.. c:type:: v4l2_av1_tx_mode
> +
> +AV1 Tx mode as described in section 6.8.21 "TX mode semantics" of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TX_MODE_ONLY_4X4``
> +      - 0
> +      -  The inverse transform will use only 4x4 transforms.
> +    * - ``V4L2_AV1_TX_MODE_LARGEST``
> +      - 1
> +      - The inverse transform will use the largest transform size that fits
> +        inside the block.
> +    * - ``V4L2_AV1_TX_MODE_SELECT``
> +      - 2
> +      - The choice of transform size is specified explicitly for each block.
> +
> +``V4L2_CID_STATELESS_AV1_FRAME_HEADER (struct)``
> +    Represents a Frame Header OBU. See section 6.8 "Frame header OBU
> +    semantics" of :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_frame_header
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_frame_header
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - struct :c:type:`v4l2_av1_tile_info`
> +      - ``tile_info``
> +      - Tile info
> +    * - struct :c:type:`v4l2_av1_quantization`
> +      - ``quantization``
> +      - Quantization params
> +    * - struct :c:type:`v4l2_av1_segmentation`
> +      - ``segmentation``
> +      - Segmentation params
> +    * - struct :c:type:`v4l2_av1_loop_filter`
> +      - ``loop_filter``
> +      - Loop filter params
> +    * - struct :c:type:`v4l2_av1_cdef`
> +      - ``cdef``
> +      - CDEF params
> +    * - struct :c:type:`v4l2_av1_loop_restoration`
> +      - ``loop_restoration``
> +      - Loop restoration params
> +    * - struct :c:type:`v4l2_av1_global_motion`
> +      - ``global_motion``
> +      - Global motion params
> +    * - struct :c:type:`v4l2_av1_film_grain`
> +      - ``film_grain``
> +      - Film grain params
> +    * - __u32
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Frame Header flags <av1_frame_header_flags>` for more details.
> +    * - enum :c:type:`v4l2_av1_frame_type`
> +      - ``frame_type``
> +      - Specifies the AV1 frame type
> +    * - __u32
> +      - ``order_hint``
> +      - Specifies OrderHintBits least significant bits of the expected output
> +        order for this frame.
> +    * - __u8
> +      - ``superres_denom``
> +      - The denominator for the upscaling ratio.
> +    * - __u32
> +      - ``upscaled_width``
> +      - The upscaled width.
> +    * - enum :c:type:`v4l2_av1_interpolation_filter`
> +      - ``interpolation_filter``
> +      - Specifies the filter selection used for performing inter prediction.
> +    * - enum :c:type:`v4l2_av1_tx_mode`
> +      - ``tx_mode``
> +      - Specifies how the transform size is determined.
> +    * - __u32
> +      - ``frame_width_minus_1``
> +      - Add 1 to get the frame's width.
> +    * - __u32
> +      - ``frame_height_minus_1``
> +      - Add 1 to get the frame's height.
> +    * - __u16
> +      - ``render_width_minus_1``
> +      - Add 1 to get the render width of the frame in luma samples.
> +    * - __u16
> +      - ``render_height_minus_1``
> +      - Add 1 to get the render height of the frame in luma samples.
> +    * - __u32
> +      - ``current_frame_id``
> +      - Specifies the frame id number for the current frame. Frame
> +        id numbers are additional information that do not affect the decoding
> +        process, but provide decoders with a way of detecting missing reference
> +        frames so that appropriate action can be taken.
> +    * - __u8
> +      - ``primary_ref_frame``
> +      - Specifies which reference frame contains the CDF values and other state
> +        that should be loaded at the start of the frame.
> +    * - __u8
> +      - ``buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS]``
> +      - Specifies the frame removal time in units of DecCT clock ticks counted
> +        from the removal time of the last random access point for operating point
> +        opNum.
> +    * - __u8
> +      - ``refresh_frame_flags``
> +      - Contains a bitmask that specifies which reference frame slots will be
> +        updated with the current frame after it is decoded.
> +    * - __u32
> +      - ``ref_order_hint[V4L2_AV1_NUM_REF_FRAMES]``
> +      - Specifies the expected output order hint for each reference frame.
> +    * - __s8
> +      - ``last_frame_idx``
> +      - Specifies the reference frame to use for LAST_FRAME.
> +    * - __s8
> +      - ``gold_frame_idx``
> +      - Specifies the reference frame to use for GOLDEN_FRAME.
> +    * - __u64
> +      - ``reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES]``
> +      - the V4L2 timestamp for each of the reference frames enumerated in
> +        enum :c:type:`v4l2_av1_reference_frame`. The timestamp refers to the
> +        ``timestamp`` field in struct :c:type:`v4l2_buffer`. Use the
> +        :c:func:`v4l2_timeval_to_ns()` function to convert the struct
> +        :c:type:`timeval` in struct :c:type:`v4l2_buffer` to a __u64.
> +    * - __u8
> +      - ``skip_mode_frame[2]``
> +      - Specifies the frames to use for compound prediction when skip_mode is
> +        equal to 1.
> +
> +.. _av1_frame_header_flags:
> +
> +``AV1 Frame Header Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME``
> +      - 0x00000001
> +      - If set, specifies that this frame should be immediately output once
> +        decoded. If not set, specifies that this frame should not be immediately
> +        output. (It may be output later if a later uncompressed header uses
> +        show_existing_frame equal to 1).
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME``
> +      - 0x00000002
> +      - If set, specifies that the frame may be output using the
> +        show_existing_frame mechanism. If not set, specifies that this frame
> +        will not be output using the show_existing_frame mechanism.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE``
> +      - 0x00000004
> +      - Specifies whether error resilient mode is enabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE``
> +      - 0x00000008
> +      - Specifies whether the CDF update in the symbol decoding process should
> +        be disabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS``
> +      - 0x00000010
> +      - If set, indicates that intra blocks may use palette encoding. If not
> +        set, indicates that palette encoding is never used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV``
> +      - 0x00000020
> +      - If set, specifies that motion vectors will always be integers. If not
> +        set, specifies that motion vectors can contain fractional bits.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC``
> +      - 0x00000040
> +      - If set, indicates that intra block copy may be used in this frame. If
> +        not set, indicates that intra block copy is not allowed in this frame.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES``
> +      - 0x00000080
> +      - If set, indicates that upscaling is needed.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV``
> +      - 0x00000100
> +      - If set, specifies that motion vectors are specified to eighth pel
> +        precision. If not set, specifies that motion vectors are specified to
> +        quarter pel precision.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE``
> +      - 0x00000200
> +      - If not set, specifies that only the SIMPLE motion mode will be used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS``
> +      - 0x00000400
> +      - If set specifies that motion vector information from a previous frame
> +        can be used when decoding the current frame. If not set, specifies that
> +        this information will not be used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF``
> +      - 0x00000800
> +      - If set, indicates that the end of frame CDF update is disabled. If not
> +        set, indicates that the end of frame CDF update is enabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING``
> +      - 0x00001000
> +      - If set, means that the tiles are uniformly spaced across the frame. (In
> +        other words, all tiles are the same size except for the ones at the
> +        right and bottom edge which can be smaller). If not set, means that the
> +        tile sizes are coded.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION``
> +      - 0x00002000
> +      - If set, indicates that the syntax element motion_mode may be present, if
> +        not set, indicates that the syntax element motion_mode will not be
> +        present.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT``
> +      - 0x00004000
> +      - If set, specifies that the mode info for inter blocks contains the
> +        syntax element comp_mode that indicates whether to use single or
> +        compound reference prediction. If not set, specifies that all inter
> +        blocks will use single prediction.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET``
> +      - 0x00008000
> +      - If set, specifies that the frame is restricted to a reduced subset of
> +        the full set of transform types.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT``
> +      - 0x00010000
> +      - If set, specifies that the syntax element skip_mode will be present, if
> +        not set, specifies that skip_mode will not be used for this frame.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE``
> +      - 0x00020000
> +      - If set, specifies that the frame size will either be specified as the
> +        size of one of the reference frames, or computed from the
> +        frame_width_minus_1 and frame_height_minus_1 syntax elements. If not
> +        set, specifies that the frame size is equal to the size in the sequence
> +        header.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT``
> +      - 0x00040000
> +      - If set, specifies that buffer_removal_time is present. If not set,
> +        specifies that buffer_removal_time is not present.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING``
> +      - 0x00080000
> +      - If set, indicates that only two reference frames are explicitly
> +        signaled. If not set, indicates that all reference frames are explicitly
> +        signaled.
> diff --git a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> index 0ede39907ee2..c1951e890d6f 100644
> --- a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> +++ b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> @@ -223,6 +223,27 @@ Compressed Formats
>          through the ``V4L2_CID_STATELESS_FWHT_PARAMS`` control.
>  	See the :ref:`associated Codec Control ID <codec-stateless-fwht>`.
>  
> +    * .. _V4L2-PIX-FMT-AV1-FRAME:
> +
> +      - ``V4L2_PIX_FMT_AV1_FRAME``
> +      - 'AV1F'
> +      - AV1 parsed frame, including the frame header, as extracted from the container.
> +        This format is adapted for stateless video decoders that implement an AV1
> +        pipeline with the :ref:`stateless_decoder`. Metadata associated with the
> +        frame to decode is required to be passed through the
> +        ``V4L2_CID_STATELESS_AV1_SEQUENCE``, ``V4L2_CID_STATELESS_AV1_FRAME``,
> +        ``V4L2_CID_STATELESS_AV1_TILE_GROUP`` and
> +        ``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY`` controls.
> +        ``V4L2_CID_STATELESS_AV1_TILE_LIST`` and
> +        ``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY`` controls should be used if
> +        the decoder supports large scale tile decoding mode as signaled by the
> +        ``V4L2_CID_STATELESS_AV1_OPERATING_MODE`` control.
> +        See the :ref:`associated Codec Control IDs <v4l2-codec-stateless-av1>`.
> +        Exactly one output and one capture buffer must be provided for use with
> +        this pixel format. The output buffer must contain the appropriate number
> +        of macroblocks to decode a full corresponding frame to the matching
> +        capture buffer.
> +
>  .. raw:: latex
>  
>      \normalsize
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> index 2d6bc8d94380..50d4ed714123 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> @@ -233,6 +233,42 @@ still cause this situation.
>        - ``p_mpeg2_quantisation``
>        - A pointer to a struct :c:type:`v4l2_ctrl_mpeg2_quantisation`. Valid if this control is
>          of type ``V4L2_CTRL_TYPE_MPEG2_QUANTISATION``.
> +    * - struct :c:type:`v4l2_ctrl_av1_sequence` *
> +      - ``p_av1_sequence``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_sequence`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_SEQUENCE``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_group` *
> +      - ``p_av1_tile_group``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_group_entry` *
> +      - ``p_av1_tile_group_entry``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_list` *
> +      - ``p_av1_tile_list``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_LIST``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_list_entry` *
> +      - ``p_av1_tile_list_entry``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list_entry`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``.
> +    * - struct :c:type:`v4l2_ctrl_av1_frame_header` *
> +      - ``p_av1_frame_header``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_frame_header`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``.
> +    * - struct :c:type:`v4l2_ctrl_av1_profile` *
> +      - ``p_av1_profile``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_profile`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_PROFILE``.
> +    * - struct :c:type:`v4l2_ctrl_av1_level` *
> +      - ``p_av1_level``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_level`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_LEVEL``.
> +    * - struct :c:type:`v4l2_ctrl_av1_operating_mode` *
> +      - ``p_av1_operating_mode``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_operating_mode`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``.
>      * - struct :c:type:`v4l2_ctrl_hdr10_cll_info` *
>        - ``p_hdr10_cll``
>        - A pointer to a struct :c:type:`v4l2_ctrl_hdr10_cll_info`. Valid if this control is
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> index 819a70a26e18..73ff5311b7ae 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> @@ -507,6 +507,60 @@ See also the examples in :ref:`control`.
>        - n/a
>        - A struct :c:type:`v4l2_ctrl_hevc_decode_params`, containing HEVC
>  	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_SEQUENCE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_sequence`, containing AV1 Sequence OBU
> +	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group`, containing AV1 Tile Group
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group_entry`, containing AV1 Tile
> +	Group entry decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list`, containing AV1 Tile List
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list_entry`, containing AV1 Tile List
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_frame_header`, containing AV1 Frame/Frame
> +	Header OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_PROFILE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_profile`, indicating what AV1 profiles
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_LEVEL``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_level`, indicating what AV1 levels
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_operating_mode`, indicating what AV1
> +	operating modes an AV1 stateless decoder might support.
>  
>  .. raw:: latex
>  
> diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> index 2217b56c2686..088d4014e4c5 100644
> --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> @@ -146,6 +146,15 @@ replace symbol V4L2_CTRL_TYPE_H264_DECODE_PARAMS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_SPS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_PPS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_SLICE_PARAMS :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_SEQUENCE :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_FRAME_HEADER :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_PROFILE :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_LEVEL :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_OPERATING_MODE :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_AREA :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_FWHT_PARAMS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_VP8_FRAME :c:type:`v4l2_ctrl_type`
> diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
> index e7ef2d16745e..3f0e425278b3 100644
> --- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
> +++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
> @@ -283,6 +283,25 @@ static void std_log(const struct v4l2_ctrl *ctrl)
>  	case V4L2_CTRL_TYPE_MPEG2_PICTURE:
>  		pr_cont("MPEG2_PICTURE");
>  		break;
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		pr_cont("AV1_SEQUENCE");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		pr_cont("AV1_TILE_GROUP");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +		pr_cont("AV1_TILE_GROUP_ENTRY");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +		pr_cont("AV1_TILE_LIST");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		pr_cont("AV1_TILE_LIST_ENTRY");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		pr_cont("AV1_FRAME_HEADER");
> +		break;
> +
>  	default:
>  		pr_cont("unknown type %d", ctrl->type);
>  		break;
> @@ -317,6 +336,244 @@ static void std_log(const struct v4l2_ctrl *ctrl)
>  #define zero_reserved(s) \
>  	memset(&(s).reserved, 0, sizeof((s).reserved))
>  
> +static int validate_av1_quantization(struct v4l2_av1_quantization *q)
> +{
> +	if (q->flags > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	if (q->delta_q_y_dc < -63 || q->delta_q_y_dc > 63 ||
> +	    q->delta_q_u_dc < -63 || q->delta_q_u_dc > 63 ||
> +	    q->delta_q_v_dc < -63 || q->delta_q_v_dc > 63 ||
> +	    q->delta_q_u_ac < -63 || q->delta_q_u_ac > 63 ||
> +	    q->delta_q_v_ac < -63 || q->delta_q_v_ac > 63 ||
> +	    q->delta_q_res > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	if (q->qm_y > GENMASK(3, 0) ||
> +	    q->qm_u > GENMASK(3, 0) ||
> +	    q->qm_v > GENMASK(3, 0))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_segmentation(struct v4l2_av1_segmentation *s)
> +{
> +	u32 i;
> +	u32 j;
> +	s32 limit;
> +
> +	if (s->flags > GENMASK(3, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(s->feature_data); i++) {
> +		const int segmentation_feature_signed[] = { 1, 1, 1, 1, 1, 0, 0, 0 };
> +		const int segmentation_feature_max[] = { 255, 63, 63, 63, 63, 7, 0, 0};
> +
> +		for (j = 0; j < ARRAY_SIZE(s->feature_data[i]); j++) {
> +			limit = segmentation_feature_max[j];
> +
> +			if (segmentation_feature_signed[j]) {
> +				if (s->feature_data[i][j] < -limit ||
> +				    s->feature_data[i][j] > limit)
> +					return -EINVAL;
> +			} else {
> +				if (s->feature_data[i][j] > limit)
> +					return -EINVAL;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_loop_filter(struct v4l2_av1_loop_filter *lf)
> +{
> +	u32 i;
> +
> +	if (lf->flags > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->level); i++) {
> +		if (lf->level[i] > GENMASK(5, 0))
> +			return -EINVAL;
> +	}
> +
> +	if (lf->sharpness > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++) {
> +		if (lf->ref_deltas[i] < -63 || lf->ref_deltas[i] > 63)
> +			return -EINVAL;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++) {
> +		if (lf->mode_deltas[i] < -63 || lf->mode_deltas[i] > 63)
> +			return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_cdef(struct v4l2_av1_cdef *cdef)
> +{
> +	u32 i;
> +
> +	if (cdef->damping_minus_3 > GENMASK(1, 0) ||
> +	    cdef->bits > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < 1 << cdef->bits; i++) {
> +		if (cdef->y_pri_strength[i] > GENMASK(3, 0) ||
> +		    cdef->y_sec_strength[i] > 4 ||
> +		    cdef->uv_pri_strength[i] > GENMASK(3, 0) ||
> +		    cdef->uv_sec_strength[i] > 4)
> +			return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_loop_restauration(struct v4l2_av1_loop_restoration *lr)
> +{
> +	if (lr->lr_unit_shift > 3 || lr->lr_uv_shift > 1)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_film_grain(struct v4l2_av1_film_grain *fg)
> +{
> +	u32 i;
> +
> +	if (fg->flags > GENMASK(4, 0))
> +		return -EINVAL;
> +
> +	if (fg->film_grain_params_ref_idx > GENMASK(2, 0) ||
> +	    fg->num_y_points > 14 ||
> +	    fg->num_cb_points > 10 ||
> +	    fg->num_cr_points > 10 ||
> +	    fg->grain_scaling_minus_8 > GENMASK(1, 0) ||
> +	    fg->ar_coeff_lag > GENMASK(1, 0) ||
> +	    fg->ar_coeff_shift_minus_6 > GENMASK(1, 0) ||
> +	    fg->grain_scale_shift > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	if (!(fg->flags & V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN))
> +		return 0;
> +
> +	for (i = 1; i < fg->num_y_points; i++)
> +		if (fg->point_y_value[i] <= fg->point_y_value[i - 1])
> +			return -EINVAL;
> +
> +	for (i = 1; i < fg->num_cb_points; i++)
> +		if (fg->point_cb_value[i] <= fg->point_cb_value[i - 1])
> +			return -EINVAL;
> +
> +	for (i = 1; i < fg->num_cr_points; i++)
> +		if (fg->point_cr_value[i] <= fg->point_cr_value[i - 1])
> +			return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_frame_header(struct v4l2_ctrl_av1_frame_header *f)
> +{
> +	int ret = 0;
> +
> +	ret = validate_av1_quantization(&f->quantization);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_segmentation(&f->segmentation);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_loop_filter(&f->loop_filter);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_cdef(&f->cdef);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_loop_restauration(&f->loop_restoration);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_film_grain(&f->film_grain);
> +	if (ret)
> +		return ret;
> +
> +	if (f->flags &
> +	~(V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING))
> +		return -EINVAL;
> +
> +	if (f->superres_denom > GENMASK(2, 0) + 9)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_sequence(struct v4l2_ctrl_av1_sequence *s)
> +{
> +	if (s->flags &
> +	~(V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE |
> +	 V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION |
> +	 V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME |
> +	 V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE |
> +	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X |
> +	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y |
> +	 V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT |
> +	 V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q))
> +		return -EINVAL;
> +
> +	if (s->seq_profile == 1 && (s->flags & V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME))
> +		return -EINVAL;
> +
> +	/* reserved */
> +	if (s->seq_profile > 2)
> +		return -EINVAL;
> +
> +	/* TODO: PROFILES */
> +	return 0;
> +}
> +
> +static int validate_av1_tile_group(struct v4l2_ctrl_av1_tile_group *t)
> +{
> +	if (t->flags & ~(V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT))
> +		return -EINVAL;
> +	if (t->tg_start > t->tg_end)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  /*
>   * Compound controls validation requires setting unused fields/flags to zero
>   * in order to properly detect unchanged controls with std_equal's memcmp.
> @@ -573,7 +830,16 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
>  		zero_padding(p_vp8_frame->entropy);
>  		zero_padding(p_vp8_frame->coder_state);
>  		break;
> -
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		return validate_av1_frame_header(p);
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		return validate_av1_sequence(p);
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		return validate_av1_tile_group(p);
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		break;
>  	case V4L2_CTRL_TYPE_HEVC_SPS:
>  		p_hevc_sps = p;
>  
> @@ -1313,6 +1579,24 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
>  	case V4L2_CTRL_TYPE_VP8_FRAME:
>  		elem_size = sizeof(struct v4l2_ctrl_vp8_frame);
>  		break;
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_sequence);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group_entry);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list_entry);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_frame_header);
> +		break;
>  	case V4L2_CTRL_TYPE_HEVC_SPS:
>  		elem_size = sizeof(struct v4l2_ctrl_hevc_sps);
>  		break;
> diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> index 421300e13a41..6f9b53f180cc 100644
> --- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> +++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> @@ -499,6 +499,45 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
>  		NULL,
>  	};
>  
> +	static const char * const av1_profile[] = {
> +		"Main",
> +		"High",
> +		"Professional",
> +		NULL,
> +	};
> +	static const char * const av1_level[] = {
> +		"2.0",
> +		"2.1",
> +		"2.2",
> +		"2.3",
> +		"3.0",
> +		"3.1",
> +		"3.2",
> +		"3.3",
> +		"4.0",
> +		"4.1",
> +		"4.2",
> +		"4.3",
> +		"5.0",
> +		"5.1",
> +		"5.2",
> +		"5.3",
> +		"6.0",
> +		"6.1",
> +		"6.2",
> +		"6.3",
> +		"7.0",
> +		"7.1",
> +		"7.2",
> +		"7.3",
> +		NULL,
> +	};
> +	static const char * const av1_operating_mode[] = {
> +		"General decoding",
> +		"Large scale tile decoding",
> +		NULL,
> +	};
> +
>  	static const char * const hevc_profile[] = {
>  		"Main",
>  		"Main Still Picture",
> @@ -685,6 +724,12 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
>  		return dv_it_content_type;
>  	case V4L2_CID_DETECT_MD_MODE:
>  		return detect_md_mode;
> +	case V4L2_CID_STATELESS_AV1_PROFILE:
> +		return av1_profile;
> +	case V4L2_CID_STATELESS_AV1_LEVEL:
> +		return av1_level;
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
> +		return av1_operating_mode;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
>  		return hevc_profile;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
> @@ -1175,6 +1220,15 @@ const char *v4l2_ctrl_get_name(u32 id)
>  	case V4L2_CID_STATELESS_MPEG2_SEQUENCE:			return "MPEG-2 Sequence Header";
>  	case V4L2_CID_STATELESS_MPEG2_PICTURE:			return "MPEG-2 Picture Header";
>  	case V4L2_CID_STATELESS_MPEG2_QUANTISATION:		return "MPEG-2 Quantisation Matrices";
> +	case V4L2_CID_STATELESS_AV1_SEQUENCE:			return "AV1 Sequence parameters";
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP:		        return "AV1 Tile Group";
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:	        return "AV1 Tile Group Entry";
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST:		        return "AV1 Tile List";
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:		return "AV1 Tile List Entry";
> +	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:		return "AV1 Frame Header parameters";
> +	case V4L2_CID_STATELESS_AV1_PROFILE:			return "AV1 Profile";
> +	case V4L2_CID_STATELESS_AV1_LEVEL:			return "AV1 Level";
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:		return "AV1 Operating Mode";
>  
>  	/* Colorimetry controls */
>  	/* Keep the order of the 'case's the same as in v4l2-controls.h! */
> @@ -1343,6 +1397,9 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
>  	case V4L2_CID_MPEG_VIDEO_VP9_PROFILE:
>  	case V4L2_CID_MPEG_VIDEO_VP9_LEVEL:
>  	case V4L2_CID_DETECT_MD_MODE:
> +	case V4L2_CID_STATELESS_AV1_PROFILE:
> +	case V4L2_CID_STATELESS_AV1_LEVEL:
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE:
> @@ -1481,6 +1538,28 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
>  	case V4L2_CID_STATELESS_VP8_FRAME:
>  		*type = V4L2_CTRL_TYPE_VP8_FRAME;
>  		break;
> +	case V4L2_CID_STATELESS_AV1_SEQUENCE:
> +		*type = V4L2_CTRL_TYPE_AV1_SEQUENCE;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:
> +		*type = V4L2_CTRL_TYPE_AV1_FRAME_HEADER;
> +		break;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_SPS:
>  		*type = V4L2_CTRL_TYPE_HEVC_SPS;
>  		break;
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index 05d5db3d85e5..135474c43b65 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -1416,6 +1416,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
>  		case V4L2_PIX_FMT_S5C_UYVY_JPG:	descr = "S5C73MX interleaved UYVY/JPEG"; break;
>  		case V4L2_PIX_FMT_MT21C:	descr = "Mediatek Compressed Format"; break;
>  		case V4L2_PIX_FMT_SUNXI_TILED_NV12: descr = "Sunxi Tiled NV12 Format"; break;
> +		case V4L2_PIX_FMT_AV1_FRAME: descr = "AV1 Frame"; break;
>  		default:
>  			if (fmt->description[0])
>  				return;
> diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
> index ebd9cef13309..5f8ba4fac92e 100644
> --- a/include/media/v4l2-ctrls.h
> +++ b/include/media/v4l2-ctrls.h
> @@ -56,6 +56,12 @@ struct video_device;
>   * @p_hdr10_cll:		Pointer to an HDR10 Content Light Level structure.
>   * @p_hdr10_mastering:		Pointer to an HDR10 Mastering Display structure.
>   * @p_area:			Pointer to an area.
> + * @p_av1_sequence:		Pointer to an AV1 sequence.
> + * @p_av1_tile_group:		Pointer to an AV1 tile group.
> + * @p_av1_tile_group_entry:	Pointer to an AV1 tile group entry.
> + * @p_av1_tile_list:		Pointer to an AV1 tile list.
> + * @p_av1_tile_list_entry:	Pointer to an AV1 tile list entry.
> + * @p_av1_frame_header:		Pointer to an AV1 frame header.
>   * @p:				Pointer to a compound value.
>   * @p_const:			Pointer to a constant compound value.
>   */
> @@ -83,6 +89,12 @@ union v4l2_ctrl_ptr {
>  	struct v4l2_ctrl_hdr10_cll_info *p_hdr10_cll;
>  	struct v4l2_ctrl_hdr10_mastering_display *p_hdr10_mastering;
>  	struct v4l2_area *p_area;
> +	struct v4l2_ctrl_av1_sequence *p_av1_sequence;
> +	struct v4l2_ctrl_av1_tile_group *p_av1_tile_group;
> +	struct v4l2_ctrl_av1_tile_group_entry *p_av1_tile_group_entry;
> +	struct v4l2_ctrl_av1_tile_list *p_av1_tile_list;
> +	struct v4l2_ctrl_av1_tile_list_entry *p_av1_tile_list_entry;
> +	struct v4l2_ctrl_av1_frame_header *p_av1_frame_header;
>  	void *p;
>  	const void *p_const;
>  };
> diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
> index 5532b5f68493..0378fe8e1967 100644
> --- a/include/uapi/linux/v4l2-controls.h
> +++ b/include/uapi/linux/v4l2-controls.h
> @@ -1976,6 +1976,802 @@ struct v4l2_ctrl_mpeg2_quantisation {
>  	__u8	chroma_non_intra_quantiser_matrix[64];
>  };
>  
> +/* Stateless AV1 controls */
> +
> +#define V4L2_AV1_TOTAL_REFS_PER_FRAME	8
> +#define V4L2_AV1_CDEF_MAX		8
> +#define V4L2_AV1_NUM_PLANES_MAX		3 /* 1 if monochrome, 3 otherwise */
> +#define V4L2_AV1_MAX_SEGMENTS		8
> +#define V4L2_AV1_MAX_OPERATING_POINTS	(1 << 5) /* 5 bits to encode */
> +#define V4L2_AV1_REFS_PER_FRAME		7
> +#define V4L2_AV1_MAX_NUM_Y_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_CB_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_CR_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_POS_LUMA	25 /* (2 * 3 * (3 + 1)) + 1 */
> +#define V4L2_AV1_MAX_NUM_PLANES		3
> +#define V4L2_AV1_MAX_TILE_COLS		64
> +#define V4L2_AV1_MAX_TILE_ROWS		64
> +#define V4L2_AV1_MAX_TILE_COUNT		512
> +
> +#define V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE		  BIT(0)
> +#define V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK	  BIT(1)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA	  BIT(2)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER   BIT(3)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND BIT(4)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND	  BIT(5)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION	  BIT(6)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER	  BIT(7)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT	  BIT(8)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP		  BIT(9)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS	  BIT(10)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES		  BIT(11)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF		  BIT(12)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION	  BIT(13)
> +#define V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME		  BIT(14)
> +#define V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE		  BIT(15)
> +#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X		  BIT(16)
> +#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y		  BIT(17)
> +#define V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT  BIT(18)
> +#define V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q	  BIT(19)
> +
> +#define V4L2_CID_STATELESS_AV1_SEQUENCE (V4L2_CID_CODEC_STATELESS_BASE + 401)
> +/**
> + * struct v4l2_ctrl_av1_sequence - AV1 Sequence
> + *
> + * Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
> + * for more details.
> + *
> + * @flags: See V4L2_AV1_SEQUENCE_FLAG_{}.
> + * @seq_profile: specifies the features that can be used in the coded video
> + * sequence.
> + * @order_hint_bits: specifies the number of bits used for the order_hint field
> + * at each frame.
> + * @bit_depth: the bitdepth to use for the sequence as described in section
> + * 5.5.2 "Color config syntax".
> + */
> +struct v4l2_ctrl_av1_sequence {
> +	__u32 flags;
> +	__u8 seq_profile;
> +	__u8 order_hint_bits;
> +	__u8 bit_depth;
> +};
> +
> +#define V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT BIT(0)
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_GROUP (V4L2_CID_CODEC_STATELESS_BASE + 402)
> +/**
> + * struct v4l2_ctrl_av1_tile_group - AV1 Tile Group header.
> + *
> + * Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
> + * v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start instances
> + * of v4l2_ctrl_tile_group_entry. See section 6.10.1 "General tile group OBU
> + * semantics" for more details.
> + *
> + * @flags: see V4L2_AV1_TILE_GROUP_FLAG_{}.
> + * @tg_start: specifies the zero-based index of the first tile in the current
> + * tile group.
> + * @tg_end: specifies the zero-based index of the last tile in the current tile
> + * group.
> + */
> +struct v4l2_ctrl_av1_tile_group {
> +	__u8 flags;
> +	__u32 tg_start;
> +	__u32 tg_end;
> +};
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 403)
> +/**
> + * struct v4l2_ctrl_av1_tile_group_entry - AV1 Tile Group entry
> + *
> + * Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
> + * MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
> + * v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
> + * tile_col. See section 6.10.1 "General tile group OBU semantics" for more
> + * details.
> + *
> + * @tile_offset: offset from the OBU data, i.e. where the coded tile data
> + * actually starts.
> + * @tile_size: specifies the size in bytes of the coded tile. Equivalent to
> + * "TileSize" in the AV1 Specification.
> + * @tile_row: specifies the row of the current tile. Equivalent to "TileRow" in
> + * the AV1 Specification.
> + * @tile_col: specifies the col of the current tile. Equivalent to "TileCol" in
> + * the AV1 Specification.
> + */
> +struct v4l2_ctrl_av1_tile_group_entry {
> +	__u32 tile_offset;
> +	__u32 tile_size;
> +	__u32 tile_row;
> +	__u32 tile_col;
> +};
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_LIST (V4L2_CID_CODEC_STATELESS_BASE + 404)
> +/**
> + * struct v4l2_ctrl_av1_tile_list - AV1 Tile List header.
> + *
> + * Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
> + * in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
> + * most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
> + * will refer to "tile_count_minus_1" + 1 instances of struct
> + * v4l2_ctrl_av1_tile_list_entry.
> + *
> + * Each rendered frame may require at most two tile list OBUs to be decoded. See
> + * section "6.11.1. General tile list OBU semantics" for more details.
> + *
> + * @output_frame_width_in_tiles_minus_1: this field plus one is the width of the
> + * output frame, in tile units.
> + * @output_frame_height_in_tiles_minus_1: this field plus one is the height of
> + * the output frame, in tile units.
> + * @tile_count_minus_1: this field plus one is the number of tile_list_entry in
> + * the list.
> + */
> +struct v4l2_ctrl_av1_tile_list {
> +	__u8 output_frame_width_in_tiles_minus_1;
> +	__u8 output_frame_height_in_tiles_minus_1;
> +	__u8 tile_count_minus_1;
> +};
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 405)
> +
> +/**
> + * struct v4l2_ctrl_av1_tile_list_entry - AV1 Tile List entry
> + *
> + * Represents a tile list entry as seen in an AV1 Tile List OBU. See section
> + * 6.11.2. "Tile list entry semantics" of the AV1 Specification for more
> + * details.
> + *
> + * @anchor_frame_idx: the index into an array AnchorFrames of the frames that
> + * the tile uses for prediction.
> + * @anchor_tile_row: the row coordinate of the tile in the frame to which it
> + * belongs, in tile units.
> + * @anchor_tile_col: the column coordinate of the tile in the frame to which it
> + * belongs, in tile units.
> + * @tile_data_size_minus_1: this field plus one is the size of the coded tile
> + * data in bytes.
> + */
> +struct v4l2_ctrl_av1_tile_list_entry {
> +	__u8 anchor_frame_idx;
> +	__u8 anchor_tile_row;
> +	__u8 anchor_tile_col;
> +	__u8 tile_data_size_minus_1;
> +};
> +
> +#define V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN BIT(0)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN BIT(1)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA BIT(2)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP BIT(3)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE BIT(4)
> +
> +/**
> + * struct v4l2_av1_film_grain - AV1 Film Grain parameters.
> + *
> + * Film grain parameters as specified by section 6.8.20 of the AV1
> + * Specification.
> + *
> + * @flags: see V4L2_AV1_FILM_GRAIN_{}.
> + * @grain_seed: specifies the starting value for the pseudo-random numbers used
> + * during film grain synthesis.
> + * @film_grain_params_ref_idx: indicates which reference frame contains the
> + * film grain parameters to be used for this frame.
> + * @num_y_points: specifies the number of points for the piece-wise linear
> + * scaling function of the luma component.
> + * @point_y_value: represents the x (luma value) coordinate for the i-th point
> + * of the piecewise linear scaling function for luma component. The values are
> + * signaled on the scale of 0..255. (In case of 10 bit video, these values
> + * correspond to luma values divided by 4. In case of 12 bit video, these values
> + * correspond to luma values divided by 16.).
> + * @point_y_scaling:  represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for luma component.
> + * @num_cb_points: specifies the number of points for the piece-wise linear
> + * scaling function of the cb component.
> + * @point_cb_value: represents the x coordinate for the i-th point of the
> + * piece-wise linear scaling function for cb component. The values are signaled
> + * on the scale of 0..255.
> + * @point_cb_scaling: represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for cb component.
> + * @num_cr_points: specifies the number of points for the piece-wise
> + * linear scaling function of the cr component.
> + * @point_cr_value:  represents the x coordinate for the i-th point of the
> + * piece-wise linear scaling function for cr component. The values are signaled
> + * on the scale of 0..255.
> + * @point_cr_scaling:  represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for cr component.
> + * @grain_scaling_minus_8: represents the shift minus 8 applied to the values
> + * of the chroma component. The grain_scaling_minus_8 can take values of 0..3
> + * and determines the range and quantization step of the standard deviation of
> + * film grain.
> + * @ar_coeff_lag: specifies the number of auto-regressive coefficients for luma
> + * and chroma.
> + * @ar_coeffs_y_plus_128: specifies auto-regressive coefficients used for the Y
> + * plane.
> + * @ar_coeffs_cb_plus_128: specifies auto-regressive coefficients used for the U
> + * plane.
> + * @ar_coeffs_cr_plus_128: specifies auto-regressive coefficients used for the V
> + * plane.
> + * @ar_coeff_shift_minus_6: specifies the range of the auto-regressive
> + * coefficients. Values of 0, 1, 2, and 3 correspond to the ranges for
> + * auto-regressive coefficients of [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25,
> + * 0.25) respectively.
> + * @grain_scale_shift: specifies how much the Gaussian random numbers should be
> + * scaled down during the grain synthesis process.
> + * @cb_mult: represents a multiplier for the cb component used in derivation of
> + * the input index to the cb component scaling function.
> + * @cb_luma_mult: represents a multiplier for the average luma component used in
> + * derivation of the input index to the cb component scaling function.
> + * @cb_offset: represents an offset used in derivation of the input index to the
> + * cb component scaling function.
> + * @cr_mult: represents a multiplier for the cr component used in derivation of
> + * the input index to the cr component scaling function.
> + * @cr_luma_mult: represents a multiplier for the average luma component used in
> + * derivation of the input index to the cr component scaling function.
> + * @cr_offset: represents an offset used in derivation of the input index to the
> + * cr component scaling function.
> + */
> +struct v4l2_av1_film_grain {
> +	__u8 flags;
> +	__u16 grain_seed;
> +	__u8 film_grain_params_ref_idx;
> +	__u8 num_y_points;
> +	__u8 point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS];
> +	__u8 point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS];
> +	__u8 num_cb_points;
> +	__u8 point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS];
> +	__u8 point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS];
> +	__u8 num_cr_points;
> +	__u8 point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS];
> +	__u8 point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS];
> +	__u8 grain_scaling_minus_8;
> +	__u8 ar_coeff_lag;
> +	__u8 ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeff_shift_minus_6;
> +	__u8 grain_scale_shift;
> +	__u8 cb_mult;
> +	__u8 cb_luma_mult;
> +	__u16 cb_offset;
> +	__u8 cr_mult;
> +	__u8 cr_luma_mult;
> +	__u16 cr_offset;
> +};
> +
> +/**
> + * enum v4l2_av1_warp_model - AV1 Warp Model as described in section 3
> + * "Symbols and abbreviated terms" of the AV1 Specification.
> + *
> + * @V4L2_AV1_WARP_MODEL_IDENTITY: Warp model is just an identity transform.
> + * @V4L2_AV1_WARP_MODEL_TRANSLATION: Warp model is a pure translation.
> + * @V4L2_AV1_WARP_MODEL_ROTZOOM: Warp model is a rotation + symmetric zoom +
> + * translation.
> + * @V4L2_AV1_WARP_MODEL_AFFINE: Warp model is a general affine transform.
> + */
> +enum v4l2_av1_warp_model {
> +	V4L2_AV1_WARP_MODEL_IDENTITY = 0,
> +	V4L2_AV1_WARP_MODEL_TRANSLATION = 1,
> +	V4L2_AV1_WARP_MODEL_ROTZOOM = 2,
> +	V4L2_AV1_WARP_MODEL_AFFINE = 3,
> +};
> +
> +/**
> + * enum v4l2_av1_reference_frame - AV1 reference frames
> + *
> + * @V4L2_AV1_REF_INTRA_FRAME: Intra Frame Reference
> + * @V4L2_AV1_REF_LAST_FRAME: Last Reference Frame
> + * @V4L2_AV1_REF_LAST2_FRAME: Last2 Reference Frame
> + * @V4L2_AV1_REF_LAST3_FRAME: Last3 Reference Frame
> + * @V4L2_AV1_REF_GOLDEN_FRAME: Golden Reference Frame
> + * @V4L2_AV1_REF_BWDREF_FRAME: BWD Reference Frame
> + * @V4L2_AV1_REF_ALTREF2_FRAME: Alternative2 Reference Frame
> + * @V4L2_AV1_REF_ALTREF_FRAME: Alternative Reference Frame
> + * @V4L2_AV1_NUM_REF_FRAMES: Total number of reference frames.
> + */
> +enum v4l2_av1_reference_frame {
> +	V4L2_AV1_REF_INTRA_FRAME = 0,
> +	V4L2_AV1_REF_LAST_FRAME = 1,
> +	V4L2_AV1_REF_LAST2_FRAME = 2,
> +	V4L2_AV1_REF_LAST3_FRAME = 3,
> +	V4L2_AV1_REF_GOLDEN_FRAME = 4,
> +	V4L2_AV1_REF_BWDREF_FRAME = 5,
> +	V4L2_AV1_REF_ALTREF2_FRAME = 6,
> +	V4L2_AV1_REF_ALTREF_FRAME = 7,
> +	V4L2_AV1_NUM_REF_FRAMES,
> +};
> +
> +#define V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) (1 << (ref))
> +
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL	   BIT(0)
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM	   BIT(1)
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION BIT(2)
> +/**
> + * struct v4l2_av1_global_motion - AV1 Global Motion parameters as described in
> + * section 6.8.17 "Global motion params semantics" of the AV1 specification.
> + *
> + * @flags: A bitfield containing the flags per reference frame. See
> + * V4L2_AV1_GLOBAL_MOTION_FLAG_{}
> + * @type: The type of global motion transform used.
> + * @params: this field has the same meaning as "gm_params" in the AV1
> + * specification.
> + * @invalid: bitfield indicating whether the global motion params are invalid
> + * for a given reference frame. See section 7.11.3.6. Setup shear process and
> + * the variable "warpValid". Use V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to
> + * create a suitable mask.
> + */
> +struct v4l2_av1_global_motion {
> +	__u8 flags[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	enum v4l2_av1_warp_model type[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	__u32 params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6];
> +	__u8 invalid;
> +};
> +
> +/**
> + * enum v4l2_av1_frame_restoration_type - AV1 Frame Restoration Type
> + * @V4L2_AV1_FRAME_RESTORE_NONE: no filtering is applied.
> + * @V4L2_AV1_FRAME_RESTORE_WIENER: Wiener filter process is invoked.
> + * @V4L2_AV1_FRAME_RESTORE_SGRPROJ: self guided filter process is invoked.
> + * @V4L2_AV1_FRAME_RESTORE_SWITCHABLE: restoration filter is switchable.
> + */
> +enum v4l2_av1_frame_restoration_type {
> +	V4L2_AV1_FRAME_RESTORE_NONE = 0,
> +	V4L2_AV1_FRAME_RESTORE_WIENER = 1,
> +	V4L2_AV1_FRAME_RESTORE_SGRPROJ = 2,
> +	V4L2_AV1_FRAME_RESTORE_SWITCHABLE = 3,
> +};
> +
> +/**
> + * struct v4l2_av1_loop_restoration - AV1 Loop Restoration as described in
> + * section 6.10.15 "Loop restoration params semantics" of the AV1 specification.
> + *
> + * @frame_restoration_type: specifies the type of restoration used for each
> + * plane. See enum v4l2_av1_frame_restoration_type.
> + * @lr_unit_shift: specifies if the luma restoration size should be halved.
> + * @lr_uv_shift: specifies if the chroma size should be half the luma size.
> + * @loop_restoration_size: specifies the size of loop restoration units in units
> + * of samples in the current plane.
> + */
> +struct v4l2_av1_loop_restoration {
> +	enum v4l2_av1_frame_restoration_type frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX];
> +	__u8 lr_unit_shift;
> +	__u8 lr_uv_shift;
> +	__u32 loop_restoration_size[V4L2_AV1_NUM_PLANES_MAX];
> +};
> +
> +/**
> + * struct v4l2_av1_cdef - AV1 CDEF params semantics as described in section
> + * 6.10.14. "CDEF params semantics" of the AV1 specification
> + *
> + * @damping_minus_3: controls the amount of damping in the deringing filter.
> + * @bits: specifies the number of bits needed to specify which CDEF filter to
> + * apply.
> + * @y_pri_strength: specifies the strength of the primary filter.
> + * @y_sec_strength: specifies the strength of the secondary filter.
> + * @uv_pri_strength: specifies the strength of the primary filter for the
> + * chroma planes.
> + * @uv_sec_strength: specifies the strength of the secondary filter for the
> + * chroma planes.
> + */
> +struct v4l2_av1_cdef {
> +	__u8 damping_minus_3;
> +	__u8 bits;
> +	__u8 y_pri_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 y_sec_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 uv_pri_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 uv_sec_strength[V4L2_AV1_CDEF_MAX];
> +};
> +
> +#define V4L2_AV1_SEGMENTATION_FLAG_ENABLED	   BIT(0)
> +#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP	   BIT(1)
> +#define V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE BIT(2)
> +#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA	   BIT(3)
> +#define V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP	BIT(4)
> +
> +/**
> + * enum v4l2_av1_segment_feature - AV1 segment features as described in section
> + * 3 "Symbols and abbreviated terms" of the AV1 specification.
> + *
> + * @V4L2_AV1_SEG_LVL_ALT_Q: Index for quantizer segment feature.
> + * @V4L2_AV1_SEG_LVL_ALT_LF_Y_V: Index for vertical luma loop filter segment
> + * feature.
> + * @V4L2_AV1_SEG_LVL_REF_FRAME: Index for reference frame segment feature.
> + * @V4L2_AV1_SEG_LVL_REF_SKIP: Index for skip segment feature.
> + * @V4L2_AV1_SEG_LVL_REF_GLOBALMV: Index for global mv feature.
> + * @V4L2_AV1_SEG_LVL_MAX: Number of segment features.
> + */
> +enum v4l2_av1_segment_feature {
> +	V4L2_AV1_SEG_LVL_ALT_Q = 0,
> +	V4L2_AV1_SEG_LVL_ALT_LF_Y_V = 1,
> +	V4L2_AV1_SEG_LVL_REF_FRAME = 5,
> +	V4L2_AV1_SEG_LVL_REF_SKIP = 6,
> +	V4L2_AV1_SEG_LVL_REF_GLOBALMV = 7,
> +	V4L2_AV1_SEG_LVL_MAX = 8
> +};
> +
> +#define V4L2_AV1_SEGMENT_FEATURE_ENABLED(id)	(1 << (id))
> +
> +/**
> + * struct v4l2_av1_segmentation - AV1 Segmentation params as defined in section
> + * 6.8.13. "Segmentation params semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_SEGMENTATION_FLAG_{}.
> + * @feature_enabled: bitmask defining which features are enabled in each segment.
> + * Use V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
> + * @feature_data: data attached to each feature. Data entry is only valid if the
> + * feature is enabled
> + * @last_active_seg_id: indicates the highest numbered segment id that has some
> + * enabled feature. This is used when decoding the segment id to only decode
> + * choices corresponding to used segments.
> + */
> +struct v4l2_av1_segmentation {
> +	__u8 flags;
> +	__u8 feature_enabled[V4L2_AV1_MAX_SEGMENTS];
> +	__u16 feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX];
> +	__u8 last_active_seg_id;
> +};
> +
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED    BIT(0)
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE     BIT(1)
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT BIT(2)
> +
> +/**
> + * struct v4l2_av1_loop_filter - AV1 Loop filter params as defined in section
> + * 6.8.10. "Loop filter semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_LOOP_FILTER_FLAG_{}
> + * @level: an array containing loop filter strength values. Different loop
> + * filter strength values from the array are used depending on the image plane
> + * being filtered, and the edge direction (vertical or horizontal) being
> + * filtered.
> + * @sharpness: indicates the sharpness level. The loop_filter_level and
> + * loop_filter_sharpness together determine when a block edge is filtered, and
> + * by how much the filtering can change the sample values. The loop filter
> + * process is described in section 7.14 of the AV1 specification.
> + * @ref_deltas: contains the adjustment needed for the filter level based on the
> + * chosen reference frame. If this syntax element is not present, it maintains
> + * its previous value.
> + * @mode_deltas: contains the adjustment needed for the filter level based on
> + * the chosen mode. If this syntax element is not present, it maintains its
> + * previous value.
> + * @delta_lf_res: specifies the left shift which should be applied to decoded
> + * loop filter delta values.
> + * @delta_lf_multi: a value equal to 1 specifies that separate loop filter
> + * deltas are sent for horizontal luma edges, vertical luma edges,
> + * the U edges, and the V edges. A value of delta_lf_multi equal to 0 specifies
> + * that the same loop filter delta is used for all edges.
> + */
> +struct v4l2_av1_loop_filter {
> +	__u8 flags;
> +	__u8 level[4];
> +	__u8 sharpness;
> +	__u8 ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	__u8 mode_deltas[2];
> +	__u8 delta_lf_res;
> +	__u8 delta_lf_multi;
> +};
> +
> +#define V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA   BIT(0)
> +#define V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX   BIT(1)
> +#define V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT BIT(2)
> +
> +/**
> + * struct v4l2_av1_quantization - AV1 Quantization params as defined in section
> + * 6.8.11 "Quantization params semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_QUANTIZATION_FLAG_{}
> + * @base_q_idx: indicates the base frame qindex. This is used for Y AC
> + * coefficients and as the base value for the other quantizers.
> + * @delta_q_y_dc: indicates the Y DC quantizer relative to base_q_idx.
> + * @delta_q_u_dc: indicates the U DC quantizer relative to base_q_idx.
> + * @delta_q_u_ac: indicates the U AC quantizer relative to base_q_idx.
> + * @delta_q_v_dc: indicates the V DC quantizer relative to base_q_idx.
> + * @delta_q_v_ac: indicates the V AC quantizer relative to base_q_idx.
> + * @qm_y: specifies the level in the quantizer matrix that should be used for
> + * luma plane decoding.
> + * @qm_u: specifies the level in the quantizer matrix that should be used for
> + * chroma U plane decoding.
> + * @qm_v: specifies the level in the quantizer matrix that should be used for
> + * chroma V plane decoding.
> + * @delta_q_res: specifies the left shift which should be applied to decoded
> + * quantizer index delta values.
> + */
> +struct v4l2_av1_quantization {
> +	__u8 flags;
> +	__u8 base_q_idx;
> +	__s8 delta_q_y_dc;
> +	__s8 delta_q_u_dc;
> +	__s8 delta_q_u_ac;
> +	__s8 delta_q_v_dc;
> +	__s8 delta_q_v_ac;
> +	__u8 qm_y;
> +	__u8 qm_u;
> +	__u8 qm_v;
> +	__u8 delta_q_res;
> +};
> +
> +#define V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING	BIT(0)
> +
> +/**
> + * struct v4l2_av1_tile_info - AV1 Tile info as defined in section 6.8.14. "Tile
> + * info semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_TILE_INFO_FLAG_{}
> + * @mi_col_starts: an array specifying the start column (in units of 4x4 luma
> + * samples) for each tile across the image.
> + * @mi_row_starts: an array specifying the start row (in units of 4x4 luma
> + * samples) for each tile down the image.
> + * @width_in_sbs_minus_1: specifies the width of a tile minus 1 in units of
> + * superblocks.
> + * @height_in_sbs_minus_1: specifies the height of a tile minus 1 in units of
> + * superblocks.
> + * @tile_size_bytes: specifies the number of bytes needed to code each tile
> + * size.
> + * @context_update_tile_id: specifies which tile to use for the CDF update.
> + * @tile_rows: specifies the number of tiles down the frame.
> + * @tile_cols: specifies the number of tiles across the frame.
> + */
> +struct v4l2_av1_tile_info {
> +	__u8 flags;
> +	__u32 mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1];
> +	__u32 mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1];
> +	__u32 width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS];
> +	__u32 height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS];
> +	__u8 tile_size_bytes;
> +	__u8 context_update_tile_id;
> +	__u8 tile_cols;
> +	__u8 tile_rows;
> +};
> +
> +/**
> + * enum v4l2_av1_frame_type - AV1 Frame Type
> + *
> + * @V4L2_AV1_KEY_FRAME: Key frame
> + * @V4L2_AV1_INTER_FRAME: Inter frame
> + * @V4L2_AV1_INTRA_ONLY_FRAME: Intra-only frame
> + * @V4L2_AV1_SWITCH_FRAME: Switch frame
> + */
> +enum v4l2_av1_frame_type {
> +	V4L2_AV1_KEY_FRAME = 0,
> +	V4L2_AV1_INTER_FRAME = 1,
> +	V4L2_AV1_INTRA_ONLY_FRAME = 2,
> +	V4L2_AV1_SWITCH_FRAME = 3
> +};
> +
> +/**
> + * enum v4l2_av1_interpolation_filter - AV1 interpolation filter types
> + *
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP: eight tap filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH: eight tap smooth filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP: eight tap sharp filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_BILINEAR: bilinear filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE: filter selection is signaled at
> + * the block level
> + *
> + * See section 6.8.9 "Interpolation filter semantics" of the AV1 specification
> + * for more details.
> + */
> +enum v4l2_av1_interpolation_filter {
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP = 0,
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH = 1,
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP = 2,
> +	V4L2_AV1_INTERPOLATION_FILTER_BILINEAR = 3,
> +	V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE = 4,
> +};
> +
> +/**
> + * enum v4l2_av1_tx_mode - AV1 Tx mode as described in section 6.8.21 "TX mode
> + * semantics" of the AV1 specification.
> + * @V4L2_AV1_TX_MODE_ONLY_4X4: the inverse transform will use only 4x4
> + * transforms
> + * @V4L2_AV1_TX_MODE_LARGEST: the inverse transform will use the largest
> + * transform size that fits inside the block
> + * @V4L2_AV1_TX_MODE_SELECT: the choice of transform size is specified
> + * explicitly for each block.
> + */
> +enum v4l2_av1_tx_mode {
> +	V4L2_AV1_TX_MODE_ONLY_4X4 = 0,
> +	V4L2_AV1_TX_MODE_LARGEST = 1,
> +	V4L2_AV1_TX_MODE_SELECT = 2
> +};
> +
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME			BIT(0)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME		BIT(1)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE		BIT(2)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE		BIT(3)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS	BIT(4)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV		BIT(5)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC		BIT(6)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES			BIT(7)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV	BIT(8)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE	BIT(9)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS		BIT(10)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF BIT(11)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING		BIT(12)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION		BIT(13)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT		BIT(14)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET		BIT(15)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT		BIT(16)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE		BIT(17)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT	BIT(18)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING	BIT(19)
> +
> +#define V4L2_CID_STATELESS_AV1_FRAME_HEADER (V4L2_CID_CODEC_STATELESS_BASE + 406)
> +/**
> + * struct v4l2_ctrl_av1_frame_header - Represents an AV1 Frame Header OBU.
> + *
> + * @tile_info: tile info
> + * @quantization: quantization params
> + * @segmentation: segmentation params
> + * @loop_filter: loop filter params
> + * @cdef: cdef params
> + * @loop_restoration: loop restoration params
> + * @global_motion: global motion params
> + * @film_grain: film grain params
> + * @flags: see V4L2_AV1_FRAME_HEADER_FLAG_{}
> + * @frame_type: specifies the AV1 frame type
> + * @order_hint: specifies OrderHintBits least significant bits of the expected
> + * output order for this frame.
> + * @superres_denom: the denominator for the upscaling ratio.
> + * @upscaled_width: the upscaled width.
> + * @interpolation_filter: specifies the filter selection used for performing
> + * inter prediction.
> + * @tx_mode: specifies how the transform size is determined.
> + * @frame_width_minus_1: add 1 to get the frame's width.
> + * @frame_height_minus_1: add 1 to get the frame's height
> + * @render_width_minus_1: add 1 to get the render width of the frame in luma
> + * samples.
> + * @render_height_minus_1: add 1 to get the render height of the frame in luma
> + * samples.
> + * @current_frame_id: specifies the frame id number for the current frame. Frame
> + * id numbers are additional information that do not affect the decoding
> + * process, but provide decoders with a way of detecting missing reference
> + * frames so that appropriate action can be taken.
> + * @primary_ref_frame: specifies which reference frame contains the CDF values
> + * and other state that should be loaded at the start of the frame.
> + * @buffer_removal_time: specifies the frame removal time in units of DecCT
> + * ticks counted from the removal time of the last random access point for
> + * operating point opNum.
> + * @refresh_frame_flags: contains a bitmask that specifies which reference frame
> + * slots will be updated with the current frame after it is decoded.
> + * @ref_order_hint: specifies the expected output order hint for each reference
> + * frame.
> + * @last_frame_idx: specifies the reference frame to use for LAST_FRAME.
> + * @gold_frame_idx: specifies the reference frame to use for GOLDEN_FRAME.
> + * @reference_frame_ts: the V4L2 timestamp for each of the reference frames
> + * enumerated in &v4l2_av1_reference_frame. The timestamp refers to the
> + * timestamp field in struct v4l2_buffer. Use v4l2_timeval_to_ns() to convert
> + * the struct timeval to a __u64.
> + * @skip_mode_frame: specifies the frames to use for compound prediction when
> + * skip_mode is equal to 1.
> + */
> +struct v4l2_ctrl_av1_frame_header {
> +	struct v4l2_av1_tile_info tile_info;
> +	struct v4l2_av1_quantization quantization;
> +	struct v4l2_av1_segmentation segmentation;
> +	struct v4l2_av1_loop_filter  loop_filter;
> +	struct v4l2_av1_cdef cdef;
> +	struct v4l2_av1_loop_restoration loop_restoration;
> +	struct v4l2_av1_global_motion global_motion;
> +	struct v4l2_av1_film_grain film_grain;
> +	__u32 flags;
> +	enum v4l2_av1_frame_type frame_type;
> +	__u32 order_hint;
> +	__u8 superres_denom;
> +	__u32 upscaled_width;
> +	enum v4l2_av1_interpolation_filter interpolation_filter;
> +	enum v4l2_av1_tx_mode tx_mode;
> +	__u32 frame_width_minus_1;
> +	__u32 frame_height_minus_1;
> +	__u16 render_width_minus_1;
> +	__u16 render_height_minus_1;
> +
> +	__u32 current_frame_id;
> +	__u8 primary_ref_frame;
> +	__u8 buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS];
> +	__u8 refresh_frame_flags;
> +	__u32 ref_order_hint[V4L2_AV1_NUM_REF_FRAMES];
> +	__s8 last_frame_idx;
> +	__s8 gold_frame_idx;
> +	__u64 reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES];
> +	__u8 skip_mode_frame[2];
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_profile - AV1 profiles
> + *
> + * @V4L2_STATELESS_AV1_PROFILE_MAIN: compliant decoders must be able to decode
> + * streams with seq_profile equal to 0.
> + * @V4L2_STATELESS_AV1_PROFILE_HIGH: compliant decoders must be able to decode
> + * streams with seq_profile less than or equal to 1.
> + * @V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL: compliant decoders must be able to
> + * decode streams with seq_profile less than or equal to 2.
> + *
> + * Conveys the highest profile a decoder can work with.
> + */
> +#define V4L2_CID_STATELESS_AV1_PROFILE (V4L2_CID_CODEC_STATELESS_BASE + 407)
> +enum v4l2_stateless_av1_profile {
> +	V4L2_STATELESS_AV1_PROFILE_MAIN = 0,
> +	V4L2_STATELESS_AV1_PROFILE_HIGH = 1,
> +	V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL = 2,
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_level - AV1 levels
> + *
> + * @V4L2_STATELESS_AV1_LEVEL_2_0: Level 2.0.
> + * @V4L2_STATELESS_AV1_LEVEL_2_1: Level 2.1.
> + * @V4L2_STATELESS_AV1_LEVEL_2_2: Level 2.2.
> + * @V4L2_STATELESS_AV1_LEVEL_2_3: Level 2.3.
> + * @V4L2_STATELESS_AV1_LEVEL_3_0: Level 3.0.
> + * @V4L2_STATELESS_AV1_LEVEL_3_1: Level 3.1.
> + * @V4L2_STATELESS_AV1_LEVEL_3_2: Level 3.2.
> + * @V4L2_STATELESS_AV1_LEVEL_3_3: Level 3.3.
> + * @V4L2_STATELESS_AV1_LEVEL_4_0: Level 4.0.
> + * @V4L2_STATELESS_AV1_LEVEL_4_1: Level 4.1.
> + * @V4L2_STATELESS_AV1_LEVEL_4_2: Level 4.2.
> + * @V4L2_STATELESS_AV1_LEVEL_4_3: Level 4.3.
> + * @V4L2_STATELESS_AV1_LEVEL_5_0: Level 5.0.
> + * @V4L2_STATELESS_AV1_LEVEL_5_1: Level 5.1.
> + * @V4L2_STATELESS_AV1_LEVEL_5_2: Level 5.2.
> + * @V4L2_STATELESS_AV1_LEVEL_5_3: Level 5.3.
> + * @V4L2_STATELESS_AV1_LEVEL_6_0: Level 6.0.
> + * @V4L2_STATELESS_AV1_LEVEL_6_1: Level 6.1.
> + * @V4L2_STATELESS_AV1_LEVEL_6_2: Level 6.2.
> + * @V4L2_STATELESS_AV1_LEVEL_6_3: Level 6.3.
> + * @V4L2_STATELESS_AV1_LEVEL_7_0: Level 7.0.
> + * @V4L2_STATELESS_AV1_LEVEL_7_1: Level 7.1.
> + * @V4L2_STATELESS_AV1_LEVEL_7_2: Level 7.2.
> + * @V4L2_STATELESS_AV1_LEVEL_7_3: Level 7.3.
> + *
> + * Conveys the highest level a decoder can work with.
> + */
> +#define V4L2_CID_STATELESS_AV1_LEVEL (V4L2_CID_CODEC_STATELESS_BASE + 408)
> +enum v4l2_stateless_av1_level {
> +	V4L2_STATELESS_AV1_LEVEL_2_0 = 0,
> +	V4L2_STATELESS_AV1_LEVEL_2_1 = 1,
> +	V4L2_STATELESS_AV1_LEVEL_2_2 = 2,
> +	V4L2_STATELESS_AV1_LEVEL_2_3 = 3,
> +
> +	V4L2_STATELESS_AV1_LEVEL_3_0 = 4,
> +	V4L2_STATELESS_AV1_LEVEL_3_1 = 5,
> +	V4L2_STATELESS_AV1_LEVEL_3_2 = 6,
> +	V4L2_STATELESS_AV1_LEVEL_3_3 = 7,
> +
> +	V4L2_STATELESS_AV1_LEVEL_4_0 = 8,
> +	V4L2_STATELESS_AV1_LEVEL_4_1 = 9,
> +	V4L2_STATELESS_AV1_LEVEL_4_2 = 10,
> +	V4L2_STATELESS_AV1_LEVEL_4_3 = 11,
> +
> +	V4L2_STATELESS_AV1_LEVEL_5_0 = 12,
> +	V4L2_STATELESS_AV1_LEVEL_5_1 = 13,
> +	V4L2_STATELESS_AV1_LEVEL_5_2 = 14,
> +	V4L2_STATELESS_AV1_LEVEL_5_3 = 15,
> +
> +	V4L2_STATELESS_AV1_LEVEL_6_0 = 16,
> +	V4L2_STATELESS_AV1_LEVEL_6_1 = 17,
> +	V4L2_STATELESS_AV1_LEVEL_6_2 = 18,
> +	V4L2_STATELESS_AV1_LEVEL_6_3 = 19,
> +
> +	V4L2_STATELESS_AV1_LEVEL_7_0 = 20,
> +	V4L2_STATELESS_AV1_LEVEL_7_1 = 21,
> +	V4L2_STATELESS_AV1_LEVEL_7_2 = 22,
> +	V4L2_STATELESS_AV1_LEVEL_7_3 = 23
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_operating_mode - AV1 operating mode
> + *
> + * @V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING: General decoding
> + * (input is a sequence of OBUs, output is decoded frames).
> + * @V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING: Large scale
> + * tile decoding (input is a tile list OBU plus additional side information,
> + * output is a decoded frame)
> + *
> + * Conveys the decoding mode the decoder is operating with. The two AV1 decoding
> + * modes are specified in section 7 "Decoding process" of the AV1 specification.
> + */
> +#define V4L2_CID_STATELESS_AV1_OPERATING_MODE (V4L2_CID_CODEC_STATELESS_BASE + 409)
> +enum v4l2_stateless_av1_operating_mode {
> +	V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING = 0,
> +	V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING = 1,
> +};
> +
>  #define V4L2_CID_COLORIMETRY_CLASS_BASE	(V4L2_CTRL_CLASS_COLORIMETRY | 0x900)
>  #define V4L2_CID_COLORIMETRY_CLASS	(V4L2_CTRL_CLASS_COLORIMETRY | 1)
>  
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 7222fc855d6b..d20f8505f980 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -701,6 +701,7 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_FWHT     v4l2_fourcc('F', 'W', 'H', 'T') /* Fast Walsh Hadamard Transform (vicodec) */
>  #define V4L2_PIX_FMT_FWHT_STATELESS     v4l2_fourcc('S', 'F', 'W', 'H') /* Stateless FWHT (vicodec) */
>  #define V4L2_PIX_FMT_H264_SLICE v4l2_fourcc('S', '2', '6', '4') /* H264 parsed slices */
> +#define V4L2_PIX_FMT_AV1_FRAME	v4l2_fourcc('A', 'V', '1', 'F') /* AV1 parsed frame */
>  
>  /*  Vendor-specific formats   */
>  #define V4L2_PIX_FMT_CPIA1    v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
> @@ -1750,6 +1751,13 @@ struct v4l2_ext_control {
>  		struct v4l2_ctrl_mpeg2_sequence __user *p_mpeg2_sequence;
>  		struct v4l2_ctrl_mpeg2_picture __user *p_mpeg2_picture;
>  		struct v4l2_ctrl_mpeg2_quantisation __user *p_mpeg2_quantisation;
> +
> +		struct v4l2_ctrl_av1_sequence __user *p_av1_sequence;
> +		struct v4l2_ctrl_av1_tile_group __user *p_av1_tile_group;
> +		struct v4l2_ctrl_av1_tile_group_entry __user *p_av1_tile_group_entry;
> +		struct v4l2_ctrl_av1_tile_list __user *p_av1_tile_list;
> +		struct v4l2_ctrl_av1_tile_list_entry __user *p_av1_tile_list_entry;
> +		struct v4l2_ctrl_av1_frame_header __user *p_av1_frame_header;
>  		void __user *ptr;
>  	};
>  } __attribute__ ((packed));
> @@ -1814,6 +1822,13 @@ enum v4l2_ctrl_type {
>  	V4L2_CTRL_TYPE_MPEG2_QUANTISATION   = 0x0250,
>  	V4L2_CTRL_TYPE_MPEG2_SEQUENCE       = 0x0251,
>  	V4L2_CTRL_TYPE_MPEG2_PICTURE        = 0x0252,
> +
> +	V4L2_CTRL_TYPE_AV1_SEQUENCE	    = 0x280,
> +	V4L2_CTRL_TYPE_AV1_TILE_GROUP	    = 0x281,
> +	V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY = 0x282,
> +	V4L2_CTRL_TYPE_AV1_TILE_LIST	    = 0x283,
> +	V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY  = 0x284,
> +	V4L2_CTRL_TYPE_AV1_FRAME_HEADER	    = 0x285,
>  };
>  
>  /*  Used in the VIDIOC_QUERYCTRL ioctl for querying controls */



* Re: [RFC PATCH 1/2] media: Add AV1 uAPI
  2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
                     ` (2 preceding siblings ...)
  2022-01-28 15:45   ` Nicolas Dufresne
@ 2022-02-02 15:13   ` Nicolas Dufresne
  3 siblings, 0 replies; 14+ messages in thread
From: Nicolas Dufresne @ 2022-02-02 15:13 UTC (permalink / raw)
  To: daniel.almeida, stevecho, shawnku, tzungbi, mcasas, nhebert,
	abodenha, randy.wu, yunfei.dong, gustavo.padovan,
	andrzej.pietrasiewicz, tomeu.vizoso, nick.milner, xiaoyong.lu,
	mchehab, hverkuil-cisco
  Cc: linux-media, linux-kernel, kernel

Le mardi 10 août 2021 à 19:05 -0300, daniel.almeida@collabora.com a écrit :
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> This patch adds the  AOMedia Video 1 (AV1) kernel uAPI.
> 
> This design is based on currently available AV1 API implementations and
> aims to support the development of AV1 stateless video codecs
> on Linux.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> ---
>  .../userspace-api/media/v4l/biblio.rst        |   10 +
>  .../media/v4l/ext-ctrls-codec-stateless.rst   | 1268 +++++++++++++++++
>  .../media/v4l/pixfmt-compressed.rst           |   21 +
>  .../media/v4l/vidioc-g-ext-ctrls.rst          |   36 +
>  .../media/v4l/vidioc-queryctrl.rst            |   54 +
>  .../media/videodev2.h.rst.exceptions          |    9 +
>  drivers/media/v4l2-core/v4l2-ctrls-core.c     |  286 +++-
>  drivers/media/v4l2-core/v4l2-ctrls-defs.c     |   79 +
>  drivers/media/v4l2-core/v4l2-ioctl.c          |    1 +
>  include/media/v4l2-ctrls.h                    |   12 +
>  include/uapi/linux/v4l2-controls.h            |  796 +++++++++++
>  include/uapi/linux/videodev2.h                |   15 +
>  12 files changed, 2586 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/userspace-api/media/v4l/biblio.rst b/Documentation/userspace-api/media/v4l/biblio.rst
> index 7b8e6738ff9e..7061144d10bb 100644
> --- a/Documentation/userspace-api/media/v4l/biblio.rst
> +++ b/Documentation/userspace-api/media/v4l/biblio.rst
> @@ -417,3 +417,13 @@ VP8
>  :title:     RFC 6386: "VP8 Data Format and Decoding Guide"
>  
>  :author:    J. Bankoski et al.
> +
> +.. _av1:
> +
> +AV1
> +===
> +
> +:title:     AV1 Bitstream & Decoding Process Specification
> +
> +:author:    Peter de Rivaz, Argon Design Ltd, Jack Haughton, Argon Design Ltd
> diff --git a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> index 72f5e85b4f34..960500651e4b 100644
> --- a/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> +++ b/Documentation/userspace-api/media/v4l/ext-ctrls-codec-stateless.rst
> @@ -1458,3 +1458,1271 @@ FWHT Flags
>  .. raw:: latex
>  
>      \normalsize
> +
> +
> +.. _v4l2-codec-stateless-av1:
> +
> +``V4L2_CID_STATELESS_AV1_SEQUENCE (struct)``
> +    Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
> +    in :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_sequence
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_sequence
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u32
> +      - ``flags``
> +      - See :ref:`AV1 Sequence Flags <av1_sequence_flags>`.
> +    * - __u8
> +      - ``seq_profile``
> +      - Specifies the features that can be used in the coded video sequence.
> +    * - __u8
> +      - ``order_hint_bits``
> +      - Specifies the number of bits used for the order_hint field at each frame.
> +    * - __u8
> +      - ``bit_depth``
> +      - The bitdepth to use for the sequence, as described in section 5.5.2
> +        "Color config syntax" in :ref:`av1`.
> +
> +
> +.. _av1_sequence_flags:
> +
> +``AV1 Sequence Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE``
> +      - 0x00000001
> +      - If set, specifies that the coded video sequence contains only one coded
> +	frame. If not set, specifies that the coded video sequence contains one or
> +	more coded frames.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK``
> +      - 0x00000002
> +      - If set, indicates that superblocks contain 128x128 luma samples.
> +	When equal to 0, it indicates that superblocks contain 64x64 luma samples.
> +	(The number of contained chroma samples depends on subsampling_x and
> +	subsampling_y).
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA``
> +      - 0x00000004
> +      - If set, specifies that the use_filter_intra syntax element may be
> +	present. If not set, specifies that the use_filter_intra syntax element will
> +	not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER``
> +      - 0x00000008
> +      - Specifies whether the intra edge filtering process should be enabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND``
> +      - 0x00000010
> +      - If set, specifies that the mode info for inter blocks may contain the
> +	syntax element interintra. If not set, specifies that the syntax element
> +	interintra will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND``
> +      - 0x00000020
> +      - If set, specifies that the mode info for inter blocks may contain the
> +	syntax element compound_type. If not set, specifies that the syntax element
> +	compound_type will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION``
> +      - 0x00000040
> +      - If set, indicates that the allow_warped_motion syntax element may be
> +	present. If not set, indicates that the allow_warped_motion syntax element
> +	will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER``
> +      - 0x00000080
> +      - If set, indicates that the inter prediction filter type may be specified
> +	independently in the horizontal and vertical directions. If the flag is
> +	equal to 0, only one filter type may be specified, which is then used in
> +	both directions.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT``
> +      - 0x00000100
> +      - If set, indicates that tools based on the values of order hints may be
> +	used. If not set, indicates that tools based on order hints are disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP``
> +      - 0x00000200
> +      - If set, indicates that the distance weights process may be used for
> +	inter prediction.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS``
> +      - 0x00000400
> +      - If set, indicates that the use_ref_frame_mvs syntax element may be
> +	present. If not set, indicates that the use_ref_frame_mvs syntax element
> +	will not be present.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES``
> +      - 0x00000800
> +      - If set, specifies that the use_superres syntax element will be present
> +	in the uncompressed header. If not set, specifies that the use_superres
> +	syntax element will not be present (instead use_superres will be set to 0
> +	in the uncompressed header without being read).
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF``
> +      - 0x00001000
> +      - If set, specifies that cdef filtering may be enabled. If not set,
> +	specifies that cdef filtering is disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION``
> +      - 0x00002000
> +      - If set, specifies that loop restoration filtering may be enabled. If not
> +	set, specifies that loop restoration filtering is disabled.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME``
> +      - 0x00004000
> +      - If set, indicates that the video does not contain U and V color planes.
> +	If not set, indicates that the video contains Y, U, and V color planes.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE``
> +      - 0x00008000
> +      - If set, signals full swing representation. If not set, signals studio
> +	swing representation.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X``
> +      - 0x00010000
> +      - Specifies the chroma subsampling format in the horizontal direction.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y``
> +      - 0x00020000
> +      - Specifies the chroma subsampling format in the vertical direction.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT``
> +      - 0x00040000
> +      - Specifies whether film grain parameters are present in the coded video
> +	sequence.
> +    * - ``V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q``
> +      - 0x00080000
> +      - If set, indicates that the U and V planes may have separate delta
> +	quantizer values. If not set, indicates that the U and V planes will share
> +	the same delta quantizer value.
> +
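Since these sequence flags share a single bitmask, userspace tests them with bitwise AND when filling the control from a parsed sequence header. A minimal, illustrative sketch; `seq_is_420()` is a hypothetical helper, not part of the proposed uAPI, and the flag values are taken from the table above:

```c
#include <assert.h>
#include <stdbool.h>

/* Flag values as listed in the table above. */
#define V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME   0x00004000
#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X 0x00010000
#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y 0x00020000

/*
 * Hypothetical helper: derive whether the sequence is 4:2:0 from the
 * flags bitmask, mirroring how mono_chrome, subsampling_x and
 * subsampling_y are read from the sequence header.
 */
static bool seq_is_420(unsigned int flags)
{
	if (flags & V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME)
		return false; /* no chroma planes at all */
	return (flags & V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X) &&
	       (flags & V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y);
}
```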
> +``V4L2_CID_STATELESS_AV1_TILE_GROUP (struct)``
> +    Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
> +    v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start + 1
> +    instances of struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. See section
> +    6.10.1 "General tile group OBU semantics" in :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_group
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_group
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Tile Group Flags <av1_tile_group_flags>`.
> +    * - __u8
> +      - ``tg_start``
> +      - Specifies the zero-based index of the first tile in the current tile
> +        group.
> +    * - __u8
> +      - ``tg_end``
> +      - Specifies the zero-based index of the last tile in the current tile
> +        group.
> +
> +.. _av1_tile_group_flags:
> +
> +``AV1 Tile Group Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT``
> +      - 0x00000001
> +      - Specifies whether tg_start and tg_end are present. If tg_start and
> +	tg_end are not present, this tile group covers the entire frame.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (struct)``
> +    Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
> +    MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
> +    v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
> +    tile_col. See section 6.10.1 "General tile group OBU semantics" in
> +    :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_group_entry
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_group_entry
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u32
> +      - ``tile_offset``
> +      - Offset from the start of the OBU data to where the coded tile data
> +        actually starts.
> +    * - __u32
> +      - ``tile_size``
> +      - Specifies the size in bytes of the coded tile. Equivalent to "TileSize"
> +        in :ref:`av1`.
> +    * - __u32
> +      - ``tile_row``
> +      - Specifies the row of the current tile. Equivalent to "TileRow" in
> +        :ref:`av1`.
> +    * - __u32
> +      - ``tile_col``
> +      - Specifies the column of the current tile. Equivalent to "TileColumn" in
> +        :ref:`av1`.
> +
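The tile_offset/tile_size pair is enough to locate each coded tile inside the OBU data submitted in the output buffer. An illustrative sketch; struct av1_tile_entry and tile_data() are hypothetical stand-ins mirroring the fields above, not uAPI types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical mirror of the tile group entry fields described above. */
struct av1_tile_entry {
	uint32_t tile_offset; /* offset from the start of the OBU data */
	uint32_t tile_size;   /* TileSize: coded tile size in bytes */
	uint32_t tile_row;    /* TileRow */
	uint32_t tile_col;    /* TileColumn */
};

static const uint8_t demo_obu[8]; /* stand-in for a real OBU buffer */

/*
 * Hypothetical helper: return a pointer to one tile's coded data inside
 * the OBU buffer, or NULL when the entry overflows the buffer.
 */
static const uint8_t *tile_data(const uint8_t *obu, size_t obu_len,
				const struct av1_tile_entry *e)
{
	if ((uint64_t)e->tile_offset + e->tile_size > obu_len)
		return NULL;
	return obu + e->tile_offset;
}
```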
> +``V4L2_CID_STATELESS_AV1_TILE_LIST (struct)``
> +    Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
> +    in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
> +    most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
> +    will refer to "tile_count_minus_1" + 1 instances of struct
> +    v4l2_ctrl_av1_tile_list_entry.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_list
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_list
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``output_frame_width_in_tiles_minus_1``
> +      - This field plus one is the width of the output frame, in tile units.
> +    * - __u8
> +      - ``output_frame_height_in_tiles_minus_1``
> +      - This field plus one is the height of the output frame, in tile units.
> +    * - __u8
> +      - ``tile_count_minus_1``
> +      - This field plus one is the number of tile_list_entry in the list.
> +
> +``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (struct)``
> +    Represents a tile list entry as seen in an AV1 Tile List OBU. See section
> +    6.11.2. "Tile list entry semantics" of :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_tile_list_entry
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_tile_list_entry
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``anchor_frame_idx``
> +      - The index into the AnchorFrames array of the frame that the tile uses
> +	for prediction.
> +    * - __u8
> +      - ``anchor_tile_row``
> +      - The row coordinate of the tile in the frame it belongs to, in tile
> +	units.
> +    * - __u8
> +      - ``anchor_tile_col``
> +      - The column coordinate of the tile in the frame it belongs to, in tile
> +	units.
> +    * - __u8
> +      - ``tile_data_size_minus_1``
> +      - This field plus one is the size of the coded tile data in bytes.
> +
> +.. c:type:: v4l2_av1_film_grain
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_film_grain
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Film Grain Flags <av1_film_grain_flags>`.
> +    * - __u16
> +      - ``grain_seed``
> +      - Specifies the starting value for the pseudo-random numbers used during
> +	film grain synthesis.
> +    * - __u8
> +      - ``film_grain_params_ref_idx``
> +      - Indicates which reference frame contains the film grain parameters to be
> +	used for this frame.
> +    * - __u8
> +      - ``num_y_points``
> +      - Specifies the number of points for the piece-wise linear scaling
> +	function of the luma component.
> +    * - __u8
> +      - ``point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS]``
> +      - Represents the x (luma value) coordinate for the i-th point
> +        of the piecewise linear scaling function for luma component. The values
> +        are signaled on the scale of 0..255. (In case of 10 bit video, these
> +        values correspond to luma values divided by 4. In case of 12 bit video,
> +        these values correspond to luma values divided by 16.).
> +    * - __u8
> +      - ``point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS]``
> +      - Represents the scaling (output) value for the i-th point
> +	of the piecewise linear scaling function for luma component.
> +    * - __u8
> +      - ``num_cb_points``
> +      -  Specifies the number of points for the piece-wise linear scaling
> +         function of the cb component.
> +    * - __u8
> +      - ``point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS]``
> +      - Represents the x coordinate for the i-th point of the
> +        piece-wise linear scaling function for cb component. The values are
> +        signaled on the scale of 0..255.
> +    * - __u8
> +      - ``point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS]``
> +      - Represents the scaling (output) value for the i-th point of the
> +        piecewise linear scaling function for cb component.
> +    * - __u8
> +      - ``num_cr_points``
> +      - Represents the number of points for the piece-wise
> +        linear scaling function of the cr component.
> +    * - __u8
> +      - ``point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS]``
> +      - Represents the x coordinate for the i-th point of the
> +        piece-wise linear scaling function for cr component. The values are
> +        signaled on the scale of 0..255.
> +    * - __u8
> +      - ``point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS]``
> +      - Represents the scaling (output) value for the i-th point of the
> +        piecewise linear scaling function for cr component.
> +    * - __u8
> +      - ``grain_scaling_minus_8``
> +      - This field plus 8 is the shift applied to the values of the chroma
> +        component.
> +        The grain_scaling_minus_8 can take values of 0..3 and determines the
> +        range and quantization step of the standard deviation of film grain.
> +    * - __u8
> +      - ``ar_coeff_lag``
> +      - Specifies the number of auto-regressive coefficients for luma and
> +	chroma.
> +    * - __u8
> +      - ``ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the Y plane.
> +    * - __u8
> +      - ``ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the U plane.
> +    * - __u8
> +      - ``ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA]``
> +      - Specifies auto-regressive coefficients used for the V plane.
> +    * - __u8
> +      - ``ar_coeff_shift_minus_6``
> +      - Specifies the range of the auto-regressive coefficients. Values of 0,
> +        1, 2, and 3 correspond to the ranges for auto-regressive coefficients of
> +        [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25, 0.25) respectively.
> +    * - __u8
> +      - ``grain_scale_shift``
> +      - Specifies how much the Gaussian random numbers should be scaled down
> +	during the grain synthesis process.
> +    * - __u8
> +      - ``cb_mult``
> +      - Represents a multiplier for the cb component used in derivation of the
> +	input index to the cb component scaling function.
> +    * - __u8
> +      - ``cb_luma_mult``
> +      - Represents a multiplier for the average luma component used in
> +	derivation of the input index to the cb component scaling function.
> +    * - __u16
> +      - ``cb_offset``
> +      - Represents an offset used in derivation of the input index to the
> +	cb component scaling function.
> +    * - __u8
> +      - ``cr_mult``
> +      - Represents a multiplier for the cr component used in derivation of the
> +	input index to the cr component scaling function.
> +    * - __u8
> +      - ``cr_luma_mult``
> +      - Represents a multiplier for the average luma component used in
> +        derivation of the input index to the cr component scaling function.
> +    * - __u16
> +      - ``cr_offset``
> +      - Represents an offset used in derivation of the input index to the
> +        cr component scaling function.
> +
> +.. _av1_film_grain_flags:
> +
> +``AV1 Film Grain Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN``
> +      - 0x00000001
> +      - If set, specifies that film grain should be added to this frame. If not
> +	set, specifies that film grain should not be added.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN``
> +      - 0x00000002
> +      - If set, means that a new set of parameters should be sent. If not set,
> +	specifies that the previous set of parameters should be used.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA``
> +      - 0x00000004
> +      - If set, specifies that the chroma scaling is inferred from the luma
> +	scaling.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP``
> +      - 0x00000008
> +      - If set, indicates that the overlap between film grain blocks shall be
> +	applied. If not set, indicates that the overlap between film grain blocks
> +	shall not be applied.
> +    * - ``V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE``
> +      - 0x00000010
> +      - If set, indicates that clipping to the restricted (studio) range shall
> +        be applied to the sample values after adding the film grain (see the
> +        semantics for color_range for an explanation of studio swing). If not
> +        set, indicates that clipping to the full range shall be applied to the
> +        sample values after adding the film grain.
> +
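The (point_*_value, point_*_scaling) pairs above define a piecewise linear scaling function per plane. A sketch of the lookup follows; it linearly interpolates between the bracketing points and clamps at the ends, while the exact integer rounding of the AV1 grain synthesis process is omitted, so treat it as illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative piecewise linear scaling lookup built from the
 * (point value, point scaling) pairs: interpolate between the two
 * points that bracket the input value, clamping at both ends.
 */
static int grain_scale(const uint8_t *value, const uint8_t *scaling,
		       int num_points, int x)
{
	int i;

	if (num_points == 0)
		return 0;
	if (x <= value[0])
		return scaling[0];
	for (i = 0; i < num_points - 1; i++) {
		if (x < value[i + 1])
			return scaling[i] +
			       (scaling[i + 1] - scaling[i]) *
			       (x - value[i]) / (value[i + 1] - value[i]);
	}
	return scaling[num_points - 1];
}
```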
> +.. c:type:: v4l2_av1_warp_model
> +
> +	AV1 Warp Model as described in section 3 "Symbols and abbreviated terms" of
> +	:ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_WARP_MODEL_IDENTITY``
> +      - 0
> +      - Warp model is just an identity transform.
> +    * - ``V4L2_AV1_WARP_MODEL_TRANSLATION``
> +      - 1
> +      - Warp model is a pure translation.
> +    * - ``V4L2_AV1_WARP_MODEL_ROTZOOM``
> +      - 2
> +      - Warp model is a rotation + symmetric zoom + translation.
> +    * - ``V4L2_AV1_WARP_MODEL_AFFINE``
> +      - 3
> +      - Warp model is a general affine transform.
> +
> +.. c:type:: v4l2_av1_reference_frame
> +
> +AV1 Reference Frames as described in section 6.10.24. "Ref frames semantics"
> +of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_REF_INTRA_FRAME``
> +      - 0
> +      - Intra Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST_FRAME``
> +      - 1
> +      - Last Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST2_FRAME``
> +      - 2
> +      - Last2 Frame Reference.
> +    * - ``V4L2_AV1_REF_LAST3_FRAME``
> +      - 3
> +      - Last3 Frame Reference.
> +    * - ``V4L2_AV1_REF_GOLDEN_FRAME``
> +      - 4
> +      - Golden Frame Reference.
> +    * - ``V4L2_AV1_REF_BWDREF_FRAME``
> +      - 5
> +      - BWD Frame Reference.
> +    * - ``V4L2_AV1_REF_ALTREF2_FRAME``
> +      - 6
> +      - ALTREF2 Frame Reference.
> +    * - ``V4L2_AV1_REF_ALTREF_FRAME``
> +      - 7
> +      - ALTREF Frame Reference.
> +    * - ``V4L2_AV1_NUM_REF_FRAMES``
> +      - 8
> +      - Total number of reference frames.
> +
> +.. c:type:: v4l2_av1_global_motion
> +
> +AV1 Global Motion parameters as described in section 6.8.17
> +"Global motion params semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_global_motion
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
> +      - A bitfield containing the flags per reference frame. See
> +        :ref:`AV1 Global Motion Flags <av1_global_motion_flags>` for more
> +        details.
> +    * - enum :c:type:`v4l2_av1_warp_model`
> +      - ``type[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
> +      - The type of global motion transform used.
> +    * - __u32
> +      - ``params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6]``
> +      - This field has the same meaning as "gm_params" in :ref:`av1`.
> +    * - __u8
> +      - ``invalid``
> +      - Bitfield indicating whether the global motion params are invalid for a
> +        given reference frame. See section 7.11.3.6 "Setup shear process" and the
> +        variable "warpValid". Use V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to
> +        create a suitable mask.
> +
> +.. _av1_global_motion_flags:
> +
> +``AV1 Global Motion Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL``
> +      - 0x00000001
> +      - Specifies whether global motion parameters are present for a particular
> +        reference frame.
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM``
> +      - 0x00000002
> +      - Specifies whether a particular reference frame uses rotation and zoom
> +        global motion.
> +    * - ``V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION``
> +      - 0x00000004
> +      - Specifies whether a particular reference frame uses translation global
> +        motion.
> +
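A driver combines the per-reference flags with the "invalid" bitfield before programming global motion into hardware. The sketch below assumes V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) expands to one bit per reference frame; that expansion and the use_global_motion() helper are assumptions for illustration, only the macro and flag names come from the text above:

```c
#include <assert.h>
#include <stdbool.h>

#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL 0x00000001

/* Assumed expansion: one bit per reference frame in "invalid". */
#define V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) (1U << (ref))

/*
 * Illustrative check: use global motion for a reference frame only when
 * its parameters are present and the shear setup marked them valid.
 */
static bool use_global_motion(unsigned char flags_for_ref,
			      unsigned char invalid, int ref)
{
	if (!(flags_for_ref & V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL))
		return false;
	return !(invalid & V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref));
}
```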
> +.. c:type:: v4l2_av1_frame_restoration_type
> +
> +AV1 Frame Restoration Type.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FRAME_RESTORE_NONE``
> +      - 0
> +      - No filtering is applied.
> +    * - ``V4L2_AV1_FRAME_RESTORE_WIENER``
> +      - 1
> +      - Wiener filter process is invoked.
> +    * - ``V4L2_AV1_FRAME_RESTORE_SGRPROJ``
> +      - 2
> +      - Self guided filter process is invoked.
> +    * - ``V4L2_AV1_FRAME_RESTORE_SWITCHABLE``
> +      - 3
> +      - Restoration filter is switchable.
> +
> +.. c:type:: v4l2_av1_loop_restoration
> +
> +AV1 Loop Restoration as described in section 6.10.15 "Loop restoration params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_loop_restoration
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - :c:type:`v4l2_av1_frame_restoration_type`
> +      - ``frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX]``
> +      - Specifies the type of restoration used for each plane.
> +    * - __u8
> +      - ``lr_unit_shift``
> +      - Specifies if the luma restoration size should be halved.
> +    * - __u8
> +      - ``lr_uv_shift``
> +      - Specifies if the chroma size should be half the luma size.
> +    * - __u8
> +      - ``loop_restoration_size[V4L2_AV1_NUM_PLANES_MAX]``
> +      - Specifies the size of loop restoration units in units of samples in the
> +        current plane.
> +
> +.. c:type:: v4l2_av1_cdef
> +
> +AV1 CDEF params semantics as described in section 6.10.14. "CDEF params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_cdef
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``damping_minus_3``
> +      - Controls the amount of damping in the deringing filter.
> +    * - __u8
> +      - ``bits``
> +      - Specifies the number of bits needed to specify which CDEF filter to
> +        apply.
> +    * - __u8
> +      - ``y_pri_strength[V4L2_AV1_CDEF_MAX]``
> +      - Specifies the strength of the primary filter for the luma plane.
> +    * - __u8
> +      - ``y_sec_strength[V4L2_AV1_CDEF_MAX]``
> +      - Specifies the strength of the secondary filter for the luma plane.
> +    * - __u8
> +      - ``uv_pri_strength[V4L2_AV1_CDEF_MAX]``
> +      - Specifies the strength of the primary filter for the chroma planes.
> +    * - __u8
> +      - ``uv_secondary_strength[V4L2_AV1_CDEF_MAX]``
> +      - Specifies the strength of the secondary filter for the chroma planes.
> +
> +.. c:type:: v4l2_av1_segment_feature
> +
> +AV1 segment features as described in section 3 "Symbols and abbreviated terms"
> +of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEG_LVL_ALT_Q``
> +      - 0
> +      - Index for quantizer segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_ALT_LF_Y_V``
> +      - 1
> +      - Index for vertical luma loop filter segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_ALT_LF_Y_H``
> +      - 2
> +      - Index for horizontal luma loop filter segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_ALT_LF_U``
> +      - 3
> +      - Index for U plane loop filter segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_ALT_LF_V``
> +      - 4
> +      - Index for V plane loop filter segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_FRAME``
> +      - 5
> +      - Index for reference frame segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_SKIP``
> +      - 6
> +      - Index for skip segment feature.
> +    * - ``V4L2_AV1_SEG_LVL_REF_GLOBALMV``
> +      - 7
> +      - Index for global mv feature.
> +    * - ``V4L2_AV1_SEG_LVL_MAX``
> +      - 8
> +      - Number of segment features.
> +
> +.. c:type:: v4l2_av1_segmentation
> +
> +AV1 Segmentation params as defined in section 6.8.13. "Segmentation params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_segmentation
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See :ref:`AV1 Segmentation Flags <av1_segmentation_flags>`
> +    * - __u8
> +      - ``feature_enabled[V4L2_AV1_MAX_SEGMENTS]``
> +      - Bitmask defining which features are enabled in each segment. Use
> +        V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
> +    * - __u16
> +      - ``feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX]``
> +      - Data attached to each feature. Data entry is only valid if the feature
> +        is enabled.
> +    * - __u8
> +      - ``last_active_seg_id``
> +      -  Indicates the highest numbered segment id that has some
> +         enabled feature. This is used when decoding the segment id to only decode
> +         choices corresponding to used segments.
> +
> +.. _av1_segmentation_flags:
> +
> +``AV1 Segmentation Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_ENABLED``
> +      - 0x00000001
> +      - If set, indicates that this frame makes use of the segmentation tool. If
> +        not set, indicates that the frame does not use segmentation.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP``
> +      - 0x00000002
> +      - If set, indicates that the segmentation map is updated during the
> +        decoding of this frame. If not set, indicates that the segmentation map
> +        from the previous frame is used.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE``
> +      - 0x00000004
> +      - If set, indicates that the updates to the segmentation map are coded
> +        relative to the existing segmentation map. If not set, indicates that
> +        the new segmentation map is coded without reference to the existing
> +        segmentation map.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA``
> +      - 0x00000008
> +      - If set, indicates that new parameters are about to be specified for
> +        each segment. If not set, indicates that the segmentation parameters
> +        should keep their existing values.
> +    * - ``V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP``
> +      - 0x00000010
> +      - If set, indicates that the segment id will be read before the skip
> +        syntax element. If not set, indicates that the skip syntax element will
> +        be read first.
> +
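The per-segment feature_enabled bytes are bitmasks over the feature indices listed under v4l2_av1_segment_feature. The sketch below assumes V4L2_AV1_SEGMENT_FEATURE_ENABLED(feat) expands to one bit per feature index; the expansion and the seg_feature_active() helper are assumptions for illustration, only the macro name comes from the text above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define V4L2_AV1_SEG_LVL_ALT_Q 0

/* Assumed expansion: one bit per feature index. */
#define V4L2_AV1_SEGMENT_FEATURE_ENABLED(feat) (1U << (feat))

/*
 * Illustrative lookup: a segment's feature_data entry is only
 * meaningful when the matching bit in feature_enabled is set.
 */
static bool seg_feature_active(const uint8_t *feature_enabled, int segment,
			       int feature)
{
	return feature_enabled[segment] &
	       V4L2_AV1_SEGMENT_FEATURE_ENABLED(feature);
}
```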
> +.. c:type:: v4l2_av1_loop_filter
> +
> +AV1 Loop filter params as defined in section 6.8.10. "Loop filter semantics" of
> +:ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_loop_filter
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Loop Filter flags <av1_loop_filter_flags>` for more details.
> +    * - __u8
> +      - ``level[4]``
> +      - An array containing loop filter strength values. Different loop
> +        filter strength values from the array are used depending on the image
> +        plane being filtered, and the edge direction (vertical or horizontal)
> +        being filtered.
> +    * - __u8
> +      - ``sharpness``
> +      - Indicates the sharpness level. The loop_filter_level and
> +        loop_filter_sharpness together determine when a block edge is filtered,
> +        and by how much the filtering can change the sample values. The loop
> +        filter process is described in section 7.14 of :ref:`av1`.
> +    * - __u8
> +      - ``ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME]``
> +      - Contains the adjustment needed for the filter level based on the
> +        chosen reference frame. If this syntax element is not present, it
> +        maintains its previous value.
> +    * - __u8
> +      - ``mode_deltas[2]``
> +      - Contains the adjustment needed for the filter level based on
> +        the chosen mode. If this syntax element is not present, it maintains its
> +        previous value.
> +    * - __u8
> +      - ``delta_lf_res``
> +      - Specifies the left shift which should be applied to decoded loop filter
> +        delta values.
> +    * - __u8
> +      - ``delta_lf_multi``
> +      - If set to 1, specifies that separate loop filter deltas are sent for
> +        horizontal luma edges, vertical luma edges, the U edges, and the V
> +        edges. If set to 0, specifies that the same loop filter delta is used
> +        for all edges.
> +
> +.. _av1_loop_filter_flags:
> +
> +``AV1 Loop Filter Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED``
> +      - 0x00000001
> +      - If set, means that the filter level depends on the mode and reference
> +        frame used to predict a block. If not set, means that the filter level
> +        does not depend on the mode and reference frame.
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE``
> +      - 0x00000002
> +      - If set, means that additional syntax elements are present that specify
> +        which mode and reference frame deltas are to be updated. If not set,
> +        means that these syntax elements are not present.
> +    * - ``V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT``
> +      - 0x00000004
> +      - Specifies whether loop filter delta values are present.
> +
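When DELTA_ENABLED is set, the filter level for a block is adjusted by the (signed) ref_deltas entry of its reference frame and clamped to the valid 0..63 range. The sketch below illustrates that adjustment only; the spec additionally scales deltas by a shift derived from the level, which is omitted here, and lf_adjusted_level() is a hypothetical helper:

```c
#include <assert.h>

/* Clamp helper in the style of the spec's Clip3(). */
static int lf_clip3(int lo, int hi, int v)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/*
 * Illustrative DELTA_ENABLED adjustment: nudge the base filter level
 * by the reference frame's signed delta and clamp to 0..63.
 */
static int lf_adjusted_level(int level, int ref_delta, int delta_enabled)
{
	if (!delta_enabled)
		return level;
	return lf_clip3(0, 63, level + ref_delta);
}
```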
> +.. c:type:: v4l2_av1_quantization
> +
> +AV1 Quantization params as defined in section 6.8.11 "Quantization params
> +semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_quantization
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Quantization Flags <av1_quantization_flags>` for more details.
> +    * - __u8
> +      - ``base_q_idx``
> +      - Indicates the base frame qindex. This is used for Y AC coefficients and
> +        as the base value for the other quantizers.
> +    * - __u8
> +      - ``delta_q_y_dc``
> +      - Indicates the Y DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_u_dc``
> +      - Indicates the U DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_u_ac``
> +      - Indicates the U AC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_v_dc``
> +      - Indicates the V DC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``delta_q_v_ac``
> +      - Indicates the V AC quantizer relative to base_q_idx.
> +    * - __u8
> +      - ``qm_y``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        luma plane decoding.
> +    * - __u8
> +      - ``qm_u``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        chroma U plane decoding.
> +    * - __u8
> +      - ``qm_v``
> +      - Specifies the level in the quantizer matrix that should be used for
> +        chroma V plane decoding.
> +    * - __u8
> +      - ``delta_q_res``
> +      - Specifies the left shift which should be applied to decoded quantizer
> +        index delta values.
> +
> +.. _av1_quantization_flags:
> +
> +``AV1 Quantization Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA``
> +      - 0x00000001
> +      - If set, indicates that the U and V delta quantizer values are coded
> +        separately. If not set, indicates that the U and V delta quantizer
> +        values share a common value.
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX``
> +      - 0x00000002
> +      - If set, specifies that the quantizer matrix will be used to compute
> +        quantizers.
> +    * - ``V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT``
> +      - 0x00000004
> +      - Specifies whether quantizer index delta values are present.
> +
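Each per-plane quantizer index is derived from base_q_idx plus its signed delta, clamped to the valid 0..255 range, in the style of the spec's get_qindex() logic. A hedged sketch of that arithmetic; av1_plane_qidx() is a hypothetical helper and the deltas are treated as already sign-extended:

```c
#include <assert.h>

/* Clamp helper in the style of the spec's Clip3(). */
static int av1_clip3(int lo, int hi, int v)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/*
 * Illustrative per-plane quantizer index derivation: base index plus
 * the plane's signed delta, clamped to the valid qindex range.
 */
static int av1_plane_qidx(int base_q_idx, int delta_q)
{
	return av1_clip3(0, 255, base_q_idx + delta_q);
}
```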
> +.. c:type:: v4l2_av1_tile_info
> +
> +AV1 Tile info as defined in section 6.8.14. "Tile info semantics" of :ref:`av1`.
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{1.5cm}|p{5.8cm}|p{10.0cm}|
> +
> +.. flat-table:: struct v4l2_av1_tile_info
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - __u8
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Tile Info flags <av1_tile_info_flags>` for more details.
> +    * - __u32
> +      - ``mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1]``
> +      - An array specifying the start column (in units of 4x4 luma
> +        samples) for each tile across the image.
> +    * - __u32
> +      - ``mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1]``
> +      - An array specifying the start row (in units of 4x4 luma
> +        samples) for each tile across the image.
> +    * - __u32
> +      - ``width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS]``
> +      - Specifies the width of a tile minus 1 in units of superblocks.
> +    * - __u32
> +      - ``height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS]``
> +      - Specifies the height of a tile minus 1 in units of superblocks.
> +    * - __u8
> +      - ``tile_size_bytes``
> +      - Specifies the number of bytes needed to code each tile size.
> +    * - __u8
> +      - ``context_update_tile_id``
> +      - Specifies which tile to use for the CDF update.
> +    * - __u8
> +      - ``tile_rows``
> +      - Specifies the number of tiles down the frame.
> +    * - __u8
> +      - ``tile_cols``
> +      - Specifies the number of tiles across the frame.
> +
> +.. _av1_tile_info_flags:
> +
> +``AV1 Tile Info Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING``
> +      - 0x00000001
> +      - If set, means that the tiles are uniformly spaced across the frame. (In
> +        other words, all tiles are the same size except for the ones at the
> +        right and bottom edge which can be smaller). If not set means that the
> +        tile sizes are coded.
> +
> +.. c:type:: v4l2_av1_frame_type
> +
> +AV1 Frame Type
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_KEY_FRAME``
> +      - 0
> +      - Key frame.
> +    * - ``V4L2_AV1_INTER_FRAME``
> +      - 1
> +      - Inter frame.
> +    * - ``V4L2_AV1_INTRA_ONLY_FRAME``
> +      - 2
> +      - Intra-only frame.
> +    * - ``V4L2_AV1_SWITCH_FRAME``
> +      - 3
> +      - Switch frame.
> +
> +.. c:type:: v4l2_av1_interpolation_filter
> +
> +AV1 Interpolation Filter
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP``
> +      - 0
> +      - Eight tap filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH``
> +      - 1
> +      - Eight tap smooth filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP``
> +      - 2
> +      - Eight tap sharp filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_BILINEAR``
> +      - 3
> +      - Bilinear filter.
> +    * - ``V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE``
> +      - 4
> +      - Filter selection is signaled at the block level.
> +
> +.. c:type:: v4l2_av1_tx_mode
> +
> +AV1 Tx mode as described in section 6.8.21 "TX mode semantics" of :ref:`av1`.
> +
> +.. raw:: latex
> +
> +    \scriptsize
> +
> +.. tabularcolumns:: |p{7.4cm}|p{0.3cm}|p{9.6cm}|
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_TX_MODE_ONLY_4X4``
> +      - 0
> +      -  The inverse transform will use only 4x4 transforms.
> +    * - ``V4L2_AV1_TX_MODE_LARGEST``
> +      - 1
> +      - The inverse transform will use the largest transform size that fits
> +        inside the block.
> +    * - ``V4L2_AV1_TX_MODE_SELECT``
> +      - 2
> +      - The choice of transform size is specified explicitly for each block.
> +
> +``V4L2_CID_STATELESS_AV1_FRAME_HEADER (struct)``
> +    Represents a Frame Header OBU. See section 6.8 "Frame header OBU
> +    semantics" of :ref:`av1` for more details.
> +
> +.. c:type:: v4l2_ctrl_av1_frame_header
> +
> +.. cssclass:: longtable
> +
> +.. tabularcolumns:: |p{5.8cm}|p{4.8cm}|p{6.6cm}|
> +
> +.. flat-table:: struct v4l2_ctrl_av1_frame_header
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - struct :c:type:`v4l2_av1_tile_info`
> +      - ``tile_info``
> +      - Tile info
> +    * - struct :c:type:`v4l2_av1_quantization`
> +      - ``quantization``
> +      - Quantization params
> +    * - struct :c:type:`v4l2_av1_segmentation`
> +      - ``segmentation``
> +      - Segmentation params
> +    * - struct :c:type:`v4l2_av1_loop_filter`
> +      - ``loop_filter``
> +      - Loop filter params
> +    * - struct :c:type:`v4l2_av1_cdef`
> +      - ``cdef``
> +      - CDEF params
> +    * - struct :c:type:`v4l2_av1_loop_restoration`
> +      - ``loop_restoration``
> +      - Loop restoration params
> +    * - struct :c:type:`v4l2_av1_loop_global_motion`
> +      - ``global_motion``
> +      - Global motion params
> +    * - struct :c:type:`v4l2_av1_loop_film_grain`
> +      - ``film_grain``
> +      - Film grain params
> +    * - __u32
> +      - ``flags``
> +      - See
> +        :ref:`AV1 Frame Header flags <av1_frame_header_flags>` for more details.
> +    * - enum :c:type:`v4l2_av1_frame_type`
> +      - ``frame_type``
> +      - Specifies the AV1 frame type
> +    * - __u32
> +      - ``order_hint``
> +      - Specifies OrderHintBits least significant bits of the expected output
> +        order for this frame.
> +    * - __u8
> +      - ``superres_denom``
> +      - The denominator for the upscaling ratio.
> +    * - __u32
> +      - ``upscaled_width``
> +      - The upscaled width.
> +    * - enum :c:type:`v4l2_av1_interpolation_filter`
> +      - ``interpolation_filter``
> +      - Specifies the filter selection used for performing inter prediction.
> +    * - enum :c:type:`v4l2_av1_tx_mode`
> +      - ``tx_mode``
> +      - Specifies how the transform size is determined.
> +    * - __u32
> +      - ``frame_width_minus_1``
> +      - Add 1 to get the frame's width.
> +    * - __u32
> +      - ``frame_height_minus_1``
> +      - Add 1 to get the frame's height.
> +    * - __u16
> +      - ``render_width_minus_1``
> +      - Add 1 to get the render width of the frame in luma samples.
> +    * - __u16
> +      - ``render_height_minus_1``
> +      - Add 1 to get the render height of the frame in luma samples.
> +    * - __u32
> +      - ``current_frame_id``
> +      - Specifies the frame id number for the current frame. Frame
> +        id numbers are additional information that do not affect the decoding
> +        process, but provide decoders with a way of detecting missing reference
> +        frames so that appropriate action can be taken.
> +    * - __u8
> +      - ``primary_ref_frame``
> +      - Specifies which reference frame contains the CDF values and other state
> +        that should be loaded at the start of the frame.
> +    * - __u8
> +      - ``buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS]``
> +      - Specifies the frame removal time in units of DecCT clock ticks counted
> +        from the removal time of the last random access point for operating point
> +        opNum.
> +    * - __u8
> +      - ``refresh_frame_flags[V4L2_AV1_MAX_OPERATING_POINTS]``
> +      - Contains a bitmask that specifies which reference frame slots will be
> +        updated with the current frame after it is decoded.
> +    * - __u32
> +      - ``ref_order_hint[V4L2_AV1_NUM_REF_FRAMES]``
> +      - Specifies the expected output order hint for each reference frame.
> +    * - __s8
> +      - ``last_frame_idx``
> +      - Specifies the reference frame to use for LAST_FRAME.
> +    * - __s8
> +      - ``gold_frame_idx``
> +      - Specifies the reference frame to use for GOLDEN_FRAME.
> +    * - __u64
> +      - ``reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES]``
> +      - the V4L2 timestamp for each of the reference frames enumerated in
> +        enum :c:type:`v4l2_av1_reference_frame`. The timestamp refers to the
> +        ``timestamp`` field in struct :c:type:`v4l2_buffer`. Use the
> +        :c:func:`v4l2_timeval_to_ns()` function to convert the struct
> +        :c:type:`timeval` in struct :c:type:`v4l2_buffer` to a __u64.
> +    * - __u8
> +      - ``skip_mode_frame[2]``
> +      - Specifies the frames to use for compound prediction when skip_mode is
> +        equal to 1.
> +
> +.. _av1_frame_header_flags:
> +
> +``AV1 Frame Header Flags``
> +
> +.. cssclass:: longtable
> +
> +.. flat-table::
> +    :header-rows:  0
> +    :stub-columns: 0
> +    :widths:       1 1 2
> +
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME``
> +      - 0x00000001
> +      - If set, specifies that this frame should be immediately output once
> +        decoded. If not set, specifies that this frame should not be immediately
> +        output. (It may be output later if a later uncompressed header uses
> +        show_existing_frame equal to 1).
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME``
> +      - 0x00000002
> +      - If set, specifies that the frame may be output using the
> +        show_existing_frame mechanism. If not set, specifies that this frame
> +        will not be output using the show_existing_frame mechanism.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE``
> +      - 0x00000004
> +      - Specifies whether error resilient mode is enabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE``
> +      - 0x00000008
> +      - Specifies whether the CDF update in the symbol decoding process should
> +        be disabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS``
> +      - 0x00000010
> +      - If set, indicates that intra blocks may use palette encoding. If not
> +        set, indicates that palette encoding is never used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV``
> +      - 0x00000020
> +      - If set, specifies that motion vectors will always be integers. If not
> +        set, specifies that motion vectors can contain fractional bits.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC``
> +      - 0x00000040
> +      - If set, indicates that intra block copy may be used in this frame. If
> +        not set, indicates that intra block copy is not allowed in this frame.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES``
> +      - 0x00000080
> +      - If set, indicates that upscaling is needed.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV``
> +      - 0x00000100
> +      - If set, specifies that motion vectors are specified to eighth pel
> +        precision. If not set, specifies that motion vectors are specified to
> +        quarter pel precision.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE``
> +      - 0x00000200
> +      - If set, specifies that the motion mode may be switchable. If not set,
> +        specifies that only the SIMPLE motion mode will be used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS``
> +      - 0x00000400
> +      - If set, specifies that motion vector information from a previous frame
> +        can be used when decoding the current frame. If not set, specifies that
> +        this information will not be used.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF``
> +      - 0x00000800
> +      - If set, indicates that the end of frame CDF update is disabled. If not
> +        set, indicates that the end of frame CDF update is enabled.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING``
> +      - 0x00001000
> +      - If set, means that the tiles are uniformly spaced across the frame. (In
> +        other words, all tiles are the same size except for the ones at the
> +        right and bottom edge which can be smaller). If not set, means that the
> +        tile sizes are coded.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION``
> +      - 0x00002000
> +      - If set, indicates that the syntax element motion_mode may be present, if
> +        not set, indicates that the syntax element motion_mode will not be
> +        present.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT``
> +      - 0x00004000
> +      - If set, specifies that the mode info for inter blocks contains the
> +        syntax element comp_mode that indicates whether to use single or
> +        compound reference prediction. If not set, specifies that all inter
> +        blocks will use single prediction.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET``
> +      - 0x00008000
> +      - If set, specifies that the frame is restricted to a reduced subset of
> +        the full set of transform types.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT``
> +      - 0x00010000
> +      - If set, specifies that the syntax element skip_mode will be present, if
> +        not set, specifies that skip_mode will not be used for this frame.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE``
> +      - 0x00020000
> +      - If set, specifies that the frame size will either be specified as the
> +        size of one of the reference frames, or computed from the
> +        frame_width_minus_1 and frame_height_minus_1 syntax elements. If not
> +        set, specifies that the frame size is equal to the size in the sequence
> +        header.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT``
> +      - 0x00040000
> +      - If set, specifies that buffer_removal_time is present. If not set,
> +        specifies that buffer_removal_time is not present.
> +    * - ``V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING``
> +      - 0x00080000
> +      - If set, indicates that only two reference frames are explicitly
> +        signaled. If not set, indicates that all reference frames are explicitly
> +        signaled.
> diff --git a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> index 0ede39907ee2..c1951e890d6f 100644
> --- a/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> +++ b/Documentation/userspace-api/media/v4l/pixfmt-compressed.rst
> @@ -223,6 +223,27 @@ Compressed Formats
>          through the ``V4L2_CID_STATELESS_FWHT_PARAMS`` control.
>  	See the :ref:`associated Codec Control ID <codec-stateless-fwht>`.
>  
> +    * .. _V4L2-PIX-FMT-AV1-FRAME:
> +
> +      - ``V4L2_PIX_FMT_AV1_FRAME``
> +      - 'AV1F'
> +      - AV1 parsed frame, including the frame header, as extracted from the container.
> +        This format is adapted for stateless video decoders that implement an AV1
> +        pipeline with the :ref:`stateless_decoder`. Metadata associated with the
> +        frame to decode is required to be passed through the
> +        ``V4L2_CID_STATELESS_AV1_SEQUENCE``, ``V4L2_CID_STATELESS_AV1_FRAME``,
> +        ``V4L2_CID_STATELESS_AV1_TILE_GROUP`` and
> +        ``V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY`` controls.
> +        ``V4L2_CID_STATELESS_AV1_TILE_LIST`` and
> +        ``V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY`` controls should be used if
> +        the decoder supports large scale tile decoding mode as signaled by the
> +        ``V4L2_CID_STATELESS_AV1_OPERATING_MODE`` control.
> +        See the :ref:`associated Codec Control IDs <v4l2-codec-stateless-av1>`.
> +        Exactly one output and one capture buffer must be provided for use with
> +        this pixel format. The output buffer must contain the appropriate
> +        amount of data to decode a full corresponding frame to the matching
> +        capture buffer.
> +
>  .. raw:: latex
>  
>      \normalsize
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> index 2d6bc8d94380..50d4ed714123 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-g-ext-ctrls.rst
> @@ -233,6 +233,42 @@ still cause this situation.
>        - ``p_mpeg2_quantisation``
>        - A pointer to a struct :c:type:`v4l2_ctrl_mpeg2_quantisation`. Valid if this control is
>          of type ``V4L2_CTRL_TYPE_MPEG2_QUANTISATION``.
> +    * - struct :c:type:`v4l2_ctrl_av1_sequence` *
> +      - ``p_av1_sequence``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_sequence`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_SEQUENCE``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_group` *
> +      - ``p_av1_tile_group``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_group_entry` *
> +      - ``p_av1_tile_group_entry``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_group_entry`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_list` *
> +      - ``p_av1_tile_list``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_LIST``.
> +    * - struct :c:type:`v4l2_ctrl_av1_tile_list_entry` *
> +      - ``p_av1_tile_list_entry``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_tile_list_entry`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_LIST_ENTRY``.
> +    * - struct :c:type:`v4l2_ctrl_av1_frame_header` *
> +      - ``p_av1_frame_header``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_frame_header`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``.
> +    * - struct :c:type:`v4l2_ctrl_av1_profile` *
> +      - ``p_av1_profile``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_profile`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_PROFILE``.
> +    * - struct :c:type:`v4l2_ctrl_av1_level` *
> +      - ``p_av1_level``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_level`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_LEVEL``.
> +    * - struct :c:type:`v4l2_ctrl_av1_operating_mode` *
> +      - ``p_av1_operating_mode``
> +      - A pointer to a struct :c:type:`v4l2_ctrl_av1_operating_mode`. Valid if this control is
> +        of type ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``.
>      * - struct :c:type:`v4l2_ctrl_hdr10_cll_info` *
>        - ``p_hdr10_cll``
>        - A pointer to a struct :c:type:`v4l2_ctrl_hdr10_cll_info`. Valid if this control is
> diff --git a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> index 819a70a26e18..73ff5311b7ae 100644
> --- a/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> +++ b/Documentation/userspace-api/media/v4l/vidioc-queryctrl.rst
> @@ -507,6 +507,60 @@ See also the examples in :ref:`control`.
>        - n/a
>        - A struct :c:type:`v4l2_ctrl_hevc_decode_params`, containing HEVC
>  	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_SEQUENCE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_sequence`, containing AV1 Sequence OBU
> +	decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group`, containing AV1 Tile Group
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_group_entry`, containing AV1 Tile
> +	Group entry decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list`, containing AV1 Tile List
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_tile_list_entry`, containing AV1 Tile List
> +	OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_FRAME_HEADER``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - A struct :c:type:`v4l2_ctrl_av1_frame_header`, containing AV1 Frame/Frame
> +	Header OBU decoding parameters for stateless video decoders.
> +    * - ``V4L2_CTRL_TYPE_AV1_PROFILE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_profile`, indicating what AV1 profiles
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_LEVEL``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_level`, indicating what AV1 levels
> +	an AV1 stateless decoder might support.
> +    * - ``V4L2_CTRL_TYPE_AV1_OPERATING_MODE``
> +      - n/a
> +      - n/a
> +      - n/a
> +      - An enum :c:type:`v4l2_ctrl_av1_operating_mode`, indicating what AV1
> +	operating modes an AV1 stateless decoder might support.
>  
>  .. raw:: latex
>  
> diff --git a/Documentation/userspace-api/media/videodev2.h.rst.exceptions b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> index 2217b56c2686..088d4014e4c5 100644
> --- a/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> +++ b/Documentation/userspace-api/media/videodev2.h.rst.exceptions
> @@ -146,6 +146,15 @@ replace symbol V4L2_CTRL_TYPE_H264_DECODE_PARAMS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_SPS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_PPS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_HEVC_SLICE_PARAMS :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_SEQUENCE :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_FRAME_HEADER :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_PROFILE :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_LEVEL :c:type:`v4l2_ctrl_type`
> +replace symbol V4L2_CTRL_TYPE_AV1_OPERATING_MODE :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_AREA :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_FWHT_PARAMS :c:type:`v4l2_ctrl_type`
>  replace symbol V4L2_CTRL_TYPE_VP8_FRAME :c:type:`v4l2_ctrl_type`
> diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
> index e7ef2d16745e..3f0e425278b3 100644
> --- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
> +++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
> @@ -283,6 +283,25 @@ static void std_log(const struct v4l2_ctrl *ctrl)
>  	case V4L2_CTRL_TYPE_MPEG2_PICTURE:
>  		pr_cont("MPEG2_PICTURE");
>  		break;
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		pr_cont("AV1_SEQUENCE");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		pr_cont("AV1_TILE_GROUP");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +		pr_cont("AV1_TILE_GROUP_ENTRY");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +		pr_cont("AV1_TILE_LIST");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		pr_cont("AV1_TILE_LIST_ENTRY");
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		pr_cont("AV1_FRAME_HEADER");
> +		break;
> +
>  	default:
>  		pr_cont("unknown type %d", ctrl->type);
>  		break;
> @@ -317,6 +336,244 @@ static void std_log(const struct v4l2_ctrl *ctrl)
>  #define zero_reserved(s) \
>  	memset(&(s).reserved, 0, sizeof((s).reserved))
>  
> +static int validate_av1_quantization(struct v4l2_av1_quantization *q)
> +{
> +	if (q->flags > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	if (q->delta_q_y_dc < -63 || q->delta_q_y_dc > 63 ||
> +	    q->delta_q_u_dc < -63 || q->delta_q_u_dc > 63 ||
> +	    q->delta_q_v_dc < -63 || q->delta_q_v_dc > 63 ||
> +	    q->delta_q_u_ac < -63 || q->delta_q_u_ac > 63 ||
> +	    q->delta_q_v_ac < -63 || q->delta_q_v_ac > 63 ||
> +	    q->delta_q_res > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	if (q->qm_y > GENMASK(3, 0) ||
> +	    q->qm_u > GENMASK(3, 0) ||
> +	    q->qm_v > GENMASK(3, 0))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_segmentation(struct v4l2_av1_segmentation *s)
> +{
> +	u32 i;
> +	u32 j;
> +	s32 limit;
> +
> +	if (s->flags > GENMASK(3, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(s->feature_data); i++) {
> +		const int segmentation_feature_signed[] = { 1, 1, 1, 1, 1, 0, 0, 0 };
> +		const int segmentation_feature_max[] = { 255, 63, 63, 63, 63, 7, 0, 0 };
> +
> +		for (j = 0; j < ARRAY_SIZE(s->feature_data[i]); j++) {
> +			if (segmentation_feature_signed[j]) {
> +				limit = segmentation_feature_max[j];
> +
> +				if (s->feature_data[i][j] < -limit ||
> +				    s->feature_data[i][j] > limit)
> +					return -EINVAL;
> +			} else {
> +				if (s->feature_data[i][j] > segmentation_feature_max[j])
> +					return -EINVAL;
> +			}
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_loop_filter(struct v4l2_av1_loop_filter *lf)
> +{
> +	u32 i;
> +
> +	if (lf->flags > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->level); i++) {
> +		if (lf->level[i] > GENMASK(5, 0))
> +			return -EINVAL;
> +	}
> +
> +	if (lf->sharpness > GENMASK(2, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->ref_deltas); i++) {
> +		if (lf->ref_deltas[i] < -63 || lf->ref_deltas[i] > 63)
> +			return -EINVAL;
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(lf->mode_deltas); i++) {
> +		if (lf->mode_deltas[i] < -63 || lf->mode_deltas[i] > 63)
> +			return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_cdef(struct v4l2_av1_cdef *cdef)
> +{
> +	u32 i;
> +
> +	if (cdef->damping_minus_3 > GENMASK(1, 0) ||
> +	    cdef->bits > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	for (i = 0; i < 1 << cdef->bits; i++) {
> +		if (cdef->y_pri_strength[i] > GENMASK(3, 0) ||
> +		    cdef->y_sec_strength[i] > 4 ||
> +		    cdef->uv_pri_strength[i] > GENMASK(3, 0) ||
> +		    cdef->uv_sec_strength[i] > 4)
> +			return -EINVAL;
> +	}
> +
> +	return 0;
> +}
> +
> +static int validate_av1_loop_restauration(struct v4l2_av1_loop_restoration *lr)
> +{
> +	if (lr->lr_unit_shift > 3 || lr->lr_uv_shift > 1)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_film_grain(struct v4l2_av1_film_grain *fg)
> +{
> +	u32 i;
> +
> +	if (fg->flags > GENMASK(4, 0))
> +		return -EINVAL;
> +
> +	if (fg->film_grain_params_ref_idx > GENMASK(2, 0) ||
> +	    fg->num_y_points > 14 ||
> +	    fg->num_cb_points > 10 ||
> +	    fg->num_cr_points > GENMASK(3, 0) ||
> +	    fg->grain_scaling_minus_8 > GENMASK(1, 0) ||
> +	    fg->ar_coeff_lag > GENMASK(1, 0) ||
> +	    fg->ar_coeff_shift_minus_6 > GENMASK(1, 0) ||
> +	    fg->grain_scale_shift > GENMASK(1, 0))
> +		return -EINVAL;
> +
> +	if (!(fg->flags & V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN))
> +		return 0;
> +
> +	for (i = 1; i < ARRAY_SIZE(fg->point_y_value); i++)
> +		if (fg->point_y_value[i] <= fg->point_y_value[i - 1])
> +			return -EINVAL;
> +
> +	for (i = 1; i < ARRAY_SIZE(fg->point_cb_value); i++)
> +		if (fg->point_cb_value[i] <= fg->point_cb_value[i - 1])
> +			return -EINVAL;
> +
> +	for (i = 1; i < ARRAY_SIZE(fg->point_cr_value); i++)
> +		if (fg->point_cr_value[i] <= fg->point_cr_value[i - 1])
> +			return -EINVAL;

The three validations here failed for me unless I repeated the last entry. We
probably want the validation to be done like this?


diff --git a/drivers/media/v4l2-core/v4l2-ctrls-core.c b/drivers/media/v4l2-core/v4l2-ctrls-core.c
index 2f497dc2e48e..1205025edd14 100644
--- a/drivers/media/v4l2-core/v4l2-ctrls-core.c
+++ b/drivers/media/v4l2-core/v4l2-ctrls-core.c
@@ -448,15 +448,15 @@ static int validate_av1_film_grain(struct v4l2_av1_film_grain *fg)
        if (!(fg->flags & V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN))
                return 0;
 
-       for (i = 1; i < ARRAY_SIZE(fg->point_y_value); i++)
+       for (i = 1; i < fg->num_y_points; i++)
                if (fg->point_y_value[i] <= fg->point_y_value[i - 1])
                        return -EINVAL;
 
-       for (i = 1; i < ARRAY_SIZE(fg->point_cb_value); i++)
+       for (i = 1; i < fg->num_cb_points; i++)
                if (fg->point_cb_value[i] <= fg->point_cb_value[i - 1])
                        return -EINVAL;
 
-       for (i = 1; i < ARRAY_SIZE(fg->point_cr_value); i++)
+       for (i = 1; i < fg->num_cr_points; i++)
                if (fg->point_cr_value[i] <= fg->point_cr_value[i - 1])
                        return -EINVAL;

> 
> 
> +
> +	return 0;
> +}
> +
> +static int validate_av1_frame_header(struct v4l2_ctrl_av1_frame_header *f)
> +{
> +	int ret = 0;
> +
> +	ret = validate_av1_quantization(&f->quantization);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_segmentation(&f->segmentation);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_loop_filter(&f->loop_filter);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_cdef(&f->cdef);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_loop_restauration(&f->loop_restoration);
> +	if (ret)
> +		return ret;
> +	ret = validate_av1_film_grain(&f->film_grain);
> +	if (ret)
> +		return ret;
> +
> +	if (f->flags &
> +	~(V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT |
> +	  V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING))
> +		return -EINVAL;
> +
> +	if (f->superres_denom > GENMASK(2, 0) + 9)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int validate_av1_sequence(struct v4l2_ctrl_av1_sequence *s)
> +{
> +	if (s->flags &
> +	~(V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE |
> +	 V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF |
> +	 V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION |
> +	 V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME |
> +	 V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE |
> +	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X |
> +	 V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y |
> +	 V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT |
> +	 V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q))
> +		return -EINVAL;
> +
> +	if (s->seq_profile == 1 && s->flags & V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME)
> +		return -EINVAL;
> +
> +	/* reserved */
> +	if (s->seq_profile > 2)
> +		return -EINVAL;
> +
> +	/* TODO: PROFILES */
> +	return 0;
> +}
> +
> +static int validate_av1_tile_group(struct v4l2_ctrl_av1_tile_group *t)
> +{
> +	if (t->flags & ~(V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT))
> +		return -EINVAL;
> +	if (t->tg_start > t->tg_end)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
>  /*
>   * Compound controls validation requires setting unused fields/flags to zero
>   * in order to properly detect unchanged controls with std_equal's memcmp.
> @@ -573,7 +830,16 @@ static int std_validate_compound(const struct v4l2_ctrl *ctrl, u32 idx,
>  		zero_padding(p_vp8_frame->entropy);
>  		zero_padding(p_vp8_frame->coder_state);
>  		break;
> -
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		return validate_av1_frame_header(p);
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		return validate_av1_sequence(p);
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		return validate_av1_tile_group(p);
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		break;
>  	case V4L2_CTRL_TYPE_HEVC_SPS:
>  		p_hevc_sps = p;
>  
> @@ -1313,6 +1579,24 @@ static struct v4l2_ctrl *v4l2_ctrl_new(struct v4l2_ctrl_handler *hdl,
>  	case V4L2_CTRL_TYPE_VP8_FRAME:
>  		elem_size = sizeof(struct v4l2_ctrl_vp8_frame);
>  		break;
> +	case V4L2_CTRL_TYPE_AV1_SEQUENCE:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_sequence);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_group_entry);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_tile_list_entry);
> +		break;
> +	case V4L2_CTRL_TYPE_AV1_FRAME_HEADER:
> +		elem_size = sizeof(struct v4l2_ctrl_av1_frame_header);
> +		break;
>  	case V4L2_CTRL_TYPE_HEVC_SPS:
>  		elem_size = sizeof(struct v4l2_ctrl_hevc_sps);
>  		break;
> diff --git a/drivers/media/v4l2-core/v4l2-ctrls-defs.c b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> index 421300e13a41..6f9b53f180cc 100644
> --- a/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> +++ b/drivers/media/v4l2-core/v4l2-ctrls-defs.c
> @@ -499,6 +499,45 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
>  		NULL,
>  	};
>  
> +	static const char * const av1_profile[] = {
> +		"Main",
> +		"High",
> +		"Professional",
> +		NULL,
> +	};
> +	static const char * const av1_level[] = {
> +		"2.0",
> +		"2.1",
> +		"2.2",
> +		"2.3",
> +		"3.0",
> +		"3.1",
> +		"3.2",
> +		"3.3",
> +		"4.0",
> +		"4.1",
> +		"4.2",
> +		"4.3",
> +		"5.0",
> +		"5.1",
> +		"5.2",
> +		"5.3",
> +		"6.0",
> +		"6.1",
> +		"6.2",
> +		"6.3",
> +		"7.0",
> +		"7.1",
> +		"7.2",
> +		"7.3",
> +		NULL,
> +	};
> +	static const char * const av1_operating_mode[] = {
> +		"General decoding",
> +		"Large scale tile decoding",
> +		NULL,
> +	};
> +
>  	static const char * const hevc_profile[] = {
>  		"Main",
>  		"Main Still Picture",
> @@ -685,6 +724,12 @@ const char * const *v4l2_ctrl_get_menu(u32 id)
>  		return dv_it_content_type;
>  	case V4L2_CID_DETECT_MD_MODE:
>  		return detect_md_mode;
> +	case V4L2_CID_STATELESS_AV1_PROFILE:
> +		return av1_profile;
> +	case V4L2_CID_STATELESS_AV1_LEVEL:
> +		return av1_level;
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
> +		return av1_operating_mode;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
>  		return hevc_profile;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
> @@ -1175,6 +1220,15 @@ const char *v4l2_ctrl_get_name(u32 id)
>  	case V4L2_CID_STATELESS_MPEG2_SEQUENCE:			return "MPEG-2 Sequence Header";
>  	case V4L2_CID_STATELESS_MPEG2_PICTURE:			return "MPEG-2 Picture Header";
>  	case V4L2_CID_STATELESS_MPEG2_QUANTISATION:		return "MPEG-2 Quantisation Matrices";
> +	case V4L2_CID_STATELESS_AV1_SEQUENCE:			return "AV1 Sequence Parameters";
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP:		        return "AV1 Tile Group";
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:	        return "AV1 Tile Group Entry";
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST:		        return "AV1 Tile List";
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:		return "AV1 Tile List Entry";
> +	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:		return "AV1 Frame Header Parameters";
> +	case V4L2_CID_STATELESS_AV1_PROFILE:			return "AV1 Profile";
> +	case V4L2_CID_STATELESS_AV1_LEVEL:			return "AV1 Level";
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:		return "AV1 Operating Mode";
>  
>  	/* Colorimetry controls */
>  	/* Keep the order of the 'case's the same as in v4l2-controls.h! */
> @@ -1343,6 +1397,9 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
>  	case V4L2_CID_MPEG_VIDEO_VP9_PROFILE:
>  	case V4L2_CID_MPEG_VIDEO_VP9_LEVEL:
>  	case V4L2_CID_DETECT_MD_MODE:
> +	case V4L2_CID_STATELESS_AV1_PROFILE:
> +	case V4L2_CID_STATELESS_AV1_LEVEL:
> +	case V4L2_CID_STATELESS_AV1_OPERATING_MODE:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_PROFILE:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_LEVEL:
>  	case V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE:
> @@ -1481,6 +1538,28 @@ void v4l2_ctrl_fill(u32 id, const char **name, enum v4l2_ctrl_type *type,
>  	case V4L2_CID_STATELESS_VP8_FRAME:
>  		*type = V4L2_CTRL_TYPE_VP8_FRAME;
>  		break;
> +	case V4L2_CID_STATELESS_AV1_SEQUENCE:
> +		*type = V4L2_CTRL_TYPE_AV1_SEQUENCE;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY:
> +		*type = V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY;
> +		*flags |= V4L2_CTRL_FLAG_DYNAMIC_ARRAY;
> +		break;
> +	case V4L2_CID_STATELESS_AV1_FRAME_HEADER:
> +		*type = V4L2_CTRL_TYPE_AV1_FRAME_HEADER;
> +		break;
>  	case V4L2_CID_MPEG_VIDEO_HEVC_SPS:
>  		*type = V4L2_CTRL_TYPE_HEVC_SPS;
>  		break;
> diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
> index 05d5db3d85e5..135474c43b65 100644
> --- a/drivers/media/v4l2-core/v4l2-ioctl.c
> +++ b/drivers/media/v4l2-core/v4l2-ioctl.c
> @@ -1416,6 +1416,7 @@ static void v4l_fill_fmtdesc(struct v4l2_fmtdesc *fmt)
>  		case V4L2_PIX_FMT_S5C_UYVY_JPG:	descr = "S5C73MX interleaved UYVY/JPEG"; break;
>  		case V4L2_PIX_FMT_MT21C:	descr = "Mediatek Compressed Format"; break;
>  		case V4L2_PIX_FMT_SUNXI_TILED_NV12: descr = "Sunxi Tiled NV12 Format"; break;
> +		case V4L2_PIX_FMT_AV1_FRAME: descr = "AV1 Frame"; break;
>  		default:
>  			if (fmt->description[0])
>  				return;
> diff --git a/include/media/v4l2-ctrls.h b/include/media/v4l2-ctrls.h
> index ebd9cef13309..5f8ba4fac92e 100644
> --- a/include/media/v4l2-ctrls.h
> +++ b/include/media/v4l2-ctrls.h
> @@ -56,6 +56,12 @@ struct video_device;
>   * @p_hdr10_cll:		Pointer to an HDR10 Content Light Level structure.
>   * @p_hdr10_mastering:		Pointer to an HDR10 Mastering Display structure.
>   * @p_area:			Pointer to an area.
> + * @p_av1_sequence:		Pointer to an AV1 sequence.
> + * @p_av1_tile_group:		Pointer to an AV1 tile group.
> + * @p_av1_tile_group_entry:	Pointer to an AV1 tile group entry.
> + * @p_av1_tile_list:		Pointer to an AV1 tile list.
> + * @p_av1_tile_list_entry:	Pointer to an AV1 tile list entry.
> + * @p_av1_frame_header:		Pointer to an AV1 frame header.
>   * @p:				Pointer to a compound value.
>   * @p_const:			Pointer to a constant compound value.
>   */
> @@ -83,6 +89,12 @@ union v4l2_ctrl_ptr {
>  	struct v4l2_ctrl_hdr10_cll_info *p_hdr10_cll;
>  	struct v4l2_ctrl_hdr10_mastering_display *p_hdr10_mastering;
>  	struct v4l2_area *p_area;
> +	struct v4l2_ctrl_av1_sequence *p_av1_sequence;
> +	struct v4l2_ctrl_av1_tile_group *p_av1_tile_group;
> +	struct v4l2_ctrl_av1_tile_group_entry *p_av1_tile_group_entry;
> +	struct v4l2_ctrl_av1_tile_list *p_av1_tile_list;
> +	struct v4l2_ctrl_av1_tile_list_entry *p_av1_tile_list_entry;
> +	struct v4l2_ctrl_av1_frame_header *p_av1_frame_header;
>  	void *p;
>  	const void *p_const;
>  };
> diff --git a/include/uapi/linux/v4l2-controls.h b/include/uapi/linux/v4l2-controls.h
> index 5532b5f68493..0378fe8e1967 100644
> --- a/include/uapi/linux/v4l2-controls.h
> +++ b/include/uapi/linux/v4l2-controls.h
> @@ -1976,6 +1976,802 @@ struct v4l2_ctrl_mpeg2_quantisation {
>  	__u8	chroma_non_intra_quantiser_matrix[64];
>  };
>  
> +/* Stateless AV1 controls */
> +
> +#define V4L2_AV1_TOTAL_REFS_PER_FRAME	8
> +#define V4L2_AV1_CDEF_MAX		8
> +#define V4L2_AV1_NUM_PLANES_MAX		3 /* 1 if monochrome, 3 otherwise */
> +#define V4L2_AV1_MAX_SEGMENTS		8
> +#define V4L2_AV1_MAX_OPERATING_POINTS	(1 << 5) /* 5 bits to encode */
> +#define V4L2_AV1_REFS_PER_FRAME		7
> +#define V4L2_AV1_MAX_NUM_Y_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_CB_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_CR_POINTS	(1 << 4) /* 4 bits to encode */
> +#define V4L2_AV1_MAX_NUM_POS_LUMA	25 /* (2 * 3 * (3 + 1)) + 1 */
> +#define V4L2_AV1_MAX_NUM_PLANES		3
> +#define V4L2_AV1_MAX_TILE_COLS		64
> +#define V4L2_AV1_MAX_TILE_ROWS		64
> +#define V4L2_AV1_MAX_TILE_COUNT		512
> +
> +#define V4L2_AV1_SEQUENCE_FLAG_STILL_PICTURE		  BIT(0)
> +#define V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK	  BIT(1)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_FILTER_INTRA	  BIT(2)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTRA_EDGE_FILTER   BIT(3)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_INTERINTRA_COMPOUND BIT(4)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND	  BIT(5)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_WARPED_MOTION	  BIT(6)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_DUAL_FILTER	  BIT(7)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT	  BIT(8)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_JNT_COMP		  BIT(9)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_REF_FRAME_MVS	  BIT(10)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_SUPERRES		  BIT(11)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF		  BIT(12)
> +#define V4L2_AV1_SEQUENCE_FLAG_ENABLE_RESTORATION	  BIT(13)
> +#define V4L2_AV1_SEQUENCE_FLAG_MONO_CHROME		  BIT(14)
> +#define V4L2_AV1_SEQUENCE_FLAG_COLOR_RANGE		  BIT(15)
> +#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_X		  BIT(16)
> +#define V4L2_AV1_SEQUENCE_FLAG_SUBSAMPLING_Y		  BIT(17)
> +#define V4L2_AV1_SEQUENCE_FLAG_FILM_GRAIN_PARAMS_PRESENT  BIT(18)
> +#define V4L2_AV1_SEQUENCE_FLAG_SEPARATE_UV_DELTA_Q	  BIT(19)
> +
> +#define V4L2_CID_STATELESS_AV1_SEQUENCE (V4L2_CID_CODEC_STATELESS_BASE + 401)
> +/**
> + * struct v4l2_ctrl_av1_sequence - AV1 Sequence
> + *
> + * Represents an AV1 Sequence OBU. See section 5.5. "Sequence header OBU syntax"
> + * for more details.
> + *
> + * @flags: See V4L2_AV1_SEQUENCE_FLAG_{}.
> + * @seq_profile: specifies the features that can be used in the coded video
> + * sequence.
> + * @order_hint_bits: specifies the number of bits used for the order_hint field
> + * at each frame.
> + * @bit_depth: the bitdepth to use for the sequence as described in section
> + * 5.5.2 "Color config syntax".
> + */
> +struct v4l2_ctrl_av1_sequence {
> +	__u32 flags;
> +	__u8 seq_profile;
> +	__u8 order_hint_bits;
> +	__u8 bit_depth;
> +};
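As an aside for userspace implementers: populating this control amounts to OR-ing the parsed sequence header bits into @flags and copying the scalar fields. A minimal sketch of the profile checks mirrored from validate_av1_sequence() earlier in this patch (the struct and defines are re-declared locally here for illustration, since the uAPI header is not merged yet):

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the proposed uAPI, for illustration only; the
 * flag values match the defines introduced by this patch. */
#define AV1_SEQUENCE_FLAG_ENABLE_ORDER_HINT (1U << 8)
#define AV1_SEQUENCE_FLAG_MONO_CHROME       (1U << 14)

struct av1_sequence_ctrl {
	uint32_t flags;
	uint8_t seq_profile;
	uint8_t order_hint_bits;
	uint8_t bit_depth;
};

/* Mirrors the kernel-side rules: profile 1 (High) never signals a
 * monochrome sequence, and profiles above 2 are reserved. */
static int sequence_is_valid(const struct av1_sequence_ctrl *s)
{
	if (s->seq_profile == 1 && (s->flags & AV1_SEQUENCE_FLAG_MONO_CHROME))
		return 0;
	if (s->seq_profile > 2)
		return 0;
	return 1;
}
```

A control that trips either rule is rejected with -EINVAL by the kernel, so catching it in userspace first makes failures easier to debug.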
> +
> +#define V4L2_AV1_TILE_GROUP_FLAG_START_AND_END_PRESENT BIT(0)
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_GROUP (V4L2_CID_CODEC_STATELESS_BASE + 402)
> +/**
> + * struct v4l2_ctrl_av1_tile_group - AV1 Tile Group header.
> + *
> + * Represents a tile group as seen in an AV1 Tile Group OBU or Frame OBU. A
> + * v4l2_ctrl_av1_tile_group instance will refer to tg_end - tg_start + 1
> + * instances of v4l2_ctrl_av1_tile_group_entry. See section 6.10.1 "General
> + * tile group OBU semantics" for more details.
> + *
> + * @flags: see V4L2_AV1_TILE_GROUP_FLAG_{}.
> + * @tg_start: specifies the zero-based index of the first tile in the current
> + * tile group.
> + * @tg_end: specifies the zero-based index of the last tile in the current tile
> + * group.
> + */
> +struct v4l2_ctrl_av1_tile_group {
> +	__u8 flags;
> +	__u32 tg_start;
> +	__u32 tg_end;
> +};
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_GROUP_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 403)
> +/**
> + * struct v4l2_ctrl_av1_tile_group_entry - AV1 Tile Group entry
> + *
> + * Represents a single AV1 tile inside an AV1 Tile Group. Note that MiRowStart,
> + * MiRowEnd, MiColStart and MiColEnd can be retrieved from struct
> + * v4l2_av1_tile_info in struct v4l2_ctrl_av1_frame_header using tile_row and
> + * tile_col. See section 6.10.1 "General tile group OBU semantics" for more
> + * details.
> + *
> + * @tile_offset: offset from the OBU data, i.e. where the coded tile data
> + * actually starts.
> + * @tile_size: specifies the size in bytes of the coded tile. Equivalent to
> + * "TileSize" in the AV1 Specification.
> + * @tile_row: specifies the row of the current tile. Equivalent to "TileRow" in
> + * the AV1 Specification.
> + * @tile_col: specifies the col of the current tile. Equivalent to "TileCol" in
> + * the AV1 Specification.
> + */
> +struct v4l2_ctrl_av1_tile_group_entry {
> +	__u32 tile_offset;
> +	__u32 tile_size;
> +	__u32 tile_row;
> +	__u32 tile_col;
> +};
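Since @tile_offset is relative to the start of the OBU data, a driver or test harness can locate each coded tile with plain pointer arithmetic plus a bounds check. A sketch with a hypothetical helper tile_data() (the struct is mirrored locally for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local mirror of the proposed uAPI struct, for illustration only. */
struct av1_tile_group_entry {
	uint32_t tile_offset;
	uint32_t tile_size;
	uint32_t tile_row;
	uint32_t tile_col;
};

/* Return a pointer to the coded tile data inside the OBU buffer,
 * or NULL if the entry would run past the end of the buffer. */
static const uint8_t *tile_data(const uint8_t *obu, size_t obu_len,
				const struct av1_tile_group_entry *e)
{
	if ((size_t)e->tile_offset + e->tile_size > obu_len)
		return NULL;
	return obu + e->tile_offset;
}
```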
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_LIST (V4L2_CID_CODEC_STATELESS_BASE + 404)
> +/**
> + * struct v4l2_ctrl_av1_tile_list - AV1 Tile List header.
> + *
> + * Represents a tile list as seen in an AV1 Tile List OBU. Tile lists are used
> + * in "Large Scale Tile Decode Mode". Note that tile_count_minus_1 should be at
> + * most V4L2_AV1_MAX_TILE_COUNT - 1. A struct v4l2_ctrl_av1_tile_list instance
> + * will refer to "tile_count_minus_1" + 1 instances of struct
> + * v4l2_ctrl_av1_tile_list_entry.
> + *
> + * Each rendered frame may require at most two tile list OBUs to be decoded. See
> + * section "6.11.1. General tile list OBU semantics" for more details.
> + *
> + * @output_frame_width_in_tiles_minus_1: this field plus one is the width of the
> + * output frame, in tile units.
> + * @output_frame_height_in_tiles_minus_1: this field plus one is the height of
> + * the output frame, in tile units.
> + * @tile_count_minus_1: this field plus one is the number of tile_list_entry in
> + * the list.
> + */
> +struct v4l2_ctrl_av1_tile_list {
> +	__u8 output_frame_width_in_tiles_minus_1;
> +	__u8 output_frame_height_in_tiles_minus_1;
> +	__u16 tile_count_minus_1;
> +};
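A consumer can derive how many v4l2_ctrl_av1_tile_list_entry instances to expect straight from this header. A sketch with a hypothetical helper tile_list_entries() enforcing the documented V4L2_AV1_MAX_TILE_COUNT bound (the macro is mirrored locally):

```c
#include <assert.h>

#define AV1_MAX_TILE_COUNT 512 /* mirrors V4L2_AV1_MAX_TILE_COUNT */

/* Number of tile list entries implied by the header, or 0 if the
 * count would exceed the uAPI bound. */
static unsigned int tile_list_entries(unsigned int tile_count_minus_1)
{
	unsigned int n = tile_count_minus_1 + 1;

	return n <= AV1_MAX_TILE_COUNT ? n : 0;
}
```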
> +
> +#define V4L2_CID_STATELESS_AV1_TILE_LIST_ENTRY (V4L2_CID_CODEC_STATELESS_BASE + 405)
> +
> +/**
> + * struct v4l2_ctrl_av1_tile_list_entry - AV1 Tile List entry
> + *
> + * Represents a tile list entry as seen in an AV1 Tile List OBU. See section
> + * 6.11.2. "Tile list entry semantics" of the AV1 Specification for more
> + * details.
> + *
> + * @anchor_frame_idx: the index into an array AnchorFrames of the frames that
> + * the tile uses for prediction.
> + * @anchor_tile_row: the row coordinate, in tile units, of the tile in the
> + * frame to which it belongs.
> + * @anchor_tile_col: the column coordinate, in tile units, of the tile in the
> + * frame to which it belongs.
> + * @tile_data_size_minus_1: this field plus one is the size of the coded tile
> + * data in bytes.
> + */
> +struct v4l2_ctrl_av1_tile_list_entry {
> +	__u8 anchor_frame_idx;
> +	__u8 anchor_tile_row;
> +	__u8 anchor_tile_col;
> +	__u8 tile_data_size_minus_1;
> +};
> +
> +#define V4L2_AV1_FILM_GRAIN_FLAG_APPLY_GRAIN BIT(0)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_UPDATE_GRAIN BIT(1)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_CHROMA_SCALING_FROM_LUMA BIT(2)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_OVERLAP BIT(3)
> +#define V4L2_AV1_FILM_GRAIN_FLAG_CLIP_TO_RESTRICTED_RANGE BIT(4)
> +
> +/**
> + * struct v4l2_av1_film_grain - AV1 Film Grain parameters.
> + *
> + * Film grain parameters as specified by section 6.8.20 of the AV1
> + * Specification.
> + *
> + * @flags: see V4L2_AV1_FILM_GRAIN_{}.
> + * @grain_seed: specifies the starting value for the pseudo-random numbers used
> + * during film grain synthesis.
> + * @film_grain_params_ref_idx: indicates which reference frame contains the
> + * film grain parameters to be used for this frame.
> + * @num_y_points: specifies the number of points for the piece-wise linear
> + * scaling function of the luma component.
> + * @point_y_value: represents the x (luma value) coordinate for the i-th point
> + * of the piecewise linear scaling function for luma component. The values are
> + * signaled on the scale of 0..255. (In case of 10 bit video, these values
> + * correspond to luma values divided by 4. In case of 12 bit video, these values
> + * correspond to luma values divided by 16.).
> + * @point_y_scaling:  represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for luma component.
> + * @num_cb_points: specifies the number of points for the piece-wise linear
> + * scaling function of the cb component.
> + * @point_cb_value: represents the x coordinate for the i-th point of the
> + * piece-wise linear scaling function for cb component. The values are signaled
> + * on the scale of 0..255.
> + * @point_cb_scaling: represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for cb component.
> + * @num_cr_points: represents the number of points for the piece-wise
> + * linear scaling function of the cr component.
> + * @point_cr_value:  represents the x coordinate for the i-th point of the
> + * piece-wise linear scaling function for cr component. The values are signaled
> + * on the scale of 0..255.
> + * @point_cr_scaling:  represents the scaling (output) value for the i-th point
> + * of the piecewise linear scaling function for cr component.
> + * @grain_scaling_minus_8: represents the shift – 8 applied to the values of the
> + * chroma component. The grain_scaling_minus_8 can take values of 0..3 and
> + * determines the range and quantization step of the standard deviation of film
> + * grain.
> + * @ar_coeff_lag: specifies the number of auto-regressive coefficients for luma
> + * and chroma.
> + * @ar_coeffs_y_plus_128: specifies auto-regressive coefficients used for the Y
> + * plane.
> + * @ar_coeffs_cb_plus_128: specifies auto-regressive coefficients used for the U
> + * plane.
> + * @ar_coeffs_cr_plus_128: specifies auto-regressive coefficients used for the V
> + * plane.
> + * @ar_coeff_shift_minus_6: specifies the range of the auto-regressive
> + * coefficients. Values of 0, 1, 2, and 3 correspond to the ranges for
> + * auto-regressive coefficients of [-2, 2), [-1, 1), [-0.5, 0.5) and [-0.25,
> + * 0.25) respectively.
> + * @grain_scale_shift: specifies how much the Gaussian random numbers should be
> + * scaled down during the grain synthesis process.
> + * @cb_mult: represents a multiplier for the cb component used in derivation of
> + * the input index to the cb component scaling function.
> + * @cb_luma_mult: represents a multiplier for the average luma component used in
> + * derivation of the input index to the cb component scaling function.
> + * @cb_offset: represents an offset used in derivation of the input index to the
> + * cb component scaling function.
> + * @cr_mult: represents a multiplier for the cr component used in derivation of
> + * the input index to the cr component scaling function.
> + * @cr_luma_mult: represents a multiplier for the average luma component used in
> + * derivation of the input index to the cr component scaling function.
> + * @cr_offset: represents an offset used in derivation of the input index to the
> + * cr component scaling function.
> + */
> +struct v4l2_av1_film_grain {
> +	__u8 flags;
> +	__u16 grain_seed;
> +	__u8 film_grain_params_ref_idx;
> +	__u8 num_y_points;
> +	__u8 point_y_value[V4L2_AV1_MAX_NUM_Y_POINTS];
> +	__u8 point_y_scaling[V4L2_AV1_MAX_NUM_Y_POINTS];
> +	__u8 num_cb_points;
> +	__u8 point_cb_value[V4L2_AV1_MAX_NUM_CB_POINTS];
> +	__u8 point_cb_scaling[V4L2_AV1_MAX_NUM_CB_POINTS];
> +	__u8 num_cr_points;
> +	__u8 point_cr_value[V4L2_AV1_MAX_NUM_CR_POINTS];
> +	__u8 point_cr_scaling[V4L2_AV1_MAX_NUM_CR_POINTS];
> +	__u8 grain_scaling_minus_8;
> +	__u8 ar_coeff_lag;
> +	__u8 ar_coeffs_y_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeffs_cb_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeffs_cr_plus_128[V4L2_AV1_MAX_NUM_POS_LUMA];
> +	__u8 ar_coeff_shift_minus_6;
> +	__u8 grain_scale_shift;
> +	__u8 cb_mult;
> +	__u8 cb_luma_mult;
> +	__u16 cb_offset;
> +	__u8 cr_mult;
> +	__u8 cr_luma_mult;
> +	__u16 cr_offset;
> +};
> +
> +/**
> + * enum v4l2_av1_warp_model - AV1 Warp Model as described in section 3
> + * "Symbols and abbreviated terms" of the AV1 Specification.
> + *
> + * @V4L2_AV1_WARP_MODEL_IDENTITY: Warp model is just an identity transform.
> + * @V4L2_AV1_WARP_MODEL_TRANSLATION: Warp model is a pure translation.
> + * @V4L2_AV1_WARP_MODEL_ROTZOOM: Warp model is a rotation + symmetric zoom +
> + * translation.
> + * @V4L2_AV1_WARP_MODEL_AFFINE: Warp model is a general affine transform.
> + */
> +enum v4l2_av1_warp_model {
> +	V4L2_AV1_WARP_MODEL_IDENTITY = 0,
> +	V4L2_AV1_WARP_MODEL_TRANSLATION = 1,
> +	V4L2_AV1_WARP_MODEL_ROTZOOM = 2,
> +	V4L2_AV1_WARP_MODEL_AFFINE = 3,
> +};
> +
> +/**
> + * enum v4l2_av1_reference_frame - AV1 reference frames
> + *
> + * @V4L2_AV1_REF_INTRA_FRAME: Intra Frame Reference
> + * @V4L2_AV1_REF_LAST_FRAME: Last Reference Frame
> + * @V4L2_AV1_REF_LAST2_FRAME: Last2 Reference Frame
> + * @V4L2_AV1_REF_LAST3_FRAME: Last3 Reference Frame
> + * @V4L2_AV1_REF_GOLDEN_FRAME: Golden Reference Frame
> + * @V4L2_AV1_REF_BWDREF_FRAME: BWD Reference Frame
> + * @V4L2_AV1_REF_ALTREF2_FRAME: Alternative2 Reference Frame
> + * @V4L2_AV1_REF_ALTREF_FRAME: Alternative Reference Frame
> + * @V4L2_AV1_NUM_REF_FRAMES: Total Reference Frame Number
> + */
> +enum v4l2_av1_reference_frame {
> +	V4L2_AV1_REF_INTRA_FRAME = 0,
> +	V4L2_AV1_REF_LAST_FRAME = 1,
> +	V4L2_AV1_REF_LAST2_FRAME = 2,
> +	V4L2_AV1_REF_LAST3_FRAME = 3,
> +	V4L2_AV1_REF_GOLDEN_FRAME = 4,
> +	V4L2_AV1_REF_BWDREF_FRAME = 5,
> +	V4L2_AV1_REF_ALTREF2_FRAME = 6,
> +	V4L2_AV1_REF_ALTREF_FRAME = 7,
> +	V4L2_AV1_NUM_REF_FRAMES,
> +};
> +
> +#define V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) (1 << (ref))
> +
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_GLOBAL	   BIT(0)
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_ROT_ZOOM	   BIT(1)
> +#define V4L2_AV1_GLOBAL_MOTION_FLAG_IS_TRANSLATION BIT(2)
> +/**
> + * struct v4l2_av1_global_motion - AV1 Global Motion parameters as described in
> + * section 6.8.17 "Global motion params semantics" of the AV1 specification.
> + *
> + * @flags: A bitfield containing the flags per reference frame. See
> + * V4L2_AV1_GLOBAL_MOTION_FLAG_{}
> + * @type: The type of global motion transform used.
> + * @params: this field has the same meaning as "gm_params" in the AV1
> + * specification.
> + * @invalid: bitfield indicating whether the global motion params are invalid
> + * for a given reference frame. See section 7.11.3.6. Setup shear process and
> + * the variable "warpValid". Use V4L2_AV1_GLOBAL_MOTION_IS_INVALID(ref) to
> + * create a suitable mask.
> + */
> +struct v4l2_av1_global_motion {
> +	__u8 flags[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	enum v4l2_av1_warp_model type[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	__u32 params[V4L2_AV1_TOTAL_REFS_PER_FRAME][6];
> +	__u8 invalid;
> +};
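A sketch of how the @invalid bitfield and the V4L2_AV1_GLOBAL_MOTION_IS_INVALID() macro are meant to interact (macro and reference indices mirrored locally; the helper names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the proposed uAPI, for illustration only. */
#define AV1_GLOBAL_MOTION_IS_INVALID(ref) (1 << (ref))

enum av1_ref {
	AV1_REF_INTRA_FRAME = 0,
	AV1_REF_LAST_FRAME = 1,
	AV1_REF_GOLDEN_FRAME = 4,
};

/* Mark the global motion parameters for one reference as invalid,
 * i.e. warpValid == 0 in the setup shear process (section 7.11.3.6). */
static uint8_t mark_invalid(uint8_t invalid, enum av1_ref ref)
{
	return invalid | AV1_GLOBAL_MOTION_IS_INVALID(ref);
}

static int is_invalid(uint8_t invalid, enum av1_ref ref)
{
	return !!(invalid & AV1_GLOBAL_MOTION_IS_INVALID(ref));
}
```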
> +
> +/**
> + * enum v4l2_av1_frame_restoration_type - AV1 Frame Restoration Type
> + * @V4L2_AV1_FRAME_RESTORE_NONE: no filtering is applied.
> + * @V4L2_AV1_FRAME_RESTORE_WIENER: Wiener filter process is invoked.
> + * @V4L2_AV1_FRAME_RESTORE_SGRPROJ: self guided filter process is invoked.
> + * @V4L2_AV1_FRAME_RESTORE_SWITCHABLE: restoration filter is switchable.
> + */
> +enum v4l2_av1_frame_restoration_type {
> +	V4L2_AV1_FRAME_RESTORE_NONE = 0,
> +	V4L2_AV1_FRAME_RESTORE_WIENER = 1,
> +	V4L2_AV1_FRAME_RESTORE_SGRPROJ = 2,
> +	V4L2_AV1_FRAME_RESTORE_SWITCHABLE = 3,
> +};
> +
> +/**
> + * struct v4l2_av1_loop_restoration - AV1 Loop Restoration as described in
> + * section 6.10.15 "Loop restoration params semantics" of the AV1 specification.
> + *
> + * @frame_restoration_type: specifies the type of restoration used for each
> + * plane. See enum v4l2_av1_frame_restoration_type.
> + * @lr_unit_shift: specifies if the luma restoration size should be halved.
> + * @lr_uv_shift: specifies if the chroma size should be half the luma size.
> + * @loop_restoration_size: specifies the size of loop restoration units in units
> + * of samples in the current plane.
> + */
> +struct v4l2_av1_loop_restoration {
> +	enum v4l2_av1_frame_restoration_type frame_restoration_type[V4L2_AV1_NUM_PLANES_MAX];
> +	__u8 lr_unit_shift;
> +	__u8 lr_uv_shift;
> +	__u32 loop_restoration_size[V4L2_AV1_MAX_NUM_PLANES];
> +};
> +
> +/**
> + * struct v4l2_av1_cdef - AV1 CDEF params semantics as described in section
> + * 6.10.14. "CDEF params semantics" of the AV1 specification
> + *
> + * @damping_minus_3: controls the amount of damping in the deringing filter.
> + * @bits: specifies the number of bits needed to specify which CDEF filter to
> + * apply.
> + * @y_pri_strength: specifies the strength of the primary filter.
> + * @y_sec_strength: specifies the strength of the secondary filter.
> + * @uv_pri_strength: specifies the strength of the primary filter.
> + * @uv_sec_strength: specifies the strength of the secondary filter.
> + */
> +struct v4l2_av1_cdef {
> +	__u8 damping_minus_3;
> +	__u8 bits;
> +	__u8 y_pri_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 y_sec_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 uv_pri_strength[V4L2_AV1_CDEF_MAX];
> +	__u8 uv_sec_strength[V4L2_AV1_CDEF_MAX];
> +};
> +
> +#define V4L2_AV1_SEGMENTATION_FLAG_ENABLED	   BIT(0)
> +#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_MAP	   BIT(1)
> +#define V4L2_AV1_SEGMENTATION_FLAG_TEMPORAL_UPDATE BIT(2)
> +#define V4L2_AV1_SEGMENTATION_FLAG_UPDATE_DATA	   BIT(3)
> +#define V4L2_AV1_SEGMENTATION_FLAG_SEG_ID_PRE_SKIP	BIT(4)
> +
> +/**
> + * enum v4l2_av1_segment_feature - AV1 segment features as described in section
> + * 3 "Symbols and abbreviated terms" of the AV1 specification.
> + *
> + * @V4L2_AV1_SEG_LVL_ALT_Q: Index for quantizer segment feature.
> + * @V4L2_AV1_SEG_LVL_ALT_LF_Y_V: Index for vertical luma loop filter segment
> + * feature.
> + * @V4L2_AV1_SEG_LVL_REF_FRAME: Index for reference frame segment feature.
> + * @V4L2_AV1_SEG_LVL_SKIP: Index for skip segment feature.
> + * @V4L2_AV1_SEG_LVL_GLOBALMV: Index for global mv feature.
> + * @V4L2_AV1_SEG_LVL_MAX: Number of segment features.
> + */
> +enum v4l2_av1_segment_feature {
> +	V4L2_AV1_SEG_LVL_ALT_Q = 0,
> +	V4L2_AV1_SEG_LVL_ALT_LF_Y_V = 1,
> +	V4L2_AV1_SEG_LVL_REF_FRAME = 5,
> +	V4L2_AV1_SEG_LVL_SKIP = 6,
> +	V4L2_AV1_SEG_LVL_GLOBALMV = 7,
> +	V4L2_AV1_SEG_LVL_MAX = 8
> +};
> +
> +#define V4L2_AV1_SEGMENT_FEATURE_ENABLED(id)	(1 << (id))
> +
> +/**
> + * struct v4l2_av1_segmentation - AV1 Segmentation params as defined in section
> + * 6.8.13. "Segmentation params semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_SEGMENTATION_FLAG_{}.
> + * @feature_enabled: bitmask defining which features are enabled in each segment.
> + * Use V4L2_AV1_SEGMENT_FEATURE_ENABLED to build a suitable mask.
> + * @feature_data: data attached to each feature. Data entry is only valid if the
> + * feature is enabled
> + * @last_active_seg_id: indicates the highest numbered segment id that has some
> + * enabled feature. This is used when decoding the segment id to only decode
> + * choices corresponding to used segments.
> + */
> +struct v4l2_av1_segmentation {
> +	__u8 flags;
> +	__u8 feature_enabled[V4L2_AV1_MAX_SEGMENTS];
> +	__s16 feature_data[V4L2_AV1_MAX_SEGMENTS][V4L2_AV1_SEG_LVL_MAX];
> +	__u8 last_active_seg_id;
> +};
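The per-segment bitmask in @feature_enabled works as follows (a minimal userspace-side sketch; the macro and bounds mirror the defines in this patch, and seg_feature_active() is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

/* Local mirrors of the proposed uAPI, for illustration only. */
#define AV1_MAX_SEGMENTS		8
#define AV1_SEG_LVL_ALT_Q		0
#define AV1_SEGMENT_FEATURE_ENABLED(id)	(1 << (id))

struct av1_segmentation {
	uint8_t feature_enabled[AV1_MAX_SEGMENTS];
};

/* Check whether a given feature is active for a given segment; the
 * matching feature_data entry is only meaningful when this is true. */
static int seg_feature_active(const struct av1_segmentation *s,
			      unsigned int seg, unsigned int feature)
{
	return !!(s->feature_enabled[seg] & AV1_SEGMENT_FEATURE_ENABLED(feature));
}
```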
> +
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_ENABLED    BIT(0)
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_UPDATE     BIT(1)
> +#define V4L2_AV1_LOOP_FILTER_FLAG_DELTA_LF_PRESENT BIT(2)
> +
> +/**
> + * struct v4l2_av1_loop_filter - AV1 Loop filter params as defined in section
> + * 6.8.10. "Loop filter semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_LOOP_FILTER_FLAG_{}
> + * @level: an array containing loop filter strength values. Different loop
> + * filter strength values from the array are used depending on the image plane
> + * being filtered, and the edge direction (vertical or horizontal) being
> + * filtered.
> + * @sharpness: indicates the sharpness level. The loop_filter_level and
> + * loop_filter_sharpness together determine when a block edge is filtered, and
> + * by how much the filtering can change the sample values. The loop filter
> + * process is described in section 7.14 of the AV1 specification.
> + * @ref_deltas: contains the adjustment needed for the filter level based on the
> + * chosen reference frame. If this syntax element is not present, it maintains
> + * its previous value.
> + * @mode_deltas: contains the adjustment needed for the filter level based on
> + * the chosen mode. If this syntax element is not present, it maintains its
> + * previous value.
> + * @delta_lf_res: specifies the left shift which should be applied to decoded
> + * loop filter delta values.
> + * @delta_lf_multi: a value equal to 1 specifies that separate loop filter
> + * deltas are sent for horizontal luma edges, vertical luma edges,
> + * the U edges, and the V edges. A value of delta_lf_multi equal to 0 specifies
> + * that the same loop filter delta is used for all edges.
> + */
> +struct v4l2_av1_loop_filter {
> +	__u8 flags;
> +	__u8 level[4];
> +	__u8 sharpness;
> +	__u8 ref_deltas[V4L2_AV1_TOTAL_REFS_PER_FRAME];
> +	__u8 mode_deltas[2];
> +	__u8 delta_lf_res;
> +	__u8 delta_lf_multi;
> +};
> +
> +#define V4L2_AV1_QUANTIZATION_FLAG_DIFF_UV_DELTA   BIT(0)
> +#define V4L2_AV1_QUANTIZATION_FLAG_USING_QMATRIX   BIT(1)
> +#define V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT BIT(2)
> +
> +/**
> + * struct v4l2_av1_quantization - AV1 Quantization params as defined in section
> + * 6.8.11 "Quantization params semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_QUANTIZATION_FLAG_{}
> + * @base_q_idx: indicates the base frame qindex. This is used for Y AC
> + * coefficients and as the base value for the other quantizers.
> + * @delta_q_y_dc: indicates the Y DC quantizer relative to base_q_idx.
> + * @delta_q_u_dc: indicates the U DC quantizer relative to base_q_idx.
> + * @delta_q_u_ac: indicates the U AC quantizer relative to base_q_idx.
> + * @delta_q_v_dc: indicates the V DC quantizer relative to base_q_idx.
> + * @delta_q_v_ac: indicates the V AC quantizer relative to base_q_idx.
> + * @qm_y: specifies the level in the quantizer matrix that should be used for
> + * luma plane decoding.
> + * @qm_u: specifies the level in the quantizer matrix that should be used for
> + * chroma U plane decoding.
> + * @qm_v: specifies the level in the quantizer matrix that should be used for
> + * chroma V plane decoding.
> + * @delta_q_res: specifies the left shift which should be applied to decoded
> + * quantizer index delta values.
> + */
> +struct v4l2_av1_quantization {
> +	__u8 flags;
> +	__u8 base_q_idx;
> +	__s8 delta_q_y_dc;
> +	__s8 delta_q_u_dc;
> +	__s8 delta_q_u_ac;
> +	__s8 delta_q_v_dc;
> +	__s8 delta_q_v_ac;
> +	__u8 qm_y;
> +	__u8 qm_u;
> +	__u8 qm_v;
> +	__u8 delta_q_res;
> +};
> +
> +#define V4L2_AV1_TILE_INFO_FLAG_UNIFORM_TILE_SPACING	BIT(0)
> +
> +/**
> + * struct v4l2_av1_tile_info - AV1 Tile info as defined in section 6.8.14. "Tile
> + * info semantics" of the AV1 specification.
> + *
> + * @flags: see V4L2_AV1_TILE_INFO_FLAG_{}
> + * @mi_col_starts: an array specifying the start column (in units of 4x4 luma
> + * samples) for each tile across the image.
> + * @mi_row_starts: an array specifying the start row (in units of 4x4 luma
> + * samples) for each tile down the image.
> + * @width_in_sbs_minus_1: specifies the width of a tile minus 1 in units of
> + * superblocks.
> + * @height_in_sbs_minus_1:  specifies the height of a tile minus 1 in units of
> + * superblocks.
> + * @tile_size_bytes: specifies the number of bytes needed to code each tile
> + * size.
> + * @context_update_tile_id: specifies which tile to use for the CDF update.
> + * @tile_rows: specifies the number of tiles down the frame.
> + * @tile_cols: specifies the number of tiles across the frame.
> + */
> +struct v4l2_av1_tile_info {
> +	__u8 flags;
> +	__u32 mi_col_starts[V4L2_AV1_MAX_TILE_COLS + 1];
> +	__u32 mi_row_starts[V4L2_AV1_MAX_TILE_ROWS + 1];
> +	__u32 width_in_sbs_minus_1[V4L2_AV1_MAX_TILE_COLS];
> +	__u32 height_in_sbs_minus_1[V4L2_AV1_MAX_TILE_ROWS];
> +	__u8 tile_size_bytes;
> +	__u8 context_update_tile_id;
> +	__u8 tile_cols;
> +	__u8 tile_rows;
> +};
> +
> +/**
> + * enum v4l2_av1_frame_type - AV1 Frame Type
> + *
> + * @V4L2_AV1_KEY_FRAME: Key frame
> + * @V4L2_AV1_INTER_FRAME: Inter frame
> + * @V4L2_AV1_INTRA_ONLY_FRAME: Intra-only frame
> + * @V4L2_AV1_SWITCH_FRAME: Switch frame
> + */
> +enum v4l2_av1_frame_type {
> +	V4L2_AV1_KEY_FRAME = 0,
> +	V4L2_AV1_INTER_FRAME = 1,
> +	V4L2_AV1_INTRA_ONLY_FRAME = 2,
> +	V4L2_AV1_SWITCH_FRAME = 3
> +};
> +
> +/**
> + * enum v4l2_av1_interpolation_filter - AV1 interpolation filter types
> + *
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP: eight tap filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH: eight tap smooth filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP: eight tap sharp filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_BILINEAR: bilinear filter
> + * @V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE: filter selection is signaled at
> + * the block level
> + *
> + * See section 6.8.9 "Interpolation filter semantics" of the AV1 specification
> + * for more details.
> + */
> +enum v4l2_av1_interpolation_filter {
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP = 0,
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SMOOTH = 1,
> +	V4L2_AV1_INTERPOLATION_FILTER_EIGHTTAP_SHARP = 2,
> +	V4L2_AV1_INTERPOLATION_FILTER_BILINEAR = 3,
> +	V4L2_AV1_INTERPOLATION_FILTER_SWITCHABLE = 4,
> +};
> +
> +/**
> + * enum v4l2_av1_tx_mode - AV1 Tx mode as described in section 6.8.21 "TX mode
> + * semantics" of the AV1 specification.
> + * @V4L2_AV1_TX_MODE_ONLY_4X4: the inverse transform will use only 4x4
> + * transforms
> + * @V4L2_AV1_TX_MODE_LARGEST: the inverse transform will use the largest
> + * transform size that fits inside the block
> + * @V4L2_AV1_TX_MODE_SELECT: the choice of transform size is specified
> + * explicitly for each block.
> + */
> +enum v4l2_av1_tx_mode {
> +	V4L2_AV1_TX_MODE_ONLY_4X4 = 0,
> +	V4L2_AV1_TX_MODE_LARGEST = 1,
> +	V4L2_AV1_TX_MODE_SELECT = 2
> +};
> +
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SHOW_FRAME			BIT(0)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SHOWABLE_FRAME		BIT(1)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ERROR_RESILIENT_MODE		BIT(2)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_CDF_UPDATE		BIT(3)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_SCREEN_CONTENT_TOOLS	BIT(4)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FORCE_INTEGER_MV		BIT(5)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_INTRABC		BIT(6)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_USE_SUPERRES			BIT(7)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_HIGH_PRECISION_MV	BIT(8)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_IS_MOTION_MODE_SWITCHABLE	BIT(9)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_USE_REF_FRAME_MVS		BIT(10)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_DISABLE_FRAME_END_UPDATE_CDF BIT(11)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_UNIFORM_TILE_SPACING		BIT(12)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_ALLOW_WARPED_MOTION		BIT(13)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_REFERENCE_SELECT		BIT(14)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_REDUCED_TX_SET		BIT(15)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_SKIP_MODE_PRESENT		BIT(16)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_SIZE_OVERRIDE		BIT(17)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_BUFFER_REMOVAL_TIME_PRESENT	BIT(18)
> +#define V4L2_AV1_FRAME_HEADER_FLAG_FRAME_REFS_SHORT_SIGNALING	BIT(19)
> +
> +#define V4L2_CID_STATELESS_AV1_FRAME_HEADER (V4L2_CID_CODEC_STATELESS_BASE + 406)
> +/**
> + * struct v4l2_ctrl_av1_frame_header - Represents an AV1 Frame Header OBU.
> + *
> + * @tile_info: tile info
> + * @quantization: quantization params
> + * @segmentation: segmentation params
> + * @loop_filter: loop filter params
> + * @cdef: cdef params
> + * @loop_restoration: loop restoration params
> + * @global_motion: global motion params
> + * @film_grain: film grain params
> + * @flags: see V4L2_AV1_FRAME_HEADER_FLAG_{}
> + * @frame_type: specifies the AV1 frame type
> + * @order_hint: specifies OrderHintBits least significant bits of the expected
> + * output order for this frame.
> + * @superres_denom: the denominator for the upscaling ratio.
> + * @upscaled_width: the upscaled width.
> + * @interpolation_filter: specifies the filter selection used for performing
> + * inter prediction.
> + * @tx_mode: specifies how the transform size is determined.
> + * @frame_width_minus_1: add 1 to get the frame's width.
> + * @frame_height_minus_1: add 1 to get the frame's height.
> + * @render_width_minus_1: add 1 to get the render width of the frame in luma
> + * samples.
> + * @render_height_minus_1: add 1 to get the render height of the frame in luma
> + * samples.
> + * @current_frame_id: specifies the frame id number for the current frame. Frame
> + * id numbers are additional information that do not affect the decoding
> + * process, but provide decoders with a way of detecting missing reference
> + * frames so that appropriate action can be taken.
> + * @primary_ref_frame: specifies which reference frame contains the CDF values
> + * and other state that should be loaded at the start of the frame.
> + * @buffer_removal_time: specifies the frame removal time in units of DecCT clock
> + * ticks counted from the removal time of the last random access point for
> + * operating point opNum.
> + * @refresh_frame_flags: contains a bitmask that specifies which reference frame
> + * slots will be updated with the current frame after it is decoded.
> + * @ref_order_hint: specifies the expected output order hint for each reference
> + * frame.
> + * @last_frame_idx: specifies the reference frame to use for LAST_FRAME.
> + * @gold_frame_idx: specifies the reference frame to use for GOLDEN_FRAME.
> + * @reference_frame_ts: the V4L2 timestamp for each of the reference frames
> + * enumerated in &v4l2_av1_reference_frame. The timestamp refers to the
> + * timestamp field in struct v4l2_buffer. Use v4l2_timeval_to_ns() to convert
> + * the struct timeval to a __u64.
> + * @skip_mode_frame: specifies the frames to use for compound prediction when
> + * skip_mode is equal to 1.
> + */
> +struct v4l2_ctrl_av1_frame_header {
> +	struct v4l2_av1_tile_info tile_info;
> +	struct v4l2_av1_quantization quantization;
> +	struct v4l2_av1_segmentation segmentation;
> +	struct v4l2_av1_loop_filter loop_filter;
> +	struct v4l2_av1_cdef cdef;
> +	struct v4l2_av1_loop_restoration loop_restoration;
> +	struct v4l2_av1_global_motion global_motion;
> +	struct v4l2_av1_film_grain film_grain;
> +	__u32 flags;
> +	enum v4l2_av1_frame_type frame_type;
> +	__u32 order_hint;
> +	__u8 superres_denom;
> +	__u32 upscaled_width;
> +	enum v4l2_av1_interpolation_filter interpolation_filter;
> +	enum v4l2_av1_tx_mode tx_mode;
> +	__u32 frame_width_minus_1;
> +	__u32 frame_height_minus_1;
> +	__u16 render_width_minus_1;
> +	__u16 render_height_minus_1;
> +
> +	__u32 current_frame_id;
> +	__u8 primary_ref_frame;
> +	__u8 buffer_removal_time[V4L2_AV1_MAX_OPERATING_POINTS];
> +	__u8 refresh_frame_flags;
> +	__u32 ref_order_hint[V4L2_AV1_NUM_REF_FRAMES];
> +	__s8 last_frame_idx;
> +	__s8 gold_frame_idx;
> +	__u64 reference_frame_ts[V4L2_AV1_NUM_REF_FRAMES];
> +	__u8 skip_mode_frame[2];
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_profile - AV1 profiles
> + *
> + * @V4L2_STATELESS_AV1_PROFILE_MAIN: compliant decoders must be able to decode
> + * streams with seq_profile equal to 0.
> + * @V4L2_STATELESS_AV1_PROFILE_HIGH: compliant decoders must be able to decode
> + * streams with seq_profile equal to 1.
> + * @V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL: compliant decoders must be able to
> + * decode streams with seq_profile equal to 2.
> + *
> + * Conveys the highest profile a decoder can work with.
> + */
> +#define V4L2_CID_STATELESS_AV1_PROFILE (V4L2_CID_CODEC_STATELESS_BASE + 407)
> +enum v4l2_stateless_av1_profile {
> +	V4L2_STATELESS_AV1_PROFILE_MAIN = 0,
> +	V4L2_STATELESS_AV1_PROFILE_HIGH = 1,
> +	V4L2_STATELESS_AV1_PROFILE_PROFESSIONAL = 2,
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_level - AV1 levels
> + *
> + * @V4L2_STATELESS_AV1_LEVEL_2_0: Level 2.0.
> + * @V4L2_STATELESS_AV1_LEVEL_2_1: Level 2.1.
> + * @V4L2_STATELESS_AV1_LEVEL_2_2: Level 2.2.
> + * @V4L2_STATELESS_AV1_LEVEL_2_3: Level 2.3.
> + * @V4L2_STATELESS_AV1_LEVEL_3_0: Level 3.0.
> + * @V4L2_STATELESS_AV1_LEVEL_3_1: Level 3.1.
> + * @V4L2_STATELESS_AV1_LEVEL_3_2: Level 3.2.
> + * @V4L2_STATELESS_AV1_LEVEL_3_3: Level 3.3.
> + * @V4L2_STATELESS_AV1_LEVEL_4_0: Level 4.0.
> + * @V4L2_STATELESS_AV1_LEVEL_4_1: Level 4.1.
> + * @V4L2_STATELESS_AV1_LEVEL_4_2: Level 4.2.
> + * @V4L2_STATELESS_AV1_LEVEL_4_3: Level 4.3.
> + * @V4L2_STATELESS_AV1_LEVEL_5_0: Level 5.0.
> + * @V4L2_STATELESS_AV1_LEVEL_5_1: Level 5.1.
> + * @V4L2_STATELESS_AV1_LEVEL_5_2: Level 5.2.
> + * @V4L2_STATELESS_AV1_LEVEL_5_3: Level 5.3.
> + * @V4L2_STATELESS_AV1_LEVEL_6_0: Level 6.0.
> + * @V4L2_STATELESS_AV1_LEVEL_6_1: Level 6.1.
> + * @V4L2_STATELESS_AV1_LEVEL_6_2: Level 6.2.
> + * @V4L2_STATELESS_AV1_LEVEL_6_3: Level 6.3.
> + * @V4L2_STATELESS_AV1_LEVEL_7_0: Level 7.0.
> + * @V4L2_STATELESS_AV1_LEVEL_7_1: Level 7.1.
> + * @V4L2_STATELESS_AV1_LEVEL_7_2: Level 7.2.
> + * @V4L2_STATELESS_AV1_LEVEL_7_3: Level 7.3.
> + *
> + * Conveys the highest level a decoder can work with.
> + */
> +#define V4L2_CID_STATELESS_AV1_LEVEL (V4L2_CID_CODEC_STATELESS_BASE + 408)
> +enum v4l2_stateless_av1_level {
> +	V4L2_STATELESS_AV1_LEVEL_2_0 = 0,
> +	V4L2_STATELESS_AV1_LEVEL_2_1 = 1,
> +	V4L2_STATELESS_AV1_LEVEL_2_2 = 2,
> +	V4L2_STATELESS_AV1_LEVEL_2_3 = 3,
> +
> +	V4L2_STATELESS_AV1_LEVEL_3_0 = 4,
> +	V4L2_STATELESS_AV1_LEVEL_3_1 = 5,
> +	V4L2_STATELESS_AV1_LEVEL_3_2 = 6,
> +	V4L2_STATELESS_AV1_LEVEL_3_3 = 7,
> +
> +	V4L2_STATELESS_AV1_LEVEL_4_0 = 8,
> +	V4L2_STATELESS_AV1_LEVEL_4_1 = 9,
> +	V4L2_STATELESS_AV1_LEVEL_4_2 = 10,
> +	V4L2_STATELESS_AV1_LEVEL_4_3 = 11,
> +
> +	V4L2_STATELESS_AV1_LEVEL_5_0 = 12,
> +	V4L2_STATELESS_AV1_LEVEL_5_1 = 13,
> +	V4L2_STATELESS_AV1_LEVEL_5_2 = 14,
> +	V4L2_STATELESS_AV1_LEVEL_5_3 = 15,
> +
> +	V4L2_STATELESS_AV1_LEVEL_6_0 = 16,
> +	V4L2_STATELESS_AV1_LEVEL_6_1 = 17,
> +	V4L2_STATELESS_AV1_LEVEL_6_2 = 18,
> +	V4L2_STATELESS_AV1_LEVEL_6_3 = 19,
> +
> +	V4L2_STATELESS_AV1_LEVEL_7_0 = 20,
> +	V4L2_STATELESS_AV1_LEVEL_7_1 = 21,
> +	V4L2_STATELESS_AV1_LEVEL_7_2 = 22,
> +	V4L2_STATELESS_AV1_LEVEL_7_3 = 23
> +};
> +
> +/**
> + * enum v4l2_stateless_av1_operating_mode - AV1 operating mode
> + *
> + * @V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING: general decoding (input
> + * is a sequence of OBUs, output is decoded frames)
> + * @V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING: Large scale
> + * tile decoding (input is a tile list OBU plus additional side information,
> + * output is a decoded frame)
> + *
> + * Conveys the decoding mode the decoder is operating with. The two AV1 decoding
> + * modes are specified in section 7 "Decoding process" of the AV1 specification.
> + */
> +#define V4L2_CID_STATELESS_AV1_OPERATING_MODE (V4L2_CID_CODEC_STATELESS_BASE + 409)
> +enum v4l2_stateless_av1_operating_mode {
> +	V4L2_STATELESS_AV1_OPERATING_MODE_GENERAL_DECODING = 0,
> +	V4L2_STATELESS_AV1_OPERATING_MODE_LARGE_SCALE_TILE_DECODING = 1,
> +};
> +
>  #define V4L2_CID_COLORIMETRY_CLASS_BASE	(V4L2_CTRL_CLASS_COLORIMETRY | 0x900)
>  #define V4L2_CID_COLORIMETRY_CLASS	(V4L2_CTRL_CLASS_COLORIMETRY | 1)
>  
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 7222fc855d6b..d20f8505f980 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -701,6 +701,7 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_FWHT     v4l2_fourcc('F', 'W', 'H', 'T') /* Fast Walsh Hadamard Transform (vicodec) */
>  #define V4L2_PIX_FMT_FWHT_STATELESS     v4l2_fourcc('S', 'F', 'W', 'H') /* Stateless FWHT (vicodec) */
>  #define V4L2_PIX_FMT_H264_SLICE v4l2_fourcc('S', '2', '6', '4') /* H264 parsed slices */
> +#define V4L2_PIX_FMT_AV1_FRAME	v4l2_fourcc('A', 'V', '1', 'F') /* AV1 parsed frame */
>  
>  /*  Vendor-specific formats   */
>  #define V4L2_PIX_FMT_CPIA1    v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */
> @@ -1750,6 +1751,13 @@ struct v4l2_ext_control {
>  		struct v4l2_ctrl_mpeg2_sequence __user *p_mpeg2_sequence;
>  		struct v4l2_ctrl_mpeg2_picture __user *p_mpeg2_picture;
>  		struct v4l2_ctrl_mpeg2_quantisation __user *p_mpeg2_quantisation;
> +
> +		struct v4l2_ctrl_av1_sequence __user *p_av1_sequence;
> +		struct v4l2_ctrl_av1_tile_group __user *p_av1_tile_group;
> +		struct v4l2_ctrl_av1_tile_group_entry __user *p_av1_tile_group_entry;
> +		struct v4l2_ctrl_av1_tile_list __user *p_av1_tile_list;
> +		struct v4l2_ctrl_av1_tile_list_entry __user *p_av1_tile_list_entry;
> +		struct v4l2_ctrl_av1_frame_header __user *p_av1_frame_header;
>  		void __user *ptr;
>  	};
>  } __attribute__ ((packed));
> @@ -1814,6 +1822,13 @@ enum v4l2_ctrl_type {
>  	V4L2_CTRL_TYPE_MPEG2_QUANTISATION   = 0x0250,
>  	V4L2_CTRL_TYPE_MPEG2_SEQUENCE       = 0x0251,
>  	V4L2_CTRL_TYPE_MPEG2_PICTURE        = 0x0252,
> +
> +	V4L2_CTRL_TYPE_AV1_SEQUENCE	    = 0x280,
> +	V4L2_CTRL_TYPE_AV1_TILE_GROUP	    = 0x281,
> +	V4L2_CTRL_TYPE_AV1_TILE_GROUP_ENTRY = 0x282,
> +	V4L2_CTRL_TYPE_AV1_TILE_LIST	    = 0x283,
> +	V4L2_CTRL_TYPE_AV1_TILE_LIST_ENTRY  = 0x284,
> +	V4L2_CTRL_TYPE_AV1_FRAME_HEADER	    = 0x285,
>  };
>  
>  /*  Used in the VIDIOC_QUERYCTRL ioctl for querying controls */


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC PATCH v2] media: visl: add virtual stateless driver
  2021-08-10 22:05 ` [RFC PATCH 2/2] media: vivpu: add virtual VPU driver daniel.almeida
  2021-09-02 16:05   ` Hans Verkuil
@ 2022-06-06 21:26   ` daniel.almeida
  2022-06-07 12:02     ` Hans Verkuil
  2022-08-19 20:43     ` Deborah Brouwer
  1 sibling, 2 replies; 14+ messages in thread
From: daniel.almeida @ 2022-06-06 21:26 UTC (permalink / raw)
  To: hverkuil; +Cc: Daniel Almeida, linux-media

From: Daniel Almeida <daniel.almeida@collabora.com>

A virtual stateless device for stateless uAPI development purposes.

This tool's objective is to help the development and testing of userspace
applications that use the V4L2 stateless API to decode media.

A userspace implementation can use visl to run a decoding loop even when no
hardware is available or when the kernel uAPI for the codec has not been
upstreamed yet. This can reveal bugs at an early stage.

This driver can also trace the contents of the V4L2 controls submitted to it.
It can also dump the contents of the vb2 buffers through a debugfs
interface. This is in many ways similar to the tracing infrastructure
available for other popular encode/decode APIs out there and can help develop
a userspace application by using another (working) one as a reference.

Note that no actual decoding of video frames is performed by visl. The V4L2
test pattern generator is used to write various debug information to the
capture buffers instead.

Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>

---
Was media: vivpu: add virtual VPU driver

Changes from v1:

- Addressed review comments from v1
- Driver was renamed to visl
- Dropped AV1 support for now (as it's not upstream yet)
- Added support for FWHT, MPEG2, VP8, VP9, H264
- Added TPG support
- Driver can now dump the controls for the codecs above through ftrace
- Driver can now dump the vb2 bitstream buffer through a debugfs infrastructure

I ran this on a kernel with KASAN/kmemleak enabled, nothing showed up.

v4l2-compliance results:

v4l2-compliance 1.22.1, 64 bits, 64-bit time_t

Compliance test for visl device /dev/video0:

Driver Info:
        Driver name      : visl
        Card type        : visl
        Bus info         : platform:visl
        Driver version   : 5.19.0
        Capabilities     : 0x84204000
                Video Memory-to-Memory Multiplanar
                Streaming
                Extended Pix Format
                Device Capabilities
        Device Caps      : 0x04204000
                Video Memory-to-Memory Multiplanar
                Streaming
                Extended Pix Format
Media Driver Info:
        Driver name      : visl
        Model            : visl
        Serial           : 
        Bus info         : platform:visl
        Media version    : 5.19.0
        Hardware revision: 0x00000000 (0)
        Driver version   : 5.19.0
Interface Info:
        ID               : 0x0300000c
        Type             : V4L Video
Entity Info:
        ID               : 0x00000001 (1)
        Name             : visl-source
        Function         : V4L2 I/O
        Pad 0x01000002   : 0: Source
          Link 0x02000008: to remote pad 0x1000004 of entity 'visl-proc' (Video Decoder): Data, Enabled, Immutable

Required ioctls:
        test MC information (see 'Media Driver Info' above): OK
        test VIDIOC_QUERYCAP: OK
        test invalid ioctls: OK

Allow for multiple opens:
        test second /dev/video0 open: OK
        test VIDIOC_QUERYCAP: OK
        test VIDIOC_G/S_PRIORITY: OK
        test for unlimited opens: OK

Debug ioctls:
        test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
        test VIDIOC_LOG_STATUS: OK (Not Supported)

Input ioctls:
        test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
        test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
        test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
        test VIDIOC_ENUMAUDIO: OK (Not Supported)
        test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
        test VIDIOC_G/S_AUDIO: OK (Not Supported)
        Inputs: 0 Audio Inputs: 0 Tuners: 0

Output ioctls:
        test VIDIOC_G/S_MODULATOR: OK (Not Supported)
        test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
        test VIDIOC_ENUMAUDOUT: OK (Not Supported)
        test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
        test VIDIOC_G/S_AUDOUT: OK (Not Supported)
        Outputs: 0 Audio Outputs: 0 Modulators: 0

Input/Output configuration ioctls:
        test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
        test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
        test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
        test VIDIOC_G/S_EDID: OK (Not Supported)

Control ioctls:
        test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
        test VIDIOC_QUERYCTRL: OK
        test VIDIOC_G/S_CTRL: OK
        test VIDIOC_G/S/TRY_EXT_CTRLS: OK
        test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
        test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
        Standard Controls: 3 Private Controls: 0
        Standard Compound Controls: 13 Private Compound Controls: 0

Format ioctls:
        test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
        test VIDIOC_G/S_PARM: OK (Not Supported)
        test VIDIOC_G_FBUF: OK (Not Supported)
        test VIDIOC_G_FMT: OK
        test VIDIOC_TRY_FMT: OK
        test VIDIOC_S_FMT: OK
        test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
        test Cropping: OK (Not Supported)
        test Composing: OK (Not Supported)
        test Scaling: OK

Codec ioctls:
        test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
        test VIDIOC_G_ENC_INDEX: OK (Not Supported)
        test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)

Buffer ioctls:
        test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
        test VIDIOC_EXPBUF: OK
        test Requests: OK

Test input 0:

Streaming ioctls:
        test read/write: OK (Not Supported)
        test blocking wait: OK
        Video Capture Multiplanar: Captured 58 buffers    
        test MMAP (no poll): OK
        Video Capture Multiplanar: Captured 58 buffers    
        test MMAP (select): OK
        Video Capture Multiplanar: Captured 58 buffers    
        test MMAP (epoll): OK
        Video Capture Multiplanar: Captured 58 buffers    
        test USERPTR (no poll): OK
        Video Capture Multiplanar: Captured 58 buffers    
        test USERPTR (select): OK
        test DMABUF: Cannot test, specify --expbuf-device

Total for visl device /dev/video0: 53, Succeeded: 53, Failed: 0, Warnings: 0

---
 drivers/media/test-drivers/Kconfig            |   1 +
 drivers/media/test-drivers/Makefile           |   1 +
 drivers/media/test-drivers/visl/Kconfig       |  31 +
 drivers/media/test-drivers/visl/Makefile      |   8 +
 drivers/media/test-drivers/visl/visl-core.c   | 532 ++++++++++++
 .../media/test-drivers/visl/visl-debugfs.c    | 148 ++++
 .../media/test-drivers/visl/visl-debugfs.h    |  72 ++
 drivers/media/test-drivers/visl/visl-dec.c    | 468 +++++++++++
 drivers/media/test-drivers/visl/visl-dec.h    | 100 +++
 .../media/test-drivers/visl/visl-trace-fwht.h |  66 ++
 .../media/test-drivers/visl/visl-trace-h264.h | 349 ++++++++
 .../test-drivers/visl/visl-trace-mpeg2.h      |  99 +++
 .../test-drivers/visl/visl-trace-points.c     |   9 +
 .../media/test-drivers/visl/visl-trace-vp8.h  | 156 ++++
 .../media/test-drivers/visl/visl-trace-vp9.h  | 292 +++++++
 drivers/media/test-drivers/visl/visl-video.c  | 776 ++++++++++++++++++
 drivers/media/test-drivers/visl/visl-video.h  |  61 ++
 drivers/media/test-drivers/visl/visl.h        | 178 ++++
 18 files changed, 3347 insertions(+)
 create mode 100644 drivers/media/test-drivers/visl/Kconfig
 create mode 100644 drivers/media/test-drivers/visl/Makefile
 create mode 100644 drivers/media/test-drivers/visl/visl-core.c
 create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.c
 create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.h
 create mode 100644 drivers/media/test-drivers/visl/visl-dec.c
 create mode 100644 drivers/media/test-drivers/visl/visl-dec.h
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-fwht.h
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-h264.h
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-mpeg2.h
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-points.c
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp8.h
 create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp9.h
 create mode 100644 drivers/media/test-drivers/visl/visl-video.c
 create mode 100644 drivers/media/test-drivers/visl/visl-video.h
 create mode 100644 drivers/media/test-drivers/visl/visl.h

diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
index 51cf27834df0..459b433e9fae 100644
--- a/drivers/media/test-drivers/Kconfig
+++ b/drivers/media/test-drivers/Kconfig
@@ -20,6 +20,7 @@ config VIDEO_VIM2M
 source "drivers/media/test-drivers/vicodec/Kconfig"
 source "drivers/media/test-drivers/vimc/Kconfig"
 source "drivers/media/test-drivers/vivid/Kconfig"
+source "drivers/media/test-drivers/visl/Kconfig"
 
 endif #V4L_TEST_DRIVERS
 
diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
index ff390b687189..740714a4584d 100644
--- a/drivers/media/test-drivers/Makefile
+++ b/drivers/media/test-drivers/Makefile
@@ -12,3 +12,4 @@ obj-$(CONFIG_VIDEO_VICODEC) += vicodec/
 obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
 obj-$(CONFIG_VIDEO_VIMC) += vimc/
 obj-$(CONFIG_VIDEO_VIVID) += vivid/
+obj-$(CONFIG_VIDEO_VISL) += visl/
diff --git a/drivers/media/test-drivers/visl/Kconfig b/drivers/media/test-drivers/visl/Kconfig
new file mode 100644
index 000000000000..976319c3c372
--- /dev/null
+++ b/drivers/media/test-drivers/visl/Kconfig
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: GPL-2.0+
+config VIDEO_VISL
+	tristate "Virtual Stateless Driver (visl)"
+	depends on VIDEO_DEV
+	select FONT_SUPPORT
+	select FONT_8x16
+	select VIDEOBUF2_VMALLOC
+	select V4L2_MEM2MEM_DEV
+	select MEDIA_CONTROLLER
+	select MEDIA_CONTROLLER_REQUEST_API
+	select VIDEO_V4L2_TPG
+	help
+
+	  A virtual stateless device for uAPI development purposes.
+
+	  A userspace implementation can use visl to run a decoding loop even
+	  when no hardware is available or when the kernel uAPI for the codec
+	  has not been upstreamed yet. This can reveal bugs at an early stage.
+
+
+
+	  When in doubt, say N.
+
+config VISL_DEBUGFS
+	bool "Enable debugfs for visl"
+	depends on VIDEO_VISL
+	depends on DEBUG_FS
+
+	help
+	  Choose Y to dump the bitstream buffers through debugfs.
+	  When in doubt, say N.
diff --git a/drivers/media/test-drivers/visl/Makefile b/drivers/media/test-drivers/visl/Makefile
new file mode 100644
index 000000000000..fb4d5ae1b17f
--- /dev/null
+++ b/drivers/media/test-drivers/visl/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0+
+visl-y := visl-core.o visl-video.o visl-dec.o visl-trace-points.o
+
+ifeq ($(CONFIG_VISL_DEBUGFS),y)
+  visl-y += visl-debugfs.o
+endif
+
+obj-$(CONFIG_VIDEO_VISL) += visl.o
diff --git a/drivers/media/test-drivers/visl/visl-core.c b/drivers/media/test-drivers/visl/visl-core.c
new file mode 100644
index 000000000000..c59f88b72ea4
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-core.c
@@ -0,0 +1,532 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#include <linux/debugfs.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "visl.h"
+#include "visl-dec.h"
+#include "visl-debugfs.h"
+#include "visl-video.h"
+
+unsigned int visl_debug;
+module_param(visl_debug, uint, 0644);
+MODULE_PARM_DESC(visl_debug, " activates debug info");
+
+unsigned int visl_transtime_ms;
+module_param(visl_transtime_ms, uint, 0644);
+MODULE_PARM_DESC(visl_transtime_ms, " simulated process time in milliseconds.");
+
+/*
+ * dprintk can be slow through serial. This lets one limit the tracing to a
+ * particular number of frames
+ */
+int visl_dprintk_frame_start = -1;
+module_param(visl_dprintk_frame_start, int, 0);
+MODULE_PARM_DESC(visl_dprintk_frame_start, " a frame number to start tracing with dprintk");
+
+unsigned int visl_dprintk_nframes;
+module_param(visl_dprintk_nframes, uint, 0);
+MODULE_PARM_DESC(visl_dprintk_nframes,
+		 " the number of frames to trace with dprintk");
+
+unsigned int keep_bitstream_buffers;
+module_param(keep_bitstream_buffers, uint, 0);
+MODULE_PARM_DESC(keep_bitstream_buffers,
+		 " keep bitstream buffers in debugfs after streaming is stopped");
+
+int bitstream_trace_frame_start = -1;
+module_param(bitstream_trace_frame_start, int, 0);
+MODULE_PARM_DESC(bitstream_trace_frame_start,
+		 " a frame number to start dumping the bitstream through debugfs");
+
+unsigned int bitstream_trace_nframes;
+module_param(bitstream_trace_nframes, uint, 0);
+MODULE_PARM_DESC(bitstream_trace_nframes,
+		 " the number of frames to dump the bitstream through debugfs");
+
+static const struct visl_ctrl_desc visl_fwht_ctrl_descs[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_FWHT_PARAMS,
+	},
+};
+
+const struct visl_ctrls visl_fwht_ctrls = {
+	.ctrls = visl_fwht_ctrl_descs,
+	.num_ctrls = ARRAY_SIZE(visl_fwht_ctrl_descs)
+};
+
+static const struct visl_ctrl_desc visl_mpeg2_ctrl_descs[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_MPEG2_SEQUENCE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_MPEG2_PICTURE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_MPEG2_QUANTISATION,
+	},
+};
+
+const struct visl_ctrls visl_mpeg2_ctrls = {
+	.ctrls = visl_mpeg2_ctrl_descs,
+	.num_ctrls = ARRAY_SIZE(visl_mpeg2_ctrl_descs),
+};
+
+static const struct visl_ctrl_desc visl_vp8_ctrl_descs[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_VP8_FRAME,
+	},
+};
+
+const struct visl_ctrls visl_vp8_ctrls = {
+	.ctrls = visl_vp8_ctrl_descs,
+	.num_ctrls = ARRAY_SIZE(visl_vp8_ctrl_descs),
+};
+
+static const struct visl_ctrl_desc visl_vp9_ctrl_descs[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_VP9_FRAME,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_VP9_COMPRESSED_HDR,
+	},
+};
+
+const struct visl_ctrls visl_vp9_ctrls = {
+	.ctrls = visl_vp9_ctrl_descs,
+	.num_ctrls = ARRAY_SIZE(visl_vp9_ctrl_descs),
+};
+
+static const struct visl_ctrl_desc visl_h264_ctrl_descs[] = {
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_PARAMS,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_SPS,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_PPS,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_SCALING_MATRIX,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_MODE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_START_CODE,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_SLICE_PARAMS,
+	},
+	{
+		.cfg.id = V4L2_CID_STATELESS_H264_PRED_WEIGHTS,
+	},
+};
+
+const struct visl_ctrls visl_h264_ctrls = {
+	.ctrls = visl_h264_ctrl_descs,
+	.num_ctrls = ARRAY_SIZE(visl_h264_ctrl_descs),
+};
+
+struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id)
+{
+	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
+
+	return v4l2_ctrl_find(hdl, id);
+}
+
+void *visl_find_control_data(struct visl_ctx *ctx, u32 id)
+{
+	struct v4l2_ctrl *ctrl;
+
+	ctrl = visl_find_control(ctx, id);
+	if (ctrl)
+		return ctrl->p_cur.p;
+
+	return NULL;
+}
+
+u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id)
+{
+	struct v4l2_ctrl *ctrl;
+
+	ctrl = visl_find_control(ctx, id);
+	if (ctrl)
+		return ctrl->elems;
+
+	return 0;
+}
+
+static void visl_device_release(struct video_device *vdev)
+{
+	struct visl_dev *dev = container_of(vdev, struct visl_dev, vfd);
+
+	v4l2_device_unregister(&dev->v4l2_dev);
+	v4l2_m2m_release(dev->m2m_dev);
+	media_device_cleanup(&dev->mdev);
+	visl_debugfs_deinit(dev);
+	kfree(dev);
+}
+
+static int visl_add_ctrls(struct visl_ctx *ctx, const struct visl_ctrls *ctrls)
+{
+	struct visl_dev *dev = ctx->dev;
+	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
+	unsigned int i;
+	struct v4l2_ctrl *ctrl;
+
+	for (i = 0; i < ctrls->num_ctrls; i++) {
+		ctrl = v4l2_ctrl_new_custom(hdl, &ctrls->ctrls[i].cfg, NULL);
+
+		if (hdl->error) {
+			v4l2_err(&dev->v4l2_dev,
+				 "Failed to create new custom control, errno: %d\n",
+				 hdl->error);
+
+			return hdl->error;
+		}
+	}
+
+	return 0;
+}
+
+#define VISL_CONTROLS_COUNT	ARRAY_SIZE(visl_controls)
+
+static int visl_init_ctrls(struct visl_ctx *ctx)
+{
+	struct visl_dev *dev = ctx->dev;
+	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
+	unsigned int ctrl_cnt = 0;
+	unsigned int i;
+	int ret;
+
+	for (i = 0; i < num_coded_fmts; i++)
+		ctrl_cnt += visl_coded_fmts[i].ctrls->num_ctrls;
+
+	v4l2_ctrl_handler_init(hdl, ctrl_cnt);
+	if (hdl->error) {
+		v4l2_err(&dev->v4l2_dev,
+			 "Failed to initialize control handler\n");
+		return hdl->error;
+	}
+
+	for (i = 0; i < num_coded_fmts; i++) {
+		ret = visl_add_ctrls(ctx, visl_coded_fmts[i].ctrls);
+		if (ret)
+			goto err_free_handler;
+	}
+
+	ctx->fh.ctrl_handler = hdl;
+	v4l2_ctrl_handler_setup(hdl);
+
+	return 0;
+
+err_free_handler:
+	v4l2_ctrl_handler_free(hdl);
+	return ret;
+}
+
+static void visl_free_ctrls(struct visl_ctx *ctx)
+{
+	v4l2_ctrl_handler_free(&ctx->hdl);
+}
+
+static int visl_open(struct file *file)
+{
+	struct visl_dev *dev = video_drvdata(file);
+	struct visl_ctx *ctx = NULL;
+	int rc = 0;
+
+	if (mutex_lock_interruptible(&dev->dev_mutex))
+		return -ERESTARTSYS;
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx) {
+		rc = -ENOMEM;
+		goto unlock;
+	}
+
+	ctx->tpg_str_buf = kmalloc(TPG_STR_BUF_SZ, GFP_KERNEL);
+	if (!ctx->tpg_str_buf) {
+		rc = -ENOMEM;
+		goto free_ctx;
+	}
+
+	v4l2_fh_init(&ctx->fh, video_devdata(file));
+	file->private_data = &ctx->fh;
+	ctx->dev = dev;
+
+	rc = visl_init_ctrls(ctx);
+	if (rc)
+		goto free_ctx;
+
+	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &visl_queue_init);
+	if (IS_ERR(ctx->fh.m2m_ctx)) {
+		rc = PTR_ERR(ctx->fh.m2m_ctx);
+		goto free_hdl;
+	}
+
+	mutex_init(&ctx->vb_mutex);
+
+	rc = visl_set_default_format(ctx);
+	if (rc)
+		goto free_m2m_ctx;
+
+	v4l2_fh_add(&ctx->fh);
+
+	dprintk(dev, "Created instance: %p, m2m_ctx: %p\n",
+		ctx, ctx->fh.m2m_ctx);
+
+	mutex_unlock(&dev->dev_mutex);
+	return rc;
+
+free_m2m_ctx:
+	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+free_hdl:
+	visl_free_ctrls(ctx);
+	v4l2_fh_exit(&ctx->fh);
+free_ctx:
+	kfree(ctx->tpg_str_buf);
+	kfree(ctx);
+unlock:
+	mutex_unlock(&dev->dev_mutex);
+	return rc;
+}
+
+static int visl_release(struct file *file)
+{
+	struct visl_dev *dev = video_drvdata(file);
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+
+	dprintk(dev, "Releasing instance %p\n", ctx);
+
+	tpg_free(&ctx->tpg);
+	v4l2_fh_del(&ctx->fh);
+	v4l2_fh_exit(&ctx->fh);
+	visl_free_ctrls(ctx);
+	mutex_lock(&dev->dev_mutex);
+	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
+	mutex_unlock(&dev->dev_mutex);
+
+	if (!keep_bitstream_buffers)
+		visl_debugfs_clear_bitstream(dev, ctx->capture_streamon_jiffies);
+
+	kfree(ctx->tpg_str_buf);
+	kfree(ctx);
+
+	return 0;
+}
+
+static const struct v4l2_file_operations visl_fops = {
+	.owner		= THIS_MODULE,
+	.open		= visl_open,
+	.release	= visl_release,
+	.poll		= v4l2_m2m_fop_poll,
+	.unlocked_ioctl	= video_ioctl2,
+	.mmap		= v4l2_m2m_fop_mmap,
+};
+
+static const struct video_device visl_videodev = {
+	.name		= VISL_NAME,
+	.vfl_dir	= VFL_DIR_M2M,
+	.fops		= &visl_fops,
+	.ioctl_ops	= &visl_ioctl_ops,
+	.minor		= -1,
+	.release	= visl_device_release,
+	.device_caps	= V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING,
+};
+
+static const struct v4l2_m2m_ops visl_m2m_ops = {
+	.device_run	= visl_device_run,
+};
+
+static const struct media_device_ops visl_m2m_media_ops = {
+	.req_validate	= visl_request_validate,
+	.req_queue	= v4l2_m2m_request_queue,
+};
+
+static int visl_probe(struct platform_device *pdev)
+{
+	struct visl_dev *dev;
+	struct video_device *vfd;
+	int ret;
+	int rc;
+
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+
+	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
+	if (ret)
+		goto error_visl_dev;
+
+	mutex_init(&dev->dev_mutex);
+
+	dev->vfd = visl_videodev;
+	vfd = &dev->vfd;
+	vfd->lock = &dev->dev_mutex;
+	vfd->v4l2_dev = &dev->v4l2_dev;
+
+	video_set_drvdata(vfd, dev);
+
+	platform_set_drvdata(pdev, dev);
+
+	dev->m2m_dev = v4l2_m2m_init(&visl_m2m_ops);
+	if (IS_ERR(dev->m2m_dev)) {
+		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
+		ret = PTR_ERR(dev->m2m_dev);
+		dev->m2m_dev = NULL;
+		goto error_dev;
+	}
+
+	dev->mdev.dev = &pdev->dev;
+	strscpy(dev->mdev.model, "visl", sizeof(dev->mdev.model));
+	strscpy(dev->mdev.bus_info, "platform:visl",
+		sizeof(dev->mdev.bus_info));
+	media_device_init(&dev->mdev);
+	dev->mdev.ops = &visl_m2m_media_ops;
+	dev->v4l2_dev.mdev = &dev->mdev;
+
+	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
+		goto error_m2m;
+	}
+
+	v4l2_info(&dev->v4l2_dev,
+		  "Device registered as /dev/video%d\n", vfd->num);
+
+	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
+						 MEDIA_ENT_F_PROC_VIDEO_DECODER);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
+		goto error_v4l2;
+	}
+
+	ret = media_device_register(&dev->mdev);
+	if (ret) {
+		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
+		goto error_m2m_mc;
+	}
+
+	rc = visl_debugfs_init(dev);
+	if (rc)
+		dprintk(dev, "visl_debugfs_init failed: %d\n"
+			"Continuing without debugfs support\n", rc);
+
+	return 0;
+
+error_m2m_mc:
+	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+error_v4l2:
+	video_unregister_device(&dev->vfd);
+	/* visl_device_release called by video_unregister_device to release various objects */
+	return ret;
+error_m2m:
+	v4l2_m2m_release(dev->m2m_dev);
+error_dev:
+	v4l2_device_unregister(&dev->v4l2_dev);
+error_visl_dev:
+	kfree(dev);
+
+	return ret;
+}
+
+static int visl_remove(struct platform_device *pdev)
+{
+	struct visl_dev *dev = platform_get_drvdata(pdev);
+
+	v4l2_info(&dev->v4l2_dev, "Removing " VISL_NAME "\n");
+
+#ifdef CONFIG_MEDIA_CONTROLLER
+	if (media_devnode_is_registered(dev->mdev.devnode)) {
+		media_device_unregister(&dev->mdev);
+		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
+	}
+#endif
+	video_unregister_device(&dev->vfd);
+
+	return 0;
+}
+
+static struct platform_driver visl_pdrv = {
+	.probe		= visl_probe,
+	.remove		= visl_remove,
+	.driver		= {
+		.name	= VISL_NAME,
+	},
+};
+
+static void visl_dev_release(struct device *dev) {}
+
+static struct platform_device visl_pdev = {
+	.name		= VISL_NAME,
+	.dev.release	= visl_dev_release,
+};
+
+static void __exit visl_exit(void)
+{
+	platform_driver_unregister(&visl_pdrv);
+	platform_device_unregister(&visl_pdev);
+}
+
+static int __init visl_init(void)
+{
+	int ret;
+
+	ret = platform_device_register(&visl_pdev);
+	if (ret)
+		return ret;
+
+	ret = platform_driver_register(&visl_pdrv);
+	if (ret)
+		platform_device_unregister(&visl_pdev);
+
+	return ret;
+}
+
+MODULE_DESCRIPTION("Virtual stateless device");
+MODULE_AUTHOR("Daniel Almeida <daniel.almeida@collabora.com>");
+MODULE_LICENSE("GPL v2");
+
+module_init(visl_init);
+module_exit(visl_exit);
diff --git a/drivers/media/test-drivers/visl/visl-debugfs.c b/drivers/media/test-drivers/visl/visl-debugfs.c
new file mode 100644
index 000000000000..6fbfd55d6c53
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-debugfs.c
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
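+ *
+ * For example, assuming debugfs is mounted at /sys/kernel/debug, the
+ * bitstream chunks fed to a decoding session can be read back with
+ * something like:
+ *
+ *   cat /sys/kernel/debug/visl/bitstream/<streamon_jiffies>_bitstream<seq>
+ *
+ * where <streamon_jiffies> identifies the streaming session and <seq> is
+ * the OUTPUT buffer sequence number.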
+ */
+
+#include <linux/debugfs.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <media/v4l2-mem2mem.h>
+
+#include "visl-debugfs.h"
+
+int visl_debugfs_init(struct visl_dev *dev)
+{
+	dev->debugfs_root = debugfs_create_dir("visl", NULL);
+	INIT_LIST_HEAD(&dev->bitstream_blobs);
+	mutex_init(&dev->bitstream_lock);
+
+	if (IS_ERR(dev->debugfs_root))
+		return PTR_ERR(dev->debugfs_root);
+
+	return visl_debugfs_bitstream_init(dev);
+}
+
+int visl_debugfs_bitstream_init(struct visl_dev *dev)
+{
+	dev->bitstream_debugfs = debugfs_create_dir("bitstream",
+						    dev->debugfs_root);
+	if (IS_ERR(dev->bitstream_debugfs))
+		return PTR_ERR(dev->bitstream_debugfs);
+
+	return 0;
+}
+
+void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run)
+{
+	u8 *vaddr = vb2_plane_vaddr(&run->src->vb2_buf, 0);
+	struct visl_blob *blob;
+	size_t data_sz = vb2_get_plane_payload(&run->src->vb2_buf, 0);
+	struct dentry *dentry;
+	char name[32];
+
+	blob = kzalloc(sizeof(*blob), GFP_KERNEL);
+	if (!blob)
+		return;
+
+	blob->blob.data = vzalloc(data_sz);
+	if (!blob->blob.data)
+		goto err_vmalloc;
+
+	blob->blob.size = data_sz;
+	snprintf(name, sizeof(name), "%llu_bitstream%d",
+		 ctx->capture_streamon_jiffies, run->src->sequence);
+
+	memcpy(blob->blob.data, vaddr, data_sz);
+
+	dentry = debugfs_create_blob(name, 0444, ctx->dev->bitstream_debugfs,
+				     &blob->blob);
+	if (IS_ERR(dentry))
+		goto err_debugfs;
+
+	blob->dentry = dentry;
+	blob->streamon_jiffies = ctx->capture_streamon_jiffies;
+
+	mutex_lock(&ctx->dev->bitstream_lock);
+	list_add_tail(&blob->list, &ctx->dev->bitstream_blobs);
+	mutex_unlock(&ctx->dev->bitstream_lock);
+
+	return;
+
+err_debugfs:
+	vfree(blob->blob.data);
+err_vmalloc:
+	kfree(blob);
+}
+
+void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies)
+{
+	struct visl_blob *blob;
+	struct visl_blob *tmp;
+
+	mutex_lock(&dev->bitstream_lock);
+	if (list_empty(&dev->bitstream_blobs))
+		goto unlock;
+
+	list_for_each_entry_safe(blob, tmp, &dev->bitstream_blobs, list) {
+		if (streamon_jiffies &&
+		    streamon_jiffies != blob->streamon_jiffies)
+			continue;
+
+		list_del(&blob->list);
+		debugfs_remove(blob->dentry);
+		vfree(blob->blob.data);
+		kfree(blob);
+	}
+
+unlock:
+	mutex_unlock(&dev->bitstream_lock);
+}
+
+void visl_debugfs_bitstream_deinit(struct visl_dev *dev)
+{
+	visl_debugfs_clear_bitstream(dev, 0);
+	debugfs_remove_recursive(dev->bitstream_debugfs);
+	dev->bitstream_debugfs = NULL;
+}
+
+void visl_debugfs_deinit(struct visl_dev *dev)
+{
+	visl_debugfs_bitstream_deinit(dev);
+	debugfs_remove_recursive(dev->debugfs_root);
+	dev->debugfs_root = NULL;
+}
diff --git a/drivers/media/test-drivers/visl/visl-debugfs.h b/drivers/media/test-drivers/visl/visl-debugfs.h
new file mode 100644
index 000000000000..e14e7d72b150
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-debugfs.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#include "visl.h"
+#include "visl-dec.h"
+
+#ifdef CONFIG_VISL_DEBUGFS
+
+int visl_debugfs_init(struct visl_dev *dev);
+int visl_debugfs_bitstream_init(struct visl_dev *dev);
+void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run);
+void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies);
+void visl_debugfs_bitstream_deinit(struct visl_dev *dev);
+void visl_debugfs_deinit(struct visl_dev *dev);
+
+#else
+
+static inline int visl_debugfs_init(struct visl_dev *dev)
+{
+	return 0;
+}
+
+static inline int visl_debugfs_bitstream_init(struct visl_dev *dev)
+{
+	return 0;
+}
+
+static inline void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run) {}
+static inline void
+visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies) {}
+static inline void visl_debugfs_bitstream_deinit(struct visl_dev *dev) {}
+static inline void visl_debugfs_deinit(struct visl_dev *dev) {}
+
+#endif
+
diff --git a/drivers/media/test-drivers/visl/visl-dec.c b/drivers/media/test-drivers/visl/visl-dec.c
new file mode 100644
index 000000000000..3c68d97f87d1
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-dec.c
@@ -0,0 +1,468 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
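+ *
+ * The submitted controls can be inspected through the regular kernel
+ * tracing infrastructure. For example, assuming tracefs is mounted at
+ * /sys/kernel/tracing, the FWHT control tracepoints can be enabled with:
+ *
+ *   echo 1 > /sys/kernel/tracing/events/visl_fwht_controls/enable
+ *   cat /sys/kernel/tracing/trace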
+ */
+
+#include "visl.h"
+#include "visl-debugfs.h"
+#include "visl-dec.h"
+#include "visl-trace-fwht.h"
+#include "visl-trace-mpeg2.h"
+#include "visl-trace-vp8.h"
+#include "visl-trace-vp9.h"
+#include "visl-trace-h264.h"
+
+#include <linux/delay.h>
+#include <linux/workqueue.h>
+#include <media/v4l2-mem2mem.h>
+#include <media/tpg/v4l2-tpg.h>
+
+static void *plane_vaddr(struct tpg_data *tpg, struct vb2_buffer *buf,
+			 u32 p, u32 bpl[TPG_MAX_PLANES], u32 h)
+{
+	u32 i;
+	void *vbuf;
+
+	if (p == 0 || tpg_g_buffers(tpg) > 1)
+		return vb2_plane_vaddr(buf, p);
+	vbuf = vb2_plane_vaddr(buf, 0);
+	for (i = 0; i < p; i++)
+		vbuf += bpl[i] * h / tpg->vdownsampling[i];
+	return vbuf;
+}
+
+static void visl_get_ref_frames(struct visl_ctx *ctx, u8 *buf,
+				__kernel_size_t buflen, struct visl_run *run)
+{
+	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
+	char header[] = "Reference frames:\n";
+	u32 i;
+	u32 len;
+
+	len = scnprintf(buf, buflen, "%s", header);
+	buf += len;
+	buflen -= len;
+
+	switch (ctx->current_codec) {
+	case VISL_CODEC_NONE:
+		break;
+
+	case VISL_CODEC_FWHT: {
+		scnprintf(buf, buflen, "backward_ref_ts: %llu, vb2_idx: %d",
+			  run->fwht.params->backward_ref_ts,
+			  vb2_find_timestamp(cap_q, run->fwht.params->backward_ref_ts, 0));
+		break;
+	}
+
+	case VISL_CODEC_MPEG2: {
+		scnprintf(buf, buflen,
+			  "backward_ref_ts: %llu, vb2_idx: %d\n"
+			  "forward_ref_ts: %llu, vb2_idx: %d\n",
+			  run->mpeg2.pic->backward_ref_ts,
+			  vb2_find_timestamp(cap_q, run->mpeg2.pic->backward_ref_ts, 0),
+			  run->mpeg2.pic->forward_ref_ts,
+			  vb2_find_timestamp(cap_q, run->mpeg2.pic->forward_ref_ts, 0));
+		break;
+	}
+
+	case VISL_CODEC_VP8: {
+		scnprintf(buf, buflen,
+			  "last_ref_ts: %llu, vb2_idx: %d\n"
+			  "golden_ref_ts: %llu, vb2_idx: %d\n"
+			  "alt_ref_ts: %llu, vb2_idx: %d\n",
+			  run->vp8.frame->last_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp8.frame->last_frame_ts, 0),
+			  run->vp8.frame->golden_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp8.frame->golden_frame_ts, 0),
+			  run->vp8.frame->alt_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp8.frame->alt_frame_ts, 0));
+		break;
+	}
+
+	case VISL_CODEC_VP9: {
+		scnprintf(buf, buflen,
+			  "last_ref_ts: %llu, vb2_idx: %d\n"
+			  "golden_ref_ts: %llu, vb2_idx: %d\n"
+			  "alt_ref_ts: %llu, vb2_idx: %d\n",
+			  run->vp9.frame->last_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp9.frame->last_frame_ts, 0),
+			  run->vp9.frame->golden_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp9.frame->golden_frame_ts, 0),
+			  run->vp9.frame->alt_frame_ts,
+			  vb2_find_timestamp(cap_q, run->vp9.frame->alt_frame_ts, 0));
+		break;
+	}
+	case VISL_CODEC_H264: {
+		char entry[] = "dpb[%d]: %llu, vb2_index: %d\n";
+
+		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++) {
+			len = scnprintf(buf, buflen, entry, i,
+					run->h264.dpram->dpb[i].reference_ts,
+					vb2_find_timestamp(cap_q,
+							   run->h264.dpram->dpb[i].reference_ts,
+							   0));
+			buf += len;
+			buflen -= len;
+		}
+
+		break;
+	}
+	}
+}
+
+static char *visl_get_vb2_state(enum vb2_buffer_state state)
+{
+	switch (state) {
+	case VB2_BUF_STATE_DEQUEUED:
+		return "Dequeued";
+	case VB2_BUF_STATE_IN_REQUEST:
+		return "In request";
+	case VB2_BUF_STATE_PREPARING:
+		return "Preparing";
+	case VB2_BUF_STATE_QUEUED:
+		return "Queued";
+	case VB2_BUF_STATE_ACTIVE:
+		return "Active";
+	case VB2_BUF_STATE_DONE:
+		return "Done";
+	case VB2_BUF_STATE_ERROR:
+		return "Error";
+	default:
+		return "";
+	}
+}
+
+static int visl_fill_bytesused(struct vb2_v4l2_buffer *v4l2_vb2_buf, char *buf, size_t bufsz)
+{
+	int len = 0;
+	u32 i;
+
+	for (i = 0; i < v4l2_vb2_buf->vb2_buf.num_planes; i++)
+		len += scnprintf(buf + len, bufsz - len,
+				"bytesused[%u]: %u length[%u]: %u data_offset[%u]: %u",
+				i, v4l2_vb2_buf->planes[i].bytesused,
+				i, v4l2_vb2_buf->planes[i].length,
+				i, v4l2_vb2_buf->planes[i].data_offset);
+
+	return len;
+}
+
+static void visl_tpg_fill_sequence(struct visl_ctx *ctx,
+				   struct visl_run *run, char buf[], size_t bufsz)
+{
+	u32 stream_ms;
+
+	stream_ms = jiffies_to_msecs(get_jiffies_64() - ctx->capture_streamon_jiffies);
+
+	scnprintf(buf, bufsz,
+		  "stream time: %02d:%02d:%02d:%03d sequence:%u timestamp:%lld field:%s",
+		  (stream_ms / (60 * 60 * 1000)) % 24,
+		  (stream_ms / (60 * 1000)) % 60,
+		  (stream_ms / 1000) % 60,
+		  stream_ms % 1000,
+		  run->dst->sequence,
+		  run->dst->vb2_buf.timestamp,
+		  run->dst->field == V4L2_FIELD_TOP ? "top" :
+		  run->dst->field == V4L2_FIELD_BOTTOM ? "bottom" : "none");
+}
+
+static void visl_tpg_fill(struct visl_ctx *ctx, struct visl_run *run)
+{
+	u8 *basep[TPG_MAX_PLANES][2];
+	char *buf = ctx->tpg_str_buf;
+	char *tmp = buf;
+	char *line_str;
+	u32 line = 1;
+	const u32 line_height = 16;
+	u32 len;
+	struct vb2_queue *out_q = &ctx->fh.m2m_ctx->out_q_ctx.q;
+	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
+	struct v4l2_pix_format_mplane *coded_fmt = &ctx->coded_fmt.fmt.pix_mp;
+	struct v4l2_pix_format_mplane *decoded_fmt = &ctx->decoded_fmt.fmt.pix_mp;
+	u32 p;
+	u32 i;
+
+	for (p = 0; p < tpg_g_planes(&ctx->tpg); p++) {
+		void *vbuf = plane_vaddr(&ctx->tpg,
+					 &run->dst->vb2_buf, p,
+					 ctx->tpg.bytesperline,
+					 ctx->tpg.buf_height);
+
+		tpg_calc_text_basep(&ctx->tpg, basep, p, vbuf);
+		tpg_fill_plane_buffer(&ctx->tpg, 0, p, vbuf);
+	}
+
+	visl_tpg_fill_sequence(ctx, run, buf, TPG_STR_BUF_SZ);
+	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "");
+	line++;
+
+	visl_get_ref_frames(ctx, buf, TPG_STR_BUF_SZ, run);
+
+	while ((line_str = strsep(&tmp, "\n")) && strlen(line_str)) {
+		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, line_str);
+		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", line_str);
+	}
+
+	frame_dprintk(ctx->dev, run->dst->sequence, "");
+	line++;
+
+	scnprintf(buf,
+		  TPG_STR_BUF_SZ,
+		  "OUTPUT pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
+		  coded_fmt->pixelformat & 0xff,
+		  (coded_fmt->pixelformat >> 8) & 0xff,
+		  (coded_fmt->pixelformat >> 16) & 0xff,
+		  (coded_fmt->pixelformat >> 24) & 0xff,
+		  coded_fmt->width,
+		  coded_fmt->height,
+		  coded_fmt->num_planes);
+
+	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+
+	for (i = 0; i < coded_fmt->num_planes; i++) {
+		scnprintf(buf,
+			  TPG_STR_BUF_SZ,
+			  "plane[%d]: bytesperline: %d, sizeimage: %d",
+			  i,
+			  coded_fmt->plane_fmt[i].bytesperline,
+			  coded_fmt->plane_fmt[i].sizeimage);
+
+		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+	}
+
+	line++;
+	frame_dprintk(ctx->dev, run->dst->sequence, "");
+	scnprintf(buf, TPG_STR_BUF_SZ, "Output queue status:");
+	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+
+	len = 0;
+	for (i = 0; i < out_q->num_buffers; i++) {
+		char entry[] = "index: %u, state: %s, request_fd: %d, ";
+		u32 old_len = len;
+		char *q_status = visl_get_vb2_state(out_q->bufs[i]->state);
+
+		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
+				 entry, i, q_status,
+				 to_vb2_v4l2_buffer(out_q->bufs[i])->request_fd);
+
+		len += visl_fill_bytesused(to_vb2_v4l2_buffer(out_q->bufs[i]),
+					   &buf[len],
+					   TPG_STR_BUF_SZ - len);
+
+		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
+		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
+	}
+
+	line++;
+	frame_dprintk(ctx->dev, run->dst->sequence, "");
+
+	scnprintf(buf,
+		  TPG_STR_BUF_SZ,
+		  "CAPTURE pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
+		  decoded_fmt->pixelformat & 0xff,
+		  (decoded_fmt->pixelformat >> 8) & 0xff,
+		  (decoded_fmt->pixelformat >> 16) & 0xff,
+		  (decoded_fmt->pixelformat >> 24) & 0xff,
+		  decoded_fmt->width,
+		  decoded_fmt->height,
+		  decoded_fmt->num_planes);
+
+	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+
+	for (i = 0; i < decoded_fmt->num_planes; i++) {
+		scnprintf(buf,
+			  TPG_STR_BUF_SZ,
+			  "plane[%d]: bytesperline: %d, sizeimage: %d",
+			  i,
+			  decoded_fmt->plane_fmt[i].bytesperline,
+			  decoded_fmt->plane_fmt[i].sizeimage);
+
+		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+	}
+
+	line++;
+	frame_dprintk(ctx->dev, run->dst->sequence, "");
+	scnprintf(buf, TPG_STR_BUF_SZ, "Capture queue status:");
+	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
+	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
+
+	len = 0;
+	for (i = 0; i < cap_q->num_buffers; i++) {
+		u32 old_len = len;
+		char *q_status = visl_get_vb2_state(cap_q->bufs[i]->state);
+
+		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
+				 "index: %u, status: %s, timestamp: %llu, is_held: %d",
+				 cap_q->bufs[i]->index, q_status,
+				 cap_q->bufs[i]->timestamp,
+				 to_vb2_v4l2_buffer(cap_q->bufs[i])->is_held);
+
+		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
+		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
+	}
+}
+
+static void visl_trace_ctrls(struct visl_ctx *ctx, struct visl_run *run)
+{
+	int i;
+
+	switch (ctx->current_codec) {
+	default:
+	case VISL_CODEC_NONE:
+		break;
+	case VISL_CODEC_FWHT:
+		trace_v4l2_ctrl_fwht_params(run->fwht.params);
+		break;
+	case VISL_CODEC_MPEG2:
+		trace_v4l2_ctrl_mpeg2_sequence(run->mpeg2.seq);
+		trace_v4l2_ctrl_mpeg2_picture(run->mpeg2.pic);
+		trace_v4l2_ctrl_mpeg2_quantisation(run->mpeg2.quant);
+		break;
+	case VISL_CODEC_VP8:
+		trace_v4l2_ctrl_vp8_frame(run->vp8.frame);
+		trace_v4l2_ctrl_vp8_entropy(run->vp8.frame);
+		break;
+	case VISL_CODEC_VP9:
+		trace_v4l2_ctrl_vp9_frame(run->vp9.frame);
+		trace_v4l2_ctrl_vp9_compressed_hdr(run->vp9.probs);
+		trace_v4l2_ctrl_vp9_compressed_coeff(run->vp9.probs);
+		trace_v4l2_vp9_mv_probs(&run->vp9.probs->mv);
+		break;
+	case VISL_CODEC_H264:
+		trace_v4l2_ctrl_h264_sps(run->h264.sps);
+		trace_v4l2_ctrl_h264_pps(run->h264.pps);
+		trace_v4l2_ctrl_h264_scaling_matrix(run->h264.sm);
+		trace_v4l2_ctrl_h264_slice_params(run->h264.spram);
+
+		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list0); i++)
+			trace_v4l2_h264_ref_pic_list0(&run->h264.spram->ref_pic_list0[i], i);
+		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list1); i++)
+			trace_v4l2_h264_ref_pic_list1(&run->h264.spram->ref_pic_list1[i], i);
+
+		trace_v4l2_ctrl_h264_decode_params(run->h264.dpram);
+
+		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++)
+			trace_v4l2_h264_dpb_entry(&run->h264.dpram->dpb[i], i);
+		break;
+	}
+}
+
+void visl_device_run(void *priv)
+{
+	struct visl_ctx *ctx = priv;
+	struct visl_run run = {};
+	struct media_request *src_req;
+
+	run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
+	run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
+
+	/* Apply request(s) controls if needed. */
+	src_req = run.src->vb2_buf.req_obj.req;
+
+	if (src_req)
+		v4l2_ctrl_request_setup(src_req, &ctx->hdl);
+
+	v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
+	run.dst->sequence = ctx->q_data[V4L2_M2M_DST].sequence++;
+	run.src->sequence = ctx->q_data[V4L2_M2M_SRC].sequence++;
+	run.dst->field = ctx->decoded_fmt.fmt.pix_mp.field;
+
+	switch (ctx->current_codec) {
+	default:
+	case VISL_CODEC_NONE:
+		break;
+	case VISL_CODEC_FWHT:
+		run.fwht.params = visl_find_control_data(ctx, V4L2_CID_STATELESS_FWHT_PARAMS);
+		break;
+	case VISL_CODEC_MPEG2:
+		run.mpeg2.seq = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_SEQUENCE);
+		run.mpeg2.pic = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_PICTURE);
+		run.mpeg2.quant = visl_find_control_data(ctx,
+							 V4L2_CID_STATELESS_MPEG2_QUANTISATION);
+		break;
+	case VISL_CODEC_VP8:
+		run.vp8.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP8_FRAME);
+		break;
+	case VISL_CODEC_VP9:
+		run.vp9.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_FRAME);
+		run.vp9.probs = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_COMPRESSED_HDR);
+		break;
+	case VISL_CODEC_H264:
+		run.h264.sps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SPS);
+		run.h264.pps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PPS);
+		run.h264.sm = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SCALING_MATRIX);
+		run.h264.spram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SLICE_PARAMS);
+		run.h264.dpram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_DECODE_PARAMS);
+		run.h264.pwht = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PRED_WEIGHTS);
+		break;
+	}
+
+	frame_dprintk(ctx->dev, run.dst->sequence,
+		      "Got OUTPUT buffer sequence %d, timestamp %llu\n",
+		      run.src->sequence, run.src->vb2_buf.timestamp);
+
+	frame_dprintk(ctx->dev, run.dst->sequence,
+		      "Got CAPTURE buffer sequence %d, timestamp %llu\n",
+		      run.dst->sequence, run.dst->vb2_buf.timestamp);
+
+	visl_tpg_fill(ctx, &run);
+	visl_trace_ctrls(ctx, &run);
+
+	if (bitstream_trace_frame_start > -1 &&
+	    run.dst->sequence >= bitstream_trace_frame_start &&
+	    run.dst->sequence < bitstream_trace_frame_start + bitstream_trace_nframes)
+		visl_trace_bitstream(ctx, &run);
+
+	/* Complete request(s) controls if needed. */
+	if (src_req)
+		v4l2_ctrl_request_complete(src_req, &ctx->hdl);
+
+	if (visl_transtime_ms)
+		usleep_range(visl_transtime_ms * 1000, 2 * visl_transtime_ms * 1000);
+
+	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev,
+					 ctx->fh.m2m_ctx, VB2_BUF_STATE_DONE);
+}
diff --git a/drivers/media/test-drivers/visl/visl-dec.h b/drivers/media/test-drivers/visl/visl-dec.h
new file mode 100644
index 000000000000..56a550a8f747
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-dec.h
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#ifndef _VISL_DEC_H_
+#define _VISL_DEC_H_
+
+#include "visl.h"
+
+struct visl_av1_run {
+	const struct v4l2_ctrl_av1_sequence *sequence;
+	const struct v4l2_ctrl_av1_frame_header *frame_header;
+	const struct v4l2_ctrl_av1_tile_group *tile_group;
+	const struct v4l2_ctrl_av1_tile_group_entry *tg_entries;
+	const struct v4l2_ctrl_av1_film_grain *film_grain;
+};
+
+struct visl_fwht_run {
+	const struct v4l2_ctrl_fwht_params *params;
+};
+
+struct visl_mpeg2_run {
+	const struct v4l2_ctrl_mpeg2_sequence *seq;
+	const struct v4l2_ctrl_mpeg2_picture *pic;
+	const struct v4l2_ctrl_mpeg2_quantisation *quant;
+};
+
+struct visl_vp8_run {
+	const struct v4l2_ctrl_vp8_frame *frame;
+};
+
+struct visl_vp9_run {
+	const struct v4l2_ctrl_vp9_frame *frame;
+	const struct v4l2_ctrl_vp9_compressed_hdr *probs;
+};
+
+struct visl_h264_run {
+	const struct v4l2_ctrl_h264_sps *sps;
+	const struct v4l2_ctrl_h264_pps *pps;
+	const struct v4l2_ctrl_h264_scaling_matrix *sm;
+	const struct v4l2_ctrl_h264_slice_params *spram;
+	const struct v4l2_ctrl_h264_decode_params *dpram;
+	const struct v4l2_ctrl_h264_pred_weights *pwht;
+};
+
+struct visl_run {
+	struct vb2_v4l2_buffer	*src;
+	struct vb2_v4l2_buffer	*dst;
+
+	union {
+		struct visl_av1_run	av1;
+		struct visl_fwht_run	fwht;
+		struct visl_mpeg2_run	mpeg2;
+		struct visl_vp8_run	vp8;
+		struct visl_vp9_run	vp9;
+		struct visl_h264_run	h264;
+	};
+};
+
+int visl_dec_start(struct visl_ctx *ctx);
+int visl_dec_stop(struct visl_ctx *ctx);
+int visl_job_ready(void *priv);
+void visl_device_run(void *priv);
+
+#endif /* _VISL_DEC_H_ */
diff --git a/drivers/media/test-drivers/visl/visl-trace-fwht.h b/drivers/media/test-drivers/visl/visl-trace-fwht.h
new file mode 100644
index 000000000000..76034449e5b7
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-fwht.h
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#if !defined(_VISL_TRACE_FWHT_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VISL_TRACE_FWHT_H_
+
+#include <linux/tracepoint.h>
+#include "visl.h"
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM visl_fwht_controls
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_fwht_params_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
+	TP_ARGS(p),
+	TP_STRUCT__entry(
+			 __field(u64, backward_ref_ts)
+			 __field(u32, version)
+			 __field(u32, width)
+			 __field(u32, height)
+			 __field(u32, flags)
+			 __field(u32, colorspace)
+			 __field(u32, xfer_func)
+			 __field(u32, ycbcr_enc)
+			 __field(u32, quantization)
+			 ),
+	TP_fast_assign(
+		       __entry->backward_ref_ts = p->backward_ref_ts;
+		       __entry->version = p->version;
+		       __entry->width = p->width;
+		       __entry->height = p->height;
+		       __entry->flags = p->flags;
+		       __entry->colorspace = p->colorspace;
+		       __entry->xfer_func = p->xfer_func;
+		       __entry->ycbcr_enc = p->ycbcr_enc;
+		       __entry->quantization = p->quantization;
+		       ),
+	TP_printk("backward_ref_ts %llu version %u width %u height %u flags %s colorspace %u xfer_func %u ycbcr_enc %u quantization %u",
+		  __entry->backward_ref_ts, __entry->version, __entry->width, __entry->height,
+		  __print_flags(__entry->flags, "|",
+		  {V4L2_FWHT_FL_IS_INTERLACED, "IS_INTERLACED"},
+		  {V4L2_FWHT_FL_IS_BOTTOM_FIRST, "IS_BOTTOM_FIRST"},
+		  {V4L2_FWHT_FL_IS_ALTERNATE, "IS_ALTERNATE"},
+		  {V4L2_FWHT_FL_IS_BOTTOM_FIELD, "IS_BOTTOM_FIELD"},
+		  {V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED, "LUMA_IS_UNCOMPRESSED"},
+		  {V4L2_FWHT_FL_CB_IS_UNCOMPRESSED, "CB_IS_UNCOMPRESSED"},
+		  {V4L2_FWHT_FL_CR_IS_UNCOMPRESSED, "CR_IS_UNCOMPRESSED"},
+		  {V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED, "ALPHA_IS_UNCOMPRESSED"},
+		  {V4L2_FWHT_FL_I_FRAME, "I_FRAME"},
+		  {V4L2_FWHT_FL_PIXENC_HSV, "PIXENC_HSV"},
+		  {V4L2_FWHT_FL_PIXENC_RGB, "PIXENC_RGB"},
+		  {V4L2_FWHT_FL_PIXENC_YUV, "PIXENC_YUV"}),
+		  __entry->colorspace, __entry->xfer_func, __entry->ycbcr_enc,
+		  __entry->quantization)
+);
+
+DEFINE_EVENT(v4l2_ctrl_fwht_params_tmpl, v4l2_ctrl_fwht_params,
+	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
+	TP_ARGS(p)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
+#define TRACE_INCLUDE_FILE visl-trace-fwht
+#include <trace/define_trace.h>
diff --git a/drivers/media/test-drivers/visl/visl-trace-h264.h b/drivers/media/test-drivers/visl/visl-trace-h264.h
new file mode 100644
index 000000000000..0026a0dd5ce9
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-h264.h
@@ -0,0 +1,349 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#if !defined(_VISL_TRACE_H264_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VISL_TRACE_H264_H_
+
+#include <linux/tracepoint.h>
+#include "visl.h"
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM visl_h264_controls
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_sps_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
+	TP_ARGS(s),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_sps, s)),
+	TP_fast_assign(__entry->s = *s),
+	TP_printk("\nprofile_idc %u\n"
+		  "constraint_set_flags %s\n"
+		  "level_idc %u\n"
+		  "seq_parameter_set_id %u\n"
+		  "chroma_format_idc %u\n"
+		  "bit_depth_luma_minus8 %u\n"
+		  "bit_depth_chroma_minus8 %u\n"
+		  "log2_max_frame_num_minus4 %u\n"
+		  "pic_order_cnt_type %u\n"
+		  "log2_max_pic_order_cnt_lsb_minus4 %u\n"
+		  "max_num_ref_frames %u\n"
+		  "num_ref_frames_in_pic_order_cnt_cycle %u\n"
+		  "offset_for_ref_frame %s\n"
+		  "offset_for_non_ref_pic %d\n"
+		  "offset_for_top_to_bottom_field %d\n"
+		  "pic_width_in_mbs_minus1 %u\n"
+		  "pic_height_in_map_units_minus1 %u\n"
+		  "flags %s",
+		  __entry->s.profile_idc,
+		  __print_flags(__entry->s.constraint_set_flags, "|",
+		  {V4L2_H264_SPS_CONSTRAINT_SET0_FLAG, "CONSTRAINT_SET0_FLAG"},
+		  {V4L2_H264_SPS_CONSTRAINT_SET1_FLAG, "CONSTRAINT_SET1_FLAG"},
+		  {V4L2_H264_SPS_CONSTRAINT_SET2_FLAG, "CONSTRAINT_SET2_FLAG"},
+		  {V4L2_H264_SPS_CONSTRAINT_SET3_FLAG, "CONSTRAINT_SET3_FLAG"},
+		  {V4L2_H264_SPS_CONSTRAINT_SET4_FLAG, "CONSTRAINT_SET4_FLAG"},
+		  {V4L2_H264_SPS_CONSTRAINT_SET5_FLAG, "CONSTRAINT_SET5_FLAG"}),
+		  __entry->s.level_idc,
+		  __entry->s.seq_parameter_set_id,
+		  __entry->s.chroma_format_idc,
+		  __entry->s.bit_depth_luma_minus8,
+		  __entry->s.bit_depth_chroma_minus8,
+		  __entry->s.log2_max_frame_num_minus4,
+		  __entry->s.pic_order_cnt_type,
+		  __entry->s.log2_max_pic_order_cnt_lsb_minus4,
+		  __entry->s.max_num_ref_frames,
+		  __entry->s.num_ref_frames_in_pic_order_cnt_cycle,
+		  __print_array(__entry->s.offset_for_ref_frame,
+		  		ARRAY_SIZE(__entry->s.offset_for_ref_frame),
+		  		sizeof(__entry->s.offset_for_ref_frame[0])),
+		  __entry->s.offset_for_non_ref_pic,
+		  __entry->s.offset_for_top_to_bottom_field,
+		  __entry->s.pic_width_in_mbs_minus1,
+		  __entry->s.pic_height_in_map_units_minus1,
+		  __print_flags(__entry->s.flags, "|",
+		  {V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE, "SEPARATE_COLOUR_PLANE"},
+		  {V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS, "QPPRIME_Y_ZERO_TRANSFORM_BYPASS"},
+		  {V4L2_H264_SPS_FLAG_DELTA_PIC_ORDER_ALWAYS_ZERO, "DELTA_PIC_ORDER_ALWAYS_ZERO"},
+		  {V4L2_H264_SPS_FLAG_GAPS_IN_FRAME_NUM_VALUE_ALLOWED, "GAPS_IN_FRAME_NUM_VALUE_ALLOWED"},
+		  {V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY, "FRAME_MBS_ONLY"},
+		  {V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD, "MB_ADAPTIVE_FRAME_FIELD"},
+		  {V4L2_H264_SPS_FLAG_DIRECT_8X8_INFERENCE, "DIRECT_8X8_INFERENCE"}
+		  ))
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pps_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
+	TP_ARGS(p),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pps, p)),
+	TP_fast_assign(__entry->p = *p),
+	TP_printk("\npic_parameter_set_id %u\n"
+		  "seq_parameter_set_id %u\n"
+		  "num_slice_groups_minus1 %u\n"
+		  "num_ref_idx_l0_default_active_minus1 %u\n"
+		  "num_ref_idx_l1_default_active_minus1 %u\n"
+		  "weighted_bipred_idc %u\n"
+		  "pic_init_qp_minus26 %d\n"
+		  "pic_init_qs_minus26 %d\n"
+		  "chroma_qp_index_offset %d\n"
+		  "second_chroma_qp_index_offset %d\n"
+		  "flags %s",
+		  __entry->p.pic_parameter_set_id,
+		  __entry->p.seq_parameter_set_id,
+		  __entry->p.num_slice_groups_minus1,
+		  __entry->p.num_ref_idx_l0_default_active_minus1,
+		  __entry->p.num_ref_idx_l1_default_active_minus1,
+		  __entry->p.weighted_bipred_idc,
+		  __entry->p.pic_init_qp_minus26,
+		  __entry->p.pic_init_qs_minus26,
+		  __entry->p.chroma_qp_index_offset,
+		  __entry->p.second_chroma_qp_index_offset,
+		  __print_flags(__entry->p.flags, "|",
+		  {V4L2_H264_PPS_FLAG_ENTROPY_CODING_MODE, "ENTROPY_CODING_MODE"},
+		  {V4L2_H264_PPS_FLAG_BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT, "BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT"},
+		  {V4L2_H264_PPS_FLAG_WEIGHTED_PRED, "WEIGHTED_PRED"},
+		  {V4L2_H264_PPS_FLAG_DEBLOCKING_FILTER_CONTROL_PRESENT, "DEBLOCKING_FILTER_CONTROL_PRESENT"},
+		  {V4L2_H264_PPS_FLAG_CONSTRAINED_INTRA_PRED, "CONSTRAINED_INTRA_PRED"},
+		  {V4L2_H264_PPS_FLAG_REDUNDANT_PIC_CNT_PRESENT, "REDUNDANT_PIC_CNT_PRESENT"},
+		  {V4L2_H264_PPS_FLAG_TRANSFORM_8X8_MODE, "TRANSFORM_8X8_MODE"},
+		  {V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT, "SCALING_MATRIX_PRESENT"}
+		  ))
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_scaling_matrix_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
+	TP_ARGS(s),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_scaling_matrix, s)),
+	TP_fast_assign(__entry->s = *s),
+	TP_printk("\nscaling_list_4x4 {%s}\nscaling_list_8x8 {%s}",
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->s.scaling_list_4x4,
+				   sizeof(__entry->s.scaling_list_4x4),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->s.scaling_list_8x8,
+				   sizeof(__entry->s.scaling_list_8x8),
+				   false)
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pred_weights_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
+	TP_ARGS(p),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pred_weights, p)),
+	TP_fast_assign(__entry->p = *p),
+	TP_printk("\nluma_log2_weight_denom %u\n"
+		  "chroma_log2_weight_denom %u\n"
+		  "weight_factor[0].luma_weight %s\n"
+		  "weight_factor[0].luma_offset %s\n"
+		  "weight_factor[0].chroma_weight {%s}\n"
+		  "weight_factor[0].chroma_offset {%s}\n"
+		  "weight_factor[1].luma_weight %s\n"
+		  "weight_factor[1].luma_offset %s\n"
+		  "weight_factor[1].chroma_weight {%s}\n"
+		  "weight_factor[1].chroma_offset {%s}\n",
+		  __entry->p.luma_log2_weight_denom,
+		  __entry->p.chroma_log2_weight_denom,
+		  __print_array(__entry->p.weight_factors[0].luma_weight,
+		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_weight),
+		  		sizeof(__entry->p.weight_factors[0].luma_weight[0])),
+		  __print_array(__entry->p.weight_factors[0].luma_offset,
+		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_offset),
+		  		sizeof(__entry->p.weight_factors[0].luma_offset[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.weight_factors[0].chroma_weight,
+				   sizeof(__entry->p.weight_factors[0].chroma_weight),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.weight_factors[0].chroma_offset,
+				   sizeof(__entry->p.weight_factors[0].chroma_offset),
+				   false),
+		  __print_array(__entry->p.weight_factors[1].luma_weight,
+		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_weight),
+		  		sizeof(__entry->p.weight_factors[1].luma_weight[0])),
+		  __print_array(__entry->p.weight_factors[1].luma_offset,
+		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_offset),
+		  		sizeof(__entry->p.weight_factors[1].luma_offset[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.weight_factors[1].chroma_weight,
+				   sizeof(__entry->p.weight_factors[1].chroma_weight),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.weight_factors[1].chroma_offset,
+				   sizeof(__entry->p.weight_factors[1].chroma_offset),
+				   false)
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_slice_params_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
+	TP_ARGS(s),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_slice_params, s)),
+	TP_fast_assign(__entry->s = *s),
+	TP_printk("\nheader_bit_size %u\n"
+		  "first_mb_in_slice %u\n"
+		  "slice_type %s\n"
+		  "colour_plane_id %u\n"
+		  "redundant_pic_cnt %u\n"
+		  "cabac_init_idc %u\n"
+		  "slice_qp_delta %d\n"
+		  "slice_qs_delta %d\n"
+		  "disable_deblocking_filter_idc %u\n"
+		  "slice_alpha_c0_offset_div2 %d\n"
+		  "slice_beta_offset_div2 %d\n"
+		  "num_ref_idx_l0_active_minus1 %u\n"
+		  "num_ref_idx_l1_active_minus1 %u\n"
+		  "flags %s",
+		  __entry->s.header_bit_size,
+		  __entry->s.first_mb_in_slice,
+		  __print_symbolic(__entry->s.slice_type,
+		  {V4L2_H264_SLICE_TYPE_P, "P"},
+		  {V4L2_H264_SLICE_TYPE_B, "B"},
+		  {V4L2_H264_SLICE_TYPE_I, "I"},
+		  {V4L2_H264_SLICE_TYPE_SP, "SP"},
+		  {V4L2_H264_SLICE_TYPE_SI, "SI"}),
+		  __entry->s.colour_plane_id,
+		  __entry->s.redundant_pic_cnt,
+		  __entry->s.cabac_init_idc,
+		  __entry->s.slice_qp_delta,
+		  __entry->s.slice_qs_delta,
+		  __entry->s.disable_deblocking_filter_idc,
+		  __entry->s.slice_alpha_c0_offset_div2,
+		  __entry->s.slice_beta_offset_div2,
+		  __entry->s.num_ref_idx_l0_active_minus1,
+		  __entry->s.num_ref_idx_l1_active_minus1,
+		  __print_flags(__entry->s.flags, "|",
+		  {V4L2_H264_SLICE_FLAG_DIRECT_SPATIAL_MV_PRED, "DIRECT_SPATIAL_MV_PRED"},
+		  {V4L2_H264_SLICE_FLAG_SP_FOR_SWITCH, "SP_FOR_SWITCH"})
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_h264_reference_tmpl,
+	TP_PROTO(const struct v4l2_h264_reference *r, int i),
+	TP_ARGS(r, i),
+	TP_STRUCT__entry(__field_struct(struct v4l2_h264_reference, r)
+			 __field(int, i)),
+	TP_fast_assign(__entry->r = *r; __entry->i = i;),
+	TP_printk("[%d]: fields %s index %u",
+		  __entry->i,
+		  __print_flags(__entry->r.fields, "|",
+		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
+		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
+		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
+		  __entry->r.index
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_h264_decode_params_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
+	TP_ARGS(d),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_decode_params, d)),
+	TP_fast_assign(__entry->d = *d),
+	TP_printk("\nnal_ref_idc %u\n"
+		  "frame_num %u\n"
+		  "top_field_order_cnt %d\n"
+		  "bottom_field_order_cnt %d\n"
+		  "idr_pic_id %u\n"
+		  "pic_order_cnt_lsb %u\n"
+		  "delta_pic_order_cnt_bottom %d\n"
+		  "delta_pic_order_cnt0 %d\n"
+		  "delta_pic_order_cnt1 %d\n"
+		  "dec_ref_pic_marking_bit_size %u\n"
+		  "pic_order_cnt_bit_size %u\n"
+		  "slice_group_change_cycle %u\n"
+		  "flags %s\n",
+		  __entry->d.nal_ref_idc,
+		  __entry->d.frame_num,
+		  __entry->d.top_field_order_cnt,
+		  __entry->d.bottom_field_order_cnt,
+		  __entry->d.idr_pic_id,
+		  __entry->d.pic_order_cnt_lsb,
+		  __entry->d.delta_pic_order_cnt_bottom,
+		  __entry->d.delta_pic_order_cnt0,
+		  __entry->d.delta_pic_order_cnt1,
+		  __entry->d.dec_ref_pic_marking_bit_size,
+		  __entry->d.pic_order_cnt_bit_size,
+		  __entry->d.slice_group_change_cycle,
+		  __print_flags(__entry->d.flags, "|",
+		  {V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC, "IDR_PIC"},
+		  {V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC, "FIELD_PIC"},
+		  {V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD, "BOTTOM_FIELD"},
+		  {V4L2_H264_DECODE_PARAM_FLAG_PFRAME, "PFRAME"},
+		  {V4L2_H264_DECODE_PARAM_FLAG_BFRAME, "BFRAME"})
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_h264_dpb_entry_tmpl,
+	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
+	TP_ARGS(e, i),
+	TP_STRUCT__entry(__field_struct(struct v4l2_h264_dpb_entry, e)
+			 __field(int, i)),
+	TP_fast_assign(__entry->e = *e; __entry->i = i;),
+	TP_printk("[%d]: reference_ts %llu, pic_num %u frame_num %u fields %s "
+		  "top_field_order_cnt %d bottom_field_order_cnt %d flags %s",
+		  __entry->i,
+		  __entry->e.reference_ts,
+		  __entry->e.pic_num,
+		  __entry->e.frame_num,
+		  __print_flags(__entry->e.fields, "|",
+		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
+		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
+		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
+		  __entry->e.top_field_order_cnt,
+		  __entry->e.bottom_field_order_cnt,
+		  __print_flags(__entry->e.flags, "|",
+		  {V4L2_H264_DPB_ENTRY_FLAG_VALID, "VALID"},
+		  {V4L2_H264_DPB_ENTRY_FLAG_ACTIVE, "ACTIVE"},
+		  {V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM, "LONG_TERM"},
+		  {V4L2_H264_DPB_ENTRY_FLAG_FIELD, "FIELD"})
+
+	)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_sps_tmpl, v4l2_ctrl_h264_sps,
+	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
+	TP_ARGS(s)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_pps_tmpl, v4l2_ctrl_h264_pps,
+	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
+	TP_ARGS(p)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_scaling_matrix_tmpl, v4l2_ctrl_h264_scaling_matrix,
+	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
+	TP_ARGS(s)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_pred_weights_tmpl, v4l2_ctrl_h264_pred_weights,
+	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
+	TP_ARGS(p)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_slice_params_tmpl, v4l2_ctrl_h264_slice_params,
+	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
+	TP_ARGS(s)
+);
+
+DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list0,
+	TP_PROTO(const struct v4l2_h264_reference *r, int i),
+	TP_ARGS(r, i)
+);
+
+DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list1,
+	TP_PROTO(const struct v4l2_h264_reference *r, int i),
+	TP_ARGS(r, i)
+);
+
+DEFINE_EVENT(v4l2_ctrl_h264_decode_params_tmpl, v4l2_ctrl_h264_decode_params,
+	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
+	TP_ARGS(d)
+);
+
+DEFINE_EVENT(v4l2_h264_dpb_entry_tmpl, v4l2_h264_dpb_entry,
+	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
+	TP_ARGS(e, i)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
+#define TRACE_INCLUDE_FILE visl-trace-h264
+#include <trace/define_trace.h>
diff --git a/drivers/media/test-drivers/visl/visl-trace-mpeg2.h b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
new file mode 100644
index 000000000000..889a3ba56502
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#if !defined(_VISL_TRACE_MPEG2_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VISL_TRACE_MPEG2_H_
+
+#include <linux/tracepoint.h>
+#include "visl.h"
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM visl_mpeg2_controls
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_seq_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
+	TP_ARGS(s),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_sequence, s)),
+	TP_fast_assign(__entry->s = *s;),
+	TP_printk("\nhorizontal_size %u\nvertical_size %u\nvbv_buffer_size %u\n"
+		  "profile_and_level_indication %u\nchroma_format %u\nflags %s\n",
+		  __entry->s.horizontal_size,
+		  __entry->s.vertical_size,
+		  __entry->s.vbv_buffer_size,
+		  __entry->s.profile_and_level_indication,
+		  __entry->s.chroma_format,
+		  __print_flags(__entry->s.flags, "|",
+		  {V4L2_MPEG2_SEQ_FLAG_PROGRESSIVE, "PROGRESSIVE"})
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_pic_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
+	TP_ARGS(p),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_picture, p)),
+	TP_fast_assign(__entry->p = *p;),
+	TP_printk("\nbackward_ref_ts %llu\nforward_ref_ts %llu\nflags %s\nf_code {%s}\n"
+		  "picture_coding_type %u\npicture_structure %u\nintra_dc_precision %u\n",
+		  __entry->p.backward_ref_ts,
+		  __entry->p.forward_ref_ts,
+		  __print_flags(__entry->p.flags, "|",
+		  {V4L2_MPEG2_PIC_FLAG_TOP_FIELD_FIRST, "TOP_FIELD_FIRST"},
+		  {V4L2_MPEG2_PIC_FLAG_FRAME_PRED_DCT, "FRAME_PRED_DCT"},
+		  {V4L2_MPEG2_PIC_FLAG_CONCEALMENT_MV, "CONCEALMENT_MV"},
+		  {V4L2_MPEG2_PIC_FLAG_Q_SCALE_TYPE, "Q_SCALE_TYPE"},
+		  {V4L2_MPEG2_PIC_FLAG_INTRA_VLC, "INTRA_VLC"},
+		  {V4L2_MPEG2_PIC_FLAG_ALT_SCAN, "ALT_SCAN"},
+		  {V4L2_MPEG2_PIC_FLAG_REPEAT_FIRST, "REPEAT_FIRST"},
+		  {V4L2_MPEG2_PIC_FLAG_PROGRESSIVE, "PROGRESSIVE"}),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.f_code,
+				   sizeof(__entry->p.f_code),
+				   false),
+		  __entry->p.picture_coding_type,
+		  __entry->p.picture_structure,
+		  __entry->p.intra_dc_precision
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_quant_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
+	TP_ARGS(q),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_quantisation, q)),
+	TP_fast_assign(__entry->q = *q;),
+	TP_printk("\nintra_quantiser_matrix %s\nnon_intra_quantiser_matrix %s\n"
+		  "chroma_intra_quantiser_matrix %s\nchroma_non_intra_quantiser_matrix %s\n",
+		  __print_array(__entry->q.intra_quantiser_matrix,
+				ARRAY_SIZE(__entry->q.intra_quantiser_matrix),
+				sizeof(__entry->q.intra_quantiser_matrix[0])),
+		  __print_array(__entry->q.non_intra_quantiser_matrix,
+				ARRAY_SIZE(__entry->q.non_intra_quantiser_matrix),
+				sizeof(__entry->q.non_intra_quantiser_matrix[0])),
+		  __print_array(__entry->q.chroma_intra_quantiser_matrix,
+				ARRAY_SIZE(__entry->q.chroma_intra_quantiser_matrix),
+				sizeof(__entry->q.chroma_intra_quantiser_matrix[0])),
+		  __print_array(__entry->q.chroma_non_intra_quantiser_matrix,
+				ARRAY_SIZE(__entry->q.chroma_non_intra_quantiser_matrix),
+				sizeof(__entry->q.chroma_non_intra_quantiser_matrix[0]))
+		  )
+);
+
+DEFINE_EVENT(v4l2_ctrl_mpeg2_seq_tmpl, v4l2_ctrl_mpeg2_sequence,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
+	TP_ARGS(s)
+);
+
+DEFINE_EVENT(v4l2_ctrl_mpeg2_pic_tmpl, v4l2_ctrl_mpeg2_picture,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
+	TP_ARGS(p)
+);
+
+DEFINE_EVENT(v4l2_ctrl_mpeg2_quant_tmpl, v4l2_ctrl_mpeg2_quantisation,
+	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
+	TP_ARGS(q)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
+#define TRACE_INCLUDE_FILE visl-trace-mpeg2
+#include <trace/define_trace.h>
diff --git a/drivers/media/test-drivers/visl/visl-trace-points.c b/drivers/media/test-drivers/visl/visl-trace-points.c
new file mode 100644
index 000000000000..6aa98f90c20a
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-points.c
@@ -0,0 +1,9 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "visl.h"
+
+#define CREATE_TRACE_POINTS
+#include "visl-trace-fwht.h"
+#include "visl-trace-mpeg2.h"
+#include "visl-trace-vp8.h"
+#include "visl-trace-vp9.h"
+#include "visl-trace-h264.h"
diff --git a/drivers/media/test-drivers/visl/visl-trace-vp8.h b/drivers/media/test-drivers/visl/visl-trace-vp8.h
new file mode 100644
index 000000000000..18c610ea18ab
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-vp8.h
@@ -0,0 +1,156 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#if !defined(_VISL_TRACE_VP8_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VISL_TRACE_VP8_H_
+
+#include <linux/tracepoint.h>
+#include "visl.h"
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM visl_vp8_controls
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_entropy_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
+	TP_ARGS(f),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
+	TP_fast_assign(__entry->f = *f;),
+	TP_printk("\nentropy.coeff_probs {%s}\n"
+		  "entropy.y_mode_probs %s\n"
+		  "entropy.uv_mode_probs %s\n"
+		  "entropy.mv_probs {%s}",
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->f.entropy.coeff_probs,
+				   sizeof(__entry->f.entropy.coeff_probs),
+				   false),
+		  __print_array(__entry->f.entropy.y_mode_probs,
+				ARRAY_SIZE(__entry->f.entropy.y_mode_probs),
+				sizeof(__entry->f.entropy.y_mode_probs[0])),
+		  __print_array(__entry->f.entropy.uv_mode_probs,
+				ARRAY_SIZE(__entry->f.entropy.uv_mode_probs),
+				sizeof(__entry->f.entropy.uv_mode_probs[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->f.entropy.mv_probs,
+				   sizeof(__entry->f.entropy.mv_probs),
+				   false)
+		  )
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_frame_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
+	TP_ARGS(f),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
+	TP_fast_assign(__entry->f = *f;),
+	TP_printk("\nsegment.quant_update %s\n"
+		  "segment.lf_update %s\n"
+		  "segment.segment_probs %s\n"
+		  "segment.flags %s\n"
+		  "lf.ref_frm_delta %s\n"
+		  "lf.mb_mode_delta %s\n"
+		  "lf.sharpness_level %u\n"
+		  "lf.level %u\n"
+		  "lf.flags %s\n"
+		  "quant.y_ac_qi %u\n"
+		  "quant.y_dc_delta %d\n"
+		  "quant.y2_dc_delta %d\n"
+		  "quant.y2_ac_delta %d\n"
+		  "quant.uv_dc_delta %d\n"
+		  "quant.uv_ac_delta %d\n"
+		  "coder_state.range %u\n"
+		  "coder_state.value %u\n"
+		  "coder_state.bit_count %u\n"
+		  "width %u\n"
+		  "height %u\n"
+		  "horizontal_scale %u\n"
+		  "vertical_scale %u\n"
+		  "version %u\n"
+		  "prob_skip_false %u\n"
+		  "prob_intra %u\n"
+		  "prob_last %u\n"
+		  "prob_gf %u\n"
+		  "num_dct_parts %u\n"
+		  "first_part_size %u\n"
+		  "first_part_header_bits %u\n"
+		  "dct_part_sizes %s\n"
+		  "last_frame_ts %llu\n"
+		  "golden_frame_ts %llu\n"
+		  "alt_frame_ts %llu\n"
+		  "flags %s",
+		  __print_array(__entry->f.segment.quant_update,
+				ARRAY_SIZE(__entry->f.segment.quant_update),
+				sizeof(__entry->f.segment.quant_update[0])),
+		  __print_array(__entry->f.segment.lf_update,
+				ARRAY_SIZE(__entry->f.segment.lf_update),
+				sizeof(__entry->f.segment.lf_update[0])),
+		  __print_array(__entry->f.segment.segment_probs,
+				ARRAY_SIZE(__entry->f.segment.segment_probs),
+				sizeof(__entry->f.segment.segment_probs[0])),
+		  __print_flags(__entry->f.segment.flags, "|",
+		  {V4L2_VP8_SEGMENT_FLAG_ENABLED, "SEGMENT_ENABLED"},
+		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_MAP, "SEGMENT_UPDATE_MAP"},
+		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_FEATURE_DATA, "SEGMENT_UPDATE_FEATURE_DATA"},
+		  {V4L2_VP8_SEGMENT_FLAG_DELTA_VALUE_MODE, "SEGMENT_DELTA_VALUE_MODE"}),
+		  __print_array(__entry->f.lf.ref_frm_delta,
+				ARRAY_SIZE(__entry->f.lf.ref_frm_delta),
+				sizeof(__entry->f.lf.ref_frm_delta[0])),
+		  __print_array(__entry->f.lf.mb_mode_delta,
+				ARRAY_SIZE(__entry->f.lf.mb_mode_delta),
+				sizeof(__entry->f.lf.mb_mode_delta[0])),
+		  __entry->f.lf.sharpness_level,
+		  __entry->f.lf.level,
+		  __print_flags(__entry->f.lf.flags, "|",
+		  {V4L2_VP8_LF_ADJ_ENABLE, "LF_ADJ_ENABLED"},
+		  {V4L2_VP8_LF_DELTA_UPDATE, "LF_DELTA_UPDATE"},
+		  {V4L2_VP8_LF_FILTER_TYPE_SIMPLE, "LF_FILTER_TYPE_SIMPLE"}),
+		  __entry->f.quant.y_ac_qi,
+		  __entry->f.quant.y_dc_delta,
+		  __entry->f.quant.y2_dc_delta,
+		  __entry->f.quant.y2_ac_delta,
+		  __entry->f.quant.uv_dc_delta,
+		  __entry->f.quant.uv_ac_delta,
+		  __entry->f.coder_state.range,
+		  __entry->f.coder_state.value,
+		  __entry->f.coder_state.bit_count,
+		  __entry->f.width,
+		  __entry->f.height,
+		  __entry->f.horizontal_scale,
+		  __entry->f.vertical_scale,
+		  __entry->f.version,
+		  __entry->f.prob_skip_false,
+		  __entry->f.prob_intra,
+		  __entry->f.prob_last,
+		  __entry->f.prob_gf,
+		  __entry->f.num_dct_parts,
+		  __entry->f.first_part_size,
+		  __entry->f.first_part_header_bits,
+		  __print_array(__entry->f.dct_part_sizes,
+				ARRAY_SIZE(__entry->f.dct_part_sizes),
+				sizeof(__entry->f.dct_part_sizes[0])),
+		  __entry->f.last_frame_ts,
+		  __entry->f.golden_frame_ts,
+		  __entry->f.alt_frame_ts,
+		  __print_flags(__entry->f.flags, "|",
+		  {V4L2_VP8_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
+		  {V4L2_VP8_FRAME_FLAG_EXPERIMENTAL, "EXPERIMENTAL"},
+		  {V4L2_VP8_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
+		  {V4L2_VP8_FRAME_FLAG_MB_NO_SKIP_COEFF, "MB_NO_SKIP_COEFF"},
+		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
+		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"})
+		  )
+);
+
+DEFINE_EVENT(v4l2_ctrl_vp8_frame_tmpl, v4l2_ctrl_vp8_frame,
+	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
+	TP_ARGS(f)
+);
+
+DEFINE_EVENT(v4l2_ctrl_vp8_entropy_tmpl, v4l2_ctrl_vp8_entropy,
+	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
+	TP_ARGS(f)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
+#define TRACE_INCLUDE_FILE visl-trace-vp8
+#include <trace/define_trace.h>
diff --git a/drivers/media/test-drivers/visl/visl-trace-vp9.h b/drivers/media/test-drivers/visl/visl-trace-vp9.h
new file mode 100644
index 000000000000..e0907231eac7
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-trace-vp9.h
@@ -0,0 +1,292 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+#if !defined(_VISL_TRACE_VP9_H_) || defined(TRACE_HEADER_MULTI_READ)
+#define _VISL_TRACE_VP9_H_
+
+#include <linux/tracepoint.h>
+#include "visl.h"
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM visl_vp9_controls
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_frame_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
+	TP_ARGS(f),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_frame, f)),
+	TP_fast_assign(__entry->f = *f;),
+	TP_printk("\nlf.ref_deltas %s\n"
+		  "lf.mode_deltas %s\n"
+		  "lf.level %u\n"
+		  "lf.sharpness %u\n"
+		  "lf.flags %s\n"
+		  "quant.base_q_idx %u\n"
+		  "quant.delta_q_y_dc %d\n"
+		  "quant.delta_q_uv_dc %d\n"
+		  "quant.delta_q_uv_ac %d\n"
+		  "seg.feature_data {%s}\n"
+		  "seg.feature_enabled %s\n"
+		  "seg.tree_probs %s\n"
+		  "seg.pred_probs %s\n"
+		  "seg.flags %s\n"
+		  "flags %s\n"
+		  "compressed_header_size %u\n"
+		  "uncompressed_header_size %u\n"
+		  "frame_width_minus_1 %u\n"
+		  "frame_height_minus_1 %u\n"
+		  "render_width_minus_1 %u\n"
+		  "render_height_minus_1 %u\n"
+		  "last_frame_ts %llu\n"
+		  "golden_frame_ts %llu\n"
+		  "alt_frame_ts %llu\n"
+		  "ref_frame_sign_bias %s\n"
+		  "reset_frame_context %s\n"
+		  "frame_context_idx %u\n"
+		  "profile %u\n"
+		  "bit_depth %u\n"
+		  "interpolation_filter %s\n"
+		  "tile_cols_log2 %u\n"
+		  "tile_rows_log2 %u\n"
+		  "reference_mode %s\n",
+		  __print_array(__entry->f.lf.ref_deltas,
+				ARRAY_SIZE(__entry->f.lf.ref_deltas),
+				sizeof(__entry->f.lf.ref_deltas[0])),
+		  __print_array(__entry->f.lf.mode_deltas,
+				ARRAY_SIZE(__entry->f.lf.mode_deltas),
+				sizeof(__entry->f.lf.mode_deltas[0])),
+		  __entry->f.lf.level,
+		  __entry->f.lf.sharpness,
+		  __print_flags(__entry->f.lf.flags, "|",
+		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_ENABLED, "DELTA_ENABLED"},
+		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_UPDATE, "DELTA_UPDATE"}),
+		  __entry->f.quant.base_q_idx,
+		  __entry->f.quant.delta_q_y_dc,
+		  __entry->f.quant.delta_q_uv_dc,
+		  __entry->f.quant.delta_q_uv_ac,
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->f.seg.feature_data,
+				   sizeof(__entry->f.seg.feature_data),
+				   false),
+		  __print_array(__entry->f.seg.feature_enabled,
+				ARRAY_SIZE(__entry->f.seg.feature_enabled),
+				sizeof(__entry->f.seg.feature_enabled[0])),
+		  __print_array(__entry->f.seg.tree_probs,
+				ARRAY_SIZE(__entry->f.seg.tree_probs),
+				sizeof(__entry->f.seg.tree_probs[0])),
+		  __print_array(__entry->f.seg.pred_probs,
+				ARRAY_SIZE(__entry->f.seg.pred_probs),
+				sizeof(__entry->f.seg.pred_probs[0])),
+		  __print_flags(__entry->f.seg.flags, "|",
+		  {V4L2_VP9_SEGMENTATION_FLAG_ENABLED, "ENABLED"},
+		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP, "UPDATE_MAP"},
+		  {V4L2_VP9_SEGMENTATION_FLAG_TEMPORAL_UPDATE, "TEMPORAL_UPDATE"},
+		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_DATA, "UPDATE_DATA"},
+		  {V4L2_VP9_SEGMENTATION_FLAG_ABS_OR_DELTA_UPDATE, "ABS_OR_DELTA_UPDATE"}),
+		  __print_flags(__entry->f.flags, "|",
+		  {V4L2_VP9_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
+		  {V4L2_VP9_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
+		  {V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT, "ERROR_RESILIENT"},
+		  {V4L2_VP9_FRAME_FLAG_INTRA_ONLY, "INTRA_ONLY"},
+		  {V4L2_VP9_FRAME_FLAG_ALLOW_HIGH_PREC_MV, "ALLOW_HIGH_PREC_MV"},
+		  {V4L2_VP9_FRAME_FLAG_REFRESH_FRAME_CTX, "REFRESH_FRAME_CTX"},
+		  {V4L2_VP9_FRAME_FLAG_PARALLEL_DEC_MODE, "PARALLEL_DEC_MODE"},
+		  {V4L2_VP9_FRAME_FLAG_X_SUBSAMPLING, "X_SUBSAMPLING"},
+		  {V4L2_VP9_FRAME_FLAG_Y_SUBSAMPLING, "Y_SUBSAMPLING"},
+		  {V4L2_VP9_FRAME_FLAG_COLOR_RANGE_FULL_SWING, "COLOR_RANGE_FULL_SWING"}),
+		  __entry->f.compressed_header_size,
+		  __entry->f.uncompressed_header_size,
+		  __entry->f.frame_width_minus_1,
+		  __entry->f.frame_height_minus_1,
+		  __entry->f.render_width_minus_1,
+		  __entry->f.render_height_minus_1,
+		  __entry->f.last_frame_ts,
+		  __entry->f.golden_frame_ts,
+		  __entry->f.alt_frame_ts,
+		  __print_symbolic(__entry->f.ref_frame_sign_bias,
+		  {V4L2_VP9_SIGN_BIAS_LAST, "SIGN_BIAS_LAST"},
+		  {V4L2_VP9_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
+		  {V4L2_VP9_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"}),
+		  __print_symbolic(__entry->f.reset_frame_context,
+		  {V4L2_VP9_RESET_FRAME_CTX_NONE, "RESET_FRAME_CTX_NONE"},
+		  {V4L2_VP9_RESET_FRAME_CTX_SPEC, "RESET_FRAME_CTX_SPEC"},
+		  {V4L2_VP9_RESET_FRAME_CTX_ALL, "RESET_FRAME_CTX_ALL"}),
+		  __entry->f.frame_context_idx,
+		  __entry->f.profile,
+		  __entry->f.bit_depth,
+		  __print_symbolic(__entry->f.interpolation_filter,
+		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP, "INTERP_FILTER_EIGHTTAP"},
+		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SMOOTH, "INTERP_FILTER_EIGHTTAP_SMOOTH"},
+		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SHARP, "INTERP_FILTER_EIGHTTAP_SHARP"},
+		  {V4L2_VP9_INTERP_FILTER_BILINEAR, "INTERP_FILTER_BILINEAR"},
+		  {V4L2_VP9_INTERP_FILTER_SWITCHABLE, "INTERP_FILTER_SWITCHABLE"}),
+		  __entry->f.tile_cols_log2,
+		  __entry->f.tile_rows_log2,
+		  __print_symbolic(__entry->f.reference_mode,
+		  {V4L2_VP9_REFERENCE_MODE_SINGLE_REFERENCE, "REFERENCE_MODE_SINGLE_REFERENCE"},
+		  {V4L2_VP9_REFERENCE_MODE_COMPOUND_REFERENCE, "REFERENCE_MODE_COMPOUND_REFERENCE"},
+		  {V4L2_VP9_REFERENCE_MODE_SELECT, "REFERENCE_MODE_SELECT"}))
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_hdr_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
+	TP_ARGS(h),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
+	TP_fast_assign(__entry->h = *h;),
+	TP_printk("\ntx_mode %s\n"
+		  "tx8 {%s}\n"
+		  "tx16 {%s}\n"
+		  "tx32 {%s}\n"
+		  "skip %s\n"
+		  "inter_mode {%s}\n"
+		  "interp_filter {%s}\n"
+		  "is_inter %s\n"
+		  "comp_mode %s\n"
+		  "single_ref {%s}\n"
+		  "comp_ref %s\n"
+		  "y_mode {%s}\n"
+		  "uv_mode {%s}\n"
+		  "partition {%s}\n",
+		  __print_symbolic(__entry->h.tx_mode,
+		  {V4L2_VP9_TX_MODE_ONLY_4X4, "TX_MODE_ONLY_4X4"},
+		  {V4L2_VP9_TX_MODE_ALLOW_8X8, "TX_MODE_ALLOW_8X8"},
+		  {V4L2_VP9_TX_MODE_ALLOW_16X16, "TX_MODE_ALLOW_16X16"},
+		  {V4L2_VP9_TX_MODE_ALLOW_32X32, "TX_MODE_ALLOW_32X32"},
+		  {V4L2_VP9_TX_MODE_SELECT, "TX_MODE_SELECT"}),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.tx8,
+				   sizeof(__entry->h.tx8),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.tx16,
+				   sizeof(__entry->h.tx16),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.tx32,
+				   sizeof(__entry->h.tx32),
+				   false),
+		  __print_array(__entry->h.skip,
+				ARRAY_SIZE(__entry->h.skip),
+				sizeof(__entry->h.skip[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.inter_mode,
+				   sizeof(__entry->h.inter_mode),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.interp_filter,
+				   sizeof(__entry->h.interp_filter),
+				   false),
+		  __print_array(__entry->h.is_inter,
+				ARRAY_SIZE(__entry->h.is_inter),
+				sizeof(__entry->h.is_inter[0])),
+		  __print_array(__entry->h.comp_mode,
+				ARRAY_SIZE(__entry->h.comp_mode),
+				sizeof(__entry->h.comp_mode[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.single_ref,
+				   sizeof(__entry->h.single_ref),
+				   false),
+		  __print_array(__entry->h.comp_ref,
+				ARRAY_SIZE(__entry->h.comp_ref),
+				sizeof(__entry->h.comp_ref[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.y_mode,
+				   sizeof(__entry->h.y_mode),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.uv_mode,
+				   sizeof(__entry->h.uv_mode),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.partition,
+				   sizeof(__entry->h.partition),
+				   false)
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_coef_tmpl,
+	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
+	TP_ARGS(h),
+	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
+	TP_fast_assign(__entry->h = *h;),
+	TP_printk("\n coef {%s}",
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->h.coef,
+				   sizeof(__entry->h.coef),
+				   false)
+	)
+);
+
+DECLARE_EVENT_CLASS(v4l2_vp9_mv_probs_tmpl,
+	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
+	TP_ARGS(p),
+	TP_STRUCT__entry(__field_struct(struct v4l2_vp9_mv_probs, p)),
+	TP_fast_assign(__entry->p = *p;),
+	TP_printk("\n joint %s\n"
+		  "sign %s\n"
+		  "classes {%s}\n"
+		  "class0_bit %s\n"
+		  "bits {%s}\n"
+		  "class0_fr {%s}\n"
+		  "fr {%s}\n"
+		  "class0_hp %s\n"
+		  "hp %s\n",
+		  __print_array(__entry->p.joint,
+				ARRAY_SIZE(__entry->p.joint),
+				sizeof(__entry->p.joint[0])),
+		  __print_array(__entry->p.sign,
+				ARRAY_SIZE(__entry->p.sign),
+				sizeof(__entry->p.sign[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.classes,
+				   sizeof(__entry->p.classes),
+				   false),
+		  __print_array(__entry->p.class0_bit,
+				ARRAY_SIZE(__entry->p.class0_bit),
+				sizeof(__entry->p.class0_bit[0])),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.bits,
+				   sizeof(__entry->p.bits),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.class0_fr,
+				   sizeof(__entry->p.class0_fr),
+				   false),
+		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
+		  		   __entry->p.fr,
+				   sizeof(__entry->p.fr),
+				   false),
+		  __print_array(__entry->p.class0_hp,
+				ARRAY_SIZE(__entry->p.class0_hp),
+				sizeof(__entry->p.class0_hp[0])),
+		  __print_array(__entry->p.hp,
+				ARRAY_SIZE(__entry->p.hp),
+				sizeof(__entry->p.hp[0]))
+	)
+);
+
+DEFINE_EVENT(v4l2_ctrl_vp9_frame_tmpl, v4l2_ctrl_vp9_frame,
+	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
+	TP_ARGS(f)
+);
+
+DEFINE_EVENT(v4l2_ctrl_vp9_compressed_hdr_tmpl, v4l2_ctrl_vp9_compressed_hdr,
+	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
+	TP_ARGS(h)
+);
+
+DEFINE_EVENT(v4l2_ctrl_vp9_compressed_coef_tmpl, v4l2_ctrl_vp9_compressed_coeff,
+	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
+	TP_ARGS(h)
+);
+
+DEFINE_EVENT(v4l2_vp9_mv_probs_tmpl, v4l2_vp9_mv_probs,
+	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
+	TP_ARGS(p)
+);
+
+#endif
+
+#undef TRACE_INCLUDE_PATH
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
+#define TRACE_INCLUDE_FILE visl-trace-vp9
+#include <trace/define_trace.h>
diff --git a/drivers/media/test-drivers/visl/visl-video.c b/drivers/media/test-drivers/visl/visl-video.c
new file mode 100644
index 000000000000..d1eb7c374e16
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-video.c
@@ -0,0 +1,776 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#include <linux/debugfs.h>
+#include <linux/font.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-vmalloc.h>
+
+#include "visl-video.h"
+
+#include "visl.h"
+#include "visl-debugfs.h"
+
+static void visl_set_current_codec(struct visl_ctx *ctx)
+{
+	switch (ctx->coded_fmt.fmt.pix_mp.pixelformat) {
+	case V4L2_PIX_FMT_FWHT_STATELESS:
+		ctx->current_codec = VISL_CODEC_FWHT;
+		break;
+	case V4L2_PIX_FMT_MPEG2_SLICE:
+		ctx->current_codec = VISL_CODEC_MPEG2;
+		break;
+	case V4L2_PIX_FMT_VP8_FRAME:
+		ctx->current_codec = VISL_CODEC_VP8;
+		break;
+	case V4L2_PIX_FMT_VP9_FRAME:
+		ctx->current_codec = VISL_CODEC_VP9;
+		break;
+	case V4L2_PIX_FMT_H264_SLICE:
+		ctx->current_codec = VISL_CODEC_H264;
+		break;
+	default:
+		ctx->current_codec = VISL_CODEC_NONE;
+		break;
+	}
+}
+
+static void visl_print_fmt(struct visl_ctx *ctx, const struct v4l2_format *f)
+{
+	const struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+	u32 i;
+
+	dprintk(ctx->dev, "width: %d\n", pix_mp->width);
+	dprintk(ctx->dev, "height: %d\n", pix_mp->height);
+	dprintk(ctx->dev, "pixelformat: %c%c%c%c\n",
+		pix_mp->pixelformat & 0xff,
+		(pix_mp->pixelformat >> 8) & 0xff,
+		(pix_mp->pixelformat >> 16) & 0xff,
+		(pix_mp->pixelformat >> 24) & 0xff);
+
+	dprintk(ctx->dev, "field: %d\n", pix_mp->field);
+	dprintk(ctx->dev, "colorspace: %d\n", pix_mp->colorspace);
+	dprintk(ctx->dev, "num_planes: %d\n", pix_mp->num_planes);
+	dprintk(ctx->dev, "flags: %d\n", pix_mp->flags);
+	dprintk(ctx->dev, "quantization: %d\n", pix_mp->quantization);
+	dprintk(ctx->dev, "xfer_func: %d\n", pix_mp->xfer_func);
+
+	for (i = 0; i < pix_mp->num_planes; i++) {
+		dprintk(ctx->dev,
+			"plane[%d]: sizeimage: %d\n", i, pix_mp->plane_fmt[i].sizeimage);
+		dprintk(ctx->dev,
+			"plane[%d]: bytesperline: %d\n", i, pix_mp->plane_fmt[i].bytesperline);
+	}
+}
+
+static int visl_tpg_init(struct visl_ctx *ctx)
+{
+	const struct font_desc *font;
+	const char *font_name = "VGA8x16";
+	int ret;
+	u32 width = ctx->decoded_fmt.fmt.pix_mp.width;
+	u32 height = ctx->decoded_fmt.fmt.pix_mp.height;
+	struct v4l2_pix_format_mplane *f = &ctx->decoded_fmt.fmt.pix_mp;
+
+	tpg_free(&ctx->tpg);
+
+	font = find_font(font_name);
+	if (font) {
+		tpg_init(&ctx->tpg, width, height);
+
+		ret = tpg_alloc(&ctx->tpg, width);
+		if (ret)
+			goto err_alloc;
+
+		tpg_set_font(font->data);
+		ret = tpg_s_fourcc(&ctx->tpg, f->pixelformat);
+		if (!ret) {
+			ret = -EINVAL;
+			goto err_fourcc;
+		}
+
+		tpg_reset_source(&ctx->tpg, width, height, f->field);
+
+		tpg_s_pattern(&ctx->tpg, TPG_PAT_75_COLORBAR);
+
+		tpg_s_field(&ctx->tpg, f->field, false);
+		tpg_s_colorspace(&ctx->tpg, f->colorspace);
+		tpg_s_ycbcr_enc(&ctx->tpg, f->ycbcr_enc);
+		tpg_s_quantization(&ctx->tpg, f->quantization);
+		tpg_s_xfer_func(&ctx->tpg, f->xfer_func);
+	} else {
+		v4l2_err(&ctx->dev->v4l2_dev,
+			 "Font %s not found\n", font_name);
+
+		return -EINVAL;
+	}
+
+	dprintk(ctx->dev, "Initialized the V4L2 test pattern generator, w=%d, h=%d, max_w=%d\n",
+		width, height, width);
+
+	return 0;
+err_alloc:
+	return ret;
+err_fourcc:
+	tpg_free(&ctx->tpg);
+	return ret;
+}
+
+static const u32 visl_decoded_fmts[] = {
+	V4L2_PIX_FMT_NV12,
+	V4L2_PIX_FMT_YUV420,
+};
+
+const struct visl_coded_format_desc visl_coded_fmts[] = {
+	{
+		.pixelformat = V4L2_PIX_FMT_FWHT_STATELESS,
+		.frmsize = {
+			.min_width = 640,
+			.max_width = 4096,
+			.step_width = 1,
+			.min_height = 360,
+			.max_height = 2160,
+			.step_height = 1,
+		},
+		.ctrls = &visl_fwht_ctrls,
+		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
+		.decoded_fmts = visl_decoded_fmts,
+	},
+	{
+		.pixelformat = V4L2_PIX_FMT_MPEG2_SLICE,
+		.frmsize = {
+			.min_width = 16,
+			.max_width = 1920,
+			.step_width = 1,
+			.min_height = 16,
+			.max_height = 1152,
+			.step_height = 1,
+		},
+		.ctrls = &visl_mpeg2_ctrls,
+		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
+		.decoded_fmts = visl_decoded_fmts,
+	},
+	{
+		.pixelformat = V4L2_PIX_FMT_VP8_FRAME,
+		.frmsize = {
+			.min_width = 64,
+			.max_width = 16383,
+			.step_width = 1,
+			.min_height = 64,
+			.max_height = 16383,
+			.step_height = 1,
+		},
+		.ctrls = &visl_vp8_ctrls,
+		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
+		.decoded_fmts = visl_decoded_fmts,
+	},
+	{
+		.pixelformat = V4L2_PIX_FMT_VP9_FRAME,
+		.frmsize = {
+			.min_width = 64,
+			.max_width = 8192,
+			.step_width = 1,
+			.min_height = 64,
+			.max_height = 4352,
+			.step_height = 1,
+		},
+		.ctrls = &visl_vp9_ctrls,
+		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
+		.decoded_fmts = visl_decoded_fmts,
+	},
+	{
+		.pixelformat = V4L2_PIX_FMT_H264_SLICE,
+		.frmsize = {
+			.min_width = 64,
+			.max_width = 4096,
+			.step_width = 1,
+			.min_height = 64,
+			.max_height = 2304,
+			.step_height = 1,
+		},
+		.ctrls = &visl_h264_ctrls,
+		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
+		.decoded_fmts = visl_decoded_fmts,
+	},
+};
+
+const size_t num_coded_fmts = ARRAY_SIZE(visl_coded_fmts);
+
+static const struct visl_coded_format_desc*
+visl_find_coded_fmt_desc(u32 fourcc)
+{
+	unsigned int i;
+
+	for (i = 0; i < ARRAY_SIZE(visl_coded_fmts); i++) {
+		if (visl_coded_fmts[i].pixelformat == fourcc)
+			return &visl_coded_fmts[i];
+	}
+
+	return NULL;
+}
+
+static void visl_init_fmt(struct v4l2_format *f, u32 fourcc)
+{
+	memset(f, 0, sizeof(*f));
+	f->fmt.pix_mp.pixelformat = fourcc;
+	f->fmt.pix_mp.field = V4L2_FIELD_NONE;
+	f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709;
+	f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
+	f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT;
+	f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT;
+}
+
+void visl_reset_coded_fmt(struct visl_ctx *ctx)
+{
+	struct v4l2_format *f = &ctx->coded_fmt;
+	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+
+	ctx->coded_format_desc = &visl_coded_fmts[0];
+	visl_init_fmt(f, ctx->coded_format_desc->pixelformat);
+
+	f->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+	f->fmt.pix_mp.width = ctx->coded_format_desc->frmsize.min_width;
+	f->fmt.pix_mp.height = ctx->coded_format_desc->frmsize.min_height;
+
+	pix_mp->num_planes = 1;
+	pix_mp->plane_fmt[0].sizeimage = pix_mp->width * pix_mp->height * 8;
+
+	dprintk(ctx->dev, "OUTPUT format was set to:\n");
+	visl_print_fmt(ctx, &ctx->coded_fmt);
+
+	visl_set_current_codec(ctx);
+}
+
+int visl_reset_decoded_fmt(struct visl_ctx *ctx)
+{
+	struct v4l2_format *f = &ctx->decoded_fmt;
+	u32 decoded_fmt = ctx->coded_format_desc->decoded_fmts[0];
+
+	visl_init_fmt(f, decoded_fmt);
+
+	f->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+
+	v4l2_fill_pixfmt_mp(&f->fmt.pix_mp,
+			    ctx->coded_format_desc->decoded_fmts[0],
+			    ctx->coded_fmt.fmt.pix_mp.width,
+			    ctx->coded_fmt.fmt.pix_mp.height);
+
+	dprintk(ctx->dev, "CAPTURE format was set to:\n");
+	visl_print_fmt(ctx, &ctx->decoded_fmt);
+
+	return visl_tpg_init(ctx);
+}
+
+int visl_set_default_format(struct visl_ctx *ctx)
+{
+	visl_reset_coded_fmt(ctx);
+	return visl_reset_decoded_fmt(ctx);
+}
+
+static struct visl_q_data *get_q_data(struct visl_ctx *ctx,
+				      enum v4l2_buf_type type)
+{
+	switch (type) {
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+		return &ctx->q_data[V4L2_M2M_SRC];
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+		return &ctx->q_data[V4L2_M2M_DST];
+	default:
+		break;
+	}
+	return NULL;
+}
+
+static int visl_querycap(struct file *file, void *priv,
+			 struct v4l2_capability *cap)
+{
+	strscpy(cap->driver, VISL_NAME, sizeof(cap->driver));
+	strscpy(cap->card, VISL_NAME, sizeof(cap->card));
+	snprintf(cap->bus_info, sizeof(cap->bus_info),
+		 "platform:%s", VISL_NAME);
+
+	return 0;
+}
+
+static int visl_enum_fmt_vid_cap(struct file *file, void *priv,
+				 struct v4l2_fmtdesc *f)
+{
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+
+	if (f->index >= ctx->coded_format_desc->num_decoded_fmts)
+		return -EINVAL;
+
+	f->pixelformat = ctx->coded_format_desc->decoded_fmts[f->index];
+	return 0;
+}
+
+static int visl_enum_fmt_vid_out(struct file *file, void *priv,
+				 struct v4l2_fmtdesc *f)
+{
+	if (f->index >= ARRAY_SIZE(visl_coded_fmts))
+		return -EINVAL;
+
+	f->pixelformat = visl_coded_fmts[f->index].pixelformat;
+	return 0;
+}
+
+static int visl_g_fmt_vid_cap(struct file *file, void *priv,
+			      struct v4l2_format *f)
+{
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+	*f = ctx->decoded_fmt;
+
+	return 0;
+}
+
+static int visl_g_fmt_vid_out(struct file *file, void *priv,
+			      struct v4l2_format *f)
+{
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+
+	*f = ctx->coded_fmt;
+	return 0;
+}
+
+static int visl_try_fmt_vid_cap(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+	const struct visl_coded_format_desc *coded_desc;
+	unsigned int i;
+
+	coded_desc = ctx->coded_format_desc;
+
+	for (i = 0; i < coded_desc->num_decoded_fmts; i++) {
+		if (coded_desc->decoded_fmts[i] == pix_mp->pixelformat)
+			break;
+	}
+
+	if (i == coded_desc->num_decoded_fmts)
+		pix_mp->pixelformat = coded_desc->decoded_fmts[0];
+
+	v4l2_apply_frmsize_constraints(&pix_mp->width,
+				       &pix_mp->height,
+				       &coded_desc->frmsize);
+
+	v4l2_fill_pixfmt_mp(pix_mp, pix_mp->pixelformat,
+			    pix_mp->width, pix_mp->height);
+
+	pix_mp->field = V4L2_FIELD_NONE;
+
+	return 0;
+}
+
+static int visl_try_fmt_vid_out(struct file *file, void *priv,
+				struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
+	const struct visl_coded_format_desc *coded_desc;
+
+	coded_desc = visl_find_coded_fmt_desc(pix_mp->pixelformat);
+	if (!coded_desc) {
+		pix_mp->pixelformat = visl_coded_fmts[0].pixelformat;
+		coded_desc = &visl_coded_fmts[0];
+	}
+
+	v4l2_apply_frmsize_constraints(&pix_mp->width,
+				       &pix_mp->height,
+				       &coded_desc->frmsize);
+
+	pix_mp->field = V4L2_FIELD_NONE;
+	pix_mp->num_planes = 1;
+
+	return 0;
+}
+
+static int visl_s_fmt_vid_out(struct file *file, void *priv,
+			      struct v4l2_format *f)
+{
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
+	const struct visl_coded_format_desc *desc;
+	struct vb2_queue *peer_vq;
+	int ret;
+
+	peer_vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
+	if (vb2_is_busy(peer_vq))
+		return -EBUSY;
+
+	dprintk(ctx->dev, "Trying to set the OUTPUT format to:\n");
+	visl_print_fmt(ctx, f);
+
+	ret = visl_try_fmt_vid_out(file, priv, f);
+	if (ret)
+		return ret;
+
+	desc = visl_find_coded_fmt_desc(f->fmt.pix_mp.pixelformat);
+	ctx->coded_format_desc = desc;
+	ctx->coded_fmt = *f;
+
+	v4l2_fill_pixfmt_mp(&ctx->coded_fmt.fmt.pix_mp,
+			    ctx->coded_fmt.fmt.pix_mp.pixelformat,
+			    ctx->coded_fmt.fmt.pix_mp.width,
+			    ctx->coded_fmt.fmt.pix_mp.height);
+
+	ret = visl_reset_decoded_fmt(ctx);
+	if (ret)
+		return ret;
+
+	ctx->decoded_fmt.fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace;
+	ctx->decoded_fmt.fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func;
+	ctx->decoded_fmt.fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
+	ctx->decoded_fmt.fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
+
+	dprintk(ctx->dev, "OUTPUT format was set to:\n");
+	visl_print_fmt(ctx, &ctx->coded_fmt);
+
+	visl_set_current_codec(ctx);
+	return 0;
+}
+
+static int visl_s_fmt_vid_cap(struct file *file, void *priv,
+			      struct v4l2_format *f)
+{
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+	int ret;
+
+	dprintk(ctx->dev, "Trying to set the CAPTURE format to:\n");
+	visl_print_fmt(ctx, f);
+
+	ret = visl_try_fmt_vid_cap(file, priv, f);
+	if (ret)
+		return ret;
+
+	ctx->decoded_fmt = *f;
+
+	dprintk(ctx->dev, "CAPTURE format was set to:\n");
+	visl_print_fmt(ctx, &ctx->decoded_fmt);
+
+	visl_tpg_init(ctx);
+	return 0;
+}
+
+static int visl_enum_framesizes(struct file *file, void *priv,
+				struct v4l2_frmsizeenum *fsize)
+{
+	const struct visl_coded_format_desc *fmt;
+	struct visl_ctx *ctx = visl_file_to_ctx(file);
+
+	if (fsize->index != 0)
+		return -EINVAL;
+
+	fmt = visl_find_coded_fmt_desc(fsize->pixel_format);
+	if (!fmt) {
+		dprintk(ctx->dev,
+			"Unsupported format for the OUTPUT queue: %d\n",
+			fsize->pixel_format);
+
+		return -EINVAL;
+	}
+
+	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+	fsize->stepwise = fmt->frmsize;
+	return 0;
+}
+
+const struct v4l2_ioctl_ops visl_ioctl_ops = {
+	.vidioc_querycap		= visl_querycap,
+	.vidioc_enum_framesizes		= visl_enum_framesizes,
+
+	.vidioc_enum_fmt_vid_cap	= visl_enum_fmt_vid_cap,
+	.vidioc_g_fmt_vid_cap_mplane	= visl_g_fmt_vid_cap,
+	.vidioc_try_fmt_vid_cap_mplane	= visl_try_fmt_vid_cap,
+	.vidioc_s_fmt_vid_cap_mplane	= visl_s_fmt_vid_cap,
+
+	.vidioc_enum_fmt_vid_out	= visl_enum_fmt_vid_out,
+	.vidioc_g_fmt_vid_out_mplane	= visl_g_fmt_vid_out,
+	.vidioc_try_fmt_vid_out_mplane	= visl_try_fmt_vid_out,
+	.vidioc_s_fmt_vid_out_mplane	= visl_s_fmt_vid_out,
+
+	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
+	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
+	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
+	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
+	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
+	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
+	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
+
+	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
+	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
+
+	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
+	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
+};
+
+static int visl_queue_setup(struct vb2_queue *vq,
+			    unsigned int *nbuffers,
+			    unsigned int *num_planes,
+			    unsigned int sizes[],
+			    struct device *alloc_devs[])
+{
+	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
+	struct v4l2_format *f;
+	u32 i;
+	const char *qname;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
+		f = &ctx->coded_fmt;
+		qname = "Output";
+	} else {
+		f = &ctx->decoded_fmt;
+		qname = "Capture";
+	}
+
+	if (*num_planes) {
+		if (*num_planes != f->fmt.pix_mp.num_planes)
+			return -EINVAL;
+
+		for (i = 0; i < f->fmt.pix_mp.num_planes; i++) {
+			if (sizes[i] < f->fmt.pix_mp.plane_fmt[i].sizeimage)
+				return -EINVAL;
+		}
+	} else {
+		*num_planes = f->fmt.pix_mp.num_planes;
+		for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
+			sizes[i] = f->fmt.pix_mp.plane_fmt[i].sizeimage;
+	}
+
+	dprintk(ctx->dev, "%s: %d buffer(s) requested, num_planes=%d.\n",
+		qname, *nbuffers, *num_planes);
+
+	for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
+		dprintk(ctx->dev, "plane[%d].sizeimage=%d\n",
+			i, f->fmt.pix_mp.plane_fmt[i].sizeimage);
+
+	return 0;
+}
+
+static void visl_queue_cleanup(struct vb2_queue *vq, enum vb2_buffer_state state)
+{
+	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
+	struct vb2_v4l2_buffer *vbuf;
+
+	dprintk(ctx->dev, "Cleaning up queues\n");
+	for (;;) {
+		if (V4L2_TYPE_IS_OUTPUT(vq->type))
+			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
+		else
+			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
+
+		if (!vbuf)
+			break;
+
+		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
+					   &ctx->hdl);
+		dprintk(ctx->dev, "Marked request %p as complete\n",
+			vbuf->vb2_buf.req_obj.req);
+
+		v4l2_m2m_buf_done(vbuf, state);
+		dprintk(ctx->dev,
+			"Marked buffer %llu as done, state is %d\n",
+			vbuf->vb2_buf.timestamp,
+			state);
+	}
+}
+
+static int visl_buf_out_validate(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+
+	vbuf->field = V4L2_FIELD_NONE;
+	return 0;
+}
+
+static int visl_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vb2_queue *vq = vb->vb2_queue;
+	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
+	u32 plane_sz = vb2_plane_size(vb, 0);
+	struct v4l2_pix_format_mplane *pix_mp;
+
+	if (V4L2_TYPE_IS_OUTPUT(vq->type))
+		pix_mp = &ctx->coded_fmt.fmt.pix_mp;
+	else
+		pix_mp = &ctx->decoded_fmt.fmt.pix_mp;
+
+	if (plane_sz < pix_mp->plane_fmt[0].sizeimage) {
+		v4l2_err(&ctx->dev->v4l2_dev, "plane[0] size is %d, sizeimage is %d\n",
+			 plane_sz, pix_mp->plane_fmt[0].sizeimage);
+		return -EINVAL;
+	}
+
+	vb2_set_plane_payload(vb, 0, pix_mp->plane_fmt[0].sizeimage);
+
+	return 0;
+}
+
+static int visl_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
+	struct visl_q_data *q_data = get_q_data(ctx, vq->type);
+	int rc = 0;
+
+	if (!q_data) {
+		rc = -EINVAL;
+		goto err;
+	}
+
+	q_data->sequence = 0;
+
+	if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
+		ctx->capture_streamon_jiffies = get_jiffies_64();
+		return 0;
+	}
+
+	if (WARN_ON(!ctx->coded_format_desc)) {
+		rc = -EINVAL;
+		goto err;
+	}
+
+	return 0;
+
+err:
+	visl_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
+	return rc;
+}
+
+static void visl_stop_streaming(struct vb2_queue *vq)
+{
+	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
+
+	dprintk(ctx->dev, "Stop streaming\n");
+	visl_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
+}
+
+static void visl_buf_queue(struct vb2_buffer *vb)
+{
+	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
+}
+
+static void visl_buf_request_complete(struct vb2_buffer *vb)
+{
+	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
+}
+
+const struct vb2_ops visl_qops = {
+	.queue_setup          = visl_queue_setup,
+	.buf_out_validate     = visl_buf_out_validate,
+	.buf_prepare          = visl_buf_prepare,
+	.buf_queue            = visl_buf_queue,
+	.start_streaming      = visl_start_streaming,
+	.stop_streaming       = visl_stop_streaming,
+	.wait_prepare         = vb2_ops_wait_prepare,
+	.wait_finish          = vb2_ops_wait_finish,
+	.buf_request_complete = visl_buf_request_complete,
+};
+
+int visl_queue_init(void *priv, struct vb2_queue *src_vq,
+		    struct vb2_queue *dst_vq)
+{
+	struct visl_ctx *ctx = priv;
+	int ret;
+
+	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
+	src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+	src_vq->drv_priv = ctx;
+	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	src_vq->ops = &visl_qops;
+	src_vq->mem_ops = &vb2_vmalloc_memops;
+	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	src_vq->lock = &ctx->vb_mutex;
+	src_vq->supports_requests = true;
+
+	ret = vb2_queue_init(src_vq);
+	if (ret)
+		return ret;
+
+	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+	dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
+	dst_vq->drv_priv = ctx;
+	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
+	dst_vq->ops = &visl_qops;
+	dst_vq->mem_ops = &vb2_vmalloc_memops;
+	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+	dst_vq->lock = &ctx->vb_mutex;
+
+	return vb2_queue_init(dst_vq);
+}
+
+int visl_request_validate(struct media_request *req)
+{
+	struct media_request_object *obj;
+	struct visl_ctx *ctx = NULL;
+	unsigned int count;
+
+	list_for_each_entry(obj, &req->objects, list) {
+		struct vb2_buffer *vb;
+
+		if (vb2_request_object_is_buffer(obj)) {
+			vb = container_of(obj, struct vb2_buffer, req_obj);
+			ctx = vb2_get_drv_priv(vb->vb2_queue);
+
+			break;
+		}
+	}
+
+	if (!ctx)
+		return -ENOENT;
+
+	count = vb2_request_buffer_cnt(req);
+	if (!count) {
+		v4l2_err(&ctx->dev->v4l2_dev,
+			 "No buffer was provided with the request\n");
+		return -ENOENT;
+	} else if (count > 1) {
+		v4l2_err(&ctx->dev->v4l2_dev,
+			 "More than one buffer was provided with the request\n");
+		return -EINVAL;
+	}
+
+	return vb2_request_validate(req);
+}
diff --git a/drivers/media/test-drivers/visl/visl-video.h b/drivers/media/test-drivers/visl/visl-video.h
new file mode 100644
index 000000000000..dbfc1c6a052d
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl-video.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#ifndef _VISL_VIDEO_H_
+#define _VISL_VIDEO_H_
+
+#include <media/v4l2-mem2mem.h>
+
+#include "visl.h"
+
+extern const struct v4l2_ioctl_ops visl_ioctl_ops;
+
+extern const struct visl_ctrls visl_fwht_ctrls;
+extern const struct visl_ctrls visl_mpeg2_ctrls;
+extern const struct visl_ctrls visl_vp8_ctrls;
+extern const struct visl_ctrls visl_vp9_ctrls;
+extern const struct visl_ctrls visl_h264_ctrls;
+
+int visl_queue_init(void *priv, struct vb2_queue *src_vq,
+		    struct vb2_queue *dst_vq);
+
+int visl_set_default_format(struct visl_ctx *ctx);
+int visl_request_validate(struct media_request *req);
+
+#endif /* _VISL_VIDEO_H_ */
diff --git a/drivers/media/test-drivers/visl/visl.h b/drivers/media/test-drivers/visl/visl.h
new file mode 100644
index 000000000000..a473d154805c
--- /dev/null
+++ b/drivers/media/test-drivers/visl/visl.h
@@ -0,0 +1,178 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * A virtual stateless device for stateless uAPI development purposes.
+ *
+ * This tool's objective is to help the development and testing of userspace
+ * applications that use the V4L2 stateless API to decode media.
+ *
+ * A userspace implementation can use visl to run a decoding loop even when no
+ * hardware is available or when the kernel uAPI for the codec has not been
+ * upstreamed yet. This can reveal bugs at an early stage.
+ *
+ * This driver can also trace the contents of the V4L2 controls submitted to it.
+ * It can also dump the contents of the vb2 buffers through a debugfs
+ * interface. This is in many ways similar to the tracing infrastructure
+ * available for other popular encode/decode APIs out there and can help develop
+ * a userspace application by using another (working) one as a reference.
+ *
+ * Note that no actual decoding of video frames is performed by visl. The V4L2
+ * test pattern generator is used to write various debug information to the
+ * capture buffers instead.
+ *
+ * Copyright (c) Collabora, Ltd.
+ *
+ * Based on the vim2m driver, that is:
+ *
+ * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
+ * Pawel Osciak, <pawel@osciak.com>
+ * Marek Szyprowski, <m.szyprowski@samsung.com>
+ *
+ * Based on the vicodec driver, that is:
+ *
+ * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * Based on the Cedrus VPU driver, that is:
+ *
+ * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
+ * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ * Copyright (C) 2018 Bootlin
+ */
+
+#ifndef _VISL_H_
+#define _VISL_H_
+
+#include <linux/debugfs.h>
+#include <linux/list.h>
+
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/tpg/v4l2-tpg.h>
+
+#define VISL_NAME		"visl"
+#define VISL_M2M_NQUEUES	2
+
+#define TPG_STR_BUF_SZ		2048
+
+extern unsigned int visl_transtime_ms;
+
+struct visl_ctrls {
+	const struct visl_ctrl_desc *ctrls;
+	unsigned int num_ctrls;
+};
+
+struct visl_coded_format_desc {
+	u32 pixelformat;
+	struct v4l2_frmsize_stepwise frmsize;
+	const struct visl_ctrls *ctrls;
+	unsigned int num_decoded_fmts;
+	const u32 *decoded_fmts;
+};
+
+extern const struct visl_coded_format_desc visl_coded_fmts[];
+extern const size_t num_coded_fmts;
+
+enum {
+	V4L2_M2M_SRC = 0,
+	V4L2_M2M_DST = 1,
+};
+
+extern unsigned int visl_debug;
+#define dprintk(dev, fmt, arg...) \
+	v4l2_dbg(1, visl_debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
+
+extern int visl_dprintk_frame_start;
+extern unsigned int visl_dprintk_nframes;
+extern unsigned int keep_bitstream_buffers;
+extern int bitstream_trace_frame_start;
+extern unsigned int bitstream_trace_nframes;
+
+#define frame_dprintk(dev, curr, fmt, arg...) \
+	do { \
+		if (visl_dprintk_frame_start > -1 && \
+		    (curr) >= visl_dprintk_frame_start && \
+		    (curr) < visl_dprintk_frame_start + visl_dprintk_nframes) \
+			dprintk(dev, fmt, ## arg); \
+	} while (0)
+
+struct visl_q_data {
+	unsigned int		sequence;
+};
+
+struct visl_dev {
+	struct v4l2_device	v4l2_dev;
+	struct video_device	vfd;
+#ifdef CONFIG_MEDIA_CONTROLLER
+	struct media_device	mdev;
+#endif
+
+	struct mutex		dev_mutex;
+
+	struct v4l2_m2m_dev	*m2m_dev;
+
+#ifdef CONFIG_VISL_DEBUGFS
+	struct dentry		*debugfs_root;
+	struct dentry		*bitstream_debugfs;
+	struct list_head	bitstream_blobs;
+	/*
+	 * Protects the "blob" list as it can be accessed from "visl_release"
+	 * if keep_bitstream_buffers = 0 while some other client is tracing
+	 */
+	struct mutex		bitstream_lock;
+#endif
+};
+
+enum visl_codec {
+	VISL_CODEC_NONE,
+	VISL_CODEC_FWHT,
+	VISL_CODEC_MPEG2,
+	VISL_CODEC_VP8,
+	VISL_CODEC_VP9,
+	VISL_CODEC_H264,
+};
+
+struct visl_blob {
+	struct list_head list;
+	struct dentry *dentry;
+	u64 streamon_jiffies;
+	struct debugfs_blob_wrapper blob;
+};
+
+struct visl_ctx {
+	struct v4l2_fh		fh;
+	struct visl_dev	*dev;
+	struct v4l2_ctrl_handler hdl;
+
+	struct mutex		vb_mutex;
+
+	struct visl_q_data	q_data[VISL_M2M_NQUEUES];
+	enum   visl_codec	current_codec;
+
+	const struct visl_coded_format_desc *coded_format_desc;
+
+	struct v4l2_format	coded_fmt;
+	struct v4l2_format	decoded_fmt;
+
+	struct tpg_data		tpg;
+	u64			capture_streamon_jiffies;
+	char			*tpg_str_buf;
+};
+
+struct visl_ctrl_desc {
+	struct v4l2_ctrl_config cfg;
+};
+
+static inline struct visl_ctx *visl_file_to_ctx(struct file *file)
+{
+	return container_of(file->private_data, struct visl_ctx, fh);
+}
+
+static inline struct visl_ctx *visl_v4l2fh_to_ctx(struct v4l2_fh *v4l2_fh)
+{
+	return container_of(v4l2_fh, struct visl_ctx, fh);
+}
+
+void *visl_find_control_data(struct visl_ctx *ctx, u32 id);
+struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id);
+u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id);
+
+#endif /* _VISL_H_ */
-- 
2.36.1



* Re: [RFC PATCH v2] media: visl: add virtual stateless driver
  2022-06-06 21:26   ` [RFC PATCH v2] media: visl: add virtual stateless driver daniel.almeida
@ 2022-06-07 12:02     ` Hans Verkuil
  2022-06-08 15:31       ` Nicolas Dufresne
  2022-08-19 20:43     ` Deborah Brouwer
  1 sibling, 1 reply; 14+ messages in thread
From: Hans Verkuil @ 2022-06-07 12:02 UTC (permalink / raw)
  To: daniel.almeida; +Cc: linux-media

On 6/6/22 23:26, daniel.almeida@collabora.com wrote:
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> A virtual stateless device for stateless uAPI development purposes.
> 
> This tool's objective is to help the development and testing of userspace
> applications that use the V4L2 stateless API to decode media.

So this is specifically for *decoding*, right? Is it the intention that the
same driver will be able to handle stateless encoding as well in the future?
Or would that be a new driver?

It matters primarily for the naming of the driver. If it is decoding only,
then it should be something like visldec.

> 
> A userspace implementation can use visl to run a decoding loop even when no
> hardware is available or when the kernel uAPI for the codec has not been
> upstreamed yet. This can reveal bugs at an early stage.
> 
> This driver can also trace the contents of the V4L2 controls submitted to it.
> It can also dump the contents of the vb2 buffers through a debugfs
> interface. This is in many ways similar to the tracing infrastructure
> available for other popular encode/decode APIs out there and can help develop
> a userspace application by using another (working) one as a reference.
> 
> Note that no actual decoding of video frames is performed by visl. The V4L2
> test pattern generator is used to write various debug information to the
> capture buffers instead.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> 
> ---
> Was media: vivpu: add virtual VPU driver
> 
> Changes from v1:
> 
> - Addressed review comments from v1
> - Driver was renamed to visl
> - Dropped AV1 support for now (as it's not upstream yet)
> - Added support for FWHT, MPEG2, VP8, VP9, H264
> - Added TPG support
> - Driver can now dump the controls for the codecs above through ftrace
> - Driver can now dump the vb2 bitstream buffer through a debugfs infrastructure
> 
> I ran this on a kernel with KASAN/kmemleak enabled, nothing showed up.
> 
> v4l2-compliance results:
> 
> v4l2-compliance 1.22.1, 64 bits, 64-bit time_t

Based on the output I can tell that this is an old v4l2-compliance utility.

Please compile it straight from the v4l-utils git repo.

Also compare it with the output when used with vicodec: the compliance test
should be able to detect that it is a stateless decoder, but I don't see that
in the output below, either because the version is too old or because the
driver does something wrong, breaking this detection.

> 
> Compliance test for visl device /dev/video0:
> 
> Driver Info:
>         Driver name      : visl
>         Card type        : visl
>         Bus info         : platform:visl
>         Driver version   : 5.19.0
>         Capabilities     : 0x84204000
>                 Video Memory-to-Memory Multiplanar
>                 Streaming
>                 Extended Pix Format
>                 Device Capabilities
>         Device Caps      : 0x04204000
>                 Video Memory-to-Memory Multiplanar
>                 Streaming
>                 Extended Pix Format
> Media Driver Info:
>         Driver name      : visl
>         Model            : visl
>         Serial           : 
>         Bus info         : platform:visl
>         Media version    : 5.19.0
>         Hardware revision: 0x00000000 (0)
>         Driver version   : 5.19.0
> Interface Info:
>         ID               : 0x0300000c
>         Type             : V4L Video
> Entity Info:
>         ID               : 0x00000001 (1)
>         Name             : visl-source
>         Function         : V4L2 I/O
>         Pad 0x01000002   : 0: Source
>           Link 0x02000008: to remote pad 0x1000004 of entity 'visl-proc' (Video Decoder): Data, Enabled, Immutable
> 
> Required ioctls:
>         test MC information (see 'Media Driver Info' above): OK
>         test VIDIOC_QUERYCAP: OK
>         test invalid ioctls: OK
> 
> Allow for multiple opens:
>         test second /dev/video0 open: OK
>         test VIDIOC_QUERYCAP: OK
>         test VIDIOC_G/S_PRIORITY: OK
>         test for unlimited opens: OK
> 
> Debug ioctls:
>         test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
>         test VIDIOC_LOG_STATUS: OK (Not Supported)
> 
> Input ioctls:
>         test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
>         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>         test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
>         test VIDIOC_ENUMAUDIO: OK (Not Supported)
>         test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
>         test VIDIOC_G/S_AUDIO: OK (Not Supported)
>         Inputs: 0 Audio Inputs: 0 Tuners: 0
> 
> Output ioctls:
>         test VIDIOC_G/S_MODULATOR: OK (Not Supported)
>         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>         test VIDIOC_ENUMAUDOUT: OK (Not Supported)
>         test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
>         test VIDIOC_G/S_AUDOUT: OK (Not Supported)
>         Outputs: 0 Audio Outputs: 0 Modulators: 0
> 
> Input/Output configuration ioctls:
>         test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
>         test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
>         test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
>         test VIDIOC_G/S_EDID: OK (Not Supported)
> 
> Control ioctls:
>         test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
>         test VIDIOC_QUERYCTRL: OK
>         test VIDIOC_G/S_CTRL: OK
>         test VIDIOC_G/S/TRY_EXT_CTRLS: OK
>         test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
>         test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
>         Standard Controls: 3 Private Controls: 0
>         Standard Compound Controls: 13 Private Compound Controls: 0
> 
> Format ioctls:
>         test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
>         test VIDIOC_G/S_PARM: OK (Not Supported)
>         test VIDIOC_G_FBUF: OK (Not Supported)
>         test VIDIOC_G_FMT: OK
>         test VIDIOC_TRY_FMT: OK
>         test VIDIOC_S_FMT: OK
>         test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
>         test Cropping: OK (Not Supported)
>         test Composing: OK (Not Supported)
>         test Scaling: OK
> 
> Codec ioctls:
>         test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
>         test VIDIOC_G_ENC_INDEX: OK (Not Supported)
>         test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)
> 
> Buffer ioctls:
>         test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
>         test VIDIOC_EXPBUF: OK
>         test Requests: OK
> 
> Test input 0:
> 
> Streaming ioctls:
>         test read/write: OK (Not Supported)
>         test blocking wait: OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (no poll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (select): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (epoll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test USERPTR (no poll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test USERPTR (select): OK
>         test DMABUF: Cannot test, specify --expbuf-device
> 
> Total for visl device /dev/video0: 53, Succeeded: 53, Failed: 0, Warnings: 0
> 
> ---
>  drivers/media/test-drivers/Kconfig            |   1 +
>  drivers/media/test-drivers/Makefile           |   1 +
>  drivers/media/test-drivers/visl/Kconfig       |  31 +
>  drivers/media/test-drivers/visl/Makefile      |   8 +
>  drivers/media/test-drivers/visl/visl-core.c   | 532 ++++++++++++
>  .../media/test-drivers/visl/visl-debugfs.c    | 148 ++++
>  .../media/test-drivers/visl/visl-debugfs.h    |  72 ++
>  drivers/media/test-drivers/visl/visl-dec.c    | 468 +++++++++++
>  drivers/media/test-drivers/visl/visl-dec.h    | 100 +++
>  .../media/test-drivers/visl/visl-trace-fwht.h |  66 ++
>  .../media/test-drivers/visl/visl-trace-h264.h | 349 ++++++++
>  .../test-drivers/visl/visl-trace-mpeg2.h      |  99 +++
>  .../test-drivers/visl/visl-trace-points.c     |   9 +
>  .../media/test-drivers/visl/visl-trace-vp8.h  | 156 ++++
>  .../media/test-drivers/visl/visl-trace-vp9.h  | 292 +++++++
>  drivers/media/test-drivers/visl/visl-video.c  | 776 ++++++++++++++++++
>  drivers/media/test-drivers/visl/visl-video.h  |  61 ++
>  drivers/media/test-drivers/visl/visl.h        | 178 ++++
>  18 files changed, 3347 insertions(+)
>  create mode 100644 drivers/media/test-drivers/visl/Kconfig
>  create mode 100644 drivers/media/test-drivers/visl/Makefile
>  create mode 100644 drivers/media/test-drivers/visl/visl-core.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-dec.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-dec.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-fwht.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-h264.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-mpeg2.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-points.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp8.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp9.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-video.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-video.h
>  create mode 100644 drivers/media/test-drivers/visl/visl.h
> 
> diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
> index 51cf27834df0..459b433e9fae 100644
> --- a/drivers/media/test-drivers/Kconfig
> +++ b/drivers/media/test-drivers/Kconfig
> @@ -20,6 +20,7 @@ config VIDEO_VIM2M
>  source "drivers/media/test-drivers/vicodec/Kconfig"
>  source "drivers/media/test-drivers/vimc/Kconfig"
>  source "drivers/media/test-drivers/vivid/Kconfig"
> +source "drivers/media/test-drivers/visl/Kconfig"
>  
>  endif #V4L_TEST_DRIVERS
>  
> diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
> index ff390b687189..740714a4584d 100644
> --- a/drivers/media/test-drivers/Makefile
> +++ b/drivers/media/test-drivers/Makefile
> @@ -12,3 +12,4 @@ obj-$(CONFIG_VIDEO_VICODEC) += vicodec/
>  obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
>  obj-$(CONFIG_VIDEO_VIMC) += vimc/
>  obj-$(CONFIG_VIDEO_VIVID) += vivid/
> +obj-$(CONFIG_VIDEO_VISL) += visl/
> diff --git a/drivers/media/test-drivers/visl/Kconfig b/drivers/media/test-drivers/visl/Kconfig
> new file mode 100644
> index 000000000000..976319c3c372
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/Kconfig
> @@ -0,0 +1,31 @@
> +# SPDX-License-Identifier: GPL-2.0+
> +config VIDEO_VISL
> +	tristate "Virtual Stateless Driver (visl)"

I think this should be "Virtual Stateless Codec Driver" (or Stateless Decoder Driver if
it will only be for decoding). "Stateless Driver" is too vague.

> +	depends on VIDEO_DEV
> +	select FONT_SUPPORT
> +	select FONT_8x16
> +	select VIDEOBUF2_VMALLOC
> +	select V4L2_MEM2MEM_DEV
> +	select MEDIA_CONTROLLER
> +	select MEDIA_CONTROLLER_REQUEST_API
> +	select VIDEO_V4L2_TPG
> +	help
> +
> +	  A virtual stateless device for uAPI development purposes.
> +
> +	  A userspace implementation can use visl to run a decoding loop even
> +	  when no hardware is available or when the kernel uAPI for the codec
> +	  has not been upstreamed yet. This can reveal bugs at an early stage.
> +
> +

A few too many empty lines here. One is enough.

Regards,

	Hans

> +
> +	  When in doubt, say N.
> +
> +config VISL_DEBUGFS
> +	bool "Enable debugfs for visl"
> +	depends on VIDEO_VISL
> +	depends on DEBUG_FS
> +
> +	help
> +	  Choose Y to dump the bitstream buffers through debugfs.
> +	  When in doubt, say N.


* Re: [RFC PATCH v2] media: visl: add virtual stateless driver
  2022-06-07 12:02     ` Hans Verkuil
@ 2022-06-08 15:31       ` Nicolas Dufresne
  0 siblings, 0 replies; 14+ messages in thread
From: Nicolas Dufresne @ 2022-06-08 15:31 UTC (permalink / raw)
  To: Hans Verkuil, daniel.almeida; +Cc: linux-media

Le mardi 07 juin 2022 à 14:02 +0200, Hans Verkuil a écrit :
> On 6/6/22 23:26, daniel.almeida@collabora.com wrote:
> > From: Daniel Almeida <daniel.almeida@collabora.com>
> > 
> > A virtual stateless device for stateless uAPI development purposes.
> > 
> > This tool's objective is to help the development and testing of userspace
> > applications that use the V4L2 stateless API to decode media.
> 
> So this is specifically for *decoding*, right? Is it the intention that the
> same driver will be able to handle stateless encoding as well in the future?
> Or would that be a new driver?
> 
> It matters primarily for the naming of the driver. If it is decoding only,
> then it should be something like visldec.

For the same reason we didn't add this to vicodec, we should probably keep
decoding and encoding separate. A stateless encoder stub would need to produce
some valid stream, and I'm not sure how we should address that. So yes, we may
want to call it visldec.

Nicolas

p.s. do we need to define the pronunciation for that? "Vi SL dec" or "Vizel Dec"?


* Re: [RFC PATCH v2] media: visl: add virtual stateless driver
  2022-06-06 21:26   ` [RFC PATCH v2] media: visl: add virtual stateless driver daniel.almeida
  2022-06-07 12:02     ` Hans Verkuil
@ 2022-08-19 20:43     ` Deborah Brouwer
  2022-10-04 23:10       ` Deborah Brouwer
  1 sibling, 1 reply; 14+ messages in thread
From: Deborah Brouwer @ 2022-08-19 20:43 UTC (permalink / raw)
  To: daniel.almeida; +Cc: hverkuil, linux-media

On Mon, Jun 06, 2022 at 06:26:22PM -0300, daniel.almeida@collabora.com wrote:
> From: Daniel Almeida <daniel.almeida@collabora.com>
> 
> A virtual stateless device for stateless uAPI development purposes.
> 
> This tool's objective is to help the development and testing of userspace
> applications that use the V4L2 stateless API to decode media.
> 
> A userspace implementation can use visl to run a decoding loop even when no
> hardware is available or when the kernel uAPI for the codec has not been
> upstreamed yet. This can reveal bugs at an early stage.
> 
> This driver can also trace the contents of the V4L2 controls submitted to it.
> It can also dump the contents of the vb2 buffers through a debugfs
> interface. This is in many ways similar to the tracing infrastructure
> available for other popular encode/decode APIs out there and can help develop
> a userspace application by using another (working) one as a reference.
> 
> Note that no actual decoding of video frames is performed by visl. The V4L2
> test pattern generator is used to write various debug information to the
> capture buffers instead.
> 
> Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> 

Tested-by: Deborah Brouwer <deborah.brouwer@collabora.com>

> ---
> Was media: vivpu: add virtual VPU driver
> 
> Changes from v1:
> 
> - Addressed review comments from v1
> - Driver was renamed to visl
> - Dropped AV1 support for now (as it's not upstream yet)
> - Added support for FWHT, MPEG2, VP8, VP9, H264
> - Added TPG support
> - Driver can now dump the controls for the codecs above through ftrace
> - Driver can now dump the vb2 bitstream buffer through a debugfs infrastructure
> 
> I ran this on a kernel with KASAN/kmemleak enabled, nothing showed up.
> 
> v4l2-compliance results:
> 
> v4l2-compliance 1.22.1, 64 bits, 64-bit time_t
> 
> Compliance test for visl device /dev/video0:
> 
> Driver Info:
>         Driver name      : visl
>         Card type        : visl
>         Bus info         : platform:visl
>         Driver version   : 5.19.0
>         Capabilities     : 0x84204000
>                 Video Memory-to-Memory Multiplanar
>                 Streaming
>                 Extended Pix Format
>                 Device Capabilities
>         Device Caps      : 0x04204000
>                 Video Memory-to-Memory Multiplanar
>                 Streaming
>                 Extended Pix Format
> Media Driver Info:
>         Driver name      : visl
>         Model            : visl
>         Serial           : 
>         Bus info         : platform:visl
>         Media version    : 5.19.0
>         Hardware revision: 0x00000000 (0)
>         Driver version   : 5.19.0
> Interface Info:
>         ID               : 0x0300000c
>         Type             : V4L Video
> Entity Info:
>         ID               : 0x00000001 (1)
>         Name             : visl-source
>         Function         : V4L2 I/O
>         Pad 0x01000002   : 0: Source
>           Link 0x02000008: to remote pad 0x1000004 of entity 'visl-proc' (Video Decoder): Data, Enabled, Immutable
> 
> Required ioctls:
>         test MC information (see 'Media Driver Info' above): OK
>         test VIDIOC_QUERYCAP: OK
>         test invalid ioctls: OK
> 
> Allow for multiple opens:
>         test second /dev/video0 open: OK
>         test VIDIOC_QUERYCAP: OK
>         test VIDIOC_G/S_PRIORITY: OK
>         test for unlimited opens: OK
> 
> Debug ioctls:
>         test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
>         test VIDIOC_LOG_STATUS: OK (Not Supported)
> 
> Input ioctls:
>         test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
>         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>         test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
>         test VIDIOC_ENUMAUDIO: OK (Not Supported)
>         test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
>         test VIDIOC_G/S_AUDIO: OK (Not Supported)
>         Inputs: 0 Audio Inputs: 0 Tuners: 0
> 
> Output ioctls:
>         test VIDIOC_G/S_MODULATOR: OK (Not Supported)
>         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
>         test VIDIOC_ENUMAUDOUT: OK (Not Supported)
>         test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
>         test VIDIOC_G/S_AUDOUT: OK (Not Supported)
>         Outputs: 0 Audio Outputs: 0 Modulators: 0
> 
> Input/Output configuration ioctls:
>         test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
>         test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
>         test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
>         test VIDIOC_G/S_EDID: OK (Not Supported)
> 
> Control ioctls:
>         test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
>         test VIDIOC_QUERYCTRL: OK
>         test VIDIOC_G/S_CTRL: OK
>         test VIDIOC_G/S/TRY_EXT_CTRLS: OK
>         test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
>         test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
>         Standard Controls: 3 Private Controls: 0
>         Standard Compound Controls: 13 Private Compound Controls: 0
> 
> Format ioctls:
>         test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
>         test VIDIOC_G/S_PARM: OK (Not Supported)
>         test VIDIOC_G_FBUF: OK (Not Supported)
>         test VIDIOC_G_FMT: OK
>         test VIDIOC_TRY_FMT: OK
>         test VIDIOC_S_FMT: OK
>         test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
>         test Cropping: OK (Not Supported)
>         test Composing: OK (Not Supported)
>         test Scaling: OK
> 
> Codec ioctls:
>         test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
>         test VIDIOC_G_ENC_INDEX: OK (Not Supported)
>         test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)
> 
> Buffer ioctls:
>         test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
>         test VIDIOC_EXPBUF: OK
>         test Requests: OK
> 
> Test input 0:
> 
> Streaming ioctls:
>         test read/write: OK (Not Supported)
>         test blocking wait: OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (no poll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (select): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test MMAP (epoll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test USERPTR (no poll): OK
>         Video Capture Multiplanar: Captured 58 buffers    
>         test USERPTR (select): OK
>         test DMABUF: Cannot test, specify --expbuf-device
> 
> Total for visl device /dev/video0: 53, Succeeded: 53, Failed: 0, Warnings: 0
> 
> ---
>  drivers/media/test-drivers/Kconfig            |   1 +
>  drivers/media/test-drivers/Makefile           |   1 +
>  drivers/media/test-drivers/visl/Kconfig       |  31 +
>  drivers/media/test-drivers/visl/Makefile      |   8 +
>  drivers/media/test-drivers/visl/visl-core.c   | 532 ++++++++++++
>  .../media/test-drivers/visl/visl-debugfs.c    | 148 ++++
>  .../media/test-drivers/visl/visl-debugfs.h    |  72 ++
>  drivers/media/test-drivers/visl/visl-dec.c    | 468 +++++++++++
>  drivers/media/test-drivers/visl/visl-dec.h    | 100 +++
>  .../media/test-drivers/visl/visl-trace-fwht.h |  66 ++
>  .../media/test-drivers/visl/visl-trace-h264.h | 349 ++++++++
>  .../test-drivers/visl/visl-trace-mpeg2.h      |  99 +++
>  .../test-drivers/visl/visl-trace-points.c     |   9 +
>  .../media/test-drivers/visl/visl-trace-vp8.h  | 156 ++++
>  .../media/test-drivers/visl/visl-trace-vp9.h  | 292 +++++++
>  drivers/media/test-drivers/visl/visl-video.c  | 776 ++++++++++++++++++
>  drivers/media/test-drivers/visl/visl-video.h  |  61 ++
>  drivers/media/test-drivers/visl/visl.h        | 178 ++++
>  18 files changed, 3347 insertions(+)
>  create mode 100644 drivers/media/test-drivers/visl/Kconfig
>  create mode 100644 drivers/media/test-drivers/visl/Makefile
>  create mode 100644 drivers/media/test-drivers/visl/visl-core.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-dec.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-dec.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-fwht.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-h264.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-mpeg2.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-points.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp8.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp9.h
>  create mode 100644 drivers/media/test-drivers/visl/visl-video.c
>  create mode 100644 drivers/media/test-drivers/visl/visl-video.h
>  create mode 100644 drivers/media/test-drivers/visl/visl.h
> 
> diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
> index 51cf27834df0..459b433e9fae 100644
> --- a/drivers/media/test-drivers/Kconfig
> +++ b/drivers/media/test-drivers/Kconfig
> @@ -20,6 +20,7 @@ config VIDEO_VIM2M
>  source "drivers/media/test-drivers/vicodec/Kconfig"
>  source "drivers/media/test-drivers/vimc/Kconfig"
>  source "drivers/media/test-drivers/vivid/Kconfig"
> +source "drivers/media/test-drivers/visl/Kconfig"
>  
>  endif #V4L_TEST_DRIVERS
>  
> diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
> index ff390b687189..740714a4584d 100644
> --- a/drivers/media/test-drivers/Makefile
> +++ b/drivers/media/test-drivers/Makefile
> @@ -12,3 +12,4 @@ obj-$(CONFIG_VIDEO_VICODEC) += vicodec/
>  obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
>  obj-$(CONFIG_VIDEO_VIMC) += vimc/
>  obj-$(CONFIG_VIDEO_VIVID) += vivid/
> +obj-$(CONFIG_VIDEO_VISL) += visl/
> diff --git a/drivers/media/test-drivers/visl/Kconfig b/drivers/media/test-drivers/visl/Kconfig
> new file mode 100644
> index 000000000000..976319c3c372
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/Kconfig
> @@ -0,0 +1,27 @@
> +# SPDX-License-Identifier: GPL-2.0+
> +config VIDEO_VISL
> +	tristate "Virtual Stateless Driver (visl)"
> +	depends on VIDEO_DEV
> +	select FONT_SUPPORT
> +	select FONT_8x16
> +	select VIDEOBUF2_VMALLOC
> +	select V4L2_MEM2MEM_DEV
> +	select MEDIA_CONTROLLER
> +	select MEDIA_CONTROLLER_REQUEST_API
> +	select VIDEO_V4L2_TPG
> +	help
> +	  A virtual stateless device for uAPI development purposes.
> +
> +	  A userspace implementation can use visl to run a decoding loop even
> +	  when no hardware is available or when the kernel uAPI for the codec
> +	  has not been upstreamed yet. This can reveal bugs at an early stage.
> +
> +	  When in doubt, say N.
> +
> +config VISL_DEBUGFS
> +	bool "Enable debugfs for visl"
> +	depends on VIDEO_VISL
> +	depends on DEBUG_FS
> +	help
> +	  Choose Y to dump the bitstream buffers through debugfs.
> +	  When in doubt, say N.
> diff --git a/drivers/media/test-drivers/visl/Makefile b/drivers/media/test-drivers/visl/Makefile
> new file mode 100644
> index 000000000000..fb4d5ae1b17f
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0+
> +visl-y := visl-core.o visl-video.o visl-dec.o visl-trace-points.o
> +
> +ifeq ($(CONFIG_VISL_DEBUGFS),y)
> +  visl-y += visl-debugfs.o
> +endif
> +
> +obj-$(CONFIG_VIDEO_VISL) += visl.o
> diff --git a/drivers/media/test-drivers/visl/visl-core.c b/drivers/media/test-drivers/visl/visl-core.c
> new file mode 100644
> index 000000000000..c59f88b72ea4
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-core.c
> @@ -0,0 +1,532 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/module.h>
> +#include <linux/platform_device.h>
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-device.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/v4l2-mem2mem.h>
> +
> +#include "visl.h"
> +#include "visl-dec.h"
> +#include "visl-debugfs.h"
> +#include "visl-video.h"
> +
> +unsigned int visl_debug;
> +module_param(visl_debug, uint, 0644);
> +MODULE_PARM_DESC(visl_debug, " activates debug info");
> +
> +unsigned int visl_transtime_ms;
> +module_param(visl_transtime_ms, uint, 0644);
> +MODULE_PARM_DESC(visl_transtime_ms, " simulated process time in milliseconds.");
> +
> +/*
> + * dprintk can be slow through serial. This lets one limit the tracing to a
> + * particular number of frames
> + */
> +int visl_dprintk_frame_start = -1;
> +module_param(visl_dprintk_frame_start, int, 0);
> +MODULE_PARM_DESC(visl_dprintk_frame_start, " a frame number to start tracing with dprintk");
> +
> +unsigned int visl_dprintk_nframes;
> +module_param(visl_dprintk_nframes, uint, 0);
> +MODULE_PARM_DESC(visl_dprintk_nframes,
> +		 " the number of frames to trace with dprintk");
> +
> +unsigned int keep_bitstream_buffers;
> +module_param(keep_bitstream_buffers, uint, 0);
> +MODULE_PARM_DESC(keep_bitstream_buffers,
> +		 " keep bitstream buffers in debugfs after streaming is stopped");
> +
> +int bitstream_trace_frame_start = -1;
> +module_param(bitstream_trace_frame_start, int, 0);
> +MODULE_PARM_DESC(bitstream_trace_frame_start,
> +		 " a frame number to start dumping the bitstream through debugfs");
> +
> +unsigned int bitstream_trace_nframes;
> +module_param(bitstream_trace_nframes, uint, 0);
> +MODULE_PARM_DESC(bitstream_trace_nframes,
> +		 " the number of frames to dump the bitstream through debugfs");
> +
> +static const struct visl_ctrl_desc visl_fwht_ctrl_descs[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_FWHT_PARAMS,
> +	},
> +};
> +
> +const struct visl_ctrls visl_fwht_ctrls = {
> +	.ctrls = visl_fwht_ctrl_descs,
> +	.num_ctrls = ARRAY_SIZE(visl_fwht_ctrl_descs)
> +};
> +
> +static const struct visl_ctrl_desc visl_mpeg2_ctrl_descs[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_MPEG2_SEQUENCE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_MPEG2_PICTURE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_MPEG2_QUANTISATION,
> +	},
> +};
> +
> +const struct visl_ctrls visl_mpeg2_ctrls = {
> +	.ctrls = visl_mpeg2_ctrl_descs,
> +	.num_ctrls = ARRAY_SIZE(visl_mpeg2_ctrl_descs),
> +};
> +
> +static const struct visl_ctrl_desc visl_vp8_ctrl_descs[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_VP8_FRAME,
> +	},
> +};
> +
> +const struct visl_ctrls visl_vp8_ctrls = {
> +	.ctrls = visl_vp8_ctrl_descs,
> +	.num_ctrls = ARRAY_SIZE(visl_vp8_ctrl_descs),
> +};
> +
> +static const struct visl_ctrl_desc visl_vp9_ctrl_descs[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_VP9_FRAME,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_VP9_COMPRESSED_HDR,
> +	},
> +};
> +
> +const struct visl_ctrls visl_vp9_ctrls = {
> +	.ctrls = visl_vp9_ctrl_descs,
> +	.num_ctrls = ARRAY_SIZE(visl_vp9_ctrl_descs),
> +};
> +
> +static const struct visl_ctrl_desc visl_h264_ctrl_descs[] = {
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_PARAMS,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_SPS,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_PPS,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_SCALING_MATRIX,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_MODE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_START_CODE,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_SLICE_PARAMS,
> +	},
> +	{
> +		.cfg.id = V4L2_CID_STATELESS_H264_PRED_WEIGHTS,
> +	},
> +};
> +
> +const struct visl_ctrls visl_h264_ctrls = {
> +	.ctrls = visl_h264_ctrl_descs,
> +	.num_ctrls = ARRAY_SIZE(visl_h264_ctrl_descs),
> +};
> +
> +struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id)
> +{
> +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> +
> +	return v4l2_ctrl_find(hdl, id);
> +}
> +
> +void *visl_find_control_data(struct visl_ctx *ctx, u32 id)
> +{
> +	struct v4l2_ctrl *ctrl;
> +
> +	ctrl = visl_find_control(ctx, id);
> +	if (ctrl)
> +		return ctrl->p_cur.p;
> +
> +	return NULL;
> +}
> +
> +u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id)
> +{
> +	struct v4l2_ctrl *ctrl;
> +
> +	ctrl = visl_find_control(ctx, id);
> +	if (ctrl)
> +		return ctrl->elems;
> +
> +	return 0;
> +}
> +
> +static void visl_device_release(struct video_device *vdev)
> +{
> +	struct visl_dev *dev = container_of(vdev, struct visl_dev, vfd);
> +
> +	v4l2_device_unregister(&dev->v4l2_dev);
> +	v4l2_m2m_release(dev->m2m_dev);
> +	media_device_cleanup(&dev->mdev);
> +	visl_debugfs_deinit(dev);
> +	kfree(dev);
> +}
> +
> +static int visl_add_ctrls(struct visl_ctx *ctx, const struct visl_ctrls *ctrls)
> +{
> +	struct visl_dev *dev = ctx->dev;
> +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> +	unsigned int i;
> +	struct v4l2_ctrl *ctrl;
> +
> +	for (i = 0; i < ctrls->num_ctrls; i++) {
> +		ctrl = v4l2_ctrl_new_custom(hdl, &ctrls->ctrls[i].cfg, NULL);
> +
> +		if (hdl->error) {
> +			v4l2_err(&dev->v4l2_dev,
> +				 "Failed to create new custom control, errno: %d\n",
> +				 hdl->error);
> +
> +			return hdl->error;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +#define VISL_CONTROLS_COUNT	ARRAY_SIZE(visl_controls)
> +
> +static int visl_init_ctrls(struct visl_ctx *ctx)
> +{
> +	struct visl_dev *dev = ctx->dev;
> +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> +	unsigned int ctrl_cnt = 0;
> +	unsigned int i;
> +	int ret;
> +
> +	for (i = 0; i < num_coded_fmts; i++)
> +		ctrl_cnt += visl_coded_fmts[i].ctrls->num_ctrls;
> +
> +	v4l2_ctrl_handler_init(hdl, ctrl_cnt);
> +	if (hdl->error) {
> +		v4l2_err(&dev->v4l2_dev,
> +			 "Failed to initialize control handler\n");
> +		return hdl->error;
> +	}
> +
> +	for (i = 0; i < num_coded_fmts; i++) {
> +		ret = visl_add_ctrls(ctx, visl_coded_fmts[i].ctrls);
> +		if (ret)
> +			goto err_free_handler;
> +	}
> +
> +	ctx->fh.ctrl_handler = hdl;
> +	v4l2_ctrl_handler_setup(hdl);
> +
> +	return 0;
> +
> +err_free_handler:
> +	v4l2_ctrl_handler_free(hdl);
> +	return ret;
> +}
> +
> +static void visl_free_ctrls(struct visl_ctx *ctx)
> +{
> +	v4l2_ctrl_handler_free(&ctx->hdl);
> +}
> +
> +static int visl_open(struct file *file)
> +{
> +	struct visl_dev *dev = video_drvdata(file);
> +	struct visl_ctx *ctx = NULL;
> +	int rc = 0;
> +
> +	if (mutex_lock_interruptible(&dev->dev_mutex))
> +		return -ERESTARTSYS;
> +	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> +	if (!ctx) {
> +		rc = -ENOMEM;
> +		goto unlock;
> +	}
> +
> +	ctx->tpg_str_buf = kmalloc(TPG_STR_BUF_SZ, GFP_KERNEL);
> +
> +	v4l2_fh_init(&ctx->fh, video_devdata(file));
> +	file->private_data = &ctx->fh;
> +	ctx->dev = dev;
> +
> +	rc = visl_init_ctrls(ctx);
> +	if (rc)
> +		goto free_ctx;
> +
> +	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &visl_queue_init);
> +
> +	mutex_init(&ctx->vb_mutex);
> +
> +	if (IS_ERR(ctx->fh.m2m_ctx)) {
> +		rc = PTR_ERR(ctx->fh.m2m_ctx);
> +		goto free_hdl;
> +	}
> +
> +	rc = visl_set_default_format(ctx);
> +	if (rc)
> +		goto free_m2m_ctx;
> +
> +	v4l2_fh_add(&ctx->fh);
> +
> +	dprintk(dev, "Created instance: %p, m2m_ctx: %p\n",
> +		ctx, ctx->fh.m2m_ctx);
> +
> +	mutex_unlock(&dev->dev_mutex);
> +	return rc;
> +
> +free_m2m_ctx:
> +	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
> +free_hdl:
> +	visl_free_ctrls(ctx);
> +	v4l2_fh_exit(&ctx->fh);
> +free_ctx:
> +	kfree(ctx->tpg_str_buf);
> +	kfree(ctx);
> +unlock:
> +	mutex_unlock(&dev->dev_mutex);
> +	return rc;
> +}
> +
> +static int visl_release(struct file *file)
> +{
> +	struct visl_dev *dev = video_drvdata(file);
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +
> +	dprintk(dev, "Releasing instance %p\n", ctx);
> +
> +	tpg_free(&ctx->tpg);
> +	v4l2_fh_del(&ctx->fh);
> +	v4l2_fh_exit(&ctx->fh);
> +	visl_free_ctrls(ctx);
> +	mutex_lock(&dev->dev_mutex);
> +	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
> +	mutex_unlock(&dev->dev_mutex);
> +
> +	if (!keep_bitstream_buffers)
> +		visl_debugfs_clear_bitstream(dev, ctx->capture_streamon_jiffies);
> +
> +	kfree(ctx->tpg_str_buf);
> +	kfree(ctx);
> +
> +	return 0;
> +}
> +
> +static const struct v4l2_file_operations visl_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= visl_open,
> +	.release	= visl_release,
> +	.poll		= v4l2_m2m_fop_poll,
> +	.unlocked_ioctl	= video_ioctl2,
> +	.mmap		= v4l2_m2m_fop_mmap,
> +};
> +
> +static const struct video_device visl_videodev = {
> +	.name		= VISL_NAME,
> +	.vfl_dir	= VFL_DIR_M2M,
> +	.fops		= &visl_fops,
> +	.ioctl_ops	= &visl_ioctl_ops,
> +	.minor		= -1,
> +	.release	= visl_device_release,
> +	.device_caps	= V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING,
> +};
> +
> +static const struct v4l2_m2m_ops visl_m2m_ops = {
> +	.device_run	= visl_device_run,
> +};
> +
> +static const struct media_device_ops visl_m2m_media_ops = {
> +	.req_validate	= visl_request_validate,
> +	.req_queue	= v4l2_m2m_request_queue,
> +};
> +
> +static int visl_probe(struct platform_device *pdev)
> +{
> +	struct visl_dev *dev;
> +	struct video_device *vfd;
> +	int ret;
> +	int rc;
> +
> +	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> +	if (!dev)
> +		return -ENOMEM;
> +
> +	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
> +	if (ret)
> +		goto error_visl_dev;
> +
> +	mutex_init(&dev->dev_mutex);
> +
> +	dev->vfd = visl_videodev;
> +	vfd = &dev->vfd;
> +	vfd->lock = &dev->dev_mutex;
> +	vfd->v4l2_dev = &dev->v4l2_dev;
> +
> +	video_set_drvdata(vfd, dev);
> +	v4l2_info(&dev->v4l2_dev,
> +		  "Device registered as /dev/video%d\n", vfd->num);
> +
> +	platform_set_drvdata(pdev, dev);
> +
> +	dev->m2m_dev = v4l2_m2m_init(&visl_m2m_ops);
> +	if (IS_ERR(dev->m2m_dev)) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
> +		ret = PTR_ERR(dev->m2m_dev);
> +		dev->m2m_dev = NULL;
> +		goto error_dev;
> +	}
> +
> +	dev->mdev.dev = &pdev->dev;
> +	strscpy(dev->mdev.model, "visl", sizeof(dev->mdev.model));
> +	strscpy(dev->mdev.bus_info, "platform:visl",
> +		sizeof(dev->mdev.bus_info));
> +	media_device_init(&dev->mdev);
> +	dev->mdev.ops = &visl_m2m_media_ops;
> +	dev->v4l2_dev.mdev = &dev->mdev;
> +
> +	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
> +		goto error_m2m;
> +	}
> +
> +	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
> +						 MEDIA_ENT_F_PROC_VIDEO_DECODER);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
> +		goto error_v4l2;
> +	}
> +
> +	ret = media_device_register(&dev->mdev);
> +	if (ret) {
> +		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
> +		goto error_m2m_mc;
> +	}
> +
> +	rc = visl_debugfs_init(dev);
> +	if (rc)
> +		dprintk(dev, "visl_debugfs_init failed: %d\n"
> +			"Continuing without debugfs support\n", rc);
> +
> +	return 0;
> +
> +error_m2m_mc:
> +	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> +error_v4l2:
> +	video_unregister_device(&dev->vfd);
> +	/* visl_device_release called by video_unregister_device to release various objects */
> +	return ret;
> +error_m2m:
> +	v4l2_m2m_release(dev->m2m_dev);
> +error_dev:
> +	v4l2_device_unregister(&dev->v4l2_dev);
> +error_visl_dev:
> +	kfree(dev);
> +
> +	return ret;
> +}
> +
> +static int visl_remove(struct platform_device *pdev)
> +{
> +	struct visl_dev *dev = platform_get_drvdata(pdev);
> +
> +	v4l2_info(&dev->v4l2_dev, "Removing " VISL_NAME);
> +
> +#ifdef CONFIG_MEDIA_CONTROLLER
> +	if (media_devnode_is_registered(dev->mdev.devnode)) {
> +		media_device_unregister(&dev->mdev);
> +		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> +	}
> +#endif
> +	video_unregister_device(&dev->vfd);
> +
> +	return 0;
> +}
> +
> +static struct platform_driver visl_pdrv = {
> +	.probe		= visl_probe,
> +	.remove		= visl_remove,
> +	.driver		= {
> +		.name	= VISL_NAME,
> +	},
> +};
> +
> +static void visl_dev_release(struct device *dev) {}
> +
> +static struct platform_device visl_pdev = {
> +	.name		= VISL_NAME,
> +	.dev.release	= visl_dev_release,
> +};
> +
> +static void __exit visl_exit(void)
> +{
> +	platform_driver_unregister(&visl_pdrv);
> +	platform_device_unregister(&visl_pdev);
> +}
> +
> +static int __init visl_init(void)
> +{
> +	int ret;
> +
> +	ret = platform_device_register(&visl_pdev);
> +	if (ret)
> +		return ret;
> +
> +	ret = platform_driver_register(&visl_pdrv);
> +	if (ret)
> +		platform_device_unregister(&visl_pdev);
> +
> +	return ret;
> +}
> +
> +MODULE_DESCRIPTION("Virtual stateless device");
> +MODULE_AUTHOR("Daniel Almeida <daniel.almeida@collabora.com>");
> +MODULE_LICENSE("GPL v2");
> +
> +module_init(visl_init);
> +module_exit(visl_exit);
> diff --git a/drivers/media/test-drivers/visl/visl-debugfs.c b/drivers/media/test-drivers/visl/visl-debugfs.c
> new file mode 100644
> index 000000000000..6fbfd55d6c53
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-debugfs.c
> @@ -0,0 +1,148 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/list.h>
> +#include <linux/mutex.h>
> +#include <media/v4l2-mem2mem.h>
> +
> +#include "visl-debugfs.h"
> +
> +int visl_debugfs_init(struct visl_dev *dev)
> +{
> +	dev->debugfs_root = debugfs_create_dir("visl", NULL);
> +	INIT_LIST_HEAD(&dev->bitstream_blobs);
> +	mutex_init(&dev->bitstream_lock);
> +
> +	if (IS_ERR(dev->debugfs_root))
> +		return PTR_ERR(dev->debugfs_root);
> +
> +	return visl_debugfs_bitstream_init(dev);
> +}
> +
> +int visl_debugfs_bitstream_init(struct visl_dev *dev)
> +{
> +	dev->bitstream_debugfs = debugfs_create_dir("bitstream",
> +						    dev->debugfs_root);
> +	if (IS_ERR(dev->bitstream_debugfs))
> +		return PTR_ERR(dev->bitstream_debugfs);
> +
> +	return 0;
> +}
> +
> +void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run)
> +{
> +	u8 *vaddr = vb2_plane_vaddr(&run->src->vb2_buf, 0);
> +	struct visl_blob *blob;
> +	size_t data_sz = vb2_get_plane_payload(&run->dst->vb2_buf, 0);
> +	struct dentry *dentry;
> +	char name[32];
> +
> +	blob  = kzalloc(sizeof(*blob), GFP_KERNEL);
> +	if (!blob)
> +		return;
> +
> +	blob->blob.data = vzalloc(data_sz);
> +	if (!blob->blob.data)
> +		goto err_vmalloc;
> +
> +	blob->blob.size = data_sz;
> +	snprintf(name, sizeof(name), "%llu_bitstream%d",
> +		 ctx->capture_streamon_jiffies, run->src->sequence);
> +
> +	memcpy(blob->blob.data, vaddr, data_sz);
> +
> +	dentry = debugfs_create_blob(name, 0444, ctx->dev->bitstream_debugfs,
> +				     &blob->blob);
> +	if (IS_ERR(dentry))
> +		goto err_debugfs;
> +
> +	blob->dentry = dentry;
> +	blob->streamon_jiffies = ctx->capture_streamon_jiffies;
> +
> +	mutex_lock(&ctx->dev->bitstream_lock);
> +	list_add_tail(&blob->list, &ctx->dev->bitstream_blobs);
> +	mutex_unlock(&ctx->dev->bitstream_lock);
> +
> +	return;
> +
> +err_debugfs:
> +	vfree(blob->blob.data);
> +err_vmalloc:
> +	kfree(blob);
> +}
> +
> +void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies)
> +{
> +	struct visl_blob *blob;
> +	struct visl_blob *tmp;
> +
> +	mutex_lock(&dev->bitstream_lock);
> +	if (list_empty(&dev->bitstream_blobs))
> +		goto unlock;
> +
> +	list_for_each_entry_safe(blob, tmp, &dev->bitstream_blobs, list) {
> +		if (streamon_jiffies &&
> +		    streamon_jiffies != blob->streamon_jiffies)
> +			continue;
> +
> +		list_del(&blob->list);
> +		debugfs_remove(blob->dentry);
> +		vfree(blob->blob.data);
> +		kfree(blob);
> +	}
> +
> +unlock:
> +	mutex_unlock(&dev->bitstream_lock);
> +}
> +
> +void visl_debugfs_bitstream_deinit(struct visl_dev *dev)
> +{
> +	visl_debugfs_clear_bitstream(dev, 0);
> +	debugfs_remove_recursive(dev->bitstream_debugfs);
> +	dev->bitstream_debugfs = NULL;
> +}
> +
> +void visl_debugfs_deinit(struct visl_dev *dev)
> +{
> +	visl_debugfs_bitstream_deinit(dev);
> +	debugfs_remove_recursive(dev->debugfs_root);
> +	dev->debugfs_root = NULL;
> +}
> diff --git a/drivers/media/test-drivers/visl/visl-debugfs.h b/drivers/media/test-drivers/visl/visl-debugfs.h
> new file mode 100644
> index 000000000000..e14e7d72b150
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-debugfs.h
> @@ -0,0 +1,72 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#include "visl.h"
> +#include "visl-dec.h"
> +
> +#ifdef CONFIG_VISL_DEBUGFS
> +
> +int visl_debugfs_init(struct visl_dev *dev);
> +int visl_debugfs_bitstream_init(struct visl_dev *dev);
> +void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run);
> +void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies);
> +void visl_debugfs_bitstream_deinit(struct visl_dev *dev);
> +void visl_debugfs_deinit(struct visl_dev *dev);
> +
> +#else
> +
> +static inline int visl_debugfs_init(struct visl_dev *dev)
> +{
> +	return 0;
> +}
> +
> +static inline int visl_debugfs_bitstream_init(struct visl_dev *dev)
> +{
> +	return 0;
> +}
> +
> +static inline void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run) {}
> +static inline void
> +visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies) {}
> +static inline void visl_debugfs_bitstream_deinit(struct visl_dev *dev) {}
> +static inline void visl_debugfs_deinit(struct visl_dev *dev) {}
> +
> +#endif
> +
> diff --git a/drivers/media/test-drivers/visl/visl-dec.c b/drivers/media/test-drivers/visl/visl-dec.c
> new file mode 100644
> index 000000000000..3c68d97f87d1
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-dec.c
> @@ -0,0 +1,468 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#include "visl.h"
> +#include "visl-debugfs.h"
> +#include "visl-dec.h"
> +#include "visl-trace-fwht.h"
> +#include "visl-trace-mpeg2.h"
> +#include "visl-trace-vp8.h"
> +#include "visl-trace-vp9.h"
> +#include "visl-trace-h264.h"
> +
> +#include <linux/delay.h>
> +#include <linux/workqueue.h>
> +#include <media/v4l2-mem2mem.h>
> +#include <media/tpg/v4l2-tpg.h>
> +
> +static void *plane_vaddr(struct tpg_data *tpg, struct vb2_buffer *buf,
> +			 u32 p, u32 bpl[TPG_MAX_PLANES], u32 h)
> +{
> +	u32 i;
> +	void *vbuf;
> +
> +	if (p == 0 || tpg_g_buffers(tpg) > 1)
> +		return vb2_plane_vaddr(buf, p);
> +	vbuf = vb2_plane_vaddr(buf, 0);
> +	for (i = 0; i < p; i++)
> +		vbuf += bpl[i] * h / tpg->vdownsampling[i];
> +	return vbuf;
> +}
> +
> +static void visl_get_ref_frames(struct visl_ctx *ctx, u8 *buf,
> +				__kernel_size_t buflen, struct visl_run *run)
> +{
> +	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
> +	char header[] = "Reference frames:\n";
> +	u32 i;
> +	u32 len;
> +
> +	len = scnprintf(buf, buflen, "%s", header);
> +	buf += len;
> +	buflen -= len;
> +
> +	switch (ctx->current_codec) {
> +	case VISL_CODEC_NONE:
> +		break;
> +
> +	case VISL_CODEC_FWHT: {
> +		scnprintf(buf, buflen, "backward_ref_ts: %llu, vb2_idx: %d\n",
> +			  run->fwht.params->backward_ref_ts,
> +			  vb2_find_timestamp(cap_q, run->fwht.params->backward_ref_ts, 0));
> +		break;
> +	}
> +
> +	case VISL_CODEC_MPEG2: {
> +		scnprintf(buf, buflen,
> +			  "backward_ref_ts: %llu, vb2_idx: %d\n"
> +			  "forward_ref_ts: %llu, vb2_idx: %d\n",
> +			  run->mpeg2.pic->backward_ref_ts,
> +			  vb2_find_timestamp(cap_q, run->mpeg2.pic->backward_ref_ts, 0),
> +			  run->mpeg2.pic->forward_ref_ts,
> +			  vb2_find_timestamp(cap_q, run->mpeg2.pic->forward_ref_ts, 0));
> +		break;
> +	}
> +
> +	case VISL_CODEC_VP8: {
> +		scnprintf(buf, buflen,
> +			  "last_ref_ts: %llu, vb2_idx: %d\n"
> +			  "golden_ref_ts: %llu, vb2_idx: %d\n"
> +			  "alt_ref_ts: %llu, vb2_idx: %d\n",
> +			  run->vp8.frame->last_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp8.frame->last_frame_ts, 0),
> +			  run->vp8.frame->golden_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp8.frame->golden_frame_ts, 0),
> +			  run->vp8.frame->alt_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp8.frame->alt_frame_ts, 0));
> +		break;
> +	}
> +
> +	case VISL_CODEC_VP9: {
> +		scnprintf(buf, buflen,
> +			  "last_ref_ts: %llu, vb2_idx: %d\n"
> +			  "golden_ref_ts: %llu, vb2_idx: %d\n"
> +			  "alt_ref_ts: %llu, vb2_idx: %d\n",
> +			  run->vp9.frame->last_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp9.frame->last_frame_ts, 0),
> +			  run->vp9.frame->golden_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp9.frame->golden_frame_ts, 0),
> +			  run->vp9.frame->alt_frame_ts,
> +			  vb2_find_timestamp(cap_q, run->vp9.frame->alt_frame_ts, 0));
> +		break;
> +	}
> +	case VISL_CODEC_H264: {
> +		char entry[] = "dpb[%d]: %llu, vb2_index: %d\n";
> +
> +		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++) {
> +			len = scnprintf(buf, buflen, entry, i,
> +					run->h264.dpram->dpb[i].reference_ts,
> +					vb2_find_timestamp(cap_q,
> +							   run->h264.dpram->dpb[i].reference_ts,
> +							   0));
> +			buf += len;
> +			buflen -= len;
> +		}
> +
> +		break;
> +	}
> +	}
> +}
> +
> +static char *visl_get_vb2_state(enum vb2_buffer_state state)
> +{
> +	switch (state) {
> +	case VB2_BUF_STATE_DEQUEUED:
> +		return "Dequeued";
> +	case VB2_BUF_STATE_IN_REQUEST:
> +		return "In request";
> +	case VB2_BUF_STATE_PREPARING:
> +		return "Preparing";
> +	case VB2_BUF_STATE_QUEUED:
> +		return "Queued";
> +	case VB2_BUF_STATE_ACTIVE:
> +		return "Active";
> +	case VB2_BUF_STATE_DONE:
> +		return "Done";
> +	case VB2_BUF_STATE_ERROR:
> +		return "Error";
> +	default:
> +		return "";
> +	}
> +}
> +
> +static int visl_fill_bytesused(struct vb2_v4l2_buffer *v4l2_vb2_buf, char *buf, size_t bufsz)
> +{
> +	int len = 0;
> +	u32 i;
> +
> +	for (i = 0; i < v4l2_vb2_buf->vb2_buf.num_planes; i++)
> +		len += scnprintf(buf + len, bufsz - len,
> +				"bytesused[%u]: %u length[%u]: %u data_offset[%u]: %u",
> +				i, v4l2_vb2_buf->planes[i].bytesused,
> +				i, v4l2_vb2_buf->planes[i].length,
> +				i, v4l2_vb2_buf->planes[i].data_offset);
> +
> +	return len;
> +}
> +
> +static void visl_tpg_fill_sequence(struct visl_ctx *ctx,
> +				   struct visl_run *run, char buf[], size_t bufsz)
> +{
> +	u32 stream_ms;
> +
> +	stream_ms = jiffies_to_msecs(get_jiffies_64() - ctx->capture_streamon_jiffies);
> +
> +	scnprintf(buf, bufsz,
> +		  "stream time: %02u:%02u:%02u:%03u sequence:%u timestamp:%llu field:%s",
> +		  (stream_ms / (60 * 60 * 1000)) % 24,
> +		  (stream_ms / (60 * 1000)) % 60,
> +		  (stream_ms / 1000) % 60,
> +		  stream_ms % 1000,
> +		  run->dst->sequence,
> +		  run->dst->vb2_buf.timestamp,
> +		  (run->dst->field == V4L2_FIELD_TOP) ? " top" :
> +		  (run->dst->field == V4L2_FIELD_BOTTOM) ?
> +		  " bottom" : "none");
> +}
> +
> +static void visl_tpg_fill(struct visl_ctx *ctx, struct visl_run *run)
> +{
> +	u8 *basep[TPG_MAX_PLANES][2];
> +	char *buf = ctx->tpg_str_buf;
> +	char *tmp = buf;
> +	char *line_str;
> +	u32 line = 1;
> +	const u32 line_height = 16;
> +	u32 len;
> +	struct vb2_queue *out_q = &ctx->fh.m2m_ctx->out_q_ctx.q;
> +	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
> +	struct v4l2_pix_format_mplane *coded_fmt = &ctx->coded_fmt.fmt.pix_mp;
> +	struct v4l2_pix_format_mplane *decoded_fmt = &ctx->decoded_fmt.fmt.pix_mp;
> +	u32 p;
> +	u32 i;
> +
> +	for (p = 0; p < tpg_g_planes(&ctx->tpg); p++) {
> +		void *vbuf = plane_vaddr(&ctx->tpg,
> +					 &run->dst->vb2_buf, p,
> +					 ctx->tpg.bytesperline,
> +					 ctx->tpg.buf_height);
> +
> +		tpg_calc_text_basep(&ctx->tpg, basep, p, vbuf);
> +		tpg_fill_plane_buffer(&ctx->tpg, 0, p, vbuf);
> +	}
> +
> +	visl_tpg_fill_sequence(ctx, run, buf, TPG_STR_BUF_SZ);
> +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> +	line++;
> +
> +	visl_get_ref_frames(ctx, buf, TPG_STR_BUF_SZ, run);
> +
> +	while ((line_str = strsep(&tmp, "\n")) && strlen(line_str)) {
> +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, line_str);
> +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", line_str);
> +	}
> +
> +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> +	line++;
> +
> +	scnprintf(buf,
> +		  TPG_STR_BUF_SZ,
> +		  "OUTPUT pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
> +		  coded_fmt->pixelformat & 0xff,
> +		  (coded_fmt->pixelformat >> 8) & 0xff,
> +		  (coded_fmt->pixelformat >> 16) & 0xff,
> +		  (coded_fmt->pixelformat >> 24) & 0xff,
> +		  coded_fmt->width,
> +		  coded_fmt->height,
> +		  coded_fmt->num_planes);
> +
> +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +
> +	for (i = 0; i < coded_fmt->num_planes; i++) {
> +		scnprintf(buf,
> +			  TPG_STR_BUF_SZ,
> +			  "plane[%d]: bytesperline: %d, sizeimage: %d",
> +			  i,
> +			  coded_fmt->plane_fmt[i].bytesperline,
> +			  coded_fmt->plane_fmt[i].sizeimage);
> +
> +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +	}
> +
> +	line++;
> +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> +	scnprintf(buf, TPG_STR_BUF_SZ, "Output queue status:");
> +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +
> +	len = 0;
> +	for (i = 0; i < out_q->num_buffers; i++) {
> +		char entry[] = "index: %u, state: %s, request_fd: %d, ";
> +		u32 old_len = len;
> +		char *q_status = visl_get_vb2_state(out_q->bufs[i]->state);
> +
> +		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
> +				 entry, i, q_status,
> +				 to_vb2_v4l2_buffer(out_q->bufs[i])->request_fd);
> +
> +		len += visl_fill_bytesused(to_vb2_v4l2_buffer(out_q->bufs[i]),
> +					   &buf[len],
> +					   TPG_STR_BUF_SZ - len);
> +
> +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
> +		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
> +	}
> +
> +	line++;
> +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> +
> +	scnprintf(buf,
> +		  TPG_STR_BUF_SZ,
> +		  "CAPTURE pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
> +		  decoded_fmt->pixelformat & 0xff,
> +		  (decoded_fmt->pixelformat >> 8) & 0xff,
> +		  (decoded_fmt->pixelformat >> 16) & 0xff,
> +		  (decoded_fmt->pixelformat >> 24) & 0xff,
> +		  decoded_fmt->width,
> +		  decoded_fmt->height,
> +		  decoded_fmt->num_planes);
> +
> +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +
> +	for (i = 0; i < decoded_fmt->num_planes; i++) {
> +		scnprintf(buf,
> +			  TPG_STR_BUF_SZ,
> +			  "plane[%d]: bytesperline: %d, sizeimage: %d",
> +			  i,
> +			  decoded_fmt->plane_fmt[i].bytesperline,
> +			  decoded_fmt->plane_fmt[i].sizeimage);
> +
> +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +	}
> +
> +	line++;
> +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> +	scnprintf(buf, TPG_STR_BUF_SZ, "Capture queue status:");
> +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> +
> +	len = 0;
> +	for (i = 0; i < cap_q->num_buffers; i++) {
> +		u32 old_len = len;
> +		char *q_status = visl_get_vb2_state(cap_q->bufs[i]->state);
> +
> +		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
> +				 "index: %u, status: %s, timestamp: %llu, is_held: %d",
> +				 cap_q->bufs[i]->index, q_status,
> +				 cap_q->bufs[i]->timestamp,
> +				 to_vb2_v4l2_buffer(cap_q->bufs[i])->is_held);
> +
> +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
> +		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
> +	}
> +}
> +
> +static void visl_trace_ctrls(struct visl_ctx *ctx, struct visl_run *run)
> +{
> +	int i;
> +
> +	switch (ctx->current_codec) {
> +	default:
> +	case VISL_CODEC_NONE:
> +		break;
> +	case VISL_CODEC_FWHT:
> +		trace_v4l2_ctrl_fwht_params(run->fwht.params);
> +		break;
> +	case VISL_CODEC_MPEG2:
> +		trace_v4l2_ctrl_mpeg2_sequence(run->mpeg2.seq);
> +		trace_v4l2_ctrl_mpeg2_picture(run->mpeg2.pic);
> +		trace_v4l2_ctrl_mpeg2_quantisation(run->mpeg2.quant);
> +		break;
> +	case VISL_CODEC_VP8:
> +		trace_v4l2_ctrl_vp8_frame(run->vp8.frame);
> +		trace_v4l2_ctrl_vp8_entropy(run->vp8.frame);
> +		break;
> +	case VISL_CODEC_VP9:
> +		trace_v4l2_ctrl_vp9_frame(run->vp9.frame);
> +		trace_v4l2_ctrl_vp9_compressed_hdr(run->vp9.probs);
> +		trace_v4l2_ctrl_vp9_compressed_coeff(run->vp9.probs);
> +		trace_v4l2_vp9_mv_probs(&run->vp9.probs->mv);
> +		break;
> +	case VISL_CODEC_H264:
> +		trace_v4l2_ctrl_h264_sps(run->h264.sps);
> +		trace_v4l2_ctrl_h264_pps(run->h264.pps);
> +		trace_v4l2_ctrl_h264_scaling_matrix(run->h264.sm);
> +		trace_v4l2_ctrl_h264_slice_params(run->h264.spram);
> +
> +		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list0); i++)
> +			trace_v4l2_h264_ref_pic_list0(&run->h264.spram->ref_pic_list0[i], i);
> +		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list1); i++)
> +			trace_v4l2_h264_ref_pic_list1(&run->h264.spram->ref_pic_list1[i], i);
> +
> +		trace_v4l2_ctrl_h264_decode_params(run->h264.dpram);
> +
> +		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++)
> +			trace_v4l2_h264_dpb_entry(&run->h264.dpram->dpb[i], i);
> +		break;
> +	}
> +}
> +
> +void visl_device_run(void *priv)
> +{
> +	struct visl_ctx *ctx = priv;
> +	struct visl_run run = {};
> +	struct media_request *src_req;
> +
> +	run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> +	run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
> +
> +	/* Apply request(s) controls if needed. */
> +	src_req = run.src->vb2_buf.req_obj.req;
> +
> +	if (src_req)
> +		v4l2_ctrl_request_setup(src_req, &ctx->hdl);
> +
> +	v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
> +	run.dst->sequence = ctx->q_data[V4L2_M2M_DST].sequence++;
> +	run.src->sequence = ctx->q_data[V4L2_M2M_SRC].sequence++;
> +	run.dst->field = ctx->decoded_fmt.fmt.pix.field;
> +
> +	switch (ctx->current_codec) {
> +	default:
> +	case VISL_CODEC_NONE:
> +		break;
> +	case VISL_CODEC_FWHT:
> +		run.fwht.params = visl_find_control_data(ctx, V4L2_CID_STATELESS_FWHT_PARAMS);
> +		break;
> +	case VISL_CODEC_MPEG2:
> +		run.mpeg2.seq = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_SEQUENCE);
> +		run.mpeg2.pic = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_PICTURE);
> +		run.mpeg2.quant = visl_find_control_data(ctx,
> +							 V4L2_CID_STATELESS_MPEG2_QUANTISATION);
> +		break;
> +	case VISL_CODEC_VP8:
> +		run.vp8.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP8_FRAME);
> +		break;
> +	case VISL_CODEC_VP9:
> +		run.vp9.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_FRAME);
> +		run.vp9.probs = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_COMPRESSED_HDR);
> +		break;
> +	case VISL_CODEC_H264:
> +		run.h264.sps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SPS);
> +		run.h264.pps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PPS);
> +		run.h264.sm = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SCALING_MATRIX);
> +		run.h264.spram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SLICE_PARAMS);
> +		run.h264.dpram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_DECODE_PARAMS);
> +		run.h264.pwht = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PRED_WEIGHTS);
> +		break;
> +	}
> +
> +	frame_dprintk(ctx->dev, run.dst->sequence,
> +		      "Got OUTPUT buffer sequence %d, timestamp %llu\n",
> +		      run.src->sequence, run.src->vb2_buf.timestamp);
> +
> +	frame_dprintk(ctx->dev, run.dst->sequence,
> +		      "Got CAPTURE buffer sequence %d, timestamp %llu\n",
> +		      run.dst->sequence, run.dst->vb2_buf.timestamp);
> +
> +	visl_tpg_fill(ctx, &run);
> +	visl_trace_ctrls(ctx, &run);
> +
> +	if (bitstream_trace_frame_start > -1 &&
> +	    run.dst->sequence >= bitstream_trace_frame_start &&
> +	    run.dst->sequence < bitstream_trace_frame_start + bitstream_trace_nframes)
> +		visl_trace_bitstream(ctx, &run);
> +
> +	/* Complete request(s) controls if needed. */
> +	if (src_req)
> +		v4l2_ctrl_request_complete(src_req, &ctx->hdl);
> +
> +	if (visl_transtime_ms)
> +		usleep_range(visl_transtime_ms * 1000, 2 * visl_transtime_ms * 1000);
> +
> +	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev,
> +					 ctx->fh.m2m_ctx, VB2_BUF_STATE_DONE);
> +}
> diff --git a/drivers/media/test-drivers/visl/visl-dec.h b/drivers/media/test-drivers/visl/visl-dec.h
> new file mode 100644
> index 000000000000..56a550a8f747
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-dec.h
> @@ -0,0 +1,100 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#ifndef _VISL_DEC_H_
> +#define _VISL_DEC_H_
> +
> +#include "visl.h"
> +
> +struct visl_av1_run {
> +	const struct v4l2_ctrl_av1_sequence *sequence;
> +	const struct v4l2_ctrl_av1_frame_header *frame_header;
> +	const struct v4l2_ctrl_av1_tile_group *tile_group;
> +	const struct v4l2_ctrl_av1_tile_group_entry *tg_entries;
> +	const struct v4l2_ctrl_av1_film_grain *film_grain;
> +};
> +
> +struct visl_fwht_run {
> +	const struct v4l2_ctrl_fwht_params *params;
> +};
> +
> +struct visl_mpeg2_run {
> +	const struct v4l2_ctrl_mpeg2_sequence *seq;
> +	const struct v4l2_ctrl_mpeg2_picture *pic;
> +	const struct v4l2_ctrl_mpeg2_quantisation *quant;
> +};
> +
> +struct visl_vp8_run {
> +	const struct v4l2_ctrl_vp8_frame *frame;
> +};
> +
> +struct visl_vp9_run {
> +	const struct v4l2_ctrl_vp9_frame *frame;
> +	const struct v4l2_ctrl_vp9_compressed_hdr *probs;
> +};
> +
> +struct visl_h264_run {
> +	const struct v4l2_ctrl_h264_sps *sps;
> +	const struct v4l2_ctrl_h264_pps *pps;
> +	const struct v4l2_ctrl_h264_scaling_matrix *sm;
> +	const struct v4l2_ctrl_h264_slice_params *spram;
> +	const struct v4l2_ctrl_h264_decode_params *dpram;
> +	const struct v4l2_ctrl_h264_pred_weights *pwht;
> +};
> +
> +struct visl_run {
> +	struct vb2_v4l2_buffer	*src;
> +	struct vb2_v4l2_buffer	*dst;
> +
> +	union {
> +		struct visl_fwht_run	fwht;
> +		struct visl_mpeg2_run	mpeg2;
> +		struct visl_vp8_run	vp8;
> +		struct visl_vp9_run	vp9;
> +		struct visl_h264_run	h264;
> +	};
> +};
> +
> +int visl_dec_start(struct visl_ctx *ctx);
> +int visl_dec_stop(struct visl_ctx *ctx);
> +int visl_job_ready(void *priv);
> +void visl_device_run(void *priv);
> +
> +#endif /* _VISL_DEC_H_ */
> diff --git a/drivers/media/test-drivers/visl/visl-trace-fwht.h b/drivers/media/test-drivers/visl/visl-trace-fwht.h
> new file mode 100644
> index 000000000000..76034449e5b7
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-fwht.h
> @@ -0,0 +1,66 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#if !defined(_VISL_TRACE_FWHT_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _VISL_TRACE_FWHT_H_
> +
> +#include <linux/tracepoint.h>
> +#include "visl.h"
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM visl_fwht_controls
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_fwht_params_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
> +	TP_ARGS(p),
> +	TP_STRUCT__entry(
> +			 __field(u64, backward_ref_ts)
> +			 __field(u32, version)
> +			 __field(u32, width)
> +			 __field(u32, height)
> +			 __field(u32, flags)
> +			 __field(u32, colorspace)
> +			 __field(u32, xfer_func)
> +			 __field(u32, ycbcr_enc)
> +			 __field(u32, quantization)
> +			 ),
> +	TP_fast_assign(
> +		       __entry->backward_ref_ts = p->backward_ref_ts;
> +		       __entry->version = p->version;
> +		       __entry->width = p->width;
> +		       __entry->height = p->height;
> +		       __entry->flags = p->flags;
> +		       __entry->colorspace = p->colorspace;
> +		       __entry->xfer_func = p->xfer_func;
> +		       __entry->ycbcr_enc = p->ycbcr_enc;
> +		       __entry->quantization = p->quantization;
> +		       ),
> +	TP_printk("backward_ref_ts %llu version %u width %u height %u flags %s colorspace %u xfer_func %u ycbcr_enc %u quantization %u",
> +		  __entry->backward_ref_ts, __entry->version, __entry->width, __entry->height,
> +		  __print_flags(__entry->flags, "|",
> +		  {V4L2_FWHT_FL_IS_INTERLACED, "IS_INTERLACED"},
> +		  {V4L2_FWHT_FL_IS_BOTTOM_FIRST, "IS_BOTTOM_FIRST"},
> +		  {V4L2_FWHT_FL_IS_ALTERNATE, "IS_ALTERNATE"},
> +		  {V4L2_FWHT_FL_IS_BOTTOM_FIELD, "IS_BOTTOM_FIELD"},
> +		  {V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED, "LUMA_IS_UNCOMPRESSED"},
> +		  {V4L2_FWHT_FL_CB_IS_UNCOMPRESSED, "CB_IS_UNCOMPRESSED"},
> +		  {V4L2_FWHT_FL_CR_IS_UNCOMPRESSED, "CR_IS_UNCOMPRESSED"},
> +		  {V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED, "ALPHA_IS_UNCOMPRESSED"},
> +		  {V4L2_FWHT_FL_I_FRAME, "I_FRAME"},
> +		  {V4L2_FWHT_FL_PIXENC_HSV, "PIXENC_HSV"},
> +		  {V4L2_FWHT_FL_PIXENC_RGB, "PIXENC_RGB"},
> +		  {V4L2_FWHT_FL_PIXENC_YUV, "PIXENC_YUV"}),
> +		  __entry->colorspace, __entry->xfer_func, __entry->ycbcr_enc,
> +		  __entry->quantization)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_fwht_params_tmpl, v4l2_ctrl_fwht_params,
> +	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
> +	TP_ARGS(p)
> +);
> +
> +#endif
> +
> +#undef TRACE_INCLUDE_PATH
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> +#define TRACE_INCLUDE_FILE visl-trace-fwht
> +#include <trace/define_trace.h>
> diff --git a/drivers/media/test-drivers/visl/visl-trace-h264.h b/drivers/media/test-drivers/visl/visl-trace-h264.h
> new file mode 100644
> index 000000000000..0026a0dd5ce9
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-h264.h
> @@ -0,0 +1,349 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#if !defined(_VISL_TRACE_H264_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _VISL_TRACE_H264_H_
> +
> +#include <linux/tracepoint.h>
> +#include "visl.h"
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM visl_h264_controls
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_sps_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
> +	TP_ARGS(s),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_sps, s)),
> +	TP_fast_assign(__entry->s = *s),
> +	TP_printk("\nprofile_idc %u\n"
> +		  "constraint_set_flags %s\n"
> +		  "level_idc %u\n"
> +		  "seq_parameter_set_id %u\n"
> +		  "chroma_format_idc %u\n"
> +		  "bit_depth_luma_minus8 %u\n"
> +		  "bit_depth_chroma_minus8 %u\n"
> +		  "log2_max_frame_num_minus4 %u\n"
> +		  "pic_order_cnt_type %u\n"
> +		  "log2_max_pic_order_cnt_lsb_minus4 %u\n"
> +		  "max_num_ref_frames %u\n"
> +		  "num_ref_frames_in_pic_order_cnt_cycle %u\n"
> +		  "offset_for_ref_frame %s\n"
> +		  "offset_for_non_ref_pic %d\n"
> +		  "offset_for_top_to_bottom_field %d\n"
> +		  "pic_width_in_mbs_minus1 %u\n"
> +		  "pic_height_in_map_units_minus1 %u\n"
> +		  "flags %s",
> +		  __entry->s.profile_idc,
> +		  __print_flags(__entry->s.constraint_set_flags, "|",
> +		  {V4L2_H264_SPS_CONSTRAINT_SET0_FLAG, "CONSTRAINT_SET0_FLAG"},
> +		  {V4L2_H264_SPS_CONSTRAINT_SET1_FLAG, "CONSTRAINT_SET1_FLAG"},
> +		  {V4L2_H264_SPS_CONSTRAINT_SET2_FLAG, "CONSTRAINT_SET2_FLAG"},
> +		  {V4L2_H264_SPS_CONSTRAINT_SET3_FLAG, "CONSTRAINT_SET3_FLAG"},
> +		  {V4L2_H264_SPS_CONSTRAINT_SET4_FLAG, "CONSTRAINT_SET4_FLAG"},
> +		  {V4L2_H264_SPS_CONSTRAINT_SET5_FLAG, "CONSTRAINT_SET5_FLAG"}),
> +		  __entry->s.level_idc,
> +		  __entry->s.seq_parameter_set_id,
> +		  __entry->s.chroma_format_idc,
> +		  __entry->s.bit_depth_luma_minus8,
> +		  __entry->s.bit_depth_chroma_minus8,
> +		  __entry->s.log2_max_frame_num_minus4,
> +		  __entry->s.pic_order_cnt_type,
> +		  __entry->s.log2_max_pic_order_cnt_lsb_minus4,
> +		  __entry->s.max_num_ref_frames,
> +		  __entry->s.num_ref_frames_in_pic_order_cnt_cycle,
> +		  __print_array(__entry->s.offset_for_ref_frame,
> +		  		ARRAY_SIZE(__entry->s.offset_for_ref_frame),
> +		  		sizeof(__entry->s.offset_for_ref_frame[0])),
> +		  __entry->s.offset_for_non_ref_pic,
> +		  __entry->s.offset_for_top_to_bottom_field,
> +		  __entry->s.pic_width_in_mbs_minus1,
> +		  __entry->s.pic_height_in_map_units_minus1,
> +		  __print_flags(__entry->s.flags, "|",
> +		  {V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE, "SEPARATE_COLOUR_PLANE"},
> +		  {V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS, "QPPRIME_Y_ZERO_TRANSFORM_BYPASS"},
> +		  {V4L2_H264_SPS_FLAG_DELTA_PIC_ORDER_ALWAYS_ZERO, "DELTA_PIC_ORDER_ALWAYS_ZERO"},
> +		  {V4L2_H264_SPS_FLAG_GAPS_IN_FRAME_NUM_VALUE_ALLOWED, "GAPS_IN_FRAME_NUM_VALUE_ALLOWED"},
> +		  {V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY, "FRAME_MBS_ONLY"},
> +		  {V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD, "MB_ADAPTIVE_FRAME_FIELD"},
> +		  {V4L2_H264_SPS_FLAG_DIRECT_8X8_INFERENCE, "DIRECT_8X8_INFERENCE"}
> +		  ))
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pps_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
> +	TP_ARGS(p),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pps, p)),
> +	TP_fast_assign(__entry->p = *p),
> +	TP_printk("\npic_parameter_set_id %u\n"
> +		  "seq_parameter_set_id %u\n"
> +		  "num_slice_groups_minus1 %u\n"
> +		  "num_ref_idx_l0_default_active_minus1 %u\n"
> +		  "num_ref_idx_l1_default_active_minus1 %u\n"
> +		  "weighted_bipred_idc %u\n"
> +		  "pic_init_qp_minus26 %d\n"
> +		  "pic_init_qs_minus26 %d\n"
> +		  "chroma_qp_index_offset %d\n"
> +		  "second_chroma_qp_index_offset %d\n"
> +		  "flags %s",
> +		  __entry->p.pic_parameter_set_id,
> +		  __entry->p.seq_parameter_set_id,
> +		  __entry->p.num_slice_groups_minus1,
> +		  __entry->p.num_ref_idx_l0_default_active_minus1,
> +		  __entry->p.num_ref_idx_l1_default_active_minus1,
> +		  __entry->p.weighted_bipred_idc,
> +		  __entry->p.pic_init_qp_minus26,
> +		  __entry->p.pic_init_qs_minus26,
> +		  __entry->p.chroma_qp_index_offset,
> +		  __entry->p.second_chroma_qp_index_offset,
> +		  __print_flags(__entry->p.flags, "|",
> +		  {V4L2_H264_PPS_FLAG_ENTROPY_CODING_MODE, "ENTROPY_CODING_MODE"},
> +		  {V4L2_H264_PPS_FLAG_BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT, "BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT"},
> +		  {V4L2_H264_PPS_FLAG_WEIGHTED_PRED, "WEIGHTED_PRED"},
> +		  {V4L2_H264_PPS_FLAG_DEBLOCKING_FILTER_CONTROL_PRESENT, "DEBLOCKING_FILTER_CONTROL_PRESENT"},
> +		  {V4L2_H264_PPS_FLAG_CONSTRAINED_INTRA_PRED, "CONSTRAINED_INTRA_PRED"},
> +		  {V4L2_H264_PPS_FLAG_REDUNDANT_PIC_CNT_PRESENT, "REDUNDANT_PIC_CNT_PRESENT"},
> +		  {V4L2_H264_PPS_FLAG_TRANSFORM_8X8_MODE, "TRANSFORM_8X8_MODE"},
> +		  {V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT, "SCALING_MATRIX_PRESENT"}
> +		  ))
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_scaling_matrix_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
> +	TP_ARGS(s),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_scaling_matrix, s)),
> +	TP_fast_assign(__entry->s = *s),
> +	TP_printk("\nscaling_list_4x4 {%s}\nscaling_list_8x8 {%s}",
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->s.scaling_list_4x4,
> +				   sizeof(__entry->s.scaling_list_4x4),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->s.scaling_list_8x8,
> +				   sizeof(__entry->s.scaling_list_8x8),
> +				   false)
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pred_weights_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
> +	TP_ARGS(p),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pred_weights, p)),
> +	TP_fast_assign(__entry->p = *p),
> +	TP_printk("\nluma_log2_weight_denom %u\n"
> +		  "chroma_log2_weight_denom %u\n"
> +		  "weight_factor[0].luma_weight %s\n"
> +		  "weight_factor[0].luma_offset %s\n"
> +		  "weight_factor[0].chroma_weight {%s}\n"
> +		  "weight_factor[0].chroma_offset {%s}\n"
> +		  "weight_factor[1].luma_weight %s\n"
> +		  "weight_factor[1].luma_offset %s\n"
> +		  "weight_factor[1].chroma_weight {%s}\n"
> +		  "weight_factor[1].chroma_offset {%s}\n",
> +		  __entry->p.luma_log2_weight_denom,
> +		  __entry->p.chroma_log2_weight_denom,
> +		  __print_array(__entry->p.weight_factors[0].luma_weight,
> +		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_weight),
> +		  		sizeof(__entry->p.weight_factors[0].luma_weight[0])),
> +		  __print_array(__entry->p.weight_factors[0].luma_offset,
> +		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_offset),
> +		  		sizeof(__entry->p.weight_factors[0].luma_offset[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.weight_factors[0].chroma_weight,
> +				   sizeof(__entry->p.weight_factors[0].chroma_weight),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.weight_factors[0].chroma_offset,
> +				   sizeof(__entry->p.weight_factors[0].chroma_offset),
> +				   false),
> +		  __print_array(__entry->p.weight_factors[1].luma_weight,
> +		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_weight),
> +		  		sizeof(__entry->p.weight_factors[1].luma_weight[0])),
> +		  __print_array(__entry->p.weight_factors[1].luma_offset,
> +		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_offset),
> +		  		sizeof(__entry->p.weight_factors[1].luma_offset[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.weight_factors[1].chroma_weight,
> +				   sizeof(__entry->p.weight_factors[1].chroma_weight),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.weight_factors[1].chroma_offset,
> +				   sizeof(__entry->p.weight_factors[1].chroma_offset),
> +				   false)
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_slice_params_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
> +	TP_ARGS(s),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_slice_params, s)),
> +	TP_fast_assign(__entry->s = *s),
> +	TP_printk("\nheader_bit_size %u\n"
> +		  "first_mb_in_slice %u\n"
> +		  "slice_type %s\n"
> +		  "colour_plane_id %u\n"
> +		  "redundant_pic_cnt %u\n"
> +		  "cabac_init_idc %u\n"
> +		  "slice_qp_delta %d\n"
> +		  "slice_qs_delta %d\n"
> +		  "disable_deblocking_filter_idc %u\n"
> +		  "slice_alpha_c0_offset_div2 %d\n"
> +		  "slice_beta_offset_div2 %d\n"
> +		  "num_ref_idx_l0_active_minus1 %u\n"
> +		  "num_ref_idx_l1_active_minus1 %u\n"
> +		  "flags %s",
> +		  __entry->s.header_bit_size,
> +		  __entry->s.first_mb_in_slice,
> +		  __print_symbolic(__entry->s.slice_type,
> +		  {V4L2_H264_SLICE_TYPE_P, "P"},
> +		  {V4L2_H264_SLICE_TYPE_B, "B"},
> +		  {V4L2_H264_SLICE_TYPE_I, "I"},
> +		  {V4L2_H264_SLICE_TYPE_SP, "SP"},
> +		  {V4L2_H264_SLICE_TYPE_SI, "SI"}),
> +		  __entry->s.colour_plane_id,
> +		  __entry->s.redundant_pic_cnt,
> +		  __entry->s.cabac_init_idc,
> +		  __entry->s.slice_qp_delta,
> +		  __entry->s.slice_qs_delta,
> +		  __entry->s.disable_deblocking_filter_idc,
> +		  __entry->s.slice_alpha_c0_offset_div2,
> +		  __entry->s.slice_beta_offset_div2,
> +		  __entry->s.num_ref_idx_l0_active_minus1,
> +		  __entry->s.num_ref_idx_l1_active_minus1,
> +		  __print_flags(__entry->s.flags, "|",
> +		  {V4L2_H264_SLICE_FLAG_DIRECT_SPATIAL_MV_PRED, "DIRECT_SPATIAL_MV_PRED"},
> +		  {V4L2_H264_SLICE_FLAG_SP_FOR_SWITCH, "SP_FOR_SWITCH"})
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_h264_reference_tmpl,
> +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> +	TP_ARGS(r, i),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_h264_reference, r)
> +			 __field(int, i)),
> +	TP_fast_assign(__entry->r = *r; __entry->i = i;),
> +	TP_printk("[%d]: fields %s index %u",
> +		  __entry->i,
> +		  __print_flags(__entry->r.fields, "|",
> +		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
> +		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
> +		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
> +		  __entry->r.index
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_decode_params_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
> +	TP_ARGS(d),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_decode_params, d)),
> +	TP_fast_assign(__entry->d = *d;),
> +	TP_printk("\nnal_ref_idc %u\n"
> +		  "frame_num %u\n"
> +		  "top_field_order_cnt %d\n"
> +		  "bottom_field_order_cnt %d\n"
> +		  "idr_pic_id %u\n"
> +		  "pic_order_cnt_lsb %u\n"
> +		  "delta_pic_order_cnt_bottom %d\n"
> +		  "delta_pic_order_cnt0 %d\n"
> +		  "delta_pic_order_cnt1 %d\n"
> +		  "dec_ref_pic_marking_bit_size %u\n"
> +		  "pic_order_cnt_bit_size %u\n"
> +		  "slice_group_change_cycle %u\n"
> +		  "flags %s\n",
> +		  __entry->d.nal_ref_idc,
> +		  __entry->d.frame_num,
> +		  __entry->d.top_field_order_cnt,
> +		  __entry->d.bottom_field_order_cnt,
> +		  __entry->d.idr_pic_id,
> +		  __entry->d.pic_order_cnt_lsb,
> +		  __entry->d.delta_pic_order_cnt_bottom,
> +		  __entry->d.delta_pic_order_cnt0,
> +		  __entry->d.delta_pic_order_cnt1,
> +		  __entry->d.dec_ref_pic_marking_bit_size,
> +		  __entry->d.pic_order_cnt_bit_size,
> +		  __entry->d.slice_group_change_cycle,
> +		  __print_flags(__entry->d.flags, "|",
> +		  {V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC, "IDR_PIC"},
> +		  {V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC, "FIELD_PIC"},
> +		  {V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD, "BOTTOM_FIELD"},
> +		  {V4L2_H264_DECODE_PARAM_FLAG_PFRAME, "PFRAME"},
> +		  {V4L2_H264_DECODE_PARAM_FLAG_BFRAME, "BFRAME"})
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_h264_dpb_entry_tmpl,
> +	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
> +	TP_ARGS(e, i),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_h264_dpb_entry, e)
> +			 __field(int, i)),
> +	TP_fast_assign(__entry->e = *e; __entry->i = i;),
> +	TP_printk("[%d]: reference_ts %llu, pic_num %u frame_num %u fields %s "
> +		  "top_field_order_cnt %d bottom_field_order_cnt %d flags %s",
> +		  __entry->i,
> +		  __entry->e.reference_ts,
> +		  __entry->e.pic_num,
> +		  __entry->e.frame_num,
> +		  __print_flags(__entry->e.fields, "|",
> +		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
> +		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
> +		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
> +		  __entry->e.top_field_order_cnt,
> +		  __entry->e.bottom_field_order_cnt,
> +		  __print_flags(__entry->e.flags, "|",
> +		  {V4L2_H264_DPB_ENTRY_FLAG_VALID, "VALID"},
> +		  {V4L2_H264_DPB_ENTRY_FLAG_ACTIVE, "ACTIVE"},
> +		  {V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM, "LONG_TERM"},
> +		  {V4L2_H264_DPB_ENTRY_FLAG_FIELD, "FIELD"})
> +
> +	)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_sps_tmpl, v4l2_ctrl_h264_sps,
> +	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
> +	TP_ARGS(s)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_pps_tmpl, v4l2_ctrl_h264_pps,
> +	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
> +	TP_ARGS(p)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_scaling_matrix_tmpl, v4l2_ctrl_h264_scaling_matrix,
> +	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
> +	TP_ARGS(s)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_pred_weights_tmpl, v4l2_ctrl_h264_pred_weights,
> +	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
> +	TP_ARGS(p)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_slice_params_tmpl, v4l2_ctrl_h264_slice_params,
> +	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
> +	TP_ARGS(s)
> +);
> +
> +DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list0,
> +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> +	TP_ARGS(r, i)
> +);
> +
> +DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list1,
> +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> +	TP_ARGS(r, i)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_h264_decode_params_tmpl, v4l2_ctrl_h264_decode_params,
> +	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
> +	TP_ARGS(d)
> +);
> +
> +DEFINE_EVENT(v4l2_h264_dpb_entry_tmpl, v4l2_h264_dpb_entry,
> +	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
> +	TP_ARGS(e, i)
> +);
> +
> +#endif
> +
> +#undef TRACE_INCLUDE_PATH
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> +#define TRACE_INCLUDE_FILE visl-trace-h264
> +#include <trace/define_trace.h>
> diff --git a/drivers/media/test-drivers/visl/visl-trace-mpeg2.h b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
> new file mode 100644
> index 000000000000..889a3ba56502
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
> @@ -0,0 +1,99 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#if !defined(_VISL_TRACE_MPEG2_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _VISL_TRACE_MPEG2_H_
> +
> +#include <linux/tracepoint.h>
> +#include "visl.h"
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM visl_mpeg2_controls
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_seq_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
> +	TP_ARGS(s),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_sequence, s)),
> +	TP_fast_assign(__entry->s = *s;),
> +	TP_printk("\nhorizontal_size %u\nvertical_size %u\nvbv_buffer_size %u\n"
> +		  "profile_and_level_indication %u\nchroma_format %u\nflags %s\n",
> +		  __entry->s.horizontal_size,
> +		  __entry->s.vertical_size,
> +		  __entry->s.vbv_buffer_size,
> +		  __entry->s.profile_and_level_indication,
> +		  __entry->s.chroma_format,
> +		  __print_flags(__entry->s.flags, "|",
> +		  {V4L2_MPEG2_SEQ_FLAG_PROGRESSIVE, "PROGRESSIVE"})
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_pic_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
> +	TP_ARGS(p),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_picture, p)),
> +	TP_fast_assign(__entry->p = *p;),
> +	TP_printk("\nbackward_ref_ts %llu\nforward_ref_ts %llu\nflags %s\nf_code {%s}\n"
> +		  "picture_coding_type %u\npicture_structure %u\nintra_dc_precision %u\n",
> +		  __entry->p.backward_ref_ts,
> +		  __entry->p.forward_ref_ts,
> +		  __print_flags(__entry->p.flags, "|",
> +		  {V4L2_MPEG2_PIC_FLAG_TOP_FIELD_FIRST, "TOP_FIELD_FIRST"},
> +		  {V4L2_MPEG2_PIC_FLAG_FRAME_PRED_DCT, "FRAME_PRED_DCT"},
> +		  {V4L2_MPEG2_PIC_FLAG_CONCEALMENT_MV, "CONCEALMENT_MV"},
> +		  {V4L2_MPEG2_PIC_FLAG_Q_SCALE_TYPE, "Q_SCALE_TYPE"},
> +		  {V4L2_MPEG2_PIC_FLAG_INTRA_VLC, "INTRA_VLC"},
> +		  {V4L2_MPEG2_PIC_FLAG_ALT_SCAN, "ALT_SCAN"},
> +		  {V4L2_MPEG2_PIC_FLAG_REPEAT_FIRST, "REPEAT_FIRST"},
> +		  {V4L2_MPEG2_PIC_FLAG_PROGRESSIVE, "PROGRESSIVE"}),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.f_code,
> +				   sizeof(__entry->p.f_code),
> +				   false),
> +		  __entry->p.picture_coding_type,
> +		  __entry->p.picture_structure,
> +		  __entry->p.intra_dc_precision
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_quant_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
> +	TP_ARGS(q),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_quantisation, q)),
> +	TP_fast_assign(__entry->q = *q;),
> +	TP_printk("\nintra_quantiser_matrix %s\nnon_intra_quantiser_matrix %s\n"
> +		  "chroma_intra_quantiser_matrix %s\nchroma_non_intra_quantiser_matrix %s\n",
> +		  __print_array(__entry->q.intra_quantiser_matrix,
> +				ARRAY_SIZE(__entry->q.intra_quantiser_matrix),
> +				sizeof(__entry->q.intra_quantiser_matrix[0])),
> +		  __print_array(__entry->q.non_intra_quantiser_matrix,
> +				ARRAY_SIZE(__entry->q.non_intra_quantiser_matrix),
> +				sizeof(__entry->q.non_intra_quantiser_matrix[0])),
> +		  __print_array(__entry->q.chroma_intra_quantiser_matrix,
> +				ARRAY_SIZE(__entry->q.chroma_intra_quantiser_matrix),
> +				sizeof(__entry->q.chroma_intra_quantiser_matrix[0])),
> +		  __print_array(__entry->q.chroma_non_intra_quantiser_matrix,
> +				ARRAY_SIZE(__entry->q.chroma_non_intra_quantiser_matrix),
> +				sizeof(__entry->q.chroma_non_intra_quantiser_matrix[0]))
> +		  )
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_mpeg2_seq_tmpl, v4l2_ctrl_mpeg2_sequence,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
> +	TP_ARGS(s)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_mpeg2_pic_tmpl, v4l2_ctrl_mpeg2_picture,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
> +	TP_ARGS(p)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_mpeg2_quant_tmpl, v4l2_ctrl_mpeg2_quantisation,
> +	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
> +	TP_ARGS(q)
> +);
> +
> +#endif
> +
> +#undef TRACE_INCLUDE_PATH
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> +#define TRACE_INCLUDE_FILE visl-trace-mpeg2
> +#include <trace/define_trace.h>
> diff --git a/drivers/media/test-drivers/visl/visl-trace-points.c b/drivers/media/test-drivers/visl/visl-trace-points.c
> new file mode 100644
> index 000000000000..6aa98f90c20a
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-points.c
> @@ -0,0 +1,9 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include "visl.h"
> +
> +#define CREATE_TRACE_POINTS
> +#include "visl-trace-fwht.h"
> +#include "visl-trace-mpeg2.h"
> +#include "visl-trace-vp8.h"
> +#include "visl-trace-vp9.h"
> +#include "visl-trace-h264.h"
> diff --git a/drivers/media/test-drivers/visl/visl-trace-vp8.h b/drivers/media/test-drivers/visl/visl-trace-vp8.h
> new file mode 100644
> index 000000000000..18c610ea18ab
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-vp8.h
> @@ -0,0 +1,156 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#if !defined(_VISL_TRACE_VP8_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _VISL_TRACE_VP8_H_
> +
> +#include <linux/tracepoint.h>
> +#include "visl.h"
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM visl_vp8_controls
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_entropy_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> +	TP_ARGS(f),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
> +	TP_fast_assign(__entry->f = *f;),
> +	TP_printk("\nentropy.coeff_probs {%s}\n"
> +		  "entropy.y_mode_probs %s\n"
> +		  "entropy.uv_mode_probs %s\n"
> +		  "entropy.mv_probs {%s}",
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->f.entropy.coeff_probs,
> +				   sizeof(__entry->f.entropy.coeff_probs),
> +				   false),
> +		  __print_array(__entry->f.entropy.y_mode_probs,
> +				ARRAY_SIZE(__entry->f.entropy.y_mode_probs),
> +				sizeof(__entry->f.entropy.y_mode_probs[0])),
> +		  __print_array(__entry->f.entropy.uv_mode_probs,
> +				ARRAY_SIZE(__entry->f.entropy.uv_mode_probs),
> +				sizeof(__entry->f.entropy.uv_mode_probs[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->f.entropy.mv_probs,
> +				   sizeof(__entry->f.entropy.mv_probs),
> +				   false)
> +		  )
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_frame_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> +	TP_ARGS(f),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
> +	TP_fast_assign(__entry->f = *f;),
> +	TP_printk("\nsegment.quant_update %s\n"
> +		  "segment.lf_update %s\n"
> +		  "segment.segment_probs %s\n"
> +		  "segment.flags %s\n"
> +		  "lf.ref_frm_delta %s\n"
> +		  "lf.mb_mode_delta %s\n"
> +		  "lf.sharpness_level %u\n"
> +		  "lf.level %u\n"
> +		  "lf.flags %s\n"
> +		  "quant.y_ac_qi %u\n"
> +		  "quant.y_dc_delta %d\n"
> +		  "quant.y2_dc_delta %d\n"
> +		  "quant.y2_ac_delta %d\n"
> +		  "quant.uv_dc_delta %d\n"
> +		  "quant.uv_ac_delta %d\n"
> +		  "coder_state.range %u\n"
> +		  "coder_state.value %u\n"
> +		  "coder_state.bit_count %u\n"
> +		  "width %u\n"
> +		  "height %u\n"
> +		  "horizontal_scale %u\n"
> +		  "vertical_scale %u\n"
> +		  "version %u\n"
> +		  "prob_skip_false %u\n"
> +		  "prob_intra %u\n"
> +		  "prob_last %u\n"
> +		  "prob_gf %u\n"
> +		  "num_dct_parts %u\n"
> +		  "first_part_size %u\n"
> +		  "first_part_header_bits %u\n"
> +		  "dct_part_sizes %s\n"
> +		  "last_frame_ts %llu\n"
> +		  "golden_frame_ts %llu\n"
> +		  "alt_frame_ts %llu\n"
> +		  "flags %s",
> +		  __print_array(__entry->f.segment.quant_update,
> +				ARRAY_SIZE(__entry->f.segment.quant_update),
> +				sizeof(__entry->f.segment.quant_update[0])),
> +		  __print_array(__entry->f.segment.lf_update,
> +				ARRAY_SIZE(__entry->f.segment.lf_update),
> +				sizeof(__entry->f.segment.lf_update[0])),
> +		  __print_array(__entry->f.segment.segment_probs,
> +				ARRAY_SIZE(__entry->f.segment.segment_probs),
> +				sizeof(__entry->f.segment.segment_probs[0])),
> +		  __print_flags(__entry->f.segment.flags, "|",
> +		  {V4L2_VP8_SEGMENT_FLAG_ENABLED, "SEGMENT_ENABLED"},
> +		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_MAP, "SEGMENT_UPDATE_MAP"},
> +		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_FEATURE_DATA, "SEGMENT_UPDATE_FEATURE_DATA"},
> +		  {V4L2_VP8_SEGMENT_FLAG_DELTA_VALUE_MODE, "SEGMENT_DELTA_VALUE_MODE"}),
> +		  __print_array(__entry->f.lf.ref_frm_delta,
> +				ARRAY_SIZE(__entry->f.lf.ref_frm_delta),
> +				sizeof(__entry->f.lf.ref_frm_delta[0])),
> +		  __print_array(__entry->f.lf.mb_mode_delta,
> +				ARRAY_SIZE(__entry->f.lf.mb_mode_delta),
> +				sizeof(__entry->f.lf.mb_mode_delta[0])),
> +		  __entry->f.lf.sharpness_level,
> +		  __entry->f.lf.level,
> +		  __print_flags(__entry->f.lf.flags, "|",
> +		  {V4L2_VP8_LF_ADJ_ENABLE, "LF_ADJ_ENABLED"},
> +		  {V4L2_VP8_LF_DELTA_UPDATE, "LF_DELTA_UPDATE"},
> +		  {V4L2_VP8_LF_FILTER_TYPE_SIMPLE, "LF_FILTER_TYPE_SIMPLE"}),
> +		  __entry->f.quant.y_ac_qi,
> +		  __entry->f.quant.y_dc_delta,
> +		  __entry->f.quant.y2_dc_delta,
> +		  __entry->f.quant.y2_ac_delta,
> +		  __entry->f.quant.uv_dc_delta,
> +		  __entry->f.quant.uv_ac_delta,
> +		  __entry->f.coder_state.range,
> +		  __entry->f.coder_state.value,
> +		  __entry->f.coder_state.bit_count,
> +		  __entry->f.width,
> +		  __entry->f.height,
> +		  __entry->f.horizontal_scale,
> +		  __entry->f.vertical_scale,
> +		  __entry->f.version,
> +		  __entry->f.prob_skip_false,
> +		  __entry->f.prob_intra,
> +		  __entry->f.prob_last,
> +		  __entry->f.prob_gf,
> +		  __entry->f.num_dct_parts,
> +		  __entry->f.first_part_size,
> +		  __entry->f.first_part_header_bits,
> +		  __print_array(__entry->f.dct_part_sizes,
> +				ARRAY_SIZE(__entry->f.dct_part_sizes),
> +				sizeof(__entry->f.dct_part_sizes[0])),
> +		  __entry->f.last_frame_ts,
> +		  __entry->f.golden_frame_ts,
> +		  __entry->f.alt_frame_ts,
> +		  __print_flags(__entry->f.flags, "|",
> +		  {V4L2_VP8_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
> +		  {V4L2_VP8_FRAME_FLAG_EXPERIMENTAL, "EXPERIMENTAL"},
> +		  {V4L2_VP8_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
> +		  {V4L2_VP8_FRAME_FLAG_MB_NO_SKIP_COEFF, "MB_NO_SKIP_COEFF"},
> +		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
> +		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"})
> +		  )
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_vp8_frame_tmpl, v4l2_ctrl_vp8_frame,
> +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> +	TP_ARGS(f)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_vp8_entropy_tmpl, v4l2_ctrl_vp8_entropy,
> +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> +	TP_ARGS(f)
> +);
> +
> +#endif
> +
> +#undef TRACE_INCLUDE_PATH
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> +#define TRACE_INCLUDE_FILE visl-trace-vp8
> +#include <trace/define_trace.h>
> diff --git a/drivers/media/test-drivers/visl/visl-trace-vp9.h b/drivers/media/test-drivers/visl/visl-trace-vp9.h
> new file mode 100644
> index 000000000000..e0907231eac7
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-trace-vp9.h
> @@ -0,0 +1,292 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +#if !defined(_VISL_TRACE_VP9_H_) || defined(TRACE_HEADER_MULTI_READ)
> +#define _VISL_TRACE_VP9_H_
> +
> +#include <linux/tracepoint.h>
> +#include "visl.h"
> +
> +#undef TRACE_SYSTEM
> +#define TRACE_SYSTEM visl_vp9_controls
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_frame_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
> +	TP_ARGS(f),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_frame, f)),
> +	TP_fast_assign(__entry->f = *f;),
> +	TP_printk("\nlf.ref_deltas %s\n"
> +		  "lf.mode_deltas %s\n"
> +		  "lf.level %u\n"
> +		  "lf.sharpness %u\n"
> +		  "lf.flags %s\n"
> +		  "quant.base_q_idx %u\n"
> +		  "quant.delta_q_y_dc %d\n"
> +		  "quant.delta_q_uv_dc %d\n"
> +		  "quant.delta_q_uv_ac %d\n"
> +		  "seg.feature_data {%s}\n"
> +		  "seg.feature_enabled %s\n"
> +		  "seg.tree_probs %s\n"
> +		  "seg.pred_probs %s\n"
> +		  "seg.flags %s\n"
> +		  "flags %s\n"
> +		  "compressed_header_size %u\n"
> +		  "uncompressed_header_size %u\n"
> +		  "frame_width_minus_1 %u\n"
> +		  "frame_height_minus_1 %u\n"
> +		  "render_width_minus_1 %u\n"
> +		  "render_height_minus_1 %u\n"
> +		  "last_frame_ts %llu\n"
> +		  "golden_frame_ts %llu\n"
> +		  "alt_frame_ts %llu\n"
> +		  "ref_frame_sign_bias %s\n"
> +		  "reset_frame_context %s\n"
> +		  "frame_context_idx %u\n"
> +		  "profile %u\n"
> +		  "bit_depth %u\n"
> +		  "interpolation_filter %s\n"
> +		  "tile_cols_log2 %u\n"
> +		  "tile_rows_log2 %u\n"
> +		  "reference_mode %s\n",
> +		  __print_array(__entry->f.lf.ref_deltas,
> +				ARRAY_SIZE(__entry->f.lf.ref_deltas),
> +				sizeof(__entry->f.lf.ref_deltas[0])),
> +		  __print_array(__entry->f.lf.mode_deltas,
> +				ARRAY_SIZE(__entry->f.lf.mode_deltas),
> +				sizeof(__entry->f.lf.mode_deltas[0])),
> +		  __entry->f.lf.level,
> +		  __entry->f.lf.sharpness,
> +		  __print_flags(__entry->f.lf.flags, "|",
> +		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_ENABLED, "DELTA_ENABLED"},
> +		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_UPDATE, "DELTA_UPDATE"}),
> +		  __entry->f.quant.base_q_idx,
> +		  __entry->f.quant.delta_q_y_dc,
> +		  __entry->f.quant.delta_q_uv_dc,
> +		  __entry->f.quant.delta_q_uv_ac,
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->f.seg.feature_data,
> +				   sizeof(__entry->f.seg.feature_data),
> +				   false),
> +		  __print_array(__entry->f.seg.feature_enabled,
> +				ARRAY_SIZE(__entry->f.seg.feature_enabled),
> +				sizeof(__entry->f.seg.feature_enabled[0])),
> +		  __print_array(__entry->f.seg.tree_probs,
> +				ARRAY_SIZE(__entry->f.seg.tree_probs),
> +				sizeof(__entry->f.seg.tree_probs[0])),
> +		  __print_array(__entry->f.seg.pred_probs,
> +				ARRAY_SIZE(__entry->f.seg.pred_probs),
> +				sizeof(__entry->f.seg.pred_probs[0])),
> +		  __print_flags(__entry->f.seg.flags, "|",
> +		  {V4L2_VP9_SEGMENTATION_FLAG_ENABLED, "ENABLED"},
> +		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP, "UPDATE_MAP"},
> +		  {V4L2_VP9_SEGMENTATION_FLAG_TEMPORAL_UPDATE, "TEMPORAL_UPDATE"},
> +		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_DATA, "UPDATE_DATA"},
> +		  {V4L2_VP9_SEGMENTATION_FLAG_ABS_OR_DELTA_UPDATE, "ABS_OR_DELTA_UPDATE"}),
> +		  __print_flags(__entry->f.flags, "|",
> +		  {V4L2_VP9_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
> +		  {V4L2_VP9_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
> +		  {V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT, "ERROR_RESILIENT"},
> +		  {V4L2_VP9_FRAME_FLAG_INTRA_ONLY, "INTRA_ONLY"},
> +		  {V4L2_VP9_FRAME_FLAG_ALLOW_HIGH_PREC_MV, "ALLOW_HIGH_PREC_MV"},
> +		  {V4L2_VP9_FRAME_FLAG_REFRESH_FRAME_CTX, "REFRESH_FRAME_CTX"},
> +		  {V4L2_VP9_FRAME_FLAG_PARALLEL_DEC_MODE, "PARALLEL_DEC_MODE"},
> +		  {V4L2_VP9_FRAME_FLAG_X_SUBSAMPLING, "X_SUBSAMPLING"},
> +		  {V4L2_VP9_FRAME_FLAG_Y_SUBSAMPLING, "Y_SUBSAMPLING"},
> +		  {V4L2_VP9_FRAME_FLAG_COLOR_RANGE_FULL_SWING, "COLOR_RANGE_FULL_SWING"}),
> +		  __entry->f.compressed_header_size,
> +		  __entry->f.uncompressed_header_size,
> +		  __entry->f.frame_width_minus_1,
> +		  __entry->f.frame_height_minus_1,
> +		  __entry->f.render_width_minus_1,
> +		  __entry->f.render_height_minus_1,
> +		  __entry->f.last_frame_ts,
> +		  __entry->f.golden_frame_ts,
> +		  __entry->f.alt_frame_ts,
> +		  __print_symbolic(__entry->f.ref_frame_sign_bias,
> +		  {V4L2_VP9_SIGN_BIAS_LAST, "SIGN_BIAS_LAST"},
> +		  {V4L2_VP9_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
> +		  {V4L2_VP9_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"}),
> +		  __print_symbolic(__entry->f.reset_frame_context,
> +		  {V4L2_VP9_RESET_FRAME_CTX_NONE, "RESET_FRAME_CTX_NONE"},
> +		  {V4L2_VP9_RESET_FRAME_CTX_SPEC, "RESET_FRAME_CTX_SPEC"},
> +		  {V4L2_VP9_RESET_FRAME_CTX_ALL, "RESET_FRAME_CTX_ALL"}),
> +		  __entry->f.frame_context_idx,
> +		  __entry->f.profile,
> +		  __entry->f.bit_depth,
> +		  __print_symbolic(__entry->f.interpolation_filter,
> +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP, "INTERP_FILTER_EIGHTTAP"},
> +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SMOOTH, "INTERP_FILTER_EIGHTTAP_SMOOTH"},
> +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SHARP, "INTERP_FILTER_EIGHTTAP_SHARP"},
> +		  {V4L2_VP9_INTERP_FILTER_BILINEAR, "INTERP_FILTER_BILINEAR"},
> +		  {V4L2_VP9_INTERP_FILTER_SWITCHABLE, "INTERP_FILTER_SWITCHABLE"}),
> +		  __entry->f.tile_cols_log2,
> +		  __entry->f.tile_rows_log2,
> +		  __print_symbolic(__entry->f.reference_mode,
> +		  {V4L2_VP9_REFERENCE_MODE_SINGLE_REFERENCE, "REFERENCE_MODE_SINGLE_REFERENCE"},
> +		  {V4L2_VP9_REFERENCE_MODE_COMPOUND_REFERENCE, "REFERENCE_MODE_COMPOUND_REFERENCE"},
> +		  {V4L2_VP9_REFERENCE_MODE_SELECT, "REFERENCE_MODE_SELECT"}))
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_hdr_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> +	TP_ARGS(h),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
> +	TP_fast_assign(__entry->h = *h;),
> +	TP_printk("\ntx_mode %s\n"
> +		  "tx8 {%s}\n"
> +		  "tx16 {%s}\n"
> +		  "tx32 {%s}\n"
> +		  "skip %s\n"
> +		  "inter_mode {%s}\n"
> +		  "interp_filter {%s}\n"
> +		  "is_inter %s\n"
> +		  "comp_mode %s\n"
> +		  "single_ref {%s}\n"
> +		  "comp_ref %s\n"
> +		  "y_mode {%s}\n"
> +		  "uv_mode {%s}\n"
> +		  "partition {%s}\n",
> +		  __print_symbolic(__entry->h.tx_mode,
> +		  {V4L2_VP9_TX_MODE_ONLY_4X4, "TX_MODE_ONLY_4X4"},
> +		  {V4L2_VP9_TX_MODE_ALLOW_8X8, "TX_MODE_ALLOW_8X8"},
> +		  {V4L2_VP9_TX_MODE_ALLOW_16X16, "TX_MODE_ALLOW_16X16"},
> +		  {V4L2_VP9_TX_MODE_ALLOW_32X32, "TX_MODE_ALLOW_32X32"},
> +		  {V4L2_VP9_TX_MODE_SELECT, "TX_MODE_SELECT"}),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.tx8,
> +				   sizeof(__entry->h.tx8),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.tx16,
> +				   sizeof(__entry->h.tx16),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.tx32,
> +				   sizeof(__entry->h.tx32),
> +				   false),
> +		  __print_array(__entry->h.skip,
> +				ARRAY_SIZE(__entry->h.skip),
> +				sizeof(__entry->h.skip[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.inter_mode,
> +				   sizeof(__entry->h.inter_mode),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.interp_filter,
> +				   sizeof(__entry->h.interp_filter),
> +				   false),
> +		  __print_array(__entry->h.is_inter,
> +				ARRAY_SIZE(__entry->h.is_inter),
> +				sizeof(__entry->h.is_inter[0])),
> +		  __print_array(__entry->h.comp_mode,
> +				ARRAY_SIZE(__entry->h.comp_mode),
> +				sizeof(__entry->h.comp_mode[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.single_ref,
> +				   sizeof(__entry->h.single_ref),
> +				   false),
> +		  __print_array(__entry->h.comp_ref,
> +				ARRAY_SIZE(__entry->h.comp_ref),
> +				sizeof(__entry->h.comp_ref[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.y_mode,
> +				   sizeof(__entry->h.y_mode),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.uv_mode,
> +				   sizeof(__entry->h.uv_mode),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.partition,
> +				   sizeof(__entry->h.partition),
> +				   false)
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_coef_tmpl,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> +	TP_ARGS(h),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
> +	TP_fast_assign(__entry->h = *h;),
> +	TP_printk("\ncoef {%s}",
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->h.coef,
> +				   sizeof(__entry->h.coef),
> +				   false)
> +	)
> +);
> +
> +DECLARE_EVENT_CLASS(v4l2_vp9_mv_probs_tmpl,
> +	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
> +	TP_ARGS(p),
> +	TP_STRUCT__entry(__field_struct(struct v4l2_vp9_mv_probs, p)),
> +	TP_fast_assign(__entry->p = *p;),
> +	TP_printk("\njoint %s\n"
> +		  "sign %s\n"
> +		  "classes {%s}\n"
> +		  "class0_bit %s\n"
> +		  "bits {%s}\n"
> +		  "class0_fr {%s}\n"
> +		  "fr {%s}\n"
> +		  "class0_hp %s\n"
> +		  "hp %s\n",
> +		  __print_array(__entry->p.joint,
> +				ARRAY_SIZE(__entry->p.joint),
> +				sizeof(__entry->p.joint[0])),
> +		  __print_array(__entry->p.sign,
> +				ARRAY_SIZE(__entry->p.sign),
> +				sizeof(__entry->p.sign[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.classes,
> +				   sizeof(__entry->p.classes),
> +				   false),
> +		  __print_array(__entry->p.class0_bit,
> +				ARRAY_SIZE(__entry->p.class0_bit),
> +				sizeof(__entry->p.class0_bit[0])),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.bits,
> +				   sizeof(__entry->p.bits),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.class0_fr,
> +				   sizeof(__entry->p.class0_fr),
> +				   false),
> +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> +		  		   __entry->p.fr,
> +				   sizeof(__entry->p.fr),
> +				   false),
> +		  __print_array(__entry->p.class0_hp,
> +				ARRAY_SIZE(__entry->p.class0_hp),
> +				sizeof(__entry->p.class0_hp[0])),
> +		  __print_array(__entry->p.hp,
> +				ARRAY_SIZE(__entry->p.hp),
> +				sizeof(__entry->p.hp[0]))
> +	)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_vp9_frame_tmpl, v4l2_ctrl_vp9_frame,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
> +	TP_ARGS(f)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_vp9_compressed_hdr_tmpl, v4l2_ctrl_vp9_compressed_hdr,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> +	TP_ARGS(h)
> +);
> +
> +DEFINE_EVENT(v4l2_ctrl_vp9_compressed_coef_tmpl, v4l2_ctrl_vp9_compressed_coeff,
> +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> +	TP_ARGS(h)
> +);
> +
> +
> +DEFINE_EVENT(v4l2_vp9_mv_probs_tmpl, v4l2_vp9_mv_probs,
> +	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
> +	TP_ARGS(p)
> +);
> +
> +#endif
> +
> +#undef TRACE_INCLUDE_PATH
> +#undef TRACE_INCLUDE_FILE
> +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> +#define TRACE_INCLUDE_FILE visl-trace-vp9
> +#include <trace/define_trace.h>
> diff --git a/drivers/media/test-drivers/visl/visl-video.c b/drivers/media/test-drivers/visl/visl-video.c
> new file mode 100644
> index 000000000000..d1eb7c374e16
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-video.c
> @@ -0,0 +1,776 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/font.h>
> +#include <media/v4l2-event.h>
> +#include <media/v4l2-ioctl.h>
> +#include <media/videobuf2-vmalloc.h>
> +
> +#include "visl-video.h"
> +
> +#include "visl.h"
> +#include "visl-debugfs.h"
> +
> +static void visl_set_current_codec(struct visl_ctx *ctx)
> +{
> +	switch (ctx->coded_fmt.fmt.pix_mp.pixelformat) {
> +	case V4L2_PIX_FMT_FWHT_STATELESS:
> +		ctx->current_codec = VISL_CODEC_FWHT;
> +		break;
> +	case V4L2_PIX_FMT_MPEG2_SLICE:
> +		ctx->current_codec = VISL_CODEC_MPEG2;
> +		break;
> +	case V4L2_PIX_FMT_VP8_FRAME:
> +		ctx->current_codec = VISL_CODEC_VP8;
> +		break;
> +	case V4L2_PIX_FMT_VP9_FRAME:
> +		ctx->current_codec = VISL_CODEC_VP9;
> +		break;
> +	case V4L2_PIX_FMT_H264_SLICE:
> +		ctx->current_codec = VISL_CODEC_H264;
> +		break;
> +	default:
> +		ctx->current_codec = VISL_CODEC_NONE;
> +		break;
> +	}
> +}
> +
> +static void visl_print_fmt(struct visl_ctx *ctx, const struct v4l2_format *f)
> +{
> +	const struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +	u32 i;
> +
> +	dprintk(ctx->dev, "width: %d\n", pix_mp->width);
> +	dprintk(ctx->dev, "height: %d\n", pix_mp->height);
> +	dprintk(ctx->dev, "pixelformat: %c%c%c%c\n",
> +		pix_mp->pixelformat & 0xff,
> +		(pix_mp->pixelformat >> 8) & 0xff,
> +		(pix_mp->pixelformat >> 16) & 0xff,
> +		(pix_mp->pixelformat >> 24) & 0xff);
> +
> +	dprintk(ctx->dev, "field: %d\n", pix_mp->field);
> +	dprintk(ctx->dev, "colorspace: %d\n", pix_mp->colorspace);
> +	dprintk(ctx->dev, "num_planes: %d\n", pix_mp->num_planes);
> +	dprintk(ctx->dev, "flags: %d\n", pix_mp->flags);
> +	dprintk(ctx->dev, "quantization: %d\n", pix_mp->quantization);
> +	dprintk(ctx->dev, "xfer_func: %d\n", pix_mp->xfer_func);
> +
> +	for (i = 0; i < pix_mp->num_planes; i++) {
> +		dprintk(ctx->dev,
> +			"plane[%d]: sizeimage: %d\n", i, pix_mp->plane_fmt[i].sizeimage);
> +		dprintk(ctx->dev,
> +			"plane[%d]: bytesperline: %d\n", i, pix_mp->plane_fmt[i].bytesperline);
> +	}
> +}
> +
> +static int visl_tpg_init(struct visl_ctx *ctx)
> +{
> +	const struct font_desc *font;
> +	const char *font_name = "VGA8x16";
> +	int ret;
> +	u32 width = ctx->decoded_fmt.fmt.pix_mp.width;
> +	u32 height = ctx->decoded_fmt.fmt.pix_mp.height;
> +	struct v4l2_pix_format_mplane *f = &ctx->decoded_fmt.fmt.pix_mp;
> +
> +	tpg_free(&ctx->tpg);
> +
> +	font = find_font(font_name);
> +	if (font) {
> +		tpg_init(&ctx->tpg, width, height);
> +
> +		ret = tpg_alloc(&ctx->tpg, width);
> +		if (ret)
> +			goto err_alloc;
> +
> +		tpg_set_font(font->data);
> +		ret = tpg_s_fourcc(&ctx->tpg,
> +				   f->pixelformat);
> +
> +		if (!ret)
> +			goto err_fourcc;
> +
> +		tpg_reset_source(&ctx->tpg, width, height, f->field);
> +
> +		tpg_s_pattern(&ctx->tpg, TPG_PAT_75_COLORBAR);
> +
> +		tpg_s_field(&ctx->tpg, f->field, false);
> +		tpg_s_colorspace(&ctx->tpg, f->colorspace);
> +		tpg_s_ycbcr_enc(&ctx->tpg, f->ycbcr_enc);
> +		tpg_s_quantization(&ctx->tpg, f->quantization);
> +		tpg_s_xfer_func(&ctx->tpg, f->xfer_func);
> +	} else {
> +		v4l2_err(&ctx->dev->v4l2_dev,
> +			 "Font %s not found\n", font_name);
> +
> +		return -EINVAL;
> +	}
> +
> +	dprintk(ctx->dev, "Initialized the V4L2 test pattern generator, w=%d, h=%d, max_w=%d\n",
> +		width, height, width);
> +
> +	return 0;
> +err_alloc:
> +	return ret;
> +err_fourcc:
> +	tpg_free(&ctx->tpg);
> +	return -EINVAL;
> +}
> +
> +static const u32 visl_decoded_fmts[] = {
> +	V4L2_PIX_FMT_NV12,
> +	V4L2_PIX_FMT_YUV420,
> +};
> +
> +const struct visl_coded_format_desc visl_coded_fmts[] = {
> +	{
> +		.pixelformat = V4L2_PIX_FMT_FWHT,
> +		.frmsize = {
> +			.min_width = 640,
> +			.max_width = 4096,
> +			.step_width = 1,
> +			.min_height = 360,
> +			.max_height = 2160,
> +			.step_height = 1,
> +		},
> +		.ctrls = &visl_fwht_ctrls,
> +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> +		.decoded_fmts = visl_decoded_fmts,
> +	},
> +	{
> +		.pixelformat = V4L2_PIX_FMT_MPEG2_SLICE,
> +		.frmsize = {
> +			.min_width = 16,
> +			.max_width = 1920,
> +			.step_width = 1,
> +			.min_height = 16,
> +			.max_height = 1152,
> +			.step_height = 1,
> +		},
> +		.ctrls = &visl_mpeg2_ctrls,
> +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> +		.decoded_fmts = visl_decoded_fmts,
> +	},
> +	{
> +		.pixelformat = V4L2_PIX_FMT_VP8_FRAME,
> +		.frmsize = {
> +			.min_width = 64,
> +			.max_width = 16383,
> +			.step_width = 1,
> +			.min_height = 64,
> +			.max_height = 16383,
> +			.step_height = 1,
> +		},
> +		.ctrls = &visl_vp8_ctrls,
> +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> +		.decoded_fmts = visl_decoded_fmts,
> +	},
> +	{
> +		.pixelformat = V4L2_PIX_FMT_VP9_FRAME,
> +		.frmsize = {
> +			.min_width = 64,
> +			.max_width = 8192,
> +			.step_width = 1,
> +			.min_height = 64,
> +			.max_height = 4352,
> +			.step_height = 1,
> +		},
> +		.ctrls = &visl_vp9_ctrls,
> +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> +		.decoded_fmts = visl_decoded_fmts,
> +	},
> +	{
> +		.pixelformat = V4L2_PIX_FMT_H264_SLICE,
> +		.frmsize = {
> +			.min_width = 64,
> +			.max_width = 4096,
> +			.step_width = 1,
> +			.min_height = 64,
> +			.max_height = 2304,
> +			.step_height = 1,
> +		},
> +		.ctrls = &visl_h264_ctrls,
> +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> +		.decoded_fmts = visl_decoded_fmts,
> +	},
> +};
> +
> +const size_t num_coded_fmts = ARRAY_SIZE(visl_coded_fmts);
> +
> +static const struct visl_coded_format_desc*
> +visl_find_coded_fmt_desc(u32 fourcc)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(visl_coded_fmts); i++) {
> +		if (visl_coded_fmts[i].pixelformat == fourcc)
> +			return &visl_coded_fmts[i];
> +	}
> +
> +	return NULL;
> +}
> +
> +static void visl_init_fmt(struct v4l2_format *f, u32 fourcc)
> +{
> +	memset(f, 0, sizeof(*f));
> +	f->fmt.pix_mp.pixelformat = fourcc;
> +	f->fmt.pix_mp.field = V4L2_FIELD_NONE;
> +	f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709;
> +	f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
> +	f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT;
> +	f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT;
> +}
> +
> +void visl_reset_coded_fmt(struct visl_ctx *ctx)
> +{
> +	struct v4l2_format *f = &ctx->coded_fmt;
> +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +
> +	ctx->coded_format_desc = &visl_coded_fmts[0];
> +	visl_init_fmt(f, ctx->coded_format_desc->pixelformat);
> +
> +	f->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> +	pix_mp->width = ctx->coded_format_desc->frmsize.min_width;
> +	pix_mp->height = ctx->coded_format_desc->frmsize.min_height;
> +
> +	pix_mp->num_planes = 1;
> +	pix_mp->plane_fmt[0].sizeimage = pix_mp->width * pix_mp->height * 8;
> +
> +	dprintk(ctx->dev, "OUTPUT format was set to:\n");
> +	visl_print_fmt(ctx, &ctx->coded_fmt);
> +
> +	visl_set_current_codec(ctx);
> +}
> +
> +int visl_reset_decoded_fmt(struct visl_ctx *ctx)
> +{
> +	struct v4l2_format *f = &ctx->decoded_fmt;
> +	u32 decoded_fmt = ctx->coded_format_desc->decoded_fmts[0];
> +
> +	visl_init_fmt(f, decoded_fmt);
> +
> +	f->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> +
> +	v4l2_fill_pixfmt_mp(&f->fmt.pix_mp, decoded_fmt,
> +			    ctx->coded_fmt.fmt.pix_mp.width,
> +			    ctx->coded_fmt.fmt.pix_mp.height);
> +
> +	dprintk(ctx->dev, "CAPTURE format was set to:\n");
> +	visl_print_fmt(ctx, &ctx->decoded_fmt);
> +
> +	return visl_tpg_init(ctx);
> +}
> +
> +int visl_set_default_format(struct visl_ctx *ctx)
> +{
> +	visl_reset_coded_fmt(ctx);
> +	return visl_reset_decoded_fmt(ctx);
> +}
> +
> +static struct visl_q_data *get_q_data(struct visl_ctx *ctx,
> +				      enum v4l2_buf_type type)
> +{
> +	switch (type) {
> +	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
> +	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
> +		return &ctx->q_data[V4L2_M2M_SRC];
> +	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
> +	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
> +		return &ctx->q_data[V4L2_M2M_DST];
> +	default:
> +		break;
> +	}
> +	return NULL;
> +}
> +
> +static int visl_querycap(struct file *file, void *priv,
> +			 struct v4l2_capability *cap)
> +{
> +	strscpy(cap->driver, VISL_NAME, sizeof(cap->driver));
> +	strscpy(cap->card, VISL_NAME, sizeof(cap->card));
> +	snprintf(cap->bus_info, sizeof(cap->bus_info),
> +		 "platform:%s", VISL_NAME);
> +
> +	return 0;
> +}
> +
> +static int visl_enum_fmt_vid_cap(struct file *file, void *priv,
> +				 struct v4l2_fmtdesc *f)
> +{
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +
> +	if (f->index >= ctx->coded_format_desc->num_decoded_fmts)
> +		return -EINVAL;
> +
> +	f->pixelformat = ctx->coded_format_desc->decoded_fmts[f->index];
> +	return 0;
> +}
> +
> +static int visl_enum_fmt_vid_out(struct file *file, void *priv,
> +				 struct v4l2_fmtdesc *f)
> +{
> +	if (f->index >= ARRAY_SIZE(visl_coded_fmts))
> +		return -EINVAL;
> +
> +	f->pixelformat = visl_coded_fmts[f->index].pixelformat;
> +	return 0;
> +}
> +
> +static int visl_g_fmt_vid_cap(struct file *file, void *priv,
> +			      struct v4l2_format *f)
> +{
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +
> +	*f = ctx->decoded_fmt;
> +
> +	return 0;
> +}
> +
> +static int visl_g_fmt_vid_out(struct file *file, void *priv,
> +			      struct v4l2_format *f)
> +{
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +
> +	*f = ctx->coded_fmt;
> +	return 0;
> +}
> +
> +static int visl_try_fmt_vid_cap(struct file *file, void *priv,
> +				struct v4l2_format *f)
> +{
> +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +	const struct visl_coded_format_desc *coded_desc;
> +	unsigned int i;
> +
> +	coded_desc = ctx->coded_format_desc;
> +
> +	for (i = 0; i < coded_desc->num_decoded_fmts; i++) {
> +		if (coded_desc->decoded_fmts[i] == pix_mp->pixelformat)
> +			break;
> +	}
> +
> +	if (i == coded_desc->num_decoded_fmts)
> +		pix_mp->pixelformat = coded_desc->decoded_fmts[0];
> +
> +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> +				       &pix_mp->height,
> +				       &coded_desc->frmsize);
> +
> +	v4l2_fill_pixfmt_mp(pix_mp, pix_mp->pixelformat,
> +			    pix_mp->width, pix_mp->height);
> +
> +	pix_mp->field = V4L2_FIELD_NONE;
> +
> +	return 0;
> +}
> +
> +static int visl_try_fmt_vid_out(struct file *file, void *priv,
> +				struct v4l2_format *f)
> +{
> +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> +	const struct visl_coded_format_desc *coded_desc;
> +
> +	coded_desc = visl_find_coded_fmt_desc(pix_mp->pixelformat);
> +	if (!coded_desc) {
> +		pix_mp->pixelformat = visl_coded_fmts[0].pixelformat;
> +		coded_desc = &visl_coded_fmts[0];
> +	}
> +
> +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> +				       &pix_mp->height,
> +				       &coded_desc->frmsize);
> +
> +	pix_mp->field = V4L2_FIELD_NONE;
> +	pix_mp->num_planes = 1;
> +
> +	return 0;
> +}
> +
> +static int visl_s_fmt_vid_out(struct file *file, void *priv,
> +			      struct v4l2_format *f)
> +{
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
> +	const struct visl_coded_format_desc *desc;
> +	struct vb2_queue *peer_vq;
> +	int ret;
> +
> +	peer_vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
> +	if (vb2_is_busy(peer_vq))
> +		return -EBUSY;
> +
> +	dprintk(ctx->dev, "Trying to set the OUTPUT format to:\n");
> +	visl_print_fmt(ctx, f);
> +
> +	ret = visl_try_fmt_vid_out(file, priv, f);
> +	if (ret)
> +		return ret;
> +
> +	desc = visl_find_coded_fmt_desc(f->fmt.pix_mp.pixelformat);
> +	ctx->coded_format_desc = desc;
> +	ctx->coded_fmt = *f;
> +
> +	v4l2_fill_pixfmt_mp(&ctx->coded_fmt.fmt.pix_mp,
> +			    ctx->coded_fmt.fmt.pix_mp.pixelformat,
> +			    ctx->coded_fmt.fmt.pix_mp.width,
> +			    ctx->coded_fmt.fmt.pix_mp.height);

Since v4l2_fill_pixfmt_mp() returns -EINVAL for coded pixel formats like
these (they have no entry in the pixel format info table), sizeimage is
left untouched here; requesting buffers will then fail if the application
does not set sizeimage itself.

> +
> +	ret = visl_reset_decoded_fmt(ctx);
> +	if (ret)
> +		return ret;
> +
> +	ctx->decoded_fmt.fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace;
> +	ctx->decoded_fmt.fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func;
> +	ctx->decoded_fmt.fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
> +	ctx->decoded_fmt.fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
> +
> +	dprintk(ctx->dev, "OUTPUT format was set to:\n");
> +	visl_print_fmt(ctx, &ctx->coded_fmt);
> +
> +	visl_set_current_codec(ctx);
> +	return 0;
> +}
> +
> +static int visl_s_fmt_vid_cap(struct file *file, void *priv,
> +			      struct v4l2_format *f)
> +{
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +	int ret;
> +
> +	dprintk(ctx->dev, "Trying to set the CAPTURE format to:\n");
> +	visl_print_fmt(ctx, f);
> +
> +	ret = visl_try_fmt_vid_cap(file, priv, f);
> +	if (ret)
> +		return ret;
> +
> +	ctx->decoded_fmt = *f;
> +
> +	dprintk(ctx->dev, "CAPTURE format was set to:\n");
> +	visl_print_fmt(ctx, &ctx->decoded_fmt);
> +
> +	return visl_tpg_init(ctx);
> +}
> +
> +static int visl_enum_framesizes(struct file *file, void *priv,
> +				struct v4l2_frmsizeenum *fsize)
> +{
> +	const struct visl_coded_format_desc *fmt;
> +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> +
> +	if (fsize->index != 0)
> +		return -EINVAL;
> +
> +	fmt = visl_find_coded_fmt_desc(fsize->pixel_format);
> +	if (!fmt) {
> +		dprintk(ctx->dev,
> +			"Unsupported format for the OUTPUT queue: %d\n",
> +			fsize->pixel_format);
> +
> +		return -EINVAL;
> +	}
> +
> +	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
> +	fsize->stepwise = fmt->frmsize;
> +	return 0;
> +}
> +
> +const struct v4l2_ioctl_ops visl_ioctl_ops = {
> +	.vidioc_querycap		= visl_querycap,
> +	.vidioc_enum_framesizes		= visl_enum_framesizes,
> +
> +	.vidioc_enum_fmt_vid_cap	= visl_enum_fmt_vid_cap,
> +	.vidioc_g_fmt_vid_cap_mplane	= visl_g_fmt_vid_cap,
> +	.vidioc_try_fmt_vid_cap_mplane	= visl_try_fmt_vid_cap,
> +	.vidioc_s_fmt_vid_cap_mplane	= visl_s_fmt_vid_cap,
> +
> +	.vidioc_enum_fmt_vid_out	= visl_enum_fmt_vid_out,
> +	.vidioc_g_fmt_vid_out_mplane	= visl_g_fmt_vid_out,
> +	.vidioc_try_fmt_vid_out_mplane	= visl_try_fmt_vid_out,
> +	.vidioc_s_fmt_vid_out_mplane	= visl_s_fmt_vid_out,
> +
> +	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
> +	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
> +	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
> +	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
> +	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
> +	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
> +	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
> +
> +	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
> +	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
> +
> +	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
> +	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
> +};
> +
> +static int visl_queue_setup(struct vb2_queue *vq,
> +			    unsigned int *nbuffers,
> +			    unsigned int *num_planes,
> +			    unsigned int sizes[],
> +			    struct device *alloc_devs[])
> +{
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct v4l2_format *f;
> +	u32 i;
> +	const char *qname;
> +
> +	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
> +		f = &ctx->coded_fmt;
> +		qname = "Output";
> +	} else {
> +		f = &ctx->decoded_fmt;
> +		qname = "Capture";
> +	}
> +
> +	if (*num_planes) {
> +		if (*num_planes != f->fmt.pix_mp.num_planes)
> +			return -EINVAL;
> +
> +		for (i = 0; i < f->fmt.pix_mp.num_planes; i++) {
> +			if (sizes[i] < f->fmt.pix_mp.plane_fmt[i].sizeimage)
> +				return -EINVAL;
> +		}
> +	} else {
> +		*num_planes = f->fmt.pix_mp.num_planes;
> +		for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
> +			sizes[i] = f->fmt.pix_mp.plane_fmt[i].sizeimage;
> +	}
> +
> +	dprintk(ctx->dev, "%s: %d buffer(s) requested, num_planes=%d.\n",
> +		qname, *nbuffers, *num_planes);
> +
> +	for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
> +		dprintk(ctx->dev, "plane[%d].sizeimage=%d\n",
> +			i, f->fmt.pix_mp.plane_fmt[i].sizeimage);
> +
> +	return 0;
> +}
> +
> +static void visl_queue_cleanup(struct vb2_queue *vq, u32 state)
> +{
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct vb2_v4l2_buffer *vbuf;
> +
> +	dprintk(ctx->dev, "Cleaning up queues\n");
> +	for (;;) {
> +		if (V4L2_TYPE_IS_OUTPUT(vq->type))
> +			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> +		else
> +			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
> +
> +		if (!vbuf)
> +			break;
> +
> +		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
> +					   &ctx->hdl);
> +		dprintk(ctx->dev, "Marked request %p as complete\n",
> +			vbuf->vb2_buf.req_obj.req);
> +
> +		v4l2_m2m_buf_done(vbuf, state);
> +		dprintk(ctx->dev,
> +			"Marked buffer with timestamp %llu as done, state is %d\n",
> +			vbuf->vb2_buf.timestamp,
> +			state);
> +	}
> +}
> +
> +static int visl_buf_out_validate(struct vb2_buffer *vb)
> +{
> +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> +
> +	vbuf->field = V4L2_FIELD_NONE;
> +	return 0;
> +}
> +
> +static int visl_buf_prepare(struct vb2_buffer *vb)
> +{
> +	struct vb2_queue *vq = vb->vb2_queue;
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> +	u32 plane_sz = vb2_plane_size(vb, 0);
> +	struct v4l2_pix_format_mplane *pix_fmt;
> +
> +	if (V4L2_TYPE_IS_OUTPUT(vq->type))
> +		pix_fmt = &ctx->coded_fmt.fmt.pix_mp;
> +	else
> +		pix_fmt = &ctx->decoded_fmt.fmt.pix_mp;
> +
> +	if (plane_sz < pix_fmt->plane_fmt[0].sizeimage) {
> +		v4l2_err(&ctx->dev->v4l2_dev, "plane[0] size is %d, sizeimage is %d\n",
> +			 plane_sz, pix_fmt->plane_fmt[0].sizeimage);
> +		return -EINVAL;
> +	}
> +
> +	vb2_set_plane_payload(vb, 0, pix_fmt->plane_fmt[0].sizeimage);
> +
> +	return 0;
> +}
> +
> +static int visl_start_streaming(struct vb2_queue *vq, unsigned int count)
> +{
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> +	struct visl_q_data *q_data = get_q_data(ctx, vq->type);
> +	int rc = 0;
> +
> +	if (!q_data) {
> +		rc = -EINVAL;
> +		goto err;
> +	}
> +
> +	q_data->sequence = 0;
> +
> +	if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
> +		ctx->capture_streamon_jiffies = get_jiffies_64();
> +		return 0;
> +	}
> +
> +	if (WARN_ON(!ctx->coded_format_desc)) {
> +		rc = -EINVAL;
> +		goto err;
> +	}
> +
> +	return 0;
> +
> +err:
> +	visl_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
> +	return rc;
> +}
> +
> +static void visl_stop_streaming(struct vb2_queue *vq)
> +{
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> +
> +	dprintk(ctx->dev, "Stop streaming\n");
> +	visl_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
> +}
> +
> +static void visl_buf_queue(struct vb2_buffer *vb)
> +{
> +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
> +}
> +
> +static void visl_buf_request_complete(struct vb2_buffer *vb)
> +{
> +	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
> +}
> +
> +const struct vb2_ops visl_qops = {
> +	.queue_setup          = visl_queue_setup,
> +	.buf_out_validate     = visl_buf_out_validate,
> +	.buf_prepare          = visl_buf_prepare,
> +	.buf_queue            = visl_buf_queue,
> +	.start_streaming      = visl_start_streaming,
> +	.stop_streaming       = visl_stop_streaming,
> +	.wait_prepare         = vb2_ops_wait_prepare,
> +	.wait_finish          = vb2_ops_wait_finish,
> +	.buf_request_complete = visl_buf_request_complete,
> +};
> +
> +int visl_queue_init(void *priv, struct vb2_queue *src_vq,
> +		    struct vb2_queue *dst_vq)
> +{
> +	struct visl_ctx *ctx = priv;
> +	int ret;
> +
> +	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> +	src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> +	src_vq->drv_priv = ctx;
> +	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> +	src_vq->ops = &visl_qops;
> +	src_vq->mem_ops = &vb2_vmalloc_memops;
> +	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> +	src_vq->lock = &ctx->vb_mutex;
> +	src_vq->supports_requests = true;
> +
> +	ret = vb2_queue_init(src_vq);
> +	if (ret)
> +		return ret;
> +
> +	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> +	dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> +	dst_vq->drv_priv = ctx;
> +	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> +	dst_vq->ops = &visl_qops;
> +	dst_vq->mem_ops = &vb2_vmalloc_memops;
> +	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> +	dst_vq->lock = &ctx->vb_mutex;
> +
> +	return vb2_queue_init(dst_vq);
> +}
> +
> +int visl_request_validate(struct media_request *req)
> +{
> +	struct media_request_object *obj;
> +	struct visl_ctx *ctx = NULL;
> +	unsigned int count;
> +
> +	list_for_each_entry(obj, &req->objects, list) {
> +		struct vb2_buffer *vb;
> +
> +		if (vb2_request_object_is_buffer(obj)) {
> +			vb = container_of(obj, struct vb2_buffer, req_obj);
> +			ctx = vb2_get_drv_priv(vb->vb2_queue);
> +
> +			break;
> +		}
> +	}
> +
> +	if (!ctx)
> +		return -ENOENT;
> +
> +	count = vb2_request_buffer_cnt(req);
> +	if (!count) {
> +		v4l2_err(&ctx->dev->v4l2_dev,
> +			 "No buffer was provided with the request\n");
> +		return -ENOENT;
> +	} else if (count > 1) {
> +		v4l2_err(&ctx->dev->v4l2_dev,
> +			 "More than one buffer was provided with the request\n");
> +		return -EINVAL;
> +	}
> +
> +	return vb2_request_validate(req);
> +}
> diff --git a/drivers/media/test-drivers/visl/visl-video.h b/drivers/media/test-drivers/visl/visl-video.h
> new file mode 100644
> index 000000000000..dbfc1c6a052d
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl-video.h
> @@ -0,0 +1,61 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#ifndef _VISL_VIDEO_H_
> +#define _VISL_VIDEO_H_
> +#include <media/v4l2-mem2mem.h>
> +
> +#include "visl.h"
> +
> +extern const struct v4l2_ioctl_ops visl_ioctl_ops;
> +
> +extern const struct visl_ctrls visl_fwht_ctrls;
> +extern const struct visl_ctrls visl_mpeg2_ctrls;
> +extern const struct visl_ctrls visl_vp8_ctrls;
> +extern const struct visl_ctrls visl_vp9_ctrls;
> +extern const struct visl_ctrls visl_h264_ctrls;
> +
> +int visl_queue_init(void *priv, struct vb2_queue *src_vq,
> +		    struct vb2_queue *dst_vq);
> +
> +int visl_set_default_format(struct visl_ctx *ctx);
> +int visl_request_validate(struct media_request *req);
> +
> +#endif /* _VISL_VIDEO_H_ */
> diff --git a/drivers/media/test-drivers/visl/visl.h b/drivers/media/test-drivers/visl/visl.h
> new file mode 100644
> index 000000000000..a473d154805c
> --- /dev/null
> +++ b/drivers/media/test-drivers/visl/visl.h
> @@ -0,0 +1,178 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * A virtual stateless device for stateless uAPI development purposes.
> + *
> + * This tool's objective is to help the development and testing of userspace
> + * applications that use the V4L2 stateless API to decode media.
> + *
> + * A userspace implementation can use visl to run a decoding loop even when no
> + * hardware is available or when the kernel uAPI for the codec has not been
> + * upstreamed yet. This can reveal bugs at an early stage.
> + *
> + * This driver can also trace the contents of the V4L2 controls submitted to it.
> + * It can also dump the contents of the vb2 buffers through a debugfs
> + * interface. This is in many ways similar to the tracing infrastructure
> + * available for other popular encode/decode APIs out there and can help develop
> + * a userspace application by using another (working) one as a reference.
> + *
> + * Note that no actual decoding of video frames is performed by visl. The V4L2
> + * test pattern generator is used to write various debug information to the
> + * capture buffers instead.
> + *
> + * Copyright (c) Collabora, Ltd.
> + *
> + * Based on the vim2m driver, that is:
> + *
> + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> + * Pawel Osciak, <pawel@osciak.com>
> + * Marek Szyprowski, <m.szyprowski@samsung.com>
> + *
> + * Based on the vicodec driver, that is:
> + *
> + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> + *
> + * Based on the Cedrus VPU driver, that is:
> + *
> + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> + * Copyright (C) 2018 Bootlin
> + */
> +
> +#ifndef _VISL_H_
> +#define _VISL_H_
> +
> +#include <linux/debugfs.h>
> +#include <linux/list.h>
> +
> +#include <media/v4l2-ctrls.h>
> +#include <media/v4l2-device.h>
> +#include <media/tpg/v4l2-tpg.h>
> +
> +#define VISL_NAME		"visl"
> +#define VISL_M2M_NQUEUES	2
> +
> +#define TPG_STR_BUF_SZ		2048
> +
> +extern unsigned int visl_transtime_ms;
> +
> +struct visl_ctrls {
> +	const struct visl_ctrl_desc *ctrls;
> +	unsigned int num_ctrls;
> +};
> +
> +struct visl_coded_format_desc {
> +	u32 pixelformat;
> +	struct v4l2_frmsize_stepwise frmsize;
> +	const struct visl_ctrls *ctrls;
> +	unsigned int num_decoded_fmts;
> +	const u32 *decoded_fmts;
> +};
> +
> +extern const struct visl_coded_format_desc visl_coded_fmts[];
> +extern const size_t num_coded_fmts;
> +
> +enum {
> +	V4L2_M2M_SRC = 0,
> +	V4L2_M2M_DST = 1,
> +};
> +
> +extern unsigned int visl_debug;
> +#define dprintk(dev, fmt, arg...) \
> +	v4l2_dbg(1, visl_debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
> +
> +extern int visl_dprintk_frame_start;
> +extern unsigned int visl_dprintk_nframes;
> +extern unsigned int keep_bitstream_buffers;
> +extern int bitstream_trace_frame_start;
> +extern unsigned int bitstream_trace_nframes;
> +
> +#define frame_dprintk(dev, frame, fmt, arg...) \
> +	do { \
> +		if (visl_dprintk_frame_start > -1 && \
> +		    (frame) >= visl_dprintk_frame_start && \
> +		    (frame) < visl_dprintk_frame_start + visl_dprintk_nframes) \
> +			dprintk(dev, fmt, ## arg); \
> +	} while (0)
> +
> +struct visl_q_data {
> +	unsigned int		sequence;
> +};
> +
> +struct visl_dev {
> +	struct v4l2_device	v4l2_dev;
> +	struct video_device	vfd;
> +#ifdef CONFIG_MEDIA_CONTROLLER
> +	struct media_device	mdev;
> +#endif
> +
> +	struct mutex		dev_mutex;
> +
> +	struct v4l2_m2m_dev	*m2m_dev;
> +
> +#ifdef CONFIG_VISL_DEBUGFS
> +	struct dentry		*debugfs_root;
> +	struct dentry		*bitstream_debugfs;
> +	struct list_head	bitstream_blobs;
> +	/*
> +	 * Protects the "blob" list as it can be accessed from "visl_release"
> +	 * if keep_bitstream_buffers = 0 while some other client is tracing
> +	 */
> +	struct mutex		bitstream_lock;
> +#endif
> +};
> +
> +enum visl_codec {
> +	VISL_CODEC_NONE,
> +	VISL_CODEC_FWHT,
> +	VISL_CODEC_MPEG2,
> +	VISL_CODEC_VP8,
> +	VISL_CODEC_VP9,
> +	VISL_CODEC_H264,
> +};
> +
> +struct visl_blob {
> +	struct list_head list;
> +	struct dentry *dentry;
> +	u64 streamon_jiffies;
> +	struct debugfs_blob_wrapper blob;
> +};
> +
> +struct visl_ctx {
> +	struct v4l2_fh		fh;
> +	struct visl_dev	*dev;
> +	struct v4l2_ctrl_handler hdl;
> +
> +	struct mutex		vb_mutex;
> +
> +	struct visl_q_data	q_data[VISL_M2M_NQUEUES];
> +	enum   visl_codec	current_codec;
> +
> +	const struct visl_coded_format_desc *coded_format_desc;
> +
> +	struct v4l2_format	coded_fmt;
> +	struct v4l2_format	decoded_fmt;
> +
> +	struct tpg_data		tpg;
> +	u64			capture_streamon_jiffies;
> +	char			*tpg_str_buf;
> +};
> +
> +struct visl_ctrl_desc {
> +	struct v4l2_ctrl_config cfg;
> +};
> +
> +static inline struct visl_ctx *visl_file_to_ctx(struct file *file)
> +{
> +	return container_of(file->private_data, struct visl_ctx, fh);
> +}
> +
> +static inline struct visl_ctx *visl_v4l2fh_to_ctx(struct v4l2_fh *v4l2_fh)
> +{
> +	return container_of(v4l2_fh, struct visl_ctx, fh);
> +}
> +
> +void *visl_find_control_data(struct visl_ctx *ctx, u32 id);
> +struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id);
> +u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id);
> +
> +#endif /* _VISL_H_ */
> -- 
> 2.36.1
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH v2] media: visl: add virtual stateless driver
  2022-08-19 20:43     ` Deborah Brouwer
@ 2022-10-04 23:10       ` Deborah Brouwer
  0 siblings, 0 replies; 14+ messages in thread
From: Deborah Brouwer @ 2022-10-04 23:10 UTC (permalink / raw)
  To: daniel.almeida; +Cc: hverkuil, linux-media

Hi Daniel,
This driver has been very helpful to me for working on the v4l2-tracer
utility.
I have a few more comments to add below.

On Fri, Aug 19, 2022 at 01:43:44PM -0700, Deborah Brouwer wrote:
> On Mon, Jun 06, 2022 at 06:26:22PM -0300, daniel.almeida@collabora.com wrote:
> > From: Daniel Almeida <daniel.almeida@collabora.com>
> > 
> > A virtual stateless device for stateless uAPI development purposes.
> > 
> > This tool's objective is to help the development and testing of userspace
> > applications that use the V4L2 stateless API to decode media.
> > 
> > A userspace implementation can use visl to run a decoding loop even when no
> > hardware is available or when the kernel uAPI for the codec has not been
> > upstreamed yet. This can reveal bugs at an early stage.
> > 
> > This driver can also trace the contents of the V4L2 controls submitted to it.
> > It can also dump the contents of the vb2 buffers through a debugfs
> > interface. This is in many ways similar to the tracing infrastructure
> > available for other popular encode/decode APIs out there and can help develop
> > a userspace application by using another (working) one as a reference.
> > 
> > Note that no actual decoding of video frames is performed by visl. The V4L2
> > test pattern generator is used to write various debug information to the
> > capture buffers instead.
> > 
> > Signed-off-by: Daniel Almeida <daniel.almeida@collabora.com>
> > 
> 
> Tested-by: Deborah Brouwer <deborah.brouwer@collabora.com>
> 
> > ---
> > Was media: vivpu: add virtual VPU driver
> > 
> > Changes from v1:
> > 
> > - Addressed review comments from v1
> > - Driver was renamed to visl
> > - Dropped AV1 support for now (as it's not upstream yet)
> > - Added support for FWHT, MPEG2, VP8, VP9, H264
> > - Added TPG support
> > - Driver can now dump the controls for the codecs above through ftrace
> > - Driver can now dump the vb2 bitstream buffer through a debugfs infrastructure
> > 
> > I ran this on a kernel with KASAN/kmemleak enabled, nothing showed up.
> > 
> > v4l2-compliance results:
> > 
> > v4l2-compliance 1.22.1, 64 bits, 64-bit time_t
> > 
> > Compliance test for visl device /dev/video0:
> > 
> > Driver Info:
> >         Driver name      : visl
> >         Card type        : visl
> >         Bus info         : platform:visl
> >         Driver version   : 5.19.0
> >         Capabilities     : 0x84204000
> >                 Video Memory-to-Memory Multiplanar
> >                 Streaming
> >                 Extended Pix Format
> >                 Device Capabilities
> >         Device Caps      : 0x04204000
> >                 Video Memory-to-Memory Multiplanar
> >                 Streaming
> >                 Extended Pix Format
> > Media Driver Info:
> >         Driver name      : visl
> >         Model            : visl
> >         Serial           : 
> >         Bus info         : platform:visl
> >         Media version    : 5.19.0
> >         Hardware revision: 0x00000000 (0)
> >         Driver version   : 5.19.0
> > Interface Info:
> >         ID               : 0x0300000c
> >         Type             : V4L Video
> > Entity Info:
> >         ID               : 0x00000001 (1)
> >         Name             : visl-source
> >         Function         : V4L2 I/O
> >         Pad 0x01000002   : 0: Source
> >           Link 0x02000008: to remote pad 0x1000004 of entity 'visl-proc' (Video Decoder): Data, Enabled, Immutable
> > 
> > Required ioctls:
> >         test MC information (see 'Media Driver Info' above): OK
> >         test VIDIOC_QUERYCAP: OK
> >         test invalid ioctls: OK
> > 
> > Allow for multiple opens:
> >         test second /dev/video0 open: OK
> >         test VIDIOC_QUERYCAP: OK
> >         test VIDIOC_G/S_PRIORITY: OK
> >         test for unlimited opens: OK
> > 
> > Debug ioctls:
> >         test VIDIOC_DBG_G/S_REGISTER: OK (Not Supported)
> >         test VIDIOC_LOG_STATUS: OK (Not Supported)
> > 
> > Input ioctls:
> >         test VIDIOC_G/S_TUNER/ENUM_FREQ_BANDS: OK (Not Supported)
> >         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
> >         test VIDIOC_S_HW_FREQ_SEEK: OK (Not Supported)
> >         test VIDIOC_ENUMAUDIO: OK (Not Supported)
> >         test VIDIOC_G/S/ENUMINPUT: OK (Not Supported)
> >         test VIDIOC_G/S_AUDIO: OK (Not Supported)
> >         Inputs: 0 Audio Inputs: 0 Tuners: 0
> > 
> > Output ioctls:
> >         test VIDIOC_G/S_MODULATOR: OK (Not Supported)
> >         test VIDIOC_G/S_FREQUENCY: OK (Not Supported)
> >         test VIDIOC_ENUMAUDOUT: OK (Not Supported)
> >         test VIDIOC_G/S/ENUMOUTPUT: OK (Not Supported)
> >         test VIDIOC_G/S_AUDOUT: OK (Not Supported)
> >         Outputs: 0 Audio Outputs: 0 Modulators: 0
> > 
> > Input/Output configuration ioctls:
> >         test VIDIOC_ENUM/G/S/QUERY_STD: OK (Not Supported)
> >         test VIDIOC_ENUM/G/S/QUERY_DV_TIMINGS: OK (Not Supported)
> >         test VIDIOC_DV_TIMINGS_CAP: OK (Not Supported)
> >         test VIDIOC_G/S_EDID: OK (Not Supported)
> > 
> > Control ioctls:
> >         test VIDIOC_QUERY_EXT_CTRL/QUERYMENU: OK
> >         test VIDIOC_QUERYCTRL: OK
> >         test VIDIOC_G/S_CTRL: OK
> >         test VIDIOC_G/S/TRY_EXT_CTRLS: OK
> >         test VIDIOC_(UN)SUBSCRIBE_EVENT/DQEVENT: OK
> >         test VIDIOC_G/S_JPEGCOMP: OK (Not Supported)
> >         Standard Controls: 3 Private Controls: 0
> >         Standard Compound Controls: 13 Private Compound Controls: 0
> > 
> > Format ioctls:
> >         test VIDIOC_ENUM_FMT/FRAMESIZES/FRAMEINTERVALS: OK
> >         test VIDIOC_G/S_PARM: OK (Not Supported)
> >         test VIDIOC_G_FBUF: OK (Not Supported)
> >         test VIDIOC_G_FMT: OK
> >         test VIDIOC_TRY_FMT: OK
> >         test VIDIOC_S_FMT: OK
> >         test VIDIOC_G_SLICED_VBI_CAP: OK (Not Supported)
> >         test Cropping: OK (Not Supported)
> >         test Composing: OK (Not Supported)
> >         test Scaling: OK
> > 
> > Codec ioctls:
> >         test VIDIOC_(TRY_)ENCODER_CMD: OK (Not Supported)
> >         test VIDIOC_G_ENC_INDEX: OK (Not Supported)
> >         test VIDIOC_(TRY_)DECODER_CMD: OK (Not Supported)
> > 
> > Buffer ioctls:
> >         test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
> >         test VIDIOC_EXPBUF: OK
> >         test Requests: OK
> > 
> > Test input 0:
> > 
> > Streaming ioctls:
> >         test read/write: OK (Not Supported)
> >         test blocking wait: OK
> >         Video Capture Multiplanar: Captured 58 buffers    
> >         test MMAP (no poll): OK
> >         Video Capture Multiplanar: Captured 58 buffers    
> >         test MMAP (select): OK
> >         Video Capture Multiplanar: Captured 58 buffers    
> >         test MMAP (epoll): OK
> >         Video Capture Multiplanar: Captured 58 buffers    
> >         test USERPTR (no poll): OK
> >         Video Capture Multiplanar: Captured 58 buffers    
> >         test USERPTR (select): OK
> >         test DMABUF: Cannot test, specify --expbuf-device
> > 
> > Total for visl device /dev/video0: 53, Succeeded: 53, Failed: 0, Warnings: 0
> > 
> > ---
> >  drivers/media/test-drivers/Kconfig            |   1 +
> >  drivers/media/test-drivers/Makefile           |   1 +
> >  drivers/media/test-drivers/visl/Kconfig       |  31 +
> >  drivers/media/test-drivers/visl/Makefile      |   8 +
> >  drivers/media/test-drivers/visl/visl-core.c   | 532 ++++++++++++
> >  .../media/test-drivers/visl/visl-debugfs.c    | 148 ++++
> >  .../media/test-drivers/visl/visl-debugfs.h    |  72 ++
> >  drivers/media/test-drivers/visl/visl-dec.c    | 468 +++++++++++
> >  drivers/media/test-drivers/visl/visl-dec.h    | 100 +++
> >  .../media/test-drivers/visl/visl-trace-fwht.h |  66 ++
> >  .../media/test-drivers/visl/visl-trace-h264.h | 349 ++++++++
> >  .../test-drivers/visl/visl-trace-mpeg2.h      |  99 +++
> >  .../test-drivers/visl/visl-trace-points.c     |   9 +
> >  .../media/test-drivers/visl/visl-trace-vp8.h  | 156 ++++
> >  .../media/test-drivers/visl/visl-trace-vp9.h  | 292 +++++++
> >  drivers/media/test-drivers/visl/visl-video.c  | 776 ++++++++++++++++++
> >  drivers/media/test-drivers/visl/visl-video.h  |  61 ++
> >  drivers/media/test-drivers/visl/visl.h        | 178 ++++
> >  18 files changed, 3347 insertions(+)
> >  create mode 100644 drivers/media/test-drivers/visl/Kconfig
> >  create mode 100644 drivers/media/test-drivers/visl/Makefile
> >  create mode 100644 drivers/media/test-drivers/visl/visl-core.c
> >  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.c
> >  create mode 100644 drivers/media/test-drivers/visl/visl-debugfs.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-dec.c
> >  create mode 100644 drivers/media/test-drivers/visl/visl-dec.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-fwht.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-h264.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-mpeg2.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-points.c
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp8.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-trace-vp9.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl-video.c
> >  create mode 100644 drivers/media/test-drivers/visl/visl-video.h
> >  create mode 100644 drivers/media/test-drivers/visl/visl.h
> > 
> > diff --git a/drivers/media/test-drivers/Kconfig b/drivers/media/test-drivers/Kconfig
> > index 51cf27834df0..459b433e9fae 100644
> > --- a/drivers/media/test-drivers/Kconfig
> > +++ b/drivers/media/test-drivers/Kconfig
> > @@ -20,6 +20,7 @@ config VIDEO_VIM2M
> >  source "drivers/media/test-drivers/vicodec/Kconfig"
> >  source "drivers/media/test-drivers/vimc/Kconfig"
> >  source "drivers/media/test-drivers/vivid/Kconfig"
> > +source "drivers/media/test-drivers/visl/Kconfig"
> >  
> >  endif #V4L_TEST_DRIVERS
> >  
> > diff --git a/drivers/media/test-drivers/Makefile b/drivers/media/test-drivers/Makefile
> > index ff390b687189..740714a4584d 100644
> > --- a/drivers/media/test-drivers/Makefile
> > +++ b/drivers/media/test-drivers/Makefile
> > @@ -12,3 +12,4 @@ obj-$(CONFIG_VIDEO_VICODEC) += vicodec/
> >  obj-$(CONFIG_VIDEO_VIM2M) += vim2m.o
> >  obj-$(CONFIG_VIDEO_VIMC) += vimc/
> >  obj-$(CONFIG_VIDEO_VIVID) += vivid/
> > +obj-$(CONFIG_VIDEO_VISL) += visl/
> > diff --git a/drivers/media/test-drivers/visl/Kconfig b/drivers/media/test-drivers/visl/Kconfig
> > new file mode 100644
> > index 000000000000..976319c3c372
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/Kconfig
> > @@ -0,0 +1,31 @@
> > +# SPDX-License-Identifier: GPL-2.0+
> > +config VIDEO_VISL
> > +	tristate "Virtual Stateless Driver (visl)"
> > +	depends on VIDEO_DEV
> > +	select FONT_SUPPORT
> > +	select FONT_8x16
> > +	select VIDEOBUF2_VMALLOC
> > +	select V4L2_MEM2MEM_DEV
> > +	select MEDIA_CONTROLLER
> > +	select MEDIA_CONTROLLER_REQUEST_API
> > +	select VIDEO_V4L2_TPG
> > +	help
> > +
> > +	  A virtual stateless device for uAPI development purposes.
> > +
> > +	  A userspace implementation can use visl to run a decoding loop even
> > +	  when no hardware is available or when the kernel uAPI for the codec
> > +	  has not been upstreamed yet. This can reveal bugs at an early stage.
> > +
> > +
> > +
> > +	  When in doubt, say N.
> > +
> > +config VISL_DEBUGFS
> > +	bool "Enable debugfs for visl"
> > +	depends on VIDEO_VISL
> > +	depends on DEBUG_FS
> > +
> > +	help
> > +	  Choose Y to dump the bitstream buffers through debugfs.
> > +	  When in doubt, say N.
> > diff --git a/drivers/media/test-drivers/visl/Makefile b/drivers/media/test-drivers/visl/Makefile
> > new file mode 100644
> > index 000000000000..fb4d5ae1b17f
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/Makefile
> > @@ -0,0 +1,8 @@
> > +# SPDX-License-Identifier: GPL-2.0+
> > +visl-y := visl-core.o visl-video.o visl-dec.o visl-trace-points.o
> > +
> > +ifeq ($(CONFIG_VISL_DEBUGFS),y)
> > +  visl-y += visl-debugfs.o
> > +endif
> > +
> > +obj-$(CONFIG_VIDEO_VISL) += visl.o
> > diff --git a/drivers/media/test-drivers/visl/visl-core.c b/drivers/media/test-drivers/visl/visl-core.c
> > new file mode 100644
> > index 000000000000..c59f88b72ea4
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-core.c
> > @@ -0,0 +1,532 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#include <linux/debugfs.h>
> > +#include <linux/module.h>
> > +#include <linux/platform_device.h>
> > +#include <media/v4l2-ctrls.h>
> > +#include <media/v4l2-device.h>
> > +#include <media/v4l2-ioctl.h>
> > +#include <media/v4l2-mem2mem.h>
> > +
> > +#include "visl.h"
> > +#include "visl-dec.h"
> > +#include "visl-debugfs.h"
> > +#include "visl-video.h"
> > +
> > +unsigned int visl_debug;
> > +module_param(visl_debug, uint, 0644);
> > +MODULE_PARM_DESC(visl_debug, " activates debug info");
> > +
> > +unsigned int visl_transtime_ms;
> > +module_param(visl_transtime_ms, uint, 0644);
> > +MODULE_PARM_DESC(visl_transtime_ms, " simulated process time in milliseconds.");
> > +
> > +/*
> > + * dprintk can be slow through serial. This lets one limit the tracing to a
> > + * particular number of frames
> > + */
> > +int visl_dprintk_frame_start = -1;
> > +module_param(visl_dprintk_frame_start, int, 0);
> > +MODULE_PARM_DESC(visl_dprintk_frame_start, " a frame number to start tracing with dprintk");
> > +
> > +unsigned int visl_dprintk_nframes;
> > +module_param(visl_dprintk_nframes, uint, 0);
> > +MODULE_PARM_DESC(visl_dprintk_nframes,
> > +		 " the number of frames to trace with dprintk");
> > +
> > +unsigned int keep_bitstream_buffers;
> > +module_param(keep_bitstream_buffers, uint, 0);
> > +MODULE_PARM_DESC(keep_bitstream_buffers,
> > +		 " keep bitstream buffers in debugfs after streaming is stopped");
> > +
> > +int bitstream_trace_frame_start = -1;
> > +module_param(bitstream_trace_frame_start, int, 0);
> > +MODULE_PARM_DESC(bitstream_trace_frame_start,
> > +		 " a frame number to start dumping the bitstream through debugfs");
> > +
> > +unsigned int bitstream_trace_nframes;
> > +module_param(bitstream_trace_nframes, uint, 0);
> > +MODULE_PARM_DESC(bitstream_trace_nframes,
> > +		 " the number of frames to dump the bitstream through debugfs");
> > +
> > +static const struct visl_ctrl_desc visl_fwht_ctrl_descs[] = {
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_FWHT_PARAMS,
> > +	},
> > +};
> > +
> > +const struct visl_ctrls visl_fwht_ctrls = {
> > +	.ctrls = visl_fwht_ctrl_descs,
> > +	.num_ctrls = ARRAY_SIZE(visl_fwht_ctrl_descs)
> > +};
> > +
> > +static const struct visl_ctrl_desc visl_mpeg2_ctrl_descs[] = {
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_MPEG2_SEQUENCE,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_MPEG2_PICTURE,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_MPEG2_QUANTISATION,
> > +	},
> > +};
> > +
> > +const struct visl_ctrls visl_mpeg2_ctrls = {
> > +	.ctrls = visl_mpeg2_ctrl_descs,
> > +	.num_ctrls = ARRAY_SIZE(visl_mpeg2_ctrl_descs),
> > +};
> > +
> > +static const struct visl_ctrl_desc visl_vp8_ctrl_descs[] = {
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_VP8_FRAME,
> > +	},
> > +};
> > +
> > +const struct visl_ctrls visl_vp8_ctrls = {
> > +	.ctrls = visl_vp8_ctrl_descs,
> > +	.num_ctrls = ARRAY_SIZE(visl_vp8_ctrl_descs),
> > +};
> > +
> > +static const struct visl_ctrl_desc visl_vp9_ctrl_descs[] = {
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_VP9_FRAME,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_VP9_COMPRESSED_HDR,
> > +	},
> > +};
> > +
> > +const struct visl_ctrls visl_vp9_ctrls = {
> > +	.ctrls = visl_vp9_ctrl_descs,
> > +	.num_ctrls = ARRAY_SIZE(visl_vp9_ctrl_descs),
> > +};
> > +
> > +static const struct visl_ctrl_desc visl_h264_ctrl_descs[] = {
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_PARAMS,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_SPS,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_PPS,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_SCALING_MATRIX,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_DECODE_MODE,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_START_CODE,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_SLICE_PARAMS,
> > +	},
> > +	{
> > +		.cfg.id = V4L2_CID_STATELESS_H264_PRED_WEIGHTS,
> > +	},
> > +};
> > +
> > +const struct visl_ctrls visl_h264_ctrls = {
> > +	.ctrls = visl_h264_ctrl_descs,
> > +	.num_ctrls = ARRAY_SIZE(visl_h264_ctrl_descs),
> > +};
> > +
> > +struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id)
> > +{
> > +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> > +
> > +	return v4l2_ctrl_find(hdl, id);
> > +}
> > +
> > +void *visl_find_control_data(struct visl_ctx *ctx, u32 id)
> > +{
> > +	struct v4l2_ctrl *ctrl;
> > +
> > +	ctrl = visl_find_control(ctx, id);
> > +	if (ctrl)
> > +		return ctrl->p_cur.p;
> > +
> > +	return NULL;
> > +}
> > +
> > +u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id)
> > +{
> > +	struct v4l2_ctrl *ctrl;
> > +
> > +	ctrl = visl_find_control(ctx, id);
> > +	if (ctrl)
> > +		return ctrl->elems;
> > +
> > +	return 0;
> > +}
> > +
> > +static void visl_device_release(struct video_device *vdev)
> > +{
> > +	struct visl_dev *dev = container_of(vdev, struct visl_dev, vfd);
> > +
> > +	v4l2_device_unregister(&dev->v4l2_dev);
> > +	v4l2_m2m_release(dev->m2m_dev);
> > +	media_device_cleanup(&dev->mdev);
> > +	visl_debugfs_deinit(dev);
> > +	kfree(dev);
> > +}
> > +
> > +static int visl_add_ctrls(struct visl_ctx *ctx, const struct visl_ctrls *ctrls)
> > +{
> > +	struct visl_dev *dev = ctx->dev;
> > +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> > +	unsigned int i;
> > +	struct v4l2_ctrl *ctrl;
> > +
> > +	for (i = 0; i < ctrls->num_ctrls; i++) {
> > +		ctrl = v4l2_ctrl_new_custom(hdl, &ctrls->ctrls[i].cfg, NULL);
> > +
> > +		if (hdl->error) {
> > +			v4l2_err(&dev->v4l2_dev,
> > +				 "Failed to create new custom control, errno: %d\n",
> > +				 hdl->error);
> > +
> > +			return hdl->error;
> > +		}
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +#define VISL_CONTROLS_COUNT	ARRAY_SIZE(visl_controls)
> > +
> > +static int visl_init_ctrls(struct visl_ctx *ctx)
> > +{
> > +	struct visl_dev *dev = ctx->dev;
> > +	struct v4l2_ctrl_handler *hdl = &ctx->hdl;
> > +	unsigned int ctrl_cnt = 0;
> > +	unsigned int i;
> > +	int ret;
> > +
> > +	for (i = 0; i < num_coded_fmts; i++)
> > +		ctrl_cnt += visl_coded_fmts[i].ctrls->num_ctrls;
> > +
> > +	v4l2_ctrl_handler_init(hdl, ctrl_cnt);
> > +	if (hdl->error) {
> > +		v4l2_err(&dev->v4l2_dev,
> > +			 "Failed to initialize control handler\n");
> > +		return hdl->error;
> > +	}
> > +
> > +	for (i = 0; i < num_coded_fmts; i++) {
> > +		ret = visl_add_ctrls(ctx, visl_coded_fmts[i].ctrls);
> > +		if (ret)
> > +			goto err_free_handler;
> > +	}
> > +
> > +	ctx->fh.ctrl_handler = hdl;
> > +	v4l2_ctrl_handler_setup(hdl);
> > +
> > +	return 0;
> > +
> > +err_free_handler:
> > +	v4l2_ctrl_handler_free(hdl);
> > +	return ret;
> > +}
> > +
> > +static void visl_free_ctrls(struct visl_ctx *ctx)
> > +{
> > +	v4l2_ctrl_handler_free(&ctx->hdl);
> > +}
> > +
> > +static int visl_open(struct file *file)
> > +{
> > +	struct visl_dev *dev = video_drvdata(file);
> > +	struct visl_ctx *ctx = NULL;
> > +	int rc = 0;
> > +
> > +	if (mutex_lock_interruptible(&dev->dev_mutex))
> > +		return -ERESTARTSYS;
> > +	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> > +	if (!ctx) {
> > +		rc = -ENOMEM;
> > +		goto unlock;
> > +	}
> > +
> > +	ctx->tpg_str_buf = kmalloc(TPG_STR_BUF_SZ, GFP_KERNEL);
> > +
> > +	v4l2_fh_init(&ctx->fh, video_devdata(file));
> > +	file->private_data = &ctx->fh;
> > +	ctx->dev = dev;
> > +
> > +	rc = visl_init_ctrls(ctx);
> > +	if (rc)
> > +		goto free_ctx;
> > +
> > +	ctx->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, ctx, &visl_queue_init);
> > +
> > +	mutex_init(&ctx->vb_mutex);
> > +
> > +	if (IS_ERR(ctx->fh.m2m_ctx)) {
> > +		rc = PTR_ERR(ctx->fh.m2m_ctx);
> > +		goto free_hdl;
> > +	}
> > +
> > +	rc = visl_set_default_format(ctx);
> > +	if (rc)
> > +		goto free_m2m_ctx;
> > +
> > +	v4l2_fh_add(&ctx->fh);
> > +
> > +	dprintk(dev, "Created instance: %p, m2m_ctx: %p\n",
> > +		ctx, ctx->fh.m2m_ctx);
> > +
> > +	mutex_unlock(&dev->dev_mutex);
> > +	return rc;
> > +
> > +free_m2m_ctx:
> > +	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
> > +free_hdl:
> > +	visl_free_ctrls(ctx);
> > +	v4l2_fh_exit(&ctx->fh);
> > +free_ctx:
> > +	kfree(ctx->tpg_str_buf);
> > +	kfree(ctx);
> > +unlock:
> > +	mutex_unlock(&dev->dev_mutex);
> > +	return rc;
> > +}
> > +
> > +static int visl_release(struct file *file)
> > +{
> > +	struct visl_dev *dev = video_drvdata(file);
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +
> > +	dprintk(dev, "Releasing instance %p\n", ctx);
> > +
> > +	tpg_free(&ctx->tpg);
> > +	v4l2_fh_del(&ctx->fh);
> > +	v4l2_fh_exit(&ctx->fh);
> > +	visl_free_ctrls(ctx);
> > +	mutex_lock(&dev->dev_mutex);
> > +	v4l2_m2m_ctx_release(ctx->fh.m2m_ctx);
> > +	mutex_unlock(&dev->dev_mutex);
> > +
> > +	if (!keep_bitstream_buffers)
> > +		visl_debugfs_clear_bitstream(dev, ctx->capture_streamon_jiffies);
> > +
> > +	kfree(ctx->tpg_str_buf);
> > +	kfree(ctx);
> > +
> > +	return 0;
> > +}
> > +
> > +static const struct v4l2_file_operations visl_fops = {
> > +	.owner		= THIS_MODULE,
> > +	.open		= visl_open,
> > +	.release	= visl_release,
> > +	.poll		= v4l2_m2m_fop_poll,
> > +	.unlocked_ioctl	= video_ioctl2,
> > +	.mmap		= v4l2_m2m_fop_mmap,
> > +};
> > +
> > +static const struct video_device visl_videodev = {
> > +	.name		= VISL_NAME,
> > +	.vfl_dir	= VFL_DIR_M2M,
> > +	.fops		= &visl_fops,
> > +	.ioctl_ops	= &visl_ioctl_ops,
> > +	.minor		= -1,
> > +	.release	= visl_device_release,
> > +	.device_caps	= V4L2_CAP_VIDEO_M2M_MPLANE | V4L2_CAP_STREAMING,
> > +};
> > +
> > +static const struct v4l2_m2m_ops visl_m2m_ops = {
> > +	.device_run	= visl_device_run,
> > +};
> > +
> > +static const struct media_device_ops visl_m2m_media_ops = {
> > +	.req_validate	= visl_request_validate,
> > +	.req_queue	= v4l2_m2m_request_queue,
> > +};
> > +
> > +static int visl_probe(struct platform_device *pdev)
> > +{
> > +	struct visl_dev *dev;
> > +	struct video_device *vfd;
> > +	int ret;
> > +	int rc;
> > +
> > +	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
> > +	if (!dev)
> > +		return -ENOMEM;
> > +
> > +	ret = v4l2_device_register(&pdev->dev, &dev->v4l2_dev);
> > +	if (ret)
> > +		goto error_visl_dev;
> > +
> > +	mutex_init(&dev->dev_mutex);
> > +
> > +	dev->vfd = visl_videodev;
> > +	vfd = &dev->vfd;
> > +	vfd->lock = &dev->dev_mutex;
> > +	vfd->v4l2_dev = &dev->v4l2_dev;
> > +
> > +	video_set_drvdata(vfd, dev);
> > +	v4l2_info(&dev->v4l2_dev,
> > +		  "Device registered as /dev/video%d\n", vfd->num);

This always shows up in dmesg as:
Device registered as /dev/video0
even if the device is actually on another node such as /dev/video5: the
message is printed before video_register_device() has assigned vfd->num.

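A minimal reordering (a sketch against this patch, not compile-tested)
would be to defer the log until video_register_device() has succeeded,
at which point vfd->num is valid:

```c
	/* Register first, so vfd->num reflects the actual node number. */
	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
	if (ret) {
		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
		goto error_m2m;
	}

	v4l2_info(&dev->v4l2_dev,
		  "Device registered as /dev/video%d\n", vfd->num);
```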
> > +
> > +	platform_set_drvdata(pdev, dev);
> > +
> > +	dev->m2m_dev = v4l2_m2m_init(&visl_m2m_ops);
> > +	if (IS_ERR(dev->m2m_dev)) {
> > +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem device\n");
> > +		ret = PTR_ERR(dev->m2m_dev);
> > +		dev->m2m_dev = NULL;
> > +		goto error_dev;
> > +	}
> > +
> > +	dev->mdev.dev = &pdev->dev;
> > +	strscpy(dev->mdev.model, "visl", sizeof(dev->mdev.model));
> > +	strscpy(dev->mdev.bus_info, "platform:visl",
> > +		sizeof(dev->mdev.bus_info));
> > +	media_device_init(&dev->mdev);
> > +	dev->mdev.ops = &visl_m2m_media_ops;
> > +	dev->v4l2_dev.mdev = &dev->mdev;
> > +
> > +	ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
> > +	if (ret) {
> > +		v4l2_err(&dev->v4l2_dev, "Failed to register video device\n");
> > +		goto error_m2m;
> > +	}
> > +
> > +	ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
> > +						 MEDIA_ENT_F_PROC_VIDEO_DECODER);
> > +	if (ret) {
> > +		v4l2_err(&dev->v4l2_dev, "Failed to init mem2mem media controller\n");
> > +		goto error_v4l2;
> > +	}
> > +
> > +	ret = media_device_register(&dev->mdev);
> > +	if (ret) {
> > +		v4l2_err(&dev->v4l2_dev, "Failed to register mem2mem media device\n");
> > +		goto error_m2m_mc;
> > +	}
> > +
> > +	rc = visl_debugfs_init(dev);
> > +	if (rc)
> > +		dprintk(dev, "visl_debugfs_init failed: %d\n"
> > +			"Continuing without debugfs support\n", rc);
> > +
> > +	return 0;
> > +
> > +error_m2m_mc:
> > +	v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> > +error_v4l2:
> > +	video_unregister_device(&dev->vfd);
> > +	/* visl_device_release called by video_unregister_device to release various objects */
> > +	return ret;
> > +error_m2m:
> > +	v4l2_m2m_release(dev->m2m_dev);
> > +error_dev:
> > +	v4l2_device_unregister(&dev->v4l2_dev);
> > +error_visl_dev:
> > +	kfree(dev);
> > +
> > +	return ret;
> > +}
> > +
> > +static int visl_remove(struct platform_device *pdev)
> > +{
> > +	struct visl_dev *dev = platform_get_drvdata(pdev);
> > +
> > +	v4l2_info(&dev->v4l2_dev, "Removing " VISL_NAME "\n");
> > +
> > +#ifdef CONFIG_MEDIA_CONTROLLER
> > +	if (media_devnode_is_registered(dev->mdev.devnode)) {
> > +		media_device_unregister(&dev->mdev);
> > +		v4l2_m2m_unregister_media_controller(dev->m2m_dev);
> > +	}
> > +#endif
> > +	video_unregister_device(&dev->vfd);
> > +
> > +	return 0;
> > +}
> > +
> > +static struct platform_driver visl_pdrv = {
> > +	.probe		= visl_probe,
> > +	.remove		= visl_remove,
> > +	.driver		= {
> > +		.name	= VISL_NAME,
> > +	},
> > +};
> > +
> > +static void visl_dev_release(struct device *dev) {}
> > +
> > +static struct platform_device visl_pdev = {
> > +	.name		= VISL_NAME,
> > +	.dev.release	= visl_dev_release,
> > +};
> > +
> > +static void __exit visl_exit(void)
> > +{
> > +	platform_driver_unregister(&visl_pdrv);
> > +	platform_device_unregister(&visl_pdev);
> > +}
> > +
> > +static int __init visl_init(void)
> > +{
> > +	int ret;
> > +
> > +	ret = platform_device_register(&visl_pdev);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ret = platform_driver_register(&visl_pdrv);
> > +	if (ret)
> > +		platform_device_unregister(&visl_pdev);
> > +
> > +	return ret;
> > +}
> > +
> > +MODULE_DESCRIPTION("Virtual stateless device");
> > +MODULE_AUTHOR("Daniel Almeida <daniel.almeida@collabora.com>");
> > +MODULE_LICENSE("GPL v2");
> > +
> > +module_init(visl_init);
> > +module_exit(visl_exit);
> > diff --git a/drivers/media/test-drivers/visl/visl-debugfs.c b/drivers/media/test-drivers/visl/visl-debugfs.c
> > new file mode 100644
> > index 000000000000..6fbfd55d6c53
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-debugfs.c
> > @@ -0,0 +1,148 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#include <linux/debugfs.h>
> > +#include <linux/list.h>
> > +#include <linux/mutex.h>
> > +#include <media/v4l2-mem2mem.h>
> > +
> > +#include "visl-debugfs.h"
> > +
> > +int visl_debugfs_init(struct visl_dev *dev)
> > +{
> > +	dev->debugfs_root = debugfs_create_dir("visl", NULL);
> > +	INIT_LIST_HEAD(&dev->bitstream_blobs);
> > +	mutex_init(&dev->bitstream_lock);
> > +
> > +	if (IS_ERR(dev->debugfs_root))
> > +		return PTR_ERR(dev->debugfs_root);
> > +
> > +	return visl_debugfs_bitstream_init(dev);
> > +}
> > +
> > +int visl_debugfs_bitstream_init(struct visl_dev *dev)
> > +{
> > +	dev->bitstream_debugfs = debugfs_create_dir("bitstream",
> > +						    dev->debugfs_root);
> > +	if (IS_ERR(dev->bitstream_debugfs))
> > +		return PTR_ERR(dev->bitstream_debugfs);
> > +
> > +	return 0;
> > +}
> > +
> > +void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run)
> > +{
> > +	u8 *vaddr = vb2_plane_vaddr(&run->src->vb2_buf, 0);
> > +	struct visl_blob *blob;
> > +	size_t data_sz = vb2_get_plane_payload(&run->dst->vb2_buf, 0);
> > +	struct dentry *dentry;
> > +	char name[32];
> > +
> > +	blob  = kzalloc(sizeof(*blob), GFP_KERNEL);
> > +	if (!blob)
> > +		return;
> > +
> > +	blob->blob.data = vzalloc(data_sz);
> > +	if (!blob->blob.data)
> > +		goto err_vmalloc;
> > +
> > +	blob->blob.size = data_sz;
> > +	snprintf(name, sizeof(name), "%llu_bitstream%d",
> > +		 ctx->capture_streamon_jiffies, run->src->sequence);
> > +
> > +	memcpy(blob->blob.data, vaddr, data_sz);
> > +
> > +	dentry = debugfs_create_blob(name, 0444, ctx->dev->bitstream_debugfs,
> > +				     &blob->blob);
> > +	if (IS_ERR(dentry))
> > +		goto err_debugfs;
> > +
> > +	blob->dentry = dentry;
> > +	blob->streamon_jiffies = ctx->capture_streamon_jiffies;
> > +
> > +	mutex_lock(&ctx->dev->bitstream_lock);
> > +	list_add_tail(&blob->list, &ctx->dev->bitstream_blobs);
> > +	mutex_unlock(&ctx->dev->bitstream_lock);
> > +
> > +	return;
> > +
> > +err_debugfs:
> > +	vfree(blob->blob.data);
> > +err_vmalloc:
> > +	kfree(blob);
> > +}
> > +
> > +void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies)
> > +{
> > +	struct visl_blob *blob;
> > +	struct visl_blob *tmp;
> > +
> > +	mutex_lock(&dev->bitstream_lock);
> > +	if (list_empty(&dev->bitstream_blobs))
> > +		goto unlock;
> > +
> > +	list_for_each_entry_safe(blob, tmp, &dev->bitstream_blobs, list) {
> > +		if (streamon_jiffies &&
> > +		    streamon_jiffies != blob->streamon_jiffies)
> > +			continue;
> > +
> > +		list_del(&blob->list);
> > +		debugfs_remove(blob->dentry);
> > +		vfree(blob->blob.data);
> > +		kfree(blob);
> > +	}
> > +
> > +unlock:
> > +	mutex_unlock(&dev->bitstream_lock);
> > +}
> > +
> > +void visl_debugfs_bitstream_deinit(struct visl_dev *dev)
> > +{
> > +	visl_debugfs_clear_bitstream(dev, 0);
> > +	debugfs_remove_recursive(dev->bitstream_debugfs);
> > +	dev->bitstream_debugfs = NULL;
> > +}
> > +
> > +void visl_debugfs_deinit(struct visl_dev *dev)
> > +{
> > +	visl_debugfs_bitstream_deinit(dev);
> > +	debugfs_remove_recursive(dev->debugfs_root);
> > +	dev->debugfs_root = NULL;
> > +}
> > diff --git a/drivers/media/test-drivers/visl/visl-debugfs.h b/drivers/media/test-drivers/visl/visl-debugfs.h
> > new file mode 100644
> > index 000000000000..e14e7d72b150
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-debugfs.h
> > @@ -0,0 +1,72 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#include "visl.h"
> > +#include "visl-dec.h"
> > +
> > +#ifdef CONFIG_VISL_DEBUGFS
> > +
> > +int visl_debugfs_init(struct visl_dev *dev);
> > +int visl_debugfs_bitstream_init(struct visl_dev *dev);
> > +void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run);
> > +void visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies);
> > +void visl_debugfs_bitstream_deinit(struct visl_dev *dev);
> > +void visl_debugfs_deinit(struct visl_dev *dev);
> > +
> > +#else
> > +
> > +static inline int visl_debugfs_init(struct visl_dev *dev)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline int visl_debugfs_bitstream_init(struct visl_dev *dev)
> > +{
> > +	return 0;
> > +}
> > +
> > +static inline void visl_trace_bitstream(struct visl_ctx *ctx, struct visl_run *run) {}
> > +static inline void
> > +visl_debugfs_clear_bitstream(struct visl_dev *dev, u64 streamon_jiffies) {}
> > +static inline void visl_debugfs_bitstream_deinit(struct visl_dev *dev) {}
> > +static inline void visl_debugfs_deinit(struct visl_dev *dev) {}
> > +
> > +#endif
> > +
> > diff --git a/drivers/media/test-drivers/visl/visl-dec.c b/drivers/media/test-drivers/visl/visl-dec.c
> > new file mode 100644
> > index 000000000000..3c68d97f87d1
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-dec.c
> > @@ -0,0 +1,468 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#include "visl.h"
> > +#include "visl-debugfs.h"
> > +#include "visl-dec.h"
> > +#include "visl-trace-fwht.h"
> > +#include "visl-trace-mpeg2.h"
> > +#include "visl-trace-vp8.h"
> > +#include "visl-trace-vp9.h"
> > +#include "visl-trace-h264.h"
> > +
> > +#include <linux/delay.h>
> > +#include <linux/workqueue.h>
> > +#include <media/v4l2-mem2mem.h>
> > +#include <media/tpg/v4l2-tpg.h>
> > +
> > +static void *plane_vaddr(struct tpg_data *tpg, struct vb2_buffer *buf,
> > +			 u32 p, u32 bpl[TPG_MAX_PLANES], u32 h)
> > +{
> > +	u32 i;
> > +	void *vbuf;
> > +
> > +	if (p == 0 || tpg_g_buffers(tpg) > 1)
> > +		return vb2_plane_vaddr(buf, p);
> > +	vbuf = vb2_plane_vaddr(buf, 0);
> > +	for (i = 0; i < p; i++)
> > +		vbuf += bpl[i] * h / tpg->vdownsampling[i];
> > +	return vbuf;
> > +}
> > +
> > +static void visl_get_ref_frames(struct visl_ctx *ctx, u8 *buf,
> > +				__kernel_size_t buflen, struct visl_run *run)
> > +{
> > +	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
> > +	u32 i;
> > +	u32 len;
> > +
> > +	len = scnprintf(buf, buflen, "Reference frames:\n");
> > +	buf += len;
> > +	buflen -= len;
> > +
> > +	switch (ctx->current_codec) {
> > +	case VISL_CODEC_NONE:
> > +		break;
> > +
> > +	case VISL_CODEC_FWHT: {
> > +		scnprintf(buf, buflen, "backward_ref_ts: %llu, vb2_idx: %d\n",
> > +			  run->fwht.params->backward_ref_ts,
> > +			  vb2_find_timestamp(cap_q, run->fwht.params->backward_ref_ts, 0));
> > +		break;
> > +	}
> > +
> > +	case VISL_CODEC_MPEG2: {
> > +		scnprintf(buf, buflen,
> > +			  "backward_ref_ts: %llu, vb2_idx: %d\n"
> > +			  "forward_ref_ts: %llu, vb2_idx: %d\n",
> > +			  run->mpeg2.pic->backward_ref_ts,
> > +			  vb2_find_timestamp(cap_q, run->mpeg2.pic->backward_ref_ts, 0),
> > +			  run->mpeg2.pic->forward_ref_ts,
> > +			  vb2_find_timestamp(cap_q, run->mpeg2.pic->forward_ref_ts, 0));
> > +		break;
> > +	}
> > +
> > +	case VISL_CODEC_VP8: {
> > +		scnprintf(buf, buflen,
> > +			  "last_ref_ts: %llu, vb2_idx: %d\n"
> > +			  "golden_ref_ts: %llu, vb2_idx: %d\n"
> > +			  "alt_ref_ts: %llu, vb2_idx: %d\n",
> > +			  run->vp8.frame->last_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp8.frame->last_frame_ts, 0),
> > +			  run->vp8.frame->golden_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp8.frame->golden_frame_ts, 0),
> > +			  run->vp8.frame->alt_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp8.frame->alt_frame_ts, 0));
> > +		break;
> > +	}
> > +
> > +	case VISL_CODEC_VP9: {
> > +		scnprintf(buf, buflen,
> > +			  "last_ref_ts: %llu, vb2_idx: %d\n"
> > +			  "golden_ref_ts: %llu, vb2_idx: %d\n"
> > +			  "alt_ref_ts: %llu, vb2_idx: %d\n",
> > +			  run->vp9.frame->last_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp9.frame->last_frame_ts, 0),
> > +			  run->vp9.frame->golden_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp9.frame->golden_frame_ts, 0),
> > +			  run->vp9.frame->alt_frame_ts,
> > +			  vb2_find_timestamp(cap_q, run->vp9.frame->alt_frame_ts, 0));
> > +		break;
> > +	}
> > +	case VISL_CODEC_H264: {
> > +		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++) {
> > +			len = scnprintf(buf, buflen,
> > +					"dpb[%d]: %llu, vb2_index: %d\n",
> > +					i, run->h264.dpram->dpb[i].reference_ts,
> > +					vb2_find_timestamp(cap_q,
> > +							   run->h264.dpram->dpb[i].reference_ts,
> > +							   0));
> > +			buf += len;
> > +			buflen -= len;
> > +		}
> > +
> > +		break;
> > +	}
> > +	}
> > +}
> > +
> > +static char *visl_get_vb2_state(enum vb2_buffer_state state)
> > +{
> > +	switch (state) {
> > +	case VB2_BUF_STATE_DEQUEUED:
> > +		return "Dequeued";
> > +	case VB2_BUF_STATE_IN_REQUEST:
> > +		return "In request";
> > +	case VB2_BUF_STATE_PREPARING:
> > +		return "Preparing";
> > +	case VB2_BUF_STATE_QUEUED:
> > +		return "Queued";
> > +	case VB2_BUF_STATE_ACTIVE:
> > +		return "Active";
> > +	case VB2_BUF_STATE_DONE:
> > +		return "Done";
> > +	case VB2_BUF_STATE_ERROR:
> > +		return "Error";
> > +	default:
> > +		return "";
> > +	}
> > +}
> > +
> > +static int visl_fill_bytesused(struct vb2_v4l2_buffer *v4l2_vb2_buf, char *buf, size_t bufsz)
> > +{
> > +	int len = 0;
> > +	u32 i;
> > +
> > +	for (i = 0; i < v4l2_vb2_buf->vb2_buf.num_planes; i++)
> > +		len += scnprintf(&buf[len], bufsz - len,
> > +				"bytesused[%u]: %u length[%u]: %u data_offset[%u]: %u",
> > +				i, v4l2_vb2_buf->planes[i].bytesused,
> > +				i, v4l2_vb2_buf->planes[i].length,
> > +				i, v4l2_vb2_buf->planes[i].data_offset);
> > +
> > +	return len;
> > +}
> > +
> > +static void visl_tpg_fill_sequence(struct visl_ctx *ctx,
> > +				   struct visl_run *run, char buf[], size_t bufsz)
> > +{
> > +	u32 stream_ms;
> > +
> > +	stream_ms = jiffies_to_msecs(get_jiffies_64() - ctx->capture_streamon_jiffies);
> > +
> > +	scnprintf(buf, bufsz,
> > +		  "stream time: %02d:%02d:%02d:%03d sequence:%u timestamp:%lld field:%s",
> > +		  (stream_ms / (60 * 60 * 1000)) % 24,
> > +		  (stream_ms / (60 * 1000)) % 60,
> > +		  (stream_ms / 1000) % 60,
> > +		  stream_ms % 1000,
> > +		  run->dst->sequence,
> > +		  run->dst->vb2_buf.timestamp,
> > +		  run->dst->field == V4L2_FIELD_TOP ? " top" :
> > +		  run->dst->field == V4L2_FIELD_BOTTOM ? " bottom" : "none");
> > +}
> > +
> > +static void visl_tpg_fill(struct visl_ctx *ctx, struct visl_run *run)
> > +{
> > +	u8 *basep[TPG_MAX_PLANES][2];
> > +	char *buf = ctx->tpg_str_buf;
> > +	char *tmp = buf;
> > +	char *line_str;
> > +	u32 line = 1;
> > +	const u32 line_height = 16;
> > +	u32 len;
> > +	struct vb2_queue *out_q = &ctx->fh.m2m_ctx->out_q_ctx.q;
> > +	struct vb2_queue *cap_q = &ctx->fh.m2m_ctx->cap_q_ctx.q;
> > +	struct v4l2_pix_format_mplane *coded_fmt = &ctx->coded_fmt.fmt.pix_mp;
> > +	struct v4l2_pix_format_mplane *decoded_fmt = &ctx->decoded_fmt.fmt.pix_mp;
> > +	u32 p;
> > +	u32 i;
> > +
> > +	for (p = 0; p < tpg_g_planes(&ctx->tpg); p++) {
> > +		void *vbuf = plane_vaddr(&ctx->tpg,
> > +					 &run->dst->vb2_buf, p,
> > +					 ctx->tpg.bytesperline,
> > +					 ctx->tpg.buf_height);
> > +
> > +		tpg_calc_text_basep(&ctx->tpg, basep, p, vbuf);
> > +		tpg_fill_plane_buffer(&ctx->tpg, 0, p, vbuf);
> > +	}
> > +
> > +	visl_tpg_fill_sequence(ctx, run, buf, TPG_STR_BUF_SZ);
> > +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> > +	line++;
> > +
> > +	visl_get_ref_frames(ctx, buf, TPG_STR_BUF_SZ, run);
> > +
> > +	while ((line_str = strsep(&tmp, "\n")) && strlen(line_str)) {
> > +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, line_str);
> > +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", line_str);
> > +	}
> > +
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> > +	line++;
> > +
> > +	scnprintf(buf,
> > +		  TPG_STR_BUF_SZ,
> > +		  "OUTPUT pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
> > +		  coded_fmt->pixelformat,
> > +		  (coded_fmt->pixelformat >> 8) & 0xff,
> > +		  (coded_fmt->pixelformat >> 16) & 0xff,
> > +		  (coded_fmt->pixelformat >> 24) & 0xff,
> > +		  coded_fmt->width,
> > +		  coded_fmt->height,
> > +		  coded_fmt->num_planes);
> > +
> > +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +
> > +	for (i = 0; i < coded_fmt->num_planes; i++) {
> > +		scnprintf(buf,
> > +			  TPG_STR_BUF_SZ,
> > +			  "plane[%d]: bytesperline: %d, sizeimage: %d",
> > +			  i,
> > +			  coded_fmt->plane_fmt[i].bytesperline,
> > +			  coded_fmt->plane_fmt[i].sizeimage);
> > +
> > +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +	}
> > +
> > +	line++;
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> > +	scnprintf(buf, TPG_STR_BUF_SZ, "Output queue status:");
> > +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +
> > +	len = 0;
> > +	for (i = 0; i < out_q->num_buffers; i++) {
> > +		u32 old_len = len;
> > +		char *q_status = visl_get_vb2_state(out_q->bufs[i]->state);
> > +
> > +		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
> > +				 "index: %u, state: %s, request_fd: %d, ",
> > +				 i, q_status,
> > +				 to_vb2_v4l2_buffer(out_q->bufs[i])->request_fd);
> > +
> > +		len += visl_fill_bytesused(to_vb2_v4l2_buffer(out_q->bufs[i]),
> > +					   &buf[len],
> > +					   TPG_STR_BUF_SZ - len);
> > +
> > +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
> > +		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
> > +	}
> > +
> > +	line++;
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> > +
> > +	scnprintf(buf,
> > +		  TPG_STR_BUF_SZ,
> > +		  "CAPTURE pixelformat: %c%c%c%c, resolution: %dx%d, num_planes: %d",
> > +		  decoded_fmt->pixelformat,
> > +		  (decoded_fmt->pixelformat >> 8) & 0xff,
> > +		  (decoded_fmt->pixelformat >> 16) & 0xff,
> > +		  (decoded_fmt->pixelformat >> 24) & 0xff,
> > +		  decoded_fmt->width,
> > +		  decoded_fmt->height,
> > +		  decoded_fmt->num_planes);
> > +
> > +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +
> > +	for (i = 0; i < decoded_fmt->num_planes; i++) {
> > +		scnprintf(buf,
> > +			  TPG_STR_BUF_SZ,
> > +			  "plane[%d]: bytesperline: %d, sizeimage: %d",
> > +			  i,
> > +			  decoded_fmt->plane_fmt[i].bytesperline,
> > +			  decoded_fmt->plane_fmt[i].sizeimage);
> > +
> > +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +		frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +	}
> > +
> > +	line++;
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "");
> > +	scnprintf(buf, TPG_STR_BUF_SZ, "Capture queue status:");
> > +	tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, buf);
> > +	frame_dprintk(ctx->dev, run->dst->sequence, "%s\n", buf);
> > +
> > +	len = 0;
> > +	for (i = 0; i < cap_q->num_buffers; i++) {
> > +		u32 old_len = len;
> > +		char *q_status = visl_get_vb2_state(cap_q->bufs[i]->state);
> > +
> > +		len += scnprintf(&buf[len], TPG_STR_BUF_SZ - len,
> > +				 "index: %u, status: %s, timestamp: %llu, is_held: %d",
> > +				 cap_q->bufs[i]->index, q_status,
> > +				 cap_q->bufs[i]->timestamp,
> > +				 to_vb2_v4l2_buffer(cap_q->bufs[i])->is_held);
> > +
> > +		tpg_gen_text(&ctx->tpg, basep, line++ * line_height, 16, &buf[old_len]);
> > +		frame_dprintk(ctx->dev, run->dst->sequence, "%s", &buf[old_len]);
> > +	}
> > +}
> > +
> > +static void visl_trace_ctrls(struct visl_ctx *ctx, struct visl_run *run)
> > +{
> > +	int i;
> > +
> > +	switch (ctx->current_codec) {
> > +	default:
> > +	case VISL_CODEC_NONE:
> > +		break;
> > +	case VISL_CODEC_FWHT:
> > +		trace_v4l2_ctrl_fwht_params(run->fwht.params);
> > +		break;
> > +	case VISL_CODEC_MPEG2:
> > +		trace_v4l2_ctrl_mpeg2_sequence(run->mpeg2.seq);
> > +		trace_v4l2_ctrl_mpeg2_picture(run->mpeg2.pic);
> > +		trace_v4l2_ctrl_mpeg2_quantisation(run->mpeg2.quant);
> > +		break;
> > +	case VISL_CODEC_VP8:
> > +		trace_v4l2_ctrl_vp8_frame(run->vp8.frame);
> > +		trace_v4l2_ctrl_vp8_entropy(run->vp8.frame);
> > +		break;
> > +	case VISL_CODEC_VP9:
> > +		trace_v4l2_ctrl_vp9_frame(run->vp9.frame);
> > +		trace_v4l2_ctrl_vp9_compressed_hdr(run->vp9.probs);
> > +		trace_v4l2_ctrl_vp9_compressed_coeff(run->vp9.probs);
> > +		trace_v4l2_vp9_mv_probs(&run->vp9.probs->mv);
> > +		break;
> > +	case VISL_CODEC_H264:
> > +		trace_v4l2_ctrl_h264_sps(run->h264.sps);
> > +		trace_v4l2_ctrl_h264_pps(run->h264.pps);
> > +		trace_v4l2_ctrl_h264_scaling_matrix(run->h264.sm);
> > +		trace_v4l2_ctrl_h264_slice_params(run->h264.spram);
> > +
> > +		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list0); i++)
> > +			trace_v4l2_h264_ref_pic_list0(&run->h264.spram->ref_pic_list0[i], i);
> > +		for (i = 0; i < ARRAY_SIZE(run->h264.spram->ref_pic_list1); i++)
> > +			trace_v4l2_h264_ref_pic_list1(&run->h264.spram->ref_pic_list1[i], i);
> > +
> > +		trace_v4l2_ctrl_h264_decode_params(run->h264.dpram);
> > +
> > +		for (i = 0; i < ARRAY_SIZE(run->h264.dpram->dpb); i++)
> > +			trace_v4l2_h264_dpb_entry(&run->h264.dpram->dpb[i], i);
> > +		break;
> > +	}
> > +}

I checked the ftrace results from visl and compared them to the v4l2-tracer output.

The VP8 ftrace results all match the v4l2-tracer output.

The H264 ftrace results match the v4l2-tracer output, except
that visl is missing a trace for v4l2_ctrl_h264_pred_weights.

The FWHT ftrace results also match the v4l2-tracer output, except
that visl doesn't decode V4L2_FWHT_FL_COMPONENTS_NUM_MSK and just
prints the raw hex value instead. That is presumably because
__print_flags() only decodes single-bit flags, while
COMPONENTS_NUM_MSK is a multi-bit mask.

The MPEG2 ftrace results all match the v4l2-tracer output.
The VP9 ftrace results match the v4l2-tracer output.
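For anyone wanting to reproduce the comparison, a rough sketch of capturing
the visl control trace (assumes tracefs is mounted at /sys/kernel/tracing,
the visl module is loaded, and the log path is arbitrary):

```shell
# Enable the per-codec control tracepoints; the group names come from the
# TRACE_SYSTEM definitions in the visl-trace-*.h headers, e.g.
# visl_fwht_controls and visl_h264_controls (and similarly for the others).
cd /sys/kernel/tracing
echo 1 > events/visl_fwht_controls/enable
echo 1 > events/visl_h264_controls/enable

# Run the userspace decoder against the visl video node, then save the
# trace log for diffing against the v4l2-tracer output:
cat trace > /tmp/visl-ftrace.log
```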

> > +
> > +void visl_device_run(void *priv)
> > +{
> > +	struct visl_ctx *ctx = priv;
> > +	struct visl_run run = {};
> > +	struct media_request *src_req;
> > +
> > +	run.src = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > +	run.dst = v4l2_m2m_next_dst_buf(ctx->fh.m2m_ctx);
> > +
> > +	/* Apply request(s) controls if needed. */
> > +	src_req = run.src->vb2_buf.req_obj.req;
> > +
> > +	if (src_req)
> > +		v4l2_ctrl_request_setup(src_req, &ctx->hdl);
> > +
> > +	v4l2_m2m_buf_copy_metadata(run.src, run.dst, true);
> > +	run.dst->sequence = ctx->q_data[V4L2_M2M_DST].sequence++;
> > +	run.src->sequence = ctx->q_data[V4L2_M2M_SRC].sequence++;
> > +	run.dst->field = ctx->decoded_fmt.fmt.pix.field;
> > +
> > +	switch (ctx->current_codec) {
> > +	default:
> > +	case VISL_CODEC_NONE:
> > +		break;
> > +	case VISL_CODEC_FWHT:
> > +		run.fwht.params = visl_find_control_data(ctx, V4L2_CID_STATELESS_FWHT_PARAMS);
> > +		break;
> > +	case VISL_CODEC_MPEG2:
> > +		run.mpeg2.seq = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_SEQUENCE);
> > +		run.mpeg2.pic = visl_find_control_data(ctx, V4L2_CID_STATELESS_MPEG2_PICTURE);
> > +		run.mpeg2.quant = visl_find_control_data(ctx,
> > +							 V4L2_CID_STATELESS_MPEG2_QUANTISATION);
> > +		break;
> > +	case VISL_CODEC_VP8:
> > +		run.vp8.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP8_FRAME);
> > +		break;
> > +	case VISL_CODEC_VP9:
> > +		run.vp9.frame = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_FRAME);
> > +		run.vp9.probs = visl_find_control_data(ctx, V4L2_CID_STATELESS_VP9_COMPRESSED_HDR);
> > +		break;
> > +	case VISL_CODEC_H264:
> > +		run.h264.sps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SPS);
> > +		run.h264.pps = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PPS);
> > +		run.h264.sm = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SCALING_MATRIX);
> > +		run.h264.spram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_SLICE_PARAMS);
> > +		run.h264.dpram = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_DECODE_PARAMS);
> > +		run.h264.pwht = visl_find_control_data(ctx, V4L2_CID_STATELESS_H264_PRED_WEIGHTS);
> > +		break;
> > +	}
> > +
> > +	frame_dprintk(ctx->dev, run.dst->sequence,
> > +		      "Got OUTPUT buffer sequence %d, timestamp %llu\n",
> > +		      run.src->sequence, run.src->vb2_buf.timestamp);
> > +
> > +	frame_dprintk(ctx->dev, run.dst->sequence,
> > +		      "Got CAPTURE buffer sequence %d, timestamp %llu\n",
> > +		      run.dst->sequence, run.dst->vb2_buf.timestamp);
> > +
> > +	visl_tpg_fill(ctx, &run);
> > +	visl_trace_ctrls(ctx, &run);
> > +
> > +	if (bitstream_trace_frame_start > -1 &&
> > +	    run.dst->sequence >= bitstream_trace_frame_start &&
> > +	    run.dst->sequence < bitstream_trace_frame_start + bitstream_trace_nframes)
> > +		visl_trace_bitstream(ctx, &run);
> > +
> > +	/* Complete request(s) controls if needed. */
> > +	if (src_req)
> > +		v4l2_ctrl_request_complete(src_req, &ctx->hdl);
> > +
> > +	if (visl_transtime_ms)
> > +		usleep_range(visl_transtime_ms * 1000, 2 * visl_transtime_ms * 1000);
> > +
> > +	v4l2_m2m_buf_done_and_job_finish(ctx->dev->m2m_dev,
> > +					 ctx->fh.m2m_ctx, VB2_BUF_STATE_DONE);
> > +}
> > diff --git a/drivers/media/test-drivers/visl/visl-dec.h b/drivers/media/test-drivers/visl/visl-dec.h
> > new file mode 100644
> > index 000000000000..56a550a8f747
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-dec.h
> > @@ -0,0 +1,100 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#ifndef _VISL_DEC_H_
> > +#define _VISL_DEC_H_
> > +
> > +#include "visl.h"
> > +
> > +struct visl_av1_run {
> > +	const struct v4l2_ctrl_av1_sequence *sequence;
> > +	const struct v4l2_ctrl_av1_frame_header *frame_header;
> > +	const struct v4l2_ctrl_av1_tile_group *tile_group;
> > +	const struct v4l2_ctrl_av1_tile_group_entry *tg_entries;
> > +	const struct v4l2_ctrl_av1_film_grain *film_grain;
> > +};
> > +
> > +struct visl_fwht_run {
> > +	const struct v4l2_ctrl_fwht_params *params;
> > +};
> > +
> > +struct visl_mpeg2_run {
> > +	const struct v4l2_ctrl_mpeg2_sequence *seq;
> > +	const struct v4l2_ctrl_mpeg2_picture *pic;
> > +	const struct v4l2_ctrl_mpeg2_quantisation *quant;
> > +};
> > +
> > +struct visl_vp8_run {
> > +	const struct v4l2_ctrl_vp8_frame *frame;
> > +};
> > +
> > +struct visl_vp9_run {
> > +	const struct v4l2_ctrl_vp9_frame *frame;
> > +	const struct v4l2_ctrl_vp9_compressed_hdr *probs;
> > +};
> > +
> > +struct visl_h264_run {
> > +	const struct v4l2_ctrl_h264_sps *sps;
> > +	const struct v4l2_ctrl_h264_pps *pps;
> > +	const struct v4l2_ctrl_h264_scaling_matrix *sm;
> > +	const struct v4l2_ctrl_h264_slice_params *spram;
> > +	const struct v4l2_ctrl_h264_decode_params *dpram;
> > +	const struct v4l2_ctrl_h264_pred_weights *pwht;
> > +};
> > +
> > +struct visl_run {
> > +	struct vb2_v4l2_buffer	*src;
> > +	struct vb2_v4l2_buffer	*dst;
> > +
> > +	union {
> > +		struct visl_fwht_run	fwht;
> > +		struct visl_mpeg2_run	mpeg2;
> > +		struct visl_vp8_run	vp8;
> > +		struct visl_vp9_run	vp9;
> > +		struct visl_h264_run	h264;
> > +	};
> > +};
> > +
> > +int visl_dec_start(struct visl_ctx *ctx);
> > +int visl_dec_stop(struct visl_ctx *ctx);
> > +int visl_job_ready(void *priv);
> > +void visl_device_run(void *priv);
> > +
> > +#endif /* _VISL_DEC_H_ */
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-fwht.h b/drivers/media/test-drivers/visl/visl-trace-fwht.h
> > new file mode 100644
> > index 000000000000..76034449e5b7
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-fwht.h
> > @@ -0,0 +1,66 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +#if !defined(_VISL_TRACE_FWHT_H_) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _VISL_TRACE_FWHT_H_
> > +
> > +#include <linux/tracepoint.h>
> > +#include "visl.h"
> > +
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM visl_fwht_controls
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_fwht_params_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
> > +	TP_ARGS(p),
> > +	TP_STRUCT__entry(
> > +			 __field(u64, backward_ref_ts)
> > +			 __field(u32, version)
> > +			 __field(u32, width)
> > +			 __field(u32, height)
> > +			 __field(u32, flags)
> > +			 __field(u32, colorspace)
> > +			 __field(u32, xfer_func)
> > +			 __field(u32, ycbcr_enc)
> > +			 __field(u32, quantization)
> > +			 ),
> > +	TP_fast_assign(
> > +		       __entry->backward_ref_ts = p->backward_ref_ts;
> > +		       __entry->version = p->version;
> > +		       __entry->width = p->width;
> > +		       __entry->height = p->height;
> > +		       __entry->flags = p->flags;
> > +		       __entry->colorspace = p->colorspace;
> > +		       __entry->xfer_func = p->xfer_func;
> > +		       __entry->ycbcr_enc = p->ycbcr_enc;
> > +		       __entry->quantization = p->quantization;
> > +		       ),
> > +	TP_printk("backward_ref_ts %llu version %u width %u height %u flags %s colorspace %u xfer_func %u ycbcr_enc %u quantization %u",
> > +		  __entry->backward_ref_ts, __entry->version, __entry->width, __entry->height,
> > +		  __print_flags(__entry->flags, "|",
> > +		  {V4L2_FWHT_FL_IS_INTERLACED, "IS_INTERLACED"},
> > +		  {V4L2_FWHT_FL_IS_BOTTOM_FIRST, "IS_BOTTOM_FIRST"},
> > +		  {V4L2_FWHT_FL_IS_ALTERNATE, "IS_ALTERNATE"},
> > +		  {V4L2_FWHT_FL_IS_BOTTOM_FIELD, "IS_BOTTOM_FIELD"},
> > +		  {V4L2_FWHT_FL_LUMA_IS_UNCOMPRESSED, "LUMA_IS_UNCOMPRESSED"},
> > +		  {V4L2_FWHT_FL_CB_IS_UNCOMPRESSED, "CB_IS_UNCOMPRESSED"},
> > +		  {V4L2_FWHT_FL_CR_IS_UNCOMPRESSED, "CR_IS_UNCOMPRESSED"},
> > +		  {V4L2_FWHT_FL_ALPHA_IS_UNCOMPRESSED, "ALPHA_IS_UNCOMPRESSED"},
> > +		  {V4L2_FWHT_FL_I_FRAME, "I_FRAME"},
> > +		  {V4L2_FWHT_FL_PIXENC_HSV, "PIXENC_HSV"},
> > +		  {V4L2_FWHT_FL_PIXENC_RGB, "PIXENC_RGB"},
> > +		  {V4L2_FWHT_FL_PIXENC_YUV, "PIXENC_YUV"}),
> > +		  __entry->colorspace, __entry->xfer_func, __entry->ycbcr_enc,
> > +		  __entry->quantization)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_fwht_params_tmpl, v4l2_ctrl_fwht_params,
> > +	TP_PROTO(const struct v4l2_ctrl_fwht_params *p),
> > +	TP_ARGS(p)
> > +);
> > +
> > +#endif
> > +
> > +#undef TRACE_INCLUDE_PATH
> > +#undef TRACE_INCLUDE_FILE
> > +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> > +#define TRACE_INCLUDE_FILE visl-trace-fwht
> > +#include <trace/define_trace.h>
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-h264.h b/drivers/media/test-drivers/visl/visl-trace-h264.h
> > new file mode 100644
> > index 000000000000..0026a0dd5ce9
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-h264.h
> > @@ -0,0 +1,349 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +#if !defined(_VISL_TRACE_H264_H_) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _VISL_TRACE_H264_H_
> > +
> > +#include <linux/tracepoint.h>
> > +#include "visl.h"
> > +
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM visl_h264_controls
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_sps_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
> > +	TP_ARGS(s),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_sps, s)),
> > +	TP_fast_assign(__entry->s = *s),
> > +	TP_printk("\nprofile_idc %u\n"
> > +		  "constraint_set_flags %s\n"
> > +		  "level_idc %u\n"
> > +		  "seq_parameter_set_id %u\n"
> > +		  "chroma_format_idc %u\n"
> > +		  "bit_depth_luma_minus8 %u\n"
> > +		  "bit_depth_chroma_minus8 %u\n"
> > +		  "log2_max_frame_num_minus4 %u\n"
> > +		  "pic_order_cnt_type %u\n"
> > +		  "log2_max_pic_order_cnt_lsb_minus4 %u\n"
> > +		  "max_num_ref_frames %u\n"
> > +		  "num_ref_frames_in_pic_order_cnt_cycle %u\n"
> > +		  "offset_for_ref_frame %s\n"
> > +		  "offset_for_non_ref_pic %d\n"
> > +		  "offset_for_top_to_bottom_field %d\n"
> > +		  "pic_width_in_mbs_minus1 %u\n"
> > +		  "pic_height_in_map_units_minus1 %u\n"
> > +		  "flags %s",
> > +		  __entry->s.profile_idc,
> > +		  __print_flags(__entry->s.constraint_set_flags, "|",
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET0_FLAG, "CONSTRAINT_SET0_FLAG"},
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET1_FLAG, "CONSTRAINT_SET1_FLAG"},
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET2_FLAG, "CONSTRAINT_SET2_FLAG"},
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET3_FLAG, "CONSTRAINT_SET3_FLAG"},
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET4_FLAG, "CONSTRAINT_SET4_FLAG"},
> > +		  {V4L2_H264_SPS_CONSTRAINT_SET5_FLAG, "CONSTRAINT_SET5_FLAG"}),
> > +		  __entry->s.level_idc,
> > +		  __entry->s.seq_parameter_set_id,
> > +		  __entry->s.chroma_format_idc,
> > +		  __entry->s.bit_depth_luma_minus8,
> > +		  __entry->s.bit_depth_chroma_minus8,
> > +		  __entry->s.log2_max_frame_num_minus4,
> > +		  __entry->s.pic_order_cnt_type,
> > +		  __entry->s.log2_max_pic_order_cnt_lsb_minus4,
> > +		  __entry->s.max_num_ref_frames,
> > +		  __entry->s.num_ref_frames_in_pic_order_cnt_cycle,
> > +		  __print_array(__entry->s.offset_for_ref_frame,
> > +		  		ARRAY_SIZE(__entry->s.offset_for_ref_frame),
> > +		  		sizeof(__entry->s.offset_for_ref_frame[0])),
> > +		  __entry->s.offset_for_non_ref_pic,
> > +		  __entry->s.offset_for_top_to_bottom_field,
> > +		  __entry->s.pic_width_in_mbs_minus1,
> > +		  __entry->s.pic_height_in_map_units_minus1,
> > +		  __print_flags(__entry->s.flags, "|",
> > +		  {V4L2_H264_SPS_FLAG_SEPARATE_COLOUR_PLANE, "SEPARATE_COLOUR_PLANE"},
> > +		  {V4L2_H264_SPS_FLAG_QPPRIME_Y_ZERO_TRANSFORM_BYPASS, "QPPRIME_Y_ZERO_TRANSFORM_BYPASS"},
> > +		  {V4L2_H264_SPS_FLAG_DELTA_PIC_ORDER_ALWAYS_ZERO, "DELTA_PIC_ORDER_ALWAYS_ZERO"},
> > +		  {V4L2_H264_SPS_FLAG_GAPS_IN_FRAME_NUM_VALUE_ALLOWED, "GAPS_IN_FRAME_NUM_VALUE_ALLOWED"},
> > +		  {V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY, "FRAME_MBS_ONLY"},
> > +		  {V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD, "MB_ADAPTIVE_FRAME_FIELD"},
> > +		  {V4L2_H264_SPS_FLAG_DIRECT_8X8_INFERENCE, "DIRECT_8X8_INFERENCE"}
> > +		  ))
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pps_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
> > +	TP_ARGS(p),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pps, p)),
> > +	TP_fast_assign(__entry->p = *p),
> > +	TP_printk("\npic_parameter_set_id %u\n"
> > +		  "seq_parameter_set_id %u\n"
> > +		  "num_slice_groups_minus1 %u\n"
> > +		  "num_ref_idx_l0_default_active_minus1 %u\n"
> > +		  "num_ref_idx_l1_default_active_minus1 %u\n"
> > +		  "weighted_bipred_idc %u\n"
> > +		  "pic_init_qp_minus26 %d\n"
> > +		  "pic_init_qs_minus26 %d\n"
> > +		  "chroma_qp_index_offset %d\n"
> > +		  "second_chroma_qp_index_offset %d\n"
> > +		  "flags %s",
> > +		  __entry->p.pic_parameter_set_id,
> > +		  __entry->p.seq_parameter_set_id,
> > +		  __entry->p.num_slice_groups_minus1,
> > +		  __entry->p.num_ref_idx_l0_default_active_minus1,
> > +		  __entry->p.num_ref_idx_l1_default_active_minus1,
> > +		  __entry->p.weighted_bipred_idc,
> > +		  __entry->p.pic_init_qp_minus26,
> > +		  __entry->p.pic_init_qs_minus26,
> > +		  __entry->p.chroma_qp_index_offset,
> > +		  __entry->p.second_chroma_qp_index_offset,
> > +		  __print_flags(__entry->p.flags, "|",
> > +		  {V4L2_H264_PPS_FLAG_ENTROPY_CODING_MODE, "ENTROPY_CODING_MODE"},
> > +		  {V4L2_H264_PPS_FLAG_BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT, "BOTTOM_FIELD_PIC_ORDER_IN_FRAME_PRESENT"},
> > +		  {V4L2_H264_PPS_FLAG_WEIGHTED_PRED, "WEIGHTED_PRED"},
> > +		  {V4L2_H264_PPS_FLAG_DEBLOCKING_FILTER_CONTROL_PRESENT, "DEBLOCKING_FILTER_CONTROL_PRESENT"},
> > +		  {V4L2_H264_PPS_FLAG_CONSTRAINED_INTRA_PRED, "CONSTRAINED_INTRA_PRED"},
> > +		  {V4L2_H264_PPS_FLAG_REDUNDANT_PIC_CNT_PRESENT, "REDUNDANT_PIC_CNT_PRESENT"},
> > +		  {V4L2_H264_PPS_FLAG_TRANSFORM_8X8_MODE, "TRANSFORM_8X8_MODE"},
> > +		  {V4L2_H264_PPS_FLAG_SCALING_MATRIX_PRESENT, "SCALING_MATRIX_PRESENT"}
> > +		  ))
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_scaling_matrix_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
> > +	TP_ARGS(s),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_scaling_matrix, s)),
> > +	TP_fast_assign(__entry->s = *s),
> > +	TP_printk("\nscaling_list_4x4 {%s}\nscaling_list_8x8 {%s}",
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->s.scaling_list_4x4,
> > +				   sizeof(__entry->s.scaling_list_4x4),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->s.scaling_list_8x8,
> > +				   sizeof(__entry->s.scaling_list_8x8),
> > +				   false)
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_pred_weights_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
> > +	TP_ARGS(p),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_pred_weights, p)),
> > +	TP_fast_assign(__entry->p = *p),
> > +	TP_printk("\nluma_log2_weight_denom %u\n"
> > +		  "chroma_log2_weight_denom %u\n"
> > +		  "weight_factor[0].luma_weight %s\n"
> > +		  "weight_factor[0].luma_offset %s\n"
> > +		  "weight_factor[0].chroma_weight {%s}\n"
> > +		  "weight_factor[0].chroma_offset {%s}\n"
> > +		  "weight_factor[1].luma_weight %s\n"
> > +		  "weight_factor[1].luma_offset %s\n"
> > +		  "weight_factor[1].chroma_weight {%s}\n"
> > +		  "weight_factor[1].chroma_offset {%s}\n",
> > +		  __entry->p.luma_log2_weight_denom,
> > +		  __entry->p.chroma_log2_weight_denom,
> > +		  __print_array(__entry->p.weight_factors[0].luma_weight,
> > +		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_weight),
> > +		  		sizeof(__entry->p.weight_factors[0].luma_weight[0])),
> > +		  __print_array(__entry->p.weight_factors[0].luma_offset,
> > +		  		ARRAY_SIZE(__entry->p.weight_factors[0].luma_offset),
> > +		  		sizeof(__entry->p.weight_factors[0].luma_offset[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.weight_factors[0].chroma_weight,
> > +				   sizeof(__entry->p.weight_factors[0].chroma_weight),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.weight_factors[0].chroma_offset,
> > +				   sizeof(__entry->p.weight_factors[0].chroma_offset),
> > +				   false),
> > +		  __print_array(__entry->p.weight_factors[1].luma_weight,
> > +		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_weight),
> > +		  		sizeof(__entry->p.weight_factors[1].luma_weight[0])),
> > +		  __print_array(__entry->p.weight_factors[1].luma_offset,
> > +		  		ARRAY_SIZE(__entry->p.weight_factors[1].luma_offset),
> > +		  		sizeof(__entry->p.weight_factors[1].luma_offset[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.weight_factors[1].chroma_weight,
> > +				   sizeof(__entry->p.weight_factors[1].chroma_weight),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.weight_factors[1].chroma_offset,
> > +				   sizeof(__entry->p.weight_factors[1].chroma_offset),
> > +				   false)
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_slice_params_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
> > +	TP_ARGS(s),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_slice_params, s)),
> > +	TP_fast_assign(__entry->s = *s),
> > +	TP_printk("\nheader_bit_size %u\n"
> > +		  "first_mb_in_slice %u\n"
> > +		  "slice_type %s\n"
> > +		  "colour_plane_id %u\n"
> > +		  "redundant_pic_cnt %u\n"
> > +		  "cabac_init_idc %u\n"
> > +		  "slice_qp_delta %d\n"
> > +		  "slice_qs_delta %d\n"
> > +		  "disable_deblocking_filter_idc %u\n"
> > +		  "slice_alpha_c0_offset_div2 %d\n"
> > +		  "slice_beta_offset_div2 %d\n"
> > +		  "num_ref_idx_l0_active_minus1 %u\n"
> > +		  "num_ref_idx_l1_active_minus1 %u\n"
> > +		  "flags %s",
> > +		  __entry->s.header_bit_size,
> > +		  __entry->s.first_mb_in_slice,
> > +		  __print_symbolic(__entry->s.slice_type,
> > +		  {V4L2_H264_SLICE_TYPE_P, "P"},
> > +		  {V4L2_H264_SLICE_TYPE_B, "B"},
> > +		  {V4L2_H264_SLICE_TYPE_I, "I"},
> > +		  {V4L2_H264_SLICE_TYPE_SP, "SP"},
> > +		  {V4L2_H264_SLICE_TYPE_SI, "SI"}),
> > +		  __entry->s.colour_plane_id,
> > +		  __entry->s.redundant_pic_cnt,
> > +		  __entry->s.cabac_init_idc,
> > +		  __entry->s.slice_qp_delta,
> > +		  __entry->s.slice_qs_delta,
> > +		  __entry->s.disable_deblocking_filter_idc,
> > +		  __entry->s.slice_alpha_c0_offset_div2,
> > +		  __entry->s.slice_beta_offset_div2,
> > +		  __entry->s.num_ref_idx_l0_active_minus1,
> > +		  __entry->s.num_ref_idx_l1_active_minus1,
> > +		  __print_flags(__entry->s.flags, "|",
> > +		  {V4L2_H264_SLICE_FLAG_DIRECT_SPATIAL_MV_PRED, "DIRECT_SPATIAL_MV_PRED"},
> > +		  {V4L2_H264_SLICE_FLAG_SP_FOR_SWITCH, "SP_FOR_SWITCH"})
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_h264_reference_tmpl,
> > +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> > +	TP_ARGS(r, i),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_h264_reference, r)
> > +			 __field(int, i)),
> > +	TP_fast_assign(__entry->r = *r; __entry->i = i;),
> > +	TP_printk("[%d]: fields %s index %u",
> > +		  __entry->i,
> > +		  __print_flags(__entry->r.fields, "|",
> > +		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
> > +		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
> > +		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
> > +		  __entry->r.index
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_h264_decode_params_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
> > +	TP_ARGS(d),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_h264_decode_params, d)),
> > +	TP_fast_assign(__entry->d = *d),
> > +	TP_printk("\nnal_ref_idc %u\n"
> > +		  "frame_num %u\n"
> > +		  "top_field_order_cnt %d\n"
> > +		  "bottom_field_order_cnt %d\n"
> > +		  "idr_pic_id %u\n"
> > +		  "pic_order_cnt_lsb %u\n"
> > +		  "delta_pic_order_cnt_bottom %d\n"
> > +		  "delta_pic_order_cnt0 %d\n"
> > +		  "delta_pic_order_cnt1 %d\n"
> > +		  "dec_ref_pic_marking_bit_size %u\n"
> > +		  "pic_order_cnt_bit_size %u\n"
> > +		  "slice_group_change_cycle %u\n"
> > +		  "flags %s\n",
> > +		  __entry->d.nal_ref_idc,
> > +		  __entry->d.frame_num,
> > +		  __entry->d.top_field_order_cnt,
> > +		  __entry->d.bottom_field_order_cnt,
> > +		  __entry->d.idr_pic_id,
> > +		  __entry->d.pic_order_cnt_lsb,
> > +		  __entry->d.delta_pic_order_cnt_bottom,
> > +		  __entry->d.delta_pic_order_cnt0,
> > +		  __entry->d.delta_pic_order_cnt1,
> > +		  __entry->d.dec_ref_pic_marking_bit_size,
> > +		  __entry->d.pic_order_cnt_bit_size,
> > +		  __entry->d.slice_group_change_cycle,
> > +		  __print_flags(__entry->d.flags, "|",
> > +		  {V4L2_H264_DECODE_PARAM_FLAG_IDR_PIC, "IDR_PIC"},
> > +		  {V4L2_H264_DECODE_PARAM_FLAG_FIELD_PIC, "FIELD_PIC"},
> > +		  {V4L2_H264_DECODE_PARAM_FLAG_BOTTOM_FIELD, "BOTTOM_FIELD"},
> > +		  {V4L2_H264_DECODE_PARAM_FLAG_PFRAME, "PFRAME"},
> > +		  {V4L2_H264_DECODE_PARAM_FLAG_BFRAME, "BFRAME"})
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_h264_dpb_entry_tmpl,
> > +	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
> > +	TP_ARGS(e, i),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_h264_dpb_entry, e)
> > +			 __field(int, i)),
> > +	TP_fast_assign(__entry->e = *e; __entry->i = i;),
> > +	TP_printk("[%d]: reference_ts %llu, pic_num %u frame_num %u fields %s "
> > +		  "top_field_order_cnt %d bottom_field_order_cnt %d flags %s",
> > +		  __entry->i,
> > +		  __entry->e.reference_ts,
> > +		  __entry->e.pic_num,
> > +		  __entry->e.frame_num,
> > +		  __print_flags(__entry->e.fields, "|",
> > +		  {V4L2_H264_TOP_FIELD_REF, "TOP_FIELD_REF"},
> > +		  {V4L2_H264_BOTTOM_FIELD_REF, "BOTTOM_FIELD_REF"},
> > +		  {V4L2_H264_FRAME_REF, "FRAME_REF"}),
> > +		  __entry->e.top_field_order_cnt,
> > +		  __entry->e.bottom_field_order_cnt,
> > +		  __print_flags(__entry->e.flags, "|",
> > +		  {V4L2_H264_DPB_ENTRY_FLAG_VALID, "VALID"},
> > +		  {V4L2_H264_DPB_ENTRY_FLAG_ACTIVE, "ACTIVE"},
> > +		  {V4L2_H264_DPB_ENTRY_FLAG_LONG_TERM, "LONG_TERM"},
> > +		  {V4L2_H264_DPB_ENTRY_FLAG_FIELD, "FIELD"})
> > +	)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_sps_tmpl, v4l2_ctrl_h264_sps,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_sps *s),
> > +	TP_ARGS(s)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_pps_tmpl, v4l2_ctrl_h264_pps,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_pps *p),
> > +	TP_ARGS(p)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_scaling_matrix_tmpl, v4l2_ctrl_h264_scaling_matrix,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_scaling_matrix *s),
> > +	TP_ARGS(s)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_pred_weights_tmpl, v4l2_ctrl_h264_pred_weights,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_pred_weights *p),
> > +	TP_ARGS(p)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_slice_params_tmpl, v4l2_ctrl_h264_slice_params,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_slice_params *s),
> > +	TP_ARGS(s)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list0,
> > +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> > +	TP_ARGS(r, i)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_h264_reference_tmpl, v4l2_h264_ref_pic_list1,
> > +	TP_PROTO(const struct v4l2_h264_reference *r, int i),
> > +	TP_ARGS(r, i)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_h264_decode_params_tmpl, v4l2_ctrl_h264_decode_params,
> > +	TP_PROTO(const struct v4l2_ctrl_h264_decode_params *d),
> > +	TP_ARGS(d)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_h264_dpb_entry_tmpl, v4l2_h264_dpb_entry,
> > +	TP_PROTO(const struct v4l2_h264_dpb_entry *e, int i),
> > +	TP_ARGS(e, i)
> > +);
> > +
> > +#endif
> > +
> > +#undef TRACE_INCLUDE_PATH
> > +#undef TRACE_INCLUDE_FILE
> > +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> > +#define TRACE_INCLUDE_FILE visl-trace-h264
> > +#include <trace/define_trace.h>
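
As a reviewer aid: these trace events lean heavily on __print_array() and __print_hex_dump() with DUMP_PREFIX_NONE, rowsize 32, groupsize 1. For anyone parsing the resulting trace output in userspace, here is a rough standalone sketch of what that hex-dump variant emits (the real in-kernel formatting is done by hex_dump_to_buffer(); sketch_hex_dump() below is an illustrative name, not kernel API):

```c
#include <stdio.h>
#include <stddef.h>

/* Userspace approximation of __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
 * buf, len, false) as used throughout these trace events: rows of up to 32
 * bytes, each byte as two lowercase hex digits, space-separated within a row.
 */
static size_t sketch_hex_dump(const unsigned char *buf, size_t len,
			      char *out, size_t outsz)
{
	size_t n = 0;

	for (size_t i = 0; i < len && n + 4 < outsz; i++) {
		/* newline at the end of a 32-byte row or at the end of data */
		const char *sep = ((i + 1) % 32 == 0 || i + 1 == len) ? "\n" : " ";

		n += (size_t)snprintf(out + n, outsz - n, "%02x%s", buf[i], sep);
	}
	return n;
}
```

This is only meant to show the shape of the output; the exact separators in the trace buffer should be verified against the kernel helper before building a parser on top of it.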
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-mpeg2.h b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
> > new file mode 100644
> > index 000000000000..889a3ba56502
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-mpeg2.h
> > @@ -0,0 +1,99 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +#if !defined(_VISL_TRACE_MPEG2_H_) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _VISL_TRACE_MPEG2_H_
> > +
> > +#include <linux/tracepoint.h>
> > +#include "visl.h"
> > +
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM visl_mpeg2_controls
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_seq_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
> > +	TP_ARGS(s),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_sequence, s)),
> > +	TP_fast_assign(__entry->s = *s;),
> > +	TP_printk("\nhorizontal_size %u\nvertical_size %u\nvbv_buffer_size %u\n"
> > +		  "profile_and_level_indication %u\nchroma_format %u\nflags %s\n",
> > +		  __entry->s.horizontal_size,
> > +		  __entry->s.vertical_size,
> > +		  __entry->s.vbv_buffer_size,
> > +		  __entry->s.profile_and_level_indication,
> > +		  __entry->s.chroma_format,
> > +		  __print_flags(__entry->s.flags, "|",
> > +		  {V4L2_MPEG2_SEQ_FLAG_PROGRESSIVE, "PROGRESSIVE"})
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_pic_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
> > +	TP_ARGS(p),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_picture, p)),
> > +	TP_fast_assign(__entry->p = *p;),
> > +	TP_printk("\nbackward_ref_ts %llu\nforward_ref_ts %llu\nflags %s\nf_code {%s}\n"
> > +		  "picture_coding_type %u\npicture_structure %u\nintra_dc_precision %u\n",
> > +		  __entry->p.backward_ref_ts,
> > +		  __entry->p.forward_ref_ts,
> > +		  __print_flags(__entry->p.flags, "|",
> > +		  {V4L2_MPEG2_PIC_FLAG_TOP_FIELD_FIRST, "TOP_FIELD_FIRST"},
> > +		  {V4L2_MPEG2_PIC_FLAG_FRAME_PRED_DCT, "FRAME_PRED_DCT"},
> > +		  {V4L2_MPEG2_PIC_FLAG_CONCEALMENT_MV, "CONCEALMENT_MV"},
> > +		  {V4L2_MPEG2_PIC_FLAG_Q_SCALE_TYPE, "Q_SCALE_TYPE"},
> > +		  {V4L2_MPEG2_PIC_FLAG_INTRA_VLC, "INTRA_VLC"},
> > +		  {V4L2_MPEG2_PIC_FLAG_ALT_SCAN, "ALT_SCAN"},
> > +		  {V4L2_MPEG2_PIC_FLAG_REPEAT_FIRST, "REPEAT_FIRST"},
> > +		  {V4L2_MPEG2_PIC_FLAG_PROGRESSIVE, "PROGRESSIVE"}),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.f_code,
> > +				   sizeof(__entry->p.f_code),
> > +				   false),
> > +		  __entry->p.picture_coding_type,
> > +		  __entry->p.picture_structure,
> > +		  __entry->p.intra_dc_precision
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_mpeg2_quant_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
> > +	TP_ARGS(q),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_mpeg2_quantisation, q)),
> > +	TP_fast_assign(__entry->q = *q;),
> > +	TP_printk("\nintra_quantiser_matrix %s\nnon_intra_quantiser_matrix %s\n"
> > +		  "chroma_intra_quantiser_matrix %s\nchroma_non_intra_quantiser_matrix %s\n",
> > +		  __print_array(__entry->q.intra_quantiser_matrix,
> > +				ARRAY_SIZE(__entry->q.intra_quantiser_matrix),
> > +				sizeof(__entry->q.intra_quantiser_matrix[0])),
> > +		  __print_array(__entry->q.non_intra_quantiser_matrix,
> > +				ARRAY_SIZE(__entry->q.non_intra_quantiser_matrix),
> > +				sizeof(__entry->q.non_intra_quantiser_matrix[0])),
> > +		  __print_array(__entry->q.chroma_intra_quantiser_matrix,
> > +				ARRAY_SIZE(__entry->q.chroma_intra_quantiser_matrix),
> > +				sizeof(__entry->q.chroma_intra_quantiser_matrix[0])),
> > +		  __print_array(__entry->q.chroma_non_intra_quantiser_matrix,
> > +				ARRAY_SIZE(__entry->q.chroma_non_intra_quantiser_matrix),
> > +				sizeof(__entry->q.chroma_non_intra_quantiser_matrix[0]))
> > +		  )
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_mpeg2_seq_tmpl, v4l2_ctrl_mpeg2_sequence,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_sequence *s),
> > +	TP_ARGS(s)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_mpeg2_pic_tmpl, v4l2_ctrl_mpeg2_picture,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_picture *p),
> > +	TP_ARGS(p)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_mpeg2_quant_tmpl, v4l2_ctrl_mpeg2_quantisation,
> > +	TP_PROTO(const struct v4l2_ctrl_mpeg2_quantisation *q),
> > +	TP_ARGS(q)
> > +);
> > +
> > +#endif
> > +
> > +#undef TRACE_INCLUDE_PATH
> > +#undef TRACE_INCLUDE_FILE
> > +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> > +#define TRACE_INCLUDE_FILE visl-trace-mpeg2
> > +#include <trace/define_trace.h>
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-points.c b/drivers/media/test-drivers/visl/visl-trace-points.c
> > new file mode 100644
> > index 000000000000..6aa98f90c20a
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-points.c
> > @@ -0,0 +1,9 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include "visl.h"
> > +
> > +#define CREATE_TRACE_POINTS
> > +#include "visl-trace-fwht.h"
> > +#include "visl-trace-mpeg2.h"
> > +#include "visl-trace-vp8.h"
> > +#include "visl-trace-vp9.h"
> > +#include "visl-trace-h264.h"
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-vp8.h b/drivers/media/test-drivers/visl/visl-trace-vp8.h
> > new file mode 100644
> > index 000000000000..18c610ea18ab
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-vp8.h
> > @@ -0,0 +1,156 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +#if !defined(_VISL_TRACE_VP8_H_) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _VISL_TRACE_VP8_H_
> > +
> > +#include <linux/tracepoint.h>
> > +#include "visl.h"
> > +
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM visl_vp8_controls
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_entropy_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> > +	TP_ARGS(f),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
> > +	TP_fast_assign(__entry->f = *f;),
> > +	TP_printk("\nentropy.coeff_probs {%s}\n"
> > +		  "entropy.y_mode_probs %s\n"
> > +		  "entropy.uv_mode_probs %s\n"
> > +		  "entropy.mv_probs {%s}",
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->f.entropy.coeff_probs,
> > +				   sizeof(__entry->f.entropy.coeff_probs),
> > +				   false),
> > +		  __print_array(__entry->f.entropy.y_mode_probs,
> > +				ARRAY_SIZE(__entry->f.entropy.y_mode_probs),
> > +				sizeof(__entry->f.entropy.y_mode_probs[0])),
> > +		  __print_array(__entry->f.entropy.uv_mode_probs,
> > +				ARRAY_SIZE(__entry->f.entropy.uv_mode_probs),
> > +				sizeof(__entry->f.entropy.uv_mode_probs[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->f.entropy.mv_probs,
> > +				   sizeof(__entry->f.entropy.mv_probs),
> > +				   false)
> > +		  )
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_vp8_frame_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> > +	TP_ARGS(f),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp8_frame, f)),
> > +	TP_fast_assign(__entry->f = *f;),
> > +	TP_printk("\nsegment.quant_update %s\n"
> > +		  "segment.lf_update %s\n"
> > +		  "segment.segment_probs %s\n"
> > +		  "segment.flags %s\n"
> > +		  "lf.ref_frm_delta %s\n"
> > +		  "lf.mb_mode_delta %s\n"
> > +		  "lf.sharpness_level %u\n"
> > +		  "lf.level %u\n"
> > +		  "lf.flags %s\n"
> > +		  "quant.y_ac_qi %u\n"
> > +		  "quant.y_dc_delta %d\n"
> > +		  "quant.y2_dc_delta %d\n"
> > +		  "quant.y2_ac_delta %d\n"
> > +		  "quant.uv_dc_delta %d\n"
> > +		  "quant.uv_ac_delta %d\n"
> > +		  "coder_state.range %u\n"
> > +		  "coder_state.value %u\n"
> > +		  "coder_state.bit_count %u\n"
> > +		  "width %u\n"
> > +		  "height %u\n"
> > +		  "horizontal_scale %u\n"
> > +		  "vertical_scale %u\n"
> > +		  "version %u\n"
> > +		  "prob_skip_false %u\n"
> > +		  "prob_intra %u\n"
> > +		  "prob_last %u\n"
> > +		  "prob_gf %u\n"
> > +		  "num_dct_parts %u\n"
> > +		  "first_part_size %u\n"
> > +		  "first_part_header_bits %u\n"
> > +		  "dct_part_sizes %s\n"
> > +		  "last_frame_ts %llu\n"
> > +		  "golden_frame_ts %llu\n"
> > +		  "alt_frame_ts %llu\n"
> > +		  "flags %s",
> > +		  __print_array(__entry->f.segment.quant_update,
> > +				ARRAY_SIZE(__entry->f.segment.quant_update),
> > +				sizeof(__entry->f.segment.quant_update[0])),
> > +		  __print_array(__entry->f.segment.lf_update,
> > +				ARRAY_SIZE(__entry->f.segment.lf_update),
> > +				sizeof(__entry->f.segment.lf_update[0])),
> > +		  __print_array(__entry->f.segment.segment_probs,
> > +				ARRAY_SIZE(__entry->f.segment.segment_probs),
> > +				sizeof(__entry->f.segment.segment_probs[0])),
> > +		  __print_flags(__entry->f.segment.flags, "|",
> > +		  {V4L2_VP8_SEGMENT_FLAG_ENABLED, "SEGMENT_ENABLED"},
> > +		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_MAP, "SEGMENT_UPDATE_MAP"},
> > +		  {V4L2_VP8_SEGMENT_FLAG_UPDATE_FEATURE_DATA, "SEGMENT_UPDATE_FEATURE_DATA"},
> > +		  {V4L2_VP8_SEGMENT_FLAG_DELTA_VALUE_MODE, "SEGMENT_DELTA_VALUE_MODE"}),
> > +		  __print_array(__entry->f.lf.ref_frm_delta,
> > +				ARRAY_SIZE(__entry->f.lf.ref_frm_delta),
> > +				sizeof(__entry->f.lf.ref_frm_delta[0])),
> > +		  __print_array(__entry->f.lf.mb_mode_delta,
> > +				ARRAY_SIZE(__entry->f.lf.mb_mode_delta),
> > +				sizeof(__entry->f.lf.mb_mode_delta[0])),
> > +		  __entry->f.lf.sharpness_level,
> > +		  __entry->f.lf.level,
> > +		  __print_flags(__entry->f.lf.flags, "|",
> > +		  {V4L2_VP8_LF_ADJ_ENABLE, "LF_ADJ_ENABLED"},
> > +		  {V4L2_VP8_LF_DELTA_UPDATE, "LF_DELTA_UPDATE"},
> > +		  {V4L2_VP8_LF_FILTER_TYPE_SIMPLE, "LF_FILTER_TYPE_SIMPLE"}),
> > +		  __entry->f.quant.y_ac_qi,
> > +		  __entry->f.quant.y_dc_delta,
> > +		  __entry->f.quant.y2_dc_delta,
> > +		  __entry->f.quant.y2_ac_delta,
> > +		  __entry->f.quant.uv_dc_delta,
> > +		  __entry->f.quant.uv_ac_delta,
> > +		  __entry->f.coder_state.range,
> > +		  __entry->f.coder_state.value,
> > +		  __entry->f.coder_state.bit_count,
> > +		  __entry->f.width,
> > +		  __entry->f.height,
> > +		  __entry->f.horizontal_scale,
> > +		  __entry->f.vertical_scale,
> > +		  __entry->f.version,
> > +		  __entry->f.prob_skip_false,
> > +		  __entry->f.prob_intra,
> > +		  __entry->f.prob_last,
> > +		  __entry->f.prob_gf,
> > +		  __entry->f.num_dct_parts,
> > +		  __entry->f.first_part_size,
> > +		  __entry->f.first_part_header_bits,
> > +		  __print_array(__entry->f.dct_part_sizes,
> > +				ARRAY_SIZE(__entry->f.dct_part_sizes),
> > +				sizeof(__entry->f.dct_part_sizes[0])),
> > +		  __entry->f.last_frame_ts,
> > +		  __entry->f.golden_frame_ts,
> > +		  __entry->f.alt_frame_ts,
> > +		  __print_flags(__entry->f.flags, "|",
> > +		  {V4L2_VP8_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
> > +		  {V4L2_VP8_FRAME_FLAG_EXPERIMENTAL, "EXPERIMENTAL"},
> > +		  {V4L2_VP8_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
> > +		  {V4L2_VP8_FRAME_FLAG_MB_NO_SKIP_COEFF, "MB_NO_SKIP_COEFF"},
> > +		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
> > +		  {V4L2_VP8_FRAME_FLAG_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"})
> > +		  )
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_vp8_frame_tmpl, v4l2_ctrl_vp8_frame,
> > +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> > +	TP_ARGS(f)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_vp8_entropy_tmpl, v4l2_ctrl_vp8_entropy,
> > +	TP_PROTO(const struct v4l2_ctrl_vp8_frame *f),
> > +	TP_ARGS(f)
> > +);
> > +
> > +#endif
> > +
> > +#undef TRACE_INCLUDE_PATH
> > +#undef TRACE_INCLUDE_FILE
> > +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> > +#define TRACE_INCLUDE_FILE visl-trace-vp8
> > +#include <trace/define_trace.h>
> > diff --git a/drivers/media/test-drivers/visl/visl-trace-vp9.h b/drivers/media/test-drivers/visl/visl-trace-vp9.h
> > new file mode 100644
> > index 000000000000..e0907231eac7
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-trace-vp9.h
> > @@ -0,0 +1,292 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +#if !defined(_VISL_TRACE_VP9_H_) || defined(TRACE_HEADER_MULTI_READ)
> > +#define _VISL_TRACE_VP9_H_
> > +
> > +#include <linux/tracepoint.h>
> > +#include "visl.h"
> > +
> > +#undef TRACE_SYSTEM
> > +#define TRACE_SYSTEM visl_vp9_controls
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_frame_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
> > +	TP_ARGS(f),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_frame, f)),
> > +	TP_fast_assign(__entry->f = *f;),
> > +	TP_printk("\nlf.ref_deltas %s\n"
> > +		  "lf.mode_deltas %s\n"
> > +		  "lf.level %u\n"
> > +		  "lf.sharpness %u\n"
> > +		  "lf.flags %s\n"
> > +		  "quant.base_q_idx %u\n"
> > +		  "quant.delta_q_y_dc %d\n"
> > +		  "quant.delta_q_uv_dc %d\n"
> > +		  "quant.delta_q_uv_ac %d\n"
> > +		  "seg.feature_data {%s}\n"
> > +		  "seg.feature_enabled %s\n"
> > +		  "seg.tree_probs %s\n"
> > +		  "seg.pred_probs %s\n"
> > +		  "seg.flags %s\n"
> > +		  "flags %s\n"
> > +		  "compressed_header_size %u\n"
> > +		  "uncompressed_header_size %u\n"
> > +		  "frame_width_minus_1 %u\n"
> > +		  "frame_height_minus_1 %u\n"
> > +		  "render_width_minus_1 %u\n"
> > +		  "render_height_minus_1 %u\n"
> > +		  "last_frame_ts %llu\n"
> > +		  "golden_frame_ts %llu\n"
> > +		  "alt_frame_ts %llu\n"
> > +		  "ref_frame_sign_bias %s\n"
> > +		  "reset_frame_context %s\n"
> > +		  "frame_context_idx %u\n"
> > +		  "profile %u\n"
> > +		  "bit_depth %u\n"
> > +		  "interpolation_filter %s\n"
> > +		  "tile_cols_log2 %u\n"
> > +		  "tile_rows_log2 %u\n"
> > +		  "reference_mode %s\n",
> > +		  __print_array(__entry->f.lf.ref_deltas,
> > +				ARRAY_SIZE(__entry->f.lf.ref_deltas),
> > +				sizeof(__entry->f.lf.ref_deltas[0])),
> > +		  __print_array(__entry->f.lf.mode_deltas,
> > +				ARRAY_SIZE(__entry->f.lf.mode_deltas),
> > +				sizeof(__entry->f.lf.mode_deltas[0])),
> > +		  __entry->f.lf.level,
> > +		  __entry->f.lf.sharpness,
> > +		  __print_flags(__entry->f.lf.flags, "|",
> > +		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_ENABLED, "DELTA_ENABLED"},
> > +		  {V4L2_VP9_LOOP_FILTER_FLAG_DELTA_UPDATE, "DELTA_UPDATE"}),
> > +		  __entry->f.quant.base_q_idx,
> > +		  __entry->f.quant.delta_q_y_dc,
> > +		  __entry->f.quant.delta_q_uv_dc,
> > +		  __entry->f.quant.delta_q_uv_ac,
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->f.seg.feature_data,
> > +				   sizeof(__entry->f.seg.feature_data),
> > +				   false),
> > +		  __print_array(__entry->f.seg.feature_enabled,
> > +				ARRAY_SIZE(__entry->f.seg.feature_enabled),
> > +				sizeof(__entry->f.seg.feature_enabled[0])),
> > +		  __print_array(__entry->f.seg.tree_probs,
> > +				ARRAY_SIZE(__entry->f.seg.tree_probs),
> > +				sizeof(__entry->f.seg.tree_probs[0])),
> > +		  __print_array(__entry->f.seg.pred_probs,
> > +				ARRAY_SIZE(__entry->f.seg.pred_probs),
> > +				sizeof(__entry->f.seg.pred_probs[0])),
> > +		  __print_flags(__entry->f.seg.flags, "|",
> > +		  {V4L2_VP9_SEGMENTATION_FLAG_ENABLED, "ENABLED"},
> > +		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_MAP, "UPDATE_MAP"},
> > +		  {V4L2_VP9_SEGMENTATION_FLAG_TEMPORAL_UPDATE, "TEMPORAL_UPDATE"},
> > +		  {V4L2_VP9_SEGMENTATION_FLAG_UPDATE_DATA, "UPDATE_DATA"},
> > +		  {V4L2_VP9_SEGMENTATION_FLAG_ABS_OR_DELTA_UPDATE, "ABS_OR_DELTA_UPDATE"}),
> > +		  __print_flags(__entry->f.flags, "|",
> > +		  {V4L2_VP9_FRAME_FLAG_KEY_FRAME, "KEY_FRAME"},
> > +		  {V4L2_VP9_FRAME_FLAG_SHOW_FRAME, "SHOW_FRAME"},
> > +		  {V4L2_VP9_FRAME_FLAG_ERROR_RESILIENT, "ERROR_RESILIENT"},
> > +		  {V4L2_VP9_FRAME_FLAG_INTRA_ONLY, "INTRA_ONLY"},
> > +		  {V4L2_VP9_FRAME_FLAG_ALLOW_HIGH_PREC_MV, "ALLOW_HIGH_PREC_MV"},
> > +		  {V4L2_VP9_FRAME_FLAG_REFRESH_FRAME_CTX, "REFRESH_FRAME_CTX"},
> > +		  {V4L2_VP9_FRAME_FLAG_PARALLEL_DEC_MODE, "PARALLEL_DEC_MODE"},
> > +		  {V4L2_VP9_FRAME_FLAG_X_SUBSAMPLING, "X_SUBSAMPLING"},
> > +		  {V4L2_VP9_FRAME_FLAG_Y_SUBSAMPLING, "Y_SUBSAMPLING"},
> > +		  {V4L2_VP9_FRAME_FLAG_COLOR_RANGE_FULL_SWING, "COLOR_RANGE_FULL_SWING"}),
> > +		  __entry->f.compressed_header_size,
> > +		  __entry->f.uncompressed_header_size,
> > +		  __entry->f.frame_width_minus_1,
> > +		  __entry->f.frame_height_minus_1,
> > +		  __entry->f.render_width_minus_1,
> > +		  __entry->f.render_height_minus_1,
> > +		  __entry->f.last_frame_ts,
> > +		  __entry->f.golden_frame_ts,
> > +		  __entry->f.alt_frame_ts,
> > +		  __print_symbolic(__entry->f.ref_frame_sign_bias,
> > +		  {V4L2_VP9_SIGN_BIAS_LAST, "SIGN_BIAS_LAST"},
> > +		  {V4L2_VP9_SIGN_BIAS_GOLDEN, "SIGN_BIAS_GOLDEN"},
> > +		  {V4L2_VP9_SIGN_BIAS_ALT, "SIGN_BIAS_ALT"}),
> > +		  __print_symbolic(__entry->f.reset_frame_context,
> > +		  {V4L2_VP9_RESET_FRAME_CTX_NONE, "RESET_FRAME_CTX_NONE"},
> > +		  {V4L2_VP9_RESET_FRAME_CTX_SPEC, "RESET_FRAME_CTX_SPEC"},
> > +		  {V4L2_VP9_RESET_FRAME_CTX_ALL, "RESET_FRAME_CTX_ALL"}),
> > +		  __entry->f.frame_context_idx,
> > +		  __entry->f.profile,
> > +		  __entry->f.bit_depth,
> > +		  __print_symbolic(__entry->f.interpolation_filter,
> > +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP, "INTERP_FILTER_EIGHTTAP"},
> > +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SMOOTH, "INTERP_FILTER_EIGHTTAP_SMOOTH"},
> > +		  {V4L2_VP9_INTERP_FILTER_EIGHTTAP_SHARP, "INTERP_FILTER_EIGHTTAP_SHARP"},
> > +		  {V4L2_VP9_INTERP_FILTER_BILINEAR, "INTERP_FILTER_BILINEAR"},
> > +		  {V4L2_VP9_INTERP_FILTER_SWITCHABLE, "INTERP_FILTER_SWITCHABLE"}),
> > +		  __entry->f.tile_cols_log2,
> > +		  __entry->f.tile_rows_log2,
> > +		  __print_symbolic(__entry->f.reference_mode,
> > +		  {V4L2_VP9_REFERENCE_MODE_SINGLE_REFERENCE, "REFERENCE_MODE_SINGLE_REFERENCE"},
> > +		  {V4L2_VP9_REFERENCE_MODE_COMPOUND_REFERENCE, "REFERENCE_MODE_COMPOUND_REFERENCE"},
> > +		  {V4L2_VP9_REFERENCE_MODE_SELECT, "REFERENCE_MODE_SELECT"}))
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_hdr_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> > +	TP_ARGS(h),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
> > +	TP_fast_assign(__entry->h = *h;),
> > +	TP_printk("\ntx_mode %s\n"
> > +		  "tx8 {%s}\n"
> > +		  "tx16 {%s}\n"
> > +		  "tx32 {%s}\n"
> > +		  "skip %s\n"
> > +		  "inter_mode {%s}\n"
> > +		  "interp_filter {%s}\n"
> > +		  "is_inter %s\n"
> > +		  "comp_mode %s\n"
> > +		  "single_ref {%s}\n"
> > +		  "comp_ref %s\n"
> > +		  "y_mode {%s}\n"
> > +		  "uv_mode {%s}\n"
> > +		  "partition {%s}\n",
> > +		  __print_symbolic(__entry->h.tx_mode,
> > +		  {V4L2_VP9_TX_MODE_ONLY_4X4, "TX_MODE_ONLY_4X4"},
> > +		  {V4L2_VP9_TX_MODE_ALLOW_8X8, "TX_MODE_ALLOW_8X8"},
> > +		  {V4L2_VP9_TX_MODE_ALLOW_16X16, "TX_MODE_ALLOW_16X16"},
> > +		  {V4L2_VP9_TX_MODE_ALLOW_32X32, "TX_MODE_ALLOW_32X32"},
> > +		  {V4L2_VP9_TX_MODE_SELECT, "TX_MODE_SELECT"}),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.tx8,
> > +				   sizeof(__entry->h.tx8),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.tx16,
> > +				   sizeof(__entry->h.tx16),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.tx32,
> > +				   sizeof(__entry->h.tx32),
> > +				   false),
> > +		  __print_array(__entry->h.skip,
> > +				ARRAY_SIZE(__entry->h.skip),
> > +				sizeof(__entry->h.skip[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.inter_mode,
> > +				   sizeof(__entry->h.inter_mode),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.interp_filter,
> > +				   sizeof(__entry->h.interp_filter),
> > +				   false),
> > +		  __print_array(__entry->h.is_inter,
> > +				ARRAY_SIZE(__entry->h.is_inter),
> > +				sizeof(__entry->h.is_inter[0])),
> > +		  __print_array(__entry->h.comp_mode,
> > +				ARRAY_SIZE(__entry->h.comp_mode),
> > +				sizeof(__entry->h.comp_mode[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.single_ref,
> > +				   sizeof(__entry->h.single_ref),
> > +				   false),
> > +		  __print_array(__entry->h.comp_ref,
> > +				ARRAY_SIZE(__entry->h.comp_ref),
> > +				sizeof(__entry->h.comp_ref[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.y_mode,
> > +				   sizeof(__entry->h.y_mode),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.uv_mode,
> > +				   sizeof(__entry->h.uv_mode),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.partition,
> > +				   sizeof(__entry->h.partition),
> > +				   false)
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_ctrl_vp9_compressed_coef_tmpl,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> > +	TP_ARGS(h),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_ctrl_vp9_compressed_hdr, h)),
> > +	TP_fast_assign(__entry->h = *h;),
> > +	TP_printk("\ncoef {%s}",
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->h.coef,
> > +				   sizeof(__entry->h.coef),
> > +				   false)
> > +	)
> > +);
> > +
> > +DECLARE_EVENT_CLASS(v4l2_vp9_mv_probs_tmpl,
> > +	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
> > +	TP_ARGS(p),
> > +	TP_STRUCT__entry(__field_struct(struct v4l2_vp9_mv_probs, p)),
> > +	TP_fast_assign(__entry->p = *p;),
> > +	TP_printk("\njoint %s\n"
> > +		  "sign %s\n"
> > +		  "classes {%s}\n"
> > +		  "class0_bit %s\n"
> > +		  "bits {%s}\n"
> > +		  "class0_fr {%s}\n"
> > +		  "fr {%s}\n"
> > +		  "class0_hp %s\n"
> > +		  "hp %s\n",
> > +		  __print_array(__entry->p.joint,
> > +				ARRAY_SIZE(__entry->p.joint),
> > +				sizeof(__entry->p.joint[0])),
> > +		  __print_array(__entry->p.sign,
> > +				ARRAY_SIZE(__entry->p.sign),
> > +				sizeof(__entry->p.sign[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.classes,
> > +				   sizeof(__entry->p.classes),
> > +				   false),
> > +		  __print_array(__entry->p.class0_bit,
> > +				ARRAY_SIZE(__entry->p.class0_bit),
> > +				sizeof(__entry->p.class0_bit[0])),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.bits,
> > +				   sizeof(__entry->p.bits),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.class0_fr,
> > +				   sizeof(__entry->p.class0_fr),
> > +				   false),
> > +		  __print_hex_dump("", DUMP_PREFIX_NONE, 32, 1,
> > +		  		   __entry->p.fr,
> > +				   sizeof(__entry->p.fr),
> > +				   false),
> > +		  __print_array(__entry->p.class0_hp,
> > +				ARRAY_SIZE(__entry->p.class0_hp),
> > +				sizeof(__entry->p.class0_hp[0])),
> > +		  __print_array(__entry->p.hp,
> > +				ARRAY_SIZE(__entry->p.hp),
> > +				sizeof(__entry->p.hp[0]))
> > +	)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_vp9_frame_tmpl, v4l2_ctrl_vp9_frame,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_frame *f),
> > +	TP_ARGS(f)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_vp9_compressed_hdr_tmpl, v4l2_ctrl_vp9_compressed_hdr,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> > +	TP_ARGS(h)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_ctrl_vp9_compressed_coef_tmpl, v4l2_ctrl_vp9_compressed_coeff,
> > +	TP_PROTO(const struct v4l2_ctrl_vp9_compressed_hdr *h),
> > +	TP_ARGS(h)
> > +);
> > +
> > +DEFINE_EVENT(v4l2_vp9_mv_probs_tmpl, v4l2_vp9_mv_probs,
> > +	TP_PROTO(const struct v4l2_vp9_mv_probs *p),
> > +	TP_ARGS(p)
> > +);
> > +
> > +#endif
> > +
> > +#undef TRACE_INCLUDE_PATH
> > +#undef TRACE_INCLUDE_FILE
> > +#define TRACE_INCLUDE_PATH ../../drivers/media/test-drivers/visl
> > +#define TRACE_INCLUDE_FILE visl-trace-vp9
> > +#include <trace/define_trace.h>
> > diff --git a/drivers/media/test-drivers/visl/visl-video.c b/drivers/media/test-drivers/visl/visl-video.c
> > new file mode 100644
> > index 000000000000..d1eb7c374e16
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-video.c
> > @@ -0,0 +1,776 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
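
A note for testers: once the control tracing described in the comment above is wired up, the events can be consumed through tracefs. A minimal session might look like the following (the group names come from the TRACE_SYSTEM definitions earlier in the patch; the h264 group name, the device node, and the tracefs mount point are assumptions):

```shell
# Assumes the visl module is loaded and tracefs is mounted here.
cd /sys/kernel/tracing

# Enable the per-codec control trace events defined by this patch.
echo 1 > events/visl_mpeg2_controls/enable
echo 1 > events/visl_vp8_controls/enable
echo 1 > events/visl_vp9_controls/enable
echo 1 > events/visl_h264_controls/enable   # name assumed from the pattern

# Run a stateless decode against the visl /dev/videoN node, then read
# the traced control payloads.
cat trace_pipe
```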
> > +
> > +#include <linux/debugfs.h>
> > +#include <linux/font.h>
> > +#include <media/v4l2-event.h>
> > +#include <media/v4l2-ioctl.h>
> > +#include <media/videobuf2-vmalloc.h>
> > +
> > +#include "visl-video.h"
> > +
> > +#include "visl.h"
> > +#include "visl-debugfs.h"
> > +
> > +static void visl_set_current_codec(struct visl_ctx *ctx)
> > +{
> > +	switch (ctx->coded_fmt.fmt.pix_mp.pixelformat) {
> > +	case V4L2_PIX_FMT_FWHT_STATELESS:
> > +		ctx->current_codec = VISL_CODEC_FWHT;
> > +		break;
> > +	case V4L2_PIX_FMT_MPEG2_SLICE:
> > +		ctx->current_codec = VISL_CODEC_MPEG2;
> > +		break;
> > +	case V4L2_PIX_FMT_VP8_FRAME:
> > +		ctx->current_codec = VISL_CODEC_VP8;
> > +		break;
> > +	case V4L2_PIX_FMT_VP9_FRAME:
> > +		ctx->current_codec = VISL_CODEC_VP9;
> > +		break;
> > +	case V4L2_PIX_FMT_H264_SLICE:
> > +		ctx->current_codec = VISL_CODEC_H264;
> > +		break;
> > +	default:
> > +		ctx->current_codec = VISL_CODEC_NONE;
> > +		break;
> > +	}
> > +}
> > +
> > +static void visl_print_fmt(struct visl_ctx *ctx, const struct v4l2_format *f)
> > +{
> > +	const struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> > +	u32 i;
> > +
> > +	dprintk(ctx->dev, "width: %d\n", pix_mp->width);
> > +	dprintk(ctx->dev, "height: %d\n", pix_mp->height);
> > +	dprintk(ctx->dev, "pixelformat: %c%c%c%c\n",
> > +		pix_mp->pixelformat,
> > +		(pix_mp->pixelformat >> 8) & 0xff,
> > +		(pix_mp->pixelformat >> 16) & 0xff,
> > +		(pix_mp->pixelformat >> 24) & 0xff);
> > +
> > +	dprintk(ctx->dev, "field: %d\n", pix_mp->field);
> > +	dprintk(ctx->dev, "colorspace: %d\n", pix_mp->colorspace);
> > +	dprintk(ctx->dev, "num_planes: %d\n", pix_mp->num_planes);
> > +	dprintk(ctx->dev, "flags: %d\n", pix_mp->flags);
> > +	dprintk(ctx->dev, "quantization: %d\n", pix_mp->quantization);
> > +	dprintk(ctx->dev, "xfer_func: %d\n", pix_mp->xfer_func);
> > +
> > +	for (i = 0; i < pix_mp->num_planes; i++) {
> > +		dprintk(ctx->dev,
> > +			"plane[%d]: sizeimage: %d\n", i, pix_mp->plane_fmt[i].sizeimage);
> > +		dprintk(ctx->dev,
> > +			"plane[%d]: bytesperline: %d\n", i, pix_mp->plane_fmt[i].bytesperline);
> > +	}
> > +}
> > +
> > +static int visl_tpg_init(struct visl_ctx *ctx)
> > +{
> > +	const struct font_desc *font;
> > +	const char *font_name = "VGA8x16";
> > +	int ret;
> > +	u32 width = ctx->decoded_fmt.fmt.pix_mp.width;
> > +	u32 height = ctx->decoded_fmt.fmt.pix_mp.height;
> > +	struct v4l2_pix_format_mplane *f = &ctx->decoded_fmt.fmt.pix_mp;
> > +
> > +	tpg_free(&ctx->tpg);
> > +
> > +	font = find_font(font_name);
> > +	if (font) {
> > +		tpg_init(&ctx->tpg, width, height);
> > +
> > +		ret = tpg_alloc(&ctx->tpg, width);
> > +		if (ret)
> > +			goto err_alloc;
> > +
> > +		tpg_set_font(font->data);
> > +		ret = tpg_s_fourcc(&ctx->tpg,
> > +				   f->pixelformat);
> > +
> > +		if (!ret)
> > +			goto err_fourcc;
> > +
> > +		tpg_reset_source(&ctx->tpg, width, height, f->field);
> > +
> > +		tpg_s_pattern(&ctx->tpg, TPG_PAT_75_COLORBAR);
> > +
> > +		tpg_s_field(&ctx->tpg, f->field, false);
> > +		tpg_s_colorspace(&ctx->tpg, f->colorspace);
> > +		tpg_s_ycbcr_enc(&ctx->tpg, f->ycbcr_enc);
> > +		tpg_s_quantization(&ctx->tpg, f->quantization);
> > +		tpg_s_xfer_func(&ctx->tpg, f->xfer_func);
> > +	} else {
> > +		v4l2_err(&ctx->dev->v4l2_dev,
> > +			 "Font %s not found\n", font_name);
> > +
> > +		return -EINVAL;
> > +	}
> > +
> > +	dprintk(ctx->dev, "Initialized the V4L2 test pattern generator, w=%d, h=%d, max_w=%d\n",
> > +		width, height, width);
> > +
> > +	return 0;
> > +err_alloc:
> > +	return ret;
> > +err_fourcc:
> > +	tpg_free(&ctx->tpg);
> > +	return -EINVAL;
> > +}
> > +
> > +static const u32 visl_decoded_fmts[] = {
> > +	V4L2_PIX_FMT_NV12,
> > +	V4L2_PIX_FMT_YUV420,
> > +};
> > +
> > +const struct visl_coded_format_desc visl_coded_fmts[] = {
> > +	{
> > +		.pixelformat = V4L2_PIX_FMT_FWHT,

V4L2_PIX_FMT_FWHT -> V4L2_PIX_FMT_FWHT_STATELESS: this entry should use the
stateless FWHT pixelformat, matching the V4L2_PIX_FMT_FWHT_STATELESS case in
visl_set_current_codec().

I was able to send an encoded fwht file to visl and got back a
test pattern as expected.  I used this command:

v4l2-ctl -d0 --stream-mmap --stream-out-mmap \
	--stream-from-hdr test-25fps.fwht --stream-to out.yuv


> > +		.frmsize = {
> > +			.min_width = 640,
> > +			.max_width = 4096,
> > +			.step_width = 1,
> > +			.min_height = 360,
> > +			.max_height = 2160,
> > +			.step_height = 1,
> > +		},
> > +		.ctrls = &visl_fwht_ctrls,
> > +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> > +		.decoded_fmts = visl_decoded_fmts,
> > +	},
> > +	{
> > +		.pixelformat = V4L2_PIX_FMT_MPEG2_SLICE,
> > +		.frmsize = {
> > +			.min_width = 16,
> > +			.max_width = 1920,
> > +			.step_width = 1,
> > +			.min_height = 16,
> > +			.max_height = 1152,
> > +			.step_height = 1,
> > +		},
> > +		.ctrls = &visl_mpeg2_ctrls,
> > +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> > +		.decoded_fmts = visl_decoded_fmts,
> > +	},
> > +	{
> > +		.pixelformat = V4L2_PIX_FMT_VP8_FRAME,
> > +		.frmsize = {
> > +			.min_width = 64,
> > +			.max_width = 16383,
> > +			.step_width = 1,
> > +			.min_height = 64,
> > +			.max_height = 16383,
> > +			.step_height = 1,
> > +		},
> > +		.ctrls = &visl_vp8_ctrls,
> > +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> > +		.decoded_fmts = visl_decoded_fmts,
> > +	},
> > +	{
> > +		.pixelformat = V4L2_PIX_FMT_VP9_FRAME,
> > +		.frmsize = {
> > +			.min_width = 64,
> > +			.max_width = 8192,
> > +			.step_width = 1,
> > +			.min_height = 64,
> > +			.max_height = 4352,
> > +			.step_height = 1,
> > +		},
> > +		.ctrls = &visl_vp9_ctrls,
> > +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> > +		.decoded_fmts = visl_decoded_fmts,
> > +	},
> > +	{
> > +		.pixelformat = V4L2_PIX_FMT_H264_SLICE,
> > +		.frmsize = {
> > +			.min_width = 64,
> > +			.max_width = 4096,
> > +			.step_width = 1,
> > +			.min_height = 64,
> > +			.max_height = 2304,
> > +			.step_height = 1,
> > +		},
> > +		.ctrls = &visl_h264_ctrls,
> > +		.num_decoded_fmts = ARRAY_SIZE(visl_decoded_fmts),
> > +		.decoded_fmts = visl_decoded_fmts,
> > +	},
> > +};
> > +
> > +const size_t num_coded_fmts = ARRAY_SIZE(visl_coded_fmts);
> > +
> > +static const struct visl_coded_format_desc*
> > +visl_find_coded_fmt_desc(u32 fourcc)
> > +{
> > +	unsigned int i;
> > +
> > +	for (i = 0; i < ARRAY_SIZE(visl_coded_fmts); i++) {
> > +		if (visl_coded_fmts[i].pixelformat == fourcc)
> > +			return &visl_coded_fmts[i];
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +static void visl_init_fmt(struct v4l2_format *f, u32 fourcc)
> > +{
> > +	memset(f, 0, sizeof(*f));
> > +	f->fmt.pix_mp.pixelformat = fourcc;
> > +	f->fmt.pix_mp.field = V4L2_FIELD_NONE;
> > +	f->fmt.pix_mp.colorspace = V4L2_COLORSPACE_REC709;
> > +	f->fmt.pix_mp.ycbcr_enc = V4L2_YCBCR_ENC_DEFAULT;
> > +	f->fmt.pix_mp.quantization = V4L2_QUANTIZATION_DEFAULT;
> > +	f->fmt.pix_mp.xfer_func = V4L2_XFER_FUNC_DEFAULT;
> > +}
> > +
> > +void visl_reset_coded_fmt(struct visl_ctx *ctx)
> > +{
> > +	struct v4l2_format *f = &ctx->coded_fmt;
> > +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> > +
> > +	ctx->coded_format_desc = &visl_coded_fmts[0];
> > +	visl_init_fmt(f, ctx->coded_format_desc->pixelformat);
> > +
> > +	f->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> > +	f->fmt.pix_mp.width = ctx->coded_format_desc->frmsize.min_width;
> > +	f->fmt.pix_mp.height = ctx->coded_format_desc->frmsize.min_height;
> > +
> > +	pix_mp->num_planes = 1;
> > +	pix_mp->plane_fmt[0].sizeimage = pix_mp->width * pix_mp->height * 8;
> > +
> > +	dprintk(ctx->dev, "OUTPUT format was set to:\n");
> > +	visl_print_fmt(ctx, &ctx->coded_fmt);
> > +
> > +	visl_set_current_codec(ctx);
> > +}
> > +
> > +int visl_reset_decoded_fmt(struct visl_ctx *ctx)
> > +{
> > +	struct v4l2_format *f = &ctx->decoded_fmt;
> > +	u32 decoded_fmt = ctx->coded_format_desc[0].decoded_fmts[0];
> > +
> > +	visl_init_fmt(f, decoded_fmt);
> > +
> > +	f->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> > +
> > +	v4l2_fill_pixfmt_mp(&f->fmt.pix_mp,
> > +			    ctx->coded_format_desc->decoded_fmts[0],
> > +			    ctx->coded_fmt.fmt.pix_mp.width,
> > +			    ctx->coded_fmt.fmt.pix_mp.height);
> > +
> > +	dprintk(ctx->dev, "CAPTURE format was set to:\n");
> > +	visl_print_fmt(ctx, &ctx->decoded_fmt);
> > +
> > +	return visl_tpg_init(ctx);
> > +}
> > +
> > +int visl_set_default_format(struct visl_ctx *ctx)
> > +{
> > +	visl_reset_coded_fmt(ctx);
> > +	return visl_reset_decoded_fmt(ctx);
> > +}
> > +
> > +static struct visl_q_data *get_q_data(struct visl_ctx *ctx,
> > +				      enum v4l2_buf_type type)
> > +{
> > +	switch (type) {
> > +	case V4L2_BUF_TYPE_VIDEO_OUTPUT:
> > +	case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
> > +		return &ctx->q_data[V4L2_M2M_SRC];
> > +	case V4L2_BUF_TYPE_VIDEO_CAPTURE:
> > +	case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
> > +		return &ctx->q_data[V4L2_M2M_DST];
> > +	default:
> > +		break;
> > +	}
> > +	return NULL;
> > +}
> > +
> > +static int visl_querycap(struct file *file, void *priv,
> > +			 struct v4l2_capability *cap)
> > +{
> > +	strscpy(cap->driver, VISL_NAME, sizeof(cap->driver));
> > +	strscpy(cap->card, VISL_NAME, sizeof(cap->card));
> > +	snprintf(cap->bus_info, sizeof(cap->bus_info),
> > +		 "platform:%s", VISL_NAME);
> > +
> > +	return 0;
> > +}
> > +
> > +static int visl_enum_fmt_vid_cap(struct file *file, void *priv,
> > +				 struct v4l2_fmtdesc *f)
> > +{
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +
> > +	if (f->index >= ctx->coded_format_desc->num_decoded_fmts)
> > +		return -EINVAL;
> > +
> > +	f->pixelformat = ctx->coded_format_desc->decoded_fmts[f->index];
> > +	return 0;
> > +}
> > +
> > +static int visl_enum_fmt_vid_out(struct file *file, void *priv,
> > +				 struct v4l2_fmtdesc *f)
> > +{
> > +	if (f->index >= ARRAY_SIZE(visl_coded_fmts))
> > +		return -EINVAL;
> > +
> > +	f->pixelformat = visl_coded_fmts[f->index].pixelformat;
> > +	return 0;
> > +}
> > +
> > +static int visl_g_fmt_vid_cap(struct file *file, void *priv,
> > +			      struct v4l2_format *f)
> > +{
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +	*f = ctx->decoded_fmt;
> > +
> > +	return 0;
> > +}
> > +
> > +static int visl_g_fmt_vid_out(struct file *file, void *priv,
> > +			      struct v4l2_format *f)
> > +{
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +
> > +	*f = ctx->coded_fmt;
> > +	return 0;
> > +}
> > +
> > +static int visl_try_fmt_vid_cap(struct file *file, void *priv,
> > +				struct v4l2_format *f)
> > +{
> > +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +	const struct visl_coded_format_desc *coded_desc;
> > +	unsigned int i;
> > +
> > +	coded_desc = ctx->coded_format_desc;
> > +
> > +	for (i = 0; i < coded_desc->num_decoded_fmts; i++) {
> > +		if (coded_desc->decoded_fmts[i] == pix_mp->pixelformat)
> > +			break;
> > +	}
> > +
> > +	if (i == coded_desc->num_decoded_fmts)
> > +		pix_mp->pixelformat = coded_desc->decoded_fmts[0];
> > +
> > +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> > +				       &pix_mp->height,
> > +				       &coded_desc->frmsize);
> > +
> > +	v4l2_fill_pixfmt_mp(pix_mp, pix_mp->pixelformat,
> > +			    pix_mp->width, pix_mp->height);
> > +
> > +	pix_mp->field = V4L2_FIELD_NONE;
> > +
> > +	return 0;
> > +}
> > +
> > +static int visl_try_fmt_vid_out(struct file *file, void *priv,
> > +				struct v4l2_format *f)
> > +{
> > +	struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp;
> > +	const struct visl_coded_format_desc *coded_desc;
> > +
> > +	coded_desc = visl_find_coded_fmt_desc(pix_mp->pixelformat);
> > +	if (!coded_desc) {
> > +		pix_mp->pixelformat = visl_coded_fmts[0].pixelformat;
> > +		coded_desc = &visl_coded_fmts[0];
> > +	}
> > +
> > +	v4l2_apply_frmsize_constraints(&pix_mp->width,
> > +				       &pix_mp->height,
> > +				       &coded_desc->frmsize);
> > +
> > +	pix_mp->field = V4L2_FIELD_NONE;
> > +	pix_mp->num_planes = 1;
> > +
> > +	return 0;
> > +}
> > +
> > +static int visl_s_fmt_vid_out(struct file *file, void *priv,
> > +			      struct v4l2_format *f)
> > +{
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +	struct v4l2_m2m_ctx *m2m_ctx = ctx->fh.m2m_ctx;
> > +	const struct visl_coded_format_desc *desc;
> > +	struct vb2_queue *peer_vq;
> > +	int ret;
> > +
> > +	peer_vq = v4l2_m2m_get_vq(m2m_ctx, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
> > +	if (vb2_is_busy(peer_vq))
> > +		return -EBUSY;
> > +
> > +	dprintk(ctx->dev, "Trying to set the OUTPUT format to:\n");
> > +	visl_print_fmt(ctx, f);
> > +
> > +	ret = visl_try_fmt_vid_out(file, priv, f);
> > +	if (ret)
> > +		return ret;
> > +
> > +	desc = visl_find_coded_fmt_desc(f->fmt.pix_mp.pixelformat);
> > +	ctx->coded_format_desc = desc;
> > +	ctx->coded_fmt = *f;
> > +
> > +	v4l2_fill_pixfmt_mp(&ctx->coded_fmt.fmt.pix_mp,
> > +			    ctx->coded_fmt.fmt.pix_mp.pixelformat,
> > +			    ctx->coded_fmt.fmt.pix_mp.width,
> > +			    ctx->coded_fmt.fmt.pix_mp.height);
> 
> Since v4l2_fill_pixfmt_mp() returns -EINVAL here (these coded pixel formats
> are not in its format table), sizeimage is left unset and requesting buffers
> will crash if the application does not set sizeimage itself.
> 
> > +
> > +	ret = visl_reset_decoded_fmt(ctx);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ctx->decoded_fmt.fmt.pix_mp.colorspace = f->fmt.pix_mp.colorspace;
> > +	ctx->decoded_fmt.fmt.pix_mp.xfer_func = f->fmt.pix_mp.xfer_func;
> > +	ctx->decoded_fmt.fmt.pix_mp.ycbcr_enc = f->fmt.pix_mp.ycbcr_enc;
> > +	ctx->decoded_fmt.fmt.pix_mp.quantization = f->fmt.pix_mp.quantization;
> > +
> > +	dprintk(ctx->dev, "OUTPUT format was set to:\n");
> > +	visl_print_fmt(ctx, &ctx->coded_fmt);
> > +
> > +	visl_set_current_codec(ctx);

GStreamer won't complete H.264 decoding with visl if it queues an OUTPUT
buffer with the V4L2_BUF_FLAG_M2M_HOLD_CAPTURE_BUF flag set.
Setting VB2_V4L2_FL_SUPPORTS_M2M_HOLD_CAPTURE_BUF on the OUTPUT queue
solves the immediate problem in visl.
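
A minimal sketch of that change against this RFC (the subsystem_flags
assignment follows what other stateless decoders such as cedrus do; this
hunk is illustrative and untested here):

```diff
--- a/drivers/media/test-drivers/visl/visl-video.c
+++ b/drivers/media/test-drivers/visl/visl-video.c
@@ int visl_queue_init(void *priv, struct vb2_queue *src_vq,
 	src_vq->lock = &ctx->vb_mutex;
 	src_vq->supports_requests = true;
+	src_vq->subsystem_flags |= VB2_V4L2_FL_SUPPORTS_M2M_HOLD_CAPTURE_BUF;
```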


> > +	return 0;
> > +}
> > +
> > +static int visl_s_fmt_vid_cap(struct file *file, void *priv,
> > +			      struct v4l2_format *f)
> > +{
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +	int ret;
> > +
> > +	dprintk(ctx->dev, "Trying to set the CAPTURE format to:\n");
> > +	visl_print_fmt(ctx, f);
> > +
> > +	ret = visl_try_fmt_vid_cap(file, priv, f);
> > +	if (ret)
> > +		return ret;
> > +
> > +	ctx->decoded_fmt = *f;
> > +
> > +	dprintk(ctx->dev, "CAPTURE format was set to:\n");
> > +	visl_print_fmt(ctx, &ctx->decoded_fmt);
> > +
> > +	visl_tpg_init(ctx);
> > +	return 0;
> > +}
> > +
> > +static int visl_enum_framesizes(struct file *file, void *priv,
> > +				struct v4l2_frmsizeenum *fsize)
> > +{
> > +	const struct visl_coded_format_desc *fmt;
> > +	struct visl_ctx *ctx = visl_file_to_ctx(file);
> > +
> > +	if (fsize->index != 0)
> > +		return -EINVAL;
> > +
> > +	fmt = visl_find_coded_fmt_desc(fsize->pixel_format);
> > +	if (!fmt) {
> > +		dprintk(ctx->dev,
> > +			"Unsupported format for the OUTPUT queue: %d\n",
> > +			fsize->pixel_format);
> > +
> > +		return -EINVAL;
> > +	}
> > +
> > +	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
> > +	fsize->stepwise = fmt->frmsize;
> > +	return 0;
> > +}
> > +
> > +const struct v4l2_ioctl_ops visl_ioctl_ops = {
> > +	.vidioc_querycap		= visl_querycap,
> > +	.vidioc_enum_framesizes		= visl_enum_framesizes,
> > +
> > +	.vidioc_enum_fmt_vid_cap	= visl_enum_fmt_vid_cap,
> > +	.vidioc_g_fmt_vid_cap_mplane	= visl_g_fmt_vid_cap,
> > +	.vidioc_try_fmt_vid_cap_mplane	= visl_try_fmt_vid_cap,
> > +	.vidioc_s_fmt_vid_cap_mplane	= visl_s_fmt_vid_cap,
> > +
> > +	.vidioc_enum_fmt_vid_out	= visl_enum_fmt_vid_out,
> > +	.vidioc_g_fmt_vid_out_mplane	= visl_g_fmt_vid_out,
> > +	.vidioc_try_fmt_vid_out_mplane	= visl_try_fmt_vid_out,
> > +	.vidioc_s_fmt_vid_out_mplane	= visl_s_fmt_vid_out,
> > +
> > +	.vidioc_reqbufs			= v4l2_m2m_ioctl_reqbufs,
> > +	.vidioc_querybuf		= v4l2_m2m_ioctl_querybuf,
> > +	.vidioc_qbuf			= v4l2_m2m_ioctl_qbuf,
> > +	.vidioc_dqbuf			= v4l2_m2m_ioctl_dqbuf,
> > +	.vidioc_prepare_buf		= v4l2_m2m_ioctl_prepare_buf,
> > +	.vidioc_create_bufs		= v4l2_m2m_ioctl_create_bufs,
> > +	.vidioc_expbuf			= v4l2_m2m_ioctl_expbuf,
> > +
> > +	.vidioc_streamon		= v4l2_m2m_ioctl_streamon,
> > +	.vidioc_streamoff		= v4l2_m2m_ioctl_streamoff,
> > +
> > +	.vidioc_subscribe_event		= v4l2_ctrl_subscribe_event,
> > +	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
> > +};
> > +
> > +static int visl_queue_setup(struct vb2_queue *vq,
> > +			    unsigned int *nbuffers,
> > +			    unsigned int *num_planes,
> > +			    unsigned int sizes[],
> > +			    struct device *alloc_devs[])
> > +{
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> > +	struct v4l2_format *f;
> > +	u32 i;
> > +	char *qname;
> > +
> > +	if (V4L2_TYPE_IS_OUTPUT(vq->type)) {
> > +		f = &ctx->coded_fmt;
> > +		qname = "Output";
> > +	} else {
> > +		f = &ctx->decoded_fmt;
> > +		qname = "Capture";
> > +	}
> > +
> > +	if (*num_planes) {
> > +		if (*num_planes != f->fmt.pix_mp.num_planes)
> > +			return -EINVAL;
> > +
> > +		for (i = 0; i < f->fmt.pix_mp.num_planes; i++) {
> > +			if (sizes[i] < f->fmt.pix_mp.plane_fmt[i].sizeimage)
> > +				return -EINVAL;
> > +		}
> > +	} else {
> > +		*num_planes = f->fmt.pix_mp.num_planes;
> > +		for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
> > +			sizes[i] = f->fmt.pix_mp.plane_fmt[i].sizeimage;
> > +	}
> > +
> > +	dprintk(ctx->dev, "%s: %d buffer(s) requested, num_planes=%d.\n",
> > +		qname, *nbuffers, *num_planes);
> > +
> > +	for (i = 0; i < f->fmt.pix_mp.num_planes; i++)
> > +		dprintk(ctx->dev, "plane[%d].sizeimage=%d\n",
> > +			i, f->fmt.pix_mp.plane_fmt[i].sizeimage);
> > +
> > +	return 0;
> > +}
> > +
> > +static void visl_queue_cleanup(struct vb2_queue *vq, u32 state)
> > +{
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> > +	struct vb2_v4l2_buffer *vbuf;
> > +
> > +	dprintk(ctx->dev, "Cleaning up queues\n");
> > +	for (;;) {
> > +		if (V4L2_TYPE_IS_OUTPUT(vq->type))
> > +			vbuf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > +		else
> > +			vbuf = v4l2_m2m_dst_buf_remove(ctx->fh.m2m_ctx);
> > +
> > +		if (!vbuf)
> > +			break;
> > +
> > +		v4l2_ctrl_request_complete(vbuf->vb2_buf.req_obj.req,
> > +					   &ctx->hdl);
> > +		dprintk(ctx->dev, "Marked request %p as complete\n",
> > +			vbuf->vb2_buf.req_obj.req);
> > +
> > +		v4l2_m2m_buf_done(vbuf, state);
> > +		dprintk(ctx->dev,
> > +			"Marked buffer %llu as done, state is %d\n",
> > +			vbuf->vb2_buf.timestamp,
> > +			state);
> > +	}
> > +}
> > +
> > +static int visl_buf_out_validate(struct vb2_buffer *vb)
> > +{
> > +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > +
> > +	vbuf->field = V4L2_FIELD_NONE;
> > +	return 0;
> > +}
> > +
> > +static int visl_buf_prepare(struct vb2_buffer *vb)
> > +{
> > +	struct vb2_queue *vq = vb->vb2_queue;
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> > +	u32 plane_sz = vb2_plane_size(vb, 0);
> > +	struct v4l2_pix_format *pix_fmt;
> > +
> > +	if (V4L2_TYPE_IS_OUTPUT(vq->type))
> > +		pix_fmt = &ctx->coded_fmt.fmt.pix;
> > +	else
> > +		pix_fmt = &ctx->decoded_fmt.fmt.pix;
> > +
> > +	if (plane_sz < pix_fmt->sizeimage) {
> > +		v4l2_err(&ctx->dev->v4l2_dev, "plane[0] size is %d, sizeimage is %d\n",
> > +			 plane_sz, pix_fmt->sizeimage);
> > +		return -EINVAL;
> > +	}
> > +
> > +	vb2_set_plane_payload(vb, 0, pix_fmt->sizeimage);
> > +
> > +	return 0;
> > +}
> > +
> > +static int visl_start_streaming(struct vb2_queue *vq, unsigned int count)
> > +{
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> > +	struct visl_q_data *q_data = get_q_data(ctx, vq->type);
> > +	int rc = 0;
> > +
> > +	if (!q_data) {
> > +		rc = -EINVAL;
> > +		goto err;
> > +	}
> > +
> > +	q_data->sequence = 0;
> > +
> > +	if (V4L2_TYPE_IS_CAPTURE(vq->type)) {
> > +		ctx->capture_streamon_jiffies = get_jiffies_64();
> > +		return 0;
> > +	}
> > +
> > +	if (WARN_ON(!ctx->coded_format_desc)) {
> > +		rc = -EINVAL;
> > +		goto err;
> > +	}
> > +
> > +	return 0;
> > +
> > +err:
> > +	visl_queue_cleanup(vq, VB2_BUF_STATE_QUEUED);
> > +	return rc;
> > +}
> > +
> > +static void visl_stop_streaming(struct vb2_queue *vq)
> > +{
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vq);
> > +
> > +	dprintk(ctx->dev, "Stop streaming\n");
> > +	visl_queue_cleanup(vq, VB2_BUF_STATE_ERROR);
> > +}
> > +
> > +static void visl_buf_queue(struct vb2_buffer *vb)
> > +{
> > +	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> > +
> > +	v4l2_m2m_buf_queue(ctx->fh.m2m_ctx, vbuf);
> > +}
> > +
> > +static void visl_buf_request_complete(struct vb2_buffer *vb)
> > +{
> > +	struct visl_ctx *ctx = vb2_get_drv_priv(vb->vb2_queue);
> > +
> > +	v4l2_ctrl_request_complete(vb->req_obj.req, &ctx->hdl);
> > +}
> > +
> > +const struct vb2_ops visl_qops = {
> > +	.queue_setup          = visl_queue_setup,
> > +	.buf_out_validate     = visl_buf_out_validate,
> > +	.buf_prepare          = visl_buf_prepare,
> > +	.buf_queue            = visl_buf_queue,
> > +	.start_streaming      = visl_start_streaming,
> > +	.stop_streaming       = visl_stop_streaming,
> > +	.wait_prepare         = vb2_ops_wait_prepare,
> > +	.wait_finish          = vb2_ops_wait_finish,
> > +	.buf_request_complete = visl_buf_request_complete,
> > +};
> > +
> > +int visl_queue_init(void *priv, struct vb2_queue *src_vq,
> > +		    struct vb2_queue *dst_vq)
> > +{
> > +	struct visl_ctx *ctx = priv;
> > +	int ret;
> > +
> > +	src_vq->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> > +	src_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> > +	src_vq->drv_priv = ctx;
> > +	src_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> > +	src_vq->ops = &visl_qops;
> > +	src_vq->mem_ops = &vb2_vmalloc_memops;
> > +	src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> > +	src_vq->lock = &ctx->vb_mutex;
> > +	src_vq->supports_requests = true;
> > +
> > +	ret = vb2_queue_init(src_vq);
> > +	if (ret)
> > +		return ret;
> > +
> > +	dst_vq->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> > +	dst_vq->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF;
> > +	dst_vq->drv_priv = ctx;
> > +	dst_vq->buf_struct_size = sizeof(struct v4l2_m2m_buffer);
> > +	dst_vq->ops = &visl_qops;
> > +	dst_vq->mem_ops = &vb2_vmalloc_memops;
> > +	dst_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
> > +	dst_vq->lock = &ctx->vb_mutex;
> > +
> > +	return vb2_queue_init(dst_vq);
> > +}
> > +
> > +int visl_request_validate(struct media_request *req)
> > +{
> > +	struct media_request_object *obj;
> > +	struct visl_ctx *ctx = NULL;
> > +	unsigned int count;
> > +
> > +	list_for_each_entry(obj, &req->objects, list) {
> > +		struct vb2_buffer *vb;
> > +
> > +		if (vb2_request_object_is_buffer(obj)) {
> > +			vb = container_of(obj, struct vb2_buffer, req_obj);
> > +			ctx = vb2_get_drv_priv(vb->vb2_queue);
> > +
> > +			break;
> > +		}
> > +	}
> > +
> > +	if (!ctx)
> > +		return -ENOENT;
> > +
> > +	count = vb2_request_buffer_cnt(req);
> > +	if (!count) {
> > +		v4l2_err(&ctx->dev->v4l2_dev,
> > +			 "No buffer was provided with the request\n");
> > +		return -ENOENT;
> > +	} else if (count > 1) {
> > +		v4l2_err(&ctx->dev->v4l2_dev,
> > +			 "More than one buffer was provided with the request\n");
> > +		return -EINVAL;
> > +	}
> > +
> > +	return vb2_request_validate(req);
> > +}
> > diff --git a/drivers/media/test-drivers/visl/visl-video.h b/drivers/media/test-drivers/visl/visl-video.h
> > new file mode 100644
> > index 000000000000..dbfc1c6a052d
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl-video.h
> > @@ -0,0 +1,61 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#ifndef _VISL_VIDEO_H_
> > +#define _VISL_VIDEO_H_
> > +#include <media/v4l2-mem2mem.h>
> > +
> > +#include "visl.h"
> > +
> > +extern const struct v4l2_ioctl_ops visl_ioctl_ops;
> > +
> > +extern const struct visl_ctrls visl_fwht_ctrls;
> > +extern const struct visl_ctrls visl_mpeg2_ctrls;
> > +extern const struct visl_ctrls visl_vp8_ctrls;
> > +extern const struct visl_ctrls visl_vp9_ctrls;
> > +extern const struct visl_ctrls visl_h264_ctrls;
> > +
> > +int visl_queue_init(void *priv, struct vb2_queue *src_vq,
> > +		    struct vb2_queue *dst_vq);
> > +
> > +int visl_set_default_format(struct visl_ctx *ctx);
> > +int visl_request_validate(struct media_request *req);
> > +
> > +#endif /* _VISL_VIDEO_H_ */
> > diff --git a/drivers/media/test-drivers/visl/visl.h b/drivers/media/test-drivers/visl/visl.h
> > new file mode 100644
> > index 000000000000..a473d154805c
> > --- /dev/null
> > +++ b/drivers/media/test-drivers/visl/visl.h
> > @@ -0,0 +1,178 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * A virtual stateless device for stateless uAPI development purposes.
> > + *
> > + * This tool's objective is to help the development and testing of userspace
> > + * applications that use the V4L2 stateless API to decode media.
> > + *
> > + * A userspace implementation can use visl to run a decoding loop even when no
> > + * hardware is available or when the kernel uAPI for the codec has not been
> > + * upstreamed yet. This can reveal bugs at an early stage.
> > + *
> > + * This driver can also trace the contents of the V4L2 controls submitted to it.
> > + * It can also dump the contents of the vb2 buffers through a debugfs
> > + * interface. This is in many ways similar to the tracing infrastructure
> > + * available for other popular encode/decode APIs out there and can help develop
> > + * a userspace application by using another (working) one as a reference.
> > + *
> > + * Note that no actual decoding of video frames is performed by visl. The V4L2
> > + * test pattern generator is used to write various debug information to the
> > + * capture buffers instead.
> > + *
> > + * Copyright (c) Collabora, Ltd.
> > + *
> > + * Based on the vim2m driver, that is:
> > + *
> > + * Copyright (c) 2009-2010 Samsung Electronics Co., Ltd.
> > + * Pawel Osciak, <pawel@osciak.com>
> > + * Marek Szyprowski, <m.szyprowski@samsung.com>
> > + *
> > + * Based on the vicodec driver, that is:
> > + *
> > + * Copyright 2018 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
> > + *
> > + * Based on the Cedrus VPU driver, that is:
> > + *
> > + * Copyright (C) 2016 Florent Revest <florent.revest@free-electrons.com>
> > + * Copyright (C) 2018 Paul Kocialkowski <paul.kocialkowski@bootlin.com>
> > + * Copyright (C) 2018 Bootlin
> > + */
> > +
> > +#ifndef _VISL_H_
> > +#define _VISL_H_
> > +
> > +#include <linux/debugfs.h>
> > +#include <linux/list.h>
> > +
> > +#include <media/v4l2-ctrls.h>
> > +#include <media/v4l2-device.h>
> > +#include <media/tpg/v4l2-tpg.h>
> > +
> > +#define VISL_NAME		"visl"
> > +#define VISL_M2M_NQUEUES	2
> > +
> > +#define TPG_STR_BUF_SZ		2048
> > +
> > +extern unsigned int visl_transtime_ms;
> > +
> > +struct visl_ctrls {
> > +	const struct visl_ctrl_desc *ctrls;
> > +	unsigned int num_ctrls;
> > +};
> > +
> > +struct visl_coded_format_desc {
> > +	u32 pixelformat;
> > +	struct v4l2_frmsize_stepwise frmsize;
> > +	const struct visl_ctrls *ctrls;
> > +	unsigned int num_decoded_fmts;
> > +	const u32 *decoded_fmts;
> > +};
> > +
> > +extern const struct visl_coded_format_desc visl_coded_fmts[];
> > +extern const size_t num_coded_fmts;
> > +
> > +enum {
> > +	V4L2_M2M_SRC = 0,
> > +	V4L2_M2M_DST = 1,
> > +};
> > +
> > +extern unsigned int visl_debug;
> > +#define dprintk(dev, fmt, arg...) \
> > +	v4l2_dbg(1, visl_debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg)
> > +
> > +extern int visl_dprintk_frame_start;
> > +extern unsigned int visl_dprintk_nframes;
> > +extern unsigned int keep_bitstream_buffers;
> > +extern int bitstream_trace_frame_start;
> > +extern unsigned int bitstream_trace_nframes;
> > +
> > +#define frame_dprintk(dev, current, fmt, arg...) \
> > +	do { \
> > +		if (visl_dprintk_frame_start > -1 && \
> > +		    current >= visl_dprintk_frame_start && \
> > +		    current < visl_dprintk_frame_start + visl_dprintk_nframes) \
> > +			dprintk(dev, fmt, ## arg); \
> > +	} while (0) \
> > +
> > +struct visl_q_data {
> > +	unsigned int		sequence;
> > +};
> > +
> > +struct visl_dev {
> > +	struct v4l2_device	v4l2_dev;
> > +	struct video_device	vfd;
> > +#ifdef CONFIG_MEDIA_CONTROLLER
> > +	struct media_device	mdev;
> > +#endif
> > +
> > +	struct mutex		dev_mutex;
> > +
> > +	struct v4l2_m2m_dev	*m2m_dev;
> > +
> > +#ifdef CONFIG_VISL_DEBUGFS
> > +	struct dentry		*debugfs_root;
> > +	struct dentry		*bitstream_debugfs;
> > +	struct list_head	bitstream_blobs;
> > +	/*
> > +	 * Protects the "blob" list as it can be accessed from "visl_release"
> > +	 * if keep_bitstream_buffers = 0 while some other client is tracing
> > +	 */
> > +	struct mutex		bitstream_lock;
> > +#endif
> > +};
> > +
> > +enum visl_codec {
> > +	VISL_CODEC_NONE,
> > +	VISL_CODEC_FWHT,
> > +	VISL_CODEC_MPEG2,
> > +	VISL_CODEC_VP8,
> > +	VISL_CODEC_VP9,
> > +	VISL_CODEC_H264,
> > +};
> > +
> > +struct visl_blob {
> > +	struct list_head list;
> > +	struct dentry *dentry;
> > +	u64 streamon_jiffies;
> > +	struct debugfs_blob_wrapper blob;
> > +};
> > +
> > +struct visl_ctx {
> > +	struct v4l2_fh		fh;
> > +	struct visl_dev	*dev;
> > +	struct v4l2_ctrl_handler hdl;
> > +
> > +	struct mutex		vb_mutex;
> > +
> > +	struct visl_q_data	q_data[VISL_M2M_NQUEUES];
> > +	enum   visl_codec	current_codec;
> > +
> > +	const struct visl_coded_format_desc *coded_format_desc;
> > +
> > +	struct v4l2_format	coded_fmt;
> > +	struct v4l2_format	decoded_fmt;
> > +
> > +	struct tpg_data		tpg;
> > +	u64			capture_streamon_jiffies;
> > +	char			*tpg_str_buf;
> > +};
> > +
> > +struct visl_ctrl_desc {
> > +	struct v4l2_ctrl_config cfg;
> > +};
> > +
> > +static inline struct visl_ctx *visl_file_to_ctx(struct file *file)
> > +{
> > +	return container_of(file->private_data, struct visl_ctx, fh);
> > +}
> > +
> > +static inline struct visl_ctx *visl_v4l2fh_to_ctx(struct v4l2_fh *v4l2_fh)
> > +{
> > +	return container_of(v4l2_fh, struct visl_ctx, fh);
> > +}
> > +
> > +void *visl_find_control_data(struct visl_ctx *ctx, u32 id);
> > +struct v4l2_ctrl *visl_find_control(struct visl_ctx *ctx, u32 id);
> > +u32 visl_control_num_elems(struct visl_ctx *ctx, u32 id);
> > +
> > +#endif /* _VISL_H_ */
> > -- 
> > 2.36.1
> > 



Thread overview: 14+ messages
2021-08-10 22:05 [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it daniel.almeida
2021-08-10 22:05 ` [RFC PATCH 1/2] media: Add AV1 uAPI daniel.almeida
2021-08-11  0:57   ` kernel test robot
2021-09-02 15:10   ` Hans Verkuil
2022-01-28 15:45   ` Nicolas Dufresne
2022-02-02 15:13   ` Nicolas Dufresne
2021-08-10 22:05 ` [RFC PATCH 2/2] media: vivpu: add virtual VPU driver daniel.almeida
2021-09-02 16:05   ` Hans Verkuil
2022-06-06 21:26   ` [RFC PATCH v2] media: visl: add virtual stateless driver daniel.almeida
2022-06-07 12:02     ` Hans Verkuil
2022-06-08 15:31       ` Nicolas Dufresne
2022-08-19 20:43     ` Deborah Brouwer
2022-10-04 23:10       ` Deborah Brouwer
2021-09-02 15:43 ` [RFC PATCH 0/2] Add the stateless AV1 uAPI and the VIVPU virtual driver to showcase it Hans Verkuil
