linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 0/2] Document memory-to-memory video codec interfaces
@ 2018-10-22 14:48 Tomasz Figa
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
                   ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Tomasz Figa @ 2018-10-22 14:48 UTC (permalink / raw)
  To: linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Hans Verkuil,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Nicolas Dufresne, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan, Tomasz Figa

It's been a while, but here is the v2 of the stateful mem2mem codec
interfaces documentation. Sorry for taking so long time to respin.

This series attempts to add the documentation of what was discussed
during Media Workshops at LinuxCon Europe 2012 in Barcelona and then
later Embedded Linux Conference Europe 2014 in Düsseldorf and then
eventually written down by Pawel Osciak and tweaked a bit by Chrome OS
video team (but mostly in a cosmetic way or making the document more
precise), during the several years of Chrome OS using the APIs in
production.

Note that most, if not all, of the API is already implemented in
existing mainline drivers, such as s5p-mfc or mtk-vcodec. Intention of
this series is just to formalize what we already have.

Thanks everyone for the huge amount of useful comments for the RFC and
v1. Much of the credits should go to Pawel Osciak too, for writing most
of the original text of the initial RFC.

Changes since v1:
(https://lore.kernel.org/patchwork/project/lkml/list/?series=360520)
Decoder:
 - Removed a note about querying all combinations of OUTPUT and CAPTURE
   frame sizes, since it would conflict with scaling/composiion support
   to be added later.
 - Removed the source change event after setting non-zero width and
   height on OUTPUT queue, since the change happens as a direct result
   of a client action.
 - Moved all the setup steps for CAPTURE queue out of Initialization
   and Dynamic resolution change into a common sequence called Capture
   setup, since they were mostly duplicate of each other.
 - Described steps to allocate buffers for higher resolution than the
   stream to prepare for future resolution changes.
 - Described a way to skip the initial header parsing and speculatively
   configure the CAPTURE queue (for gstreamer/ffmpeg compatibility).
 - Reordered CAPTURE setup steps so that all the driver queries are done
   first and only then a reconfiguration may be attempted or skipped.
 - Described VIDIOC_CREATE_BUFS as another way of allocating buffers.
 - Made the decoder signal the source change event as soon as the change
   is detected, to reduce pipeline stalls in case of buffers already
   good to continue decoding.
 - Stressed out the fact that a source change may happen even without a
   change in the coded resolution.
 - Described querying pixel aspect ratio using VIDIOC_CROPCAP.
 - Extended documentation of VIDIOC_DECODER_CMD and VIDIOC_G/S/TRY_FMT
   to more precisely describe the behavior of mem2mem decoders.
 - Clarified that 0 width and height are allowed for OUTPUT side of
   mem2mem decoders in the documentation of the v4l2_pix_fmt struct.

Encoder:
 - Removed width and height from CAPTURE (coded) format, since the coded
   resolution of the stream is an internal detail of the encoded stream.
 - Made the VIDIOC_S_FMT on OUTPUT mandatory, since the default format
   normally does not make sense (even if technically valid).
 - Changed the V4L2_SEL_TGT_CROP_BOUNDS and V4L2_SEL_TGT_CROP_DEFAULT
   selection targets to be equal to the full source frame to simplify
   internal handling in drivers for simple hardware.
 - Changed the V4L2_SEL_TGT_COMPOSE_DEFAULT selection target to be equal
   to |crop width|x|crop height|@(0,0) to simplify internal handling in
   drivers for simple hardware.
 - Removed V4L2_SEL_TGT_COMPOSE_PADDED, since the encoder does not write
   to the raw buffers.
 - Extended documentation of VIDIOC_ENCODER_CMD to more precisely
   describe the behavior of mem2mem encoders.
 - Clarified that 0 width and height are allowed for CAPTURE side of
   mem2mem encoders in the documentation of the v4l2_pix_fmt struct.

General:
 - Clarified that the Drain sequence valid only if both queues are
   streaming and stopping any of the queues would abort it, since there
   is nothing to drain, if OUTPUT is stopped and there is no way to
   signal the completion if CAPTURE is stopped.
 - Clarified that VIDIOC_STREAMON on any of the queues would resume the
   codec from stopped state, to be consistent with the documentation of
   VIDIOC_ENCODER/DECODER_CMD.
 - Documented the relation between timestamps of OUTPUT and CAPTURE
   buffers and how special cases of non-1:1 relation are handled.
 - Added missing sizeimage to bitstream format operations and removed
   the mistaken mentions from descriptions of respective REQBUFS calls.
 - Removed the Pause sections, since there is no notion of pause for
   mem2mem devices.
 - Added state machine diagrams.
 - Merged both glossaries into one in the decoder document and a
   reference to it in the encoder document.
 - Added missing terms to the glossary.
 - Added "Stateful" to the interface names.
 - Reworded the text to be more userspace-centric.
 - A number of other readability improvements suggested in review comments.

For changes since RFC see the v1:
https://lore.kernel.org/patchwork/project/lkml/list/?series=360520

Tomasz Figa (2):
  media: docs-rst: Document memory-to-memory video decoder interface
  media: docs-rst: Document memory-to-memory video encoder interface

 Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
 Documentation/media/uapi/v4l/dev-encoder.rst  |  579 +++++++++
 Documentation/media/uapi/v4l/devices.rst      |    2 +
 Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   10 +
 Documentation/media/uapi/v4l/v4l2.rst         |   12 +-
 .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
 .../media/uapi/v4l/vidioc-encoder-cmd.rst     |   38 +-
 Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
 8 files changed, 1747 insertions(+), 30 deletions(-)
 create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
 create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst

-- 
2.19.1.568.g152ad8e336-goog


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-22 14:48 [PATCH v2 0/2] Document memory-to-memory video codec interfaces Tomasz Figa
@ 2018-10-22 14:48 ` Tomasz Figa
  2018-10-29  9:45   ` Stanimir Varbanov
                     ` (3 more replies)
  2018-10-22 14:49 ` [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface Tomasz Figa
  2018-10-22 15:41 ` [PATCH v2 0/2] Document memory-to-memory video codec interfaces Hans Verkuil
  2 siblings, 4 replies; 41+ messages in thread
From: Tomasz Figa @ 2018-10-22 14:48 UTC (permalink / raw)
  To: linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Hans Verkuil,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Nicolas Dufresne, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan, Tomasz Figa

Due to complexity of the video decoding process, the V4L2 drivers of
stateful decoder hardware require specific sequences of V4L2 API calls
to be followed. These include capability enumeration, initialization,
decoding, seek, pause, dynamic resolution change, drain and end of
stream.

Specifics of the above have been discussed during Media Workshops at
LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
Conference Europe 2014 in Düsseldorf. The de facto Codec API that
originated at those events was later implemented by the drivers we already
have merged in mainline, such as s5p-mfc or coda.

The only thing missing was the real specification included as a part of
Linux Media documentation. Fix it now and document the decoder part of
the Codec API.

Signed-off-by: Tomasz Figa <tfiga@chromium.org>
---
 Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
 Documentation/media/uapi/v4l/devices.rst      |    1 +
 Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
 Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
 .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
 Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
 6 files changed, 1137 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst

diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
new file mode 100644
index 000000000000..09c7a6621b8e
--- /dev/null
+++ b/Documentation/media/uapi/v4l/dev-decoder.rst
@@ -0,0 +1,1082 @@
+.. -*- coding: utf-8; mode: rst -*-
+
+.. _decoder:
+
+*************************************************
+Memory-to-memory Stateful Video Decoder Interface
+*************************************************
+
+A stateful video decoder takes complete chunks of the bitstream (e.g. Annex-B
+H.264/HEVC stream, raw VP8/9 stream) and decodes them into raw video frames in
+display order. The decoder is expected not to require any additional information
+from the client to process these buffers.
+
+Performing software parsing, processing etc. of the stream in the driver in
+order to support this interface is strongly discouraged. In case such
+operations are needed, use of the Stateless Video Decoder Interface (in
+development) is strongly advised.
+
+Conventions and notation used in this document
+==============================================
+
+1. The general V4L2 API rules apply if not specified in this document
+   otherwise.
+
+2. The meaning of words “must”, “may”, “should”, etc. is as per RFC
+   2119.
+
+3. All steps not marked “optional” are required.
+
+4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
+   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
+   unless specified otherwise.
+
+5. Single-plane API (see spec) and applicable structures may be used
+   interchangeably with Multi-plane API, unless specified otherwise,
+   depending on decoder capabilities and following the general V4L2
+   guidelines.
+
+6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
+   [0..2]: i = 0, 1, 2.
+
+7. Given an ``OUTPUT`` buffer A, A’ represents a buffer on the ``CAPTURE``
+   queue containing data (decoded frame/stream) that resulted from processing
+   buffer A.
+
+.. _decoder-glossary:
+
+Glossary
+========
+
+CAPTURE
+   the destination buffer queue; for decoder, the queue of buffers containing
+   decoded frames; for encoder, the queue of buffers containing encoded
+   bitstream; ``V4L2_BUF_TYPE_VIDEO_CAPTURE```` or
+   ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``; data are captured from the hardware
+   into ``CAPTURE`` buffers
+
+client
+   application client communicating with the decoder or encoder implementing
+   this interface
+
+coded format
+   encoded/compressed video bitstream format (e.g. H.264, VP8, etc.); see
+   also: raw format
+
+coded height
+   height for given coded resolution
+
+coded resolution
+   stream resolution in pixels aligned to codec and hardware requirements;
+   typically visible resolution rounded up to full macroblocks;
+   see also: visible resolution
+
+coded width
+   width for given coded resolution
+
+decode order
+   the order in which frames are decoded; may differ from display order if the
+   coded format includes a feature of frame reordering; for decoders,
+   ``OUTPUT`` buffers must be queued by the client in decode order; for
+   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order
+
+destination
+   data resulting from the decode process; ``CAPTURE``
+
+display order
+   the order in which frames must be displayed; for encoders, ``OUTPUT``
+   buffers must be queued by the client in display order; for decoders,
+   ``CAPTURE`` buffers must be returned by the decoder in display order
+
+DPB
+   Decoded Picture Buffer; an H.264 term for a buffer that stores a decoded
+   raw frame available for reference in further decoding steps.
+
+EOS
+   end of stream
+
+IDR
+   Instantaneous Decoder Refresh; a type of a keyframe in H.264-encoded stream,
+   which clears the list of earlier reference frames (DPBs)
+
+keyframe
+   an encoded frame that does not reference frames decoded earlier, i.e.
+   can be decoded fully on its own.
+
+macroblock
+   a processing unit in image and video compression formats based on linear
+   block transforms (e.g. H.264, VP8, VP9); codec-specific, but for most of
+   popular codecs the size is 16x16 samples (pixels)
+
+OUTPUT
+   the source buffer queue; for decoders, the queue of buffers containing
+   encoded bitstream; for encoders, the queue of buffers containing raw frames;
+   ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or ``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE``; the
+   hardware is fed with data from ``OUTPUT`` buffers
+
+PPS
+   Picture Parameter Set; a type of metadata entity in H.264 bitstream
+
+raw format
+   uncompressed format containing raw pixel data (e.g. YUV, RGB formats)
+
+resume point
+   a point in the bitstream from which decoding may start/continue, without
+   any previous state/data present, e.g.: a keyframe (VP8/VP9) or
+   SPS/PPS/IDR sequence (H.264); a resume point is required to start decode
+   of a new stream, or to resume decoding after a seek
+
+source
+   data fed to the decoder or encoder; ``OUTPUT``
+
+source height
+   height in pixels for given source resolution; relevant to encoders only
+
+source resolution
+   resolution in pixels of source frames being source to the encoder and
+   subject to further cropping to the bounds of visible resolution; relevant to
+   encoders only
+
+source width
+   width in pixels for given source resolution; relevant to encoders only
+
+SPS
+   Sequence Parameter Set; a type of metadata entity in H.264 bitstream
+
+stream metadata
+   additional (non-visual) information contained inside encoded bitstream;
+   for example: coded resolution, visible resolution, codec profile
+
+visible height
+   height for given visible resolution; display height
+
+visible resolution
+   stream resolution of the visible picture, in pixels, to be used for
+   display purposes; must be smaller or equal to coded resolution;
+   display resolution
+
+visible width
+   width for given visible resolution; display width
+
+State machine
+=============
+
+.. kernel-render:: DOT
+   :alt: DOT digraph of decoder state machine
+   :caption: Decoder state machine
+
+   digraph decoder_state_machine {
+       node [shape = doublecircle, label="Decoding"] Decoding;
+
+       node [shape = circle, label="Initialization"] Initialization;
+       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
+       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
+       node [shape = circle, label="Stopped"] Stopped;
+       node [shape = circle, label="Drain"] Drain;
+       node [shape = circle, label="Seek"] Seek;
+       node [shape = circle, label="End of stream"] EoS;
+
+       node [shape = point]; qi
+       qi -> Initialization [ label = "open()" ];
+
+       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
+
+       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
+
+       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
+       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
+       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
+       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
+       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
+       Decoding -> Decoding;
+
+       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
+       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
+
+       EoS -> Drain [ label = "Implicit\ndrain" ];
+
+       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
+       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
+
+       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
+       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
+
+       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
+       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
+   }
+
+Querying capabilities
+=====================
+
+1. To enumerate the set of coded formats supported by the decoder, the
+   client may call :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
+
+   * The full set of supported formats will be returned, regardless of the
+     format set on ``CAPTURE``.
+
+2. To enumerate the set of supported raw formats, the client may call
+   :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
+
+   * Only the formats supported for the format currently active on ``OUTPUT``
+     will be returned.
+
+   * In order to enumerate raw formats supported by a given coded format,
+     the client must first set that coded format on ``OUTPUT`` and then
+     enumerate formats on ``CAPTURE``.
+
+3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
+   resolutions for a given format, passing desired pixel format in
+   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
+
+   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
+     formats will include all possible coded resolutions supported by the
+     decoder for given coded pixel format.
+
+   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
+     will include all possible frame buffer resolutions supported by the
+     decoder for given raw pixel format and the coded format currently set on
+     ``OUTPUT``.
+
+4. Supported profiles and levels for given format, if applicable, may be
+   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
+
+Initialization
+==============
+
+1. **Optional.** Enumerate supported ``OUTPUT`` formats and resolutions. See
+   `Querying capabilities` above.
+
+2. Set the coded format on ``OUTPUT`` via :c:func:`VIDIOC_S_FMT`
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+     ``pixelformat``
+         a coded pixel format
+
+     ``width``, ``height``
+         required only if cannot be parsed from the stream for the given
+         coded format; optional otherwise - set to zero to ignore
+
+     ``sizeimage``
+         desired size of ``OUTPUT`` buffers; the decoder may adjust it to
+         match hardware requirements
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``sizeimage``
+         adjusted size of ``CAPTURE`` buffers
+
+   * If width and height are set to non-zero values, the ``CAPTURE`` format
+     will be updated with an appropriate frame buffer resolution instantly.
+     However, for coded formats that include stream resolution information,
+     after the decoder is done parsing the information from the stream, it will
+     update the ``CAPTURE`` format with new values and signal a source change
+     event.
+
+   .. warning::
+
+      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
+      format. The decoder will derive a new ``CAPTURE`` format from the
+      ``OUTPUT`` format being set, including resolution, colorimetry
+      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
+      must adjust it afterwards.
+
+3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
+    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
+    use more buffers than the minimum required by hardware/format.
+
+    * **Required fields:**
+
+      ``id``
+          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
+
+    * **Return fields:**
+
+      ``value``
+          the minimum number of ``OUTPUT`` buffers required for the currently
+          set format
+
+4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
+    ``OUTPUT``.
+
+    * **Required fields:**
+
+      ``count``
+          requested number of buffers to allocate; greater than zero
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+      ``memory``
+          follows standard semantics
+
+    * **Return fields:**
+
+      ``count``
+          the actual number of buffers allocated
+
+    .. warning::
+
+       The actual number of allocated buffers may differ from the ``count``
+       given. The client must check the updated value of ``count`` after the
+       call returns.
+
+    .. note::
+
+       To allocate more than the minimum number of buffers (for pipeline
+       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
+       control to get the minimum number of buffers required by the
+       decoder/format, and pass the obtained value plus the number of
+       additional buffers needed in the ``count`` field to
+       :c:func:`VIDIOC_REQBUFS`.
+
+    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
+    used to have more control over buffer allocation.
+
+    * **Required fields:**
+
+      ``count``
+          requested number of buffers to allocate; greater than zero
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+      ``memory``
+          follows standard semantics
+
+      ``format``
+          follows standard semantics
+
+    * **Return fields:**
+
+      ``count``
+          adjusted to the number of allocated buffers
+
+    .. warning::
+
+       The actual number of allocated buffers may differ from the ``count``
+       given. The client must check the updated value of ``count`` after the
+       call returns.
+
+5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
+
+6.  **This step only applies to coded formats that contain resolution information
+    in the stream.** Continue queuing/dequeuing bitstream buffers to/from the
+    ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The
+    buffers will be processed and returned to the client in order, until
+    required metadata to configure the ``CAPTURE`` queue are found. This is
+    indicated by the decoder sending a ``V4L2_EVENT_SOURCE_CHANGE`` event with
+    ``V4L2_EVENT_SRC_CH_RESOLUTION`` source change type.
+
+    * It is not an error if the first buffer does not contain enough data for
+      this to occur. Processing of the buffers will continue as long as more
+      data is needed.
+
+    * If data in a buffer that triggers the event is required to decode the
+      first frame, it will not be returned to the client, until the
+      initialization sequence completes and the frame is decoded.
+
+    * If the client sets width and height of the ``OUTPUT`` format to 0,
+      calling :c:func:`VIDIOC_G_FMT`, :c:func:`VIDIOC_S_FMT` or
+      :c:func:`VIDIOC_TRY_FMT` on the ``CAPTURE`` queue will return the
+      ``-EACCES`` error code, until the decoder configures ``CAPTURE`` format
+      according to stream metadata.
+
+    .. important::
+
+       Any client query issued after the decoder queues the event will return
+       values applying to the just parsed stream, including queue formats,
+       selection rectangles and controls.
+
+    .. note::
+
+       A client capable of acquiring stream parameters from the bitstream on
+       its own may attempt to set the width and height of the ``OUTPUT`` format
+       to non-zero values matching the coded size of the stream, skip this step
+       and continue with the `Capture setup` sequence. However, it must not
+       rely on any driver queries regarding stream parameters, such as
+       selection rectangles and controls, since the decoder has not parsed them
+       from the stream yet. If the values configured by the client do not match
+       those parsed by the decoder, a `Dynamic resolution change` will be
+       triggered to reconfigure them.
+
+    .. note::
+
+       No decoded frames are produced during this phase.
+
+7.  Continue with the `Capture setup` sequence.
+
+Capture setup
+=============
+
+1.  Call :c:func:`VIDIOC_G_FMT` on the ``CAPTURE`` queue to get format for the
+    destination buffers parsed/decoded from the bitstream.
+
+    * **Required fields:**
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+    * **Return fields:**
+
+      ``width``, ``height``
+          frame buffer resolution for the decoded frames
+
+      ``pixelformat``
+          pixel format for decoded frames
+
+      ``num_planes`` (for _MPLANE ``type`` only)
+          number of planes for pixelformat
+
+      ``sizeimage``, ``bytesperline``
+          as per standard semantics; matching frame buffer format
+
+    .. note::
+
+       The value of ``pixelformat`` may be any pixel format supported by the
+       decoder for the current stream. The decoder should choose a
+       preferred/optimal format for the default configuration. For example, a
+       YUV format may be preferred over an RGB format if an additional
+       conversion step would be required for the latter.
+
+2.  **Optional.** Acquire the visible resolution via
+    :c:func:`VIDIOC_G_SELECTION`.
+
+    * **Required fields:**
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+      ``target``
+          set to ``V4L2_SEL_TGT_COMPOSE``
+
+    * **Return fields:**
+
+      ``r.left``, ``r.top``, ``r.width``, ``r.height``
+          the visible rectangle; it must fit within the frame buffer resolution
+          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
+
+    * The following selection targets are supported on ``CAPTURE``:
+
+      ``V4L2_SEL_TGT_CROP_BOUNDS``
+          corresponds to the coded resolution of the stream
+
+      ``V4L2_SEL_TGT_CROP_DEFAULT``
+          the rectangle covering the part of the ``CAPTURE`` buffer that
+          contains meaningful picture data (visible area); width and height
+          will be equal to the visible resolution of the stream
+
+      ``V4L2_SEL_TGT_CROP``
+          the rectangle within the coded resolution to be output to
+          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
+          hardware without additional compose/scaling capabilities
+
+      ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
+          the maximum rectangle within a ``CAPTURE`` buffer, which the cropped
+          frame can be output into; equal to ``V4L2_SEL_TGT_CROP`` if the
+          hardware does not support compose/scaling
+
+      ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
+          equal to ``V4L2_SEL_TGT_CROP``
+
+      ``V4L2_SEL_TGT_COMPOSE``
+          the rectangle inside a ``CAPTURE`` buffer into which the cropped
+          frame is written; defaults to ``V4L2_SEL_TGT_COMPOSE_DEFAULT``;
+          read-only on hardware without additional compose/scaling capabilities
+
+      ``V4L2_SEL_TGT_COMPOSE_PADDED``
+          the rectangle inside a ``CAPTURE`` buffer which is overwritten by the
+          hardware; equal to ``V4L2_SEL_TGT_COMPOSE`` if the hardware does not
+          write padding pixels
+
+    .. warning::
+
+       The values are guaranteed to be meaningful only after the decoder
+       successfully parses the stream metadata. The client must not rely on the
+       query before that happens.
+
+3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
+    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
+    buffers than the minimum required by hardware/format.
+
+    * **Required fields:**
+
+      ``id``
+          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
+
+    * **Return fields:**
+
+      ``value``
+          minimum number of buffers required to decode the stream parsed in
+          this initialization sequence.
+
+    .. note::
+
+       The minimum number of buffers must be at least the number required to
+       successfully decode the current stream. This may for example be the
+       required DPB size for an H.264 stream given the parsed stream
+       configuration (resolution, level).
+
+    .. warning::
+
+       The value is guaranteed to be meaningful only after the decoder
+       successfully parses the stream metadata. The client must not rely on the
+       query before that happens.
+
+4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
+    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
+    client may use this ioctl to discover which raw formats are supported for
+    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.
+
+    .. important::
+
+       The decoder will return only formats supported for the currently
+       established coded format, as per the ``OUTPUT`` format and/or stream
+       metadata parsed in this initialization sequence, even if more formats
+       may be supported by the decoder in general.
+
+       For example, a decoder may support YUV and RGB formats for resolutions
+       1920x1088 and lower, but only YUV for higher resolutions (due to
+       hardware limitations). After parsing a resolution of 1920x1088 or lower,
+       :c:func:`VIDIOC_ENUM_FMT` may return a set of YUV and RGB pixel formats,
+       but after parsing resolution higher than 1920x1088, the decoder will not
+       return RGB, unsupported for this resolution.
+
+       However, subsequent resolution change event triggered after
+       discovering a resolution change within the same stream may switch
+       the stream into a lower resolution and :c:func:`VIDIOC_ENUM_FMT`
+       would return RGB formats again in that case.
+
+5.  **Optional.** Set the ``CAPTURE`` format via :c:func:`VIDIOC_S_FMT` on the
+    ``CAPTURE`` queue. The client may choose a different format than
+    selected/suggested by the decoder in :c:func:`VIDIOC_G_FMT`.
+
+    * **Required fields:**
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+      ``pixelformat``
+          a raw pixel format
+
+    .. note::
+
+       The client may use :c:func:`VIDIOC_ENUM_FMT` after receiving the
+       ``V4L2_EVENT_SOURCE_CHANGE`` event to find out the set of raw formats
+       supported for the stream.
+
+6.  If all the following conditions are met, the client may resume the decoding
+    instantly:
+
+    * ``sizeimage`` of the new format (determined in previous steps) is less
+      than or equal to the size of currently allocated buffers,
+
+    * the number of buffers currently allocated is greater than or equal to the
+      minimum number of buffers acquired in previous steps. To fulfill this
+      requirement, the client may use :c:func:`VIDIOC_CREATE_BUFS` to add new
+      buffers.
+
+    In such case, the remaining steps do not apply and the client may resume
+    the decoding by one of the following actions:
+
+    * if the ``CAPTURE`` queue is streaming, call :c:func:`VIDIOC_DECODER_CMD`
+      with the ``V4L2_DEC_CMD_START`` command,
+
+    * if the ``CAPTURE`` queue is not streaming, call :c:func:`VIDIOC_STREAMON`
+      on the ``CAPTURE`` queue.
+
+    However, if the client intends to change the buffer set, to lower
+    memory usage or for any other reasons, it may be achieved by following
+    the steps below.
+
+7.  **If the** ``CAPTURE`` **queue is streaming,** keep queuing and dequeuing
+    buffers on the ``CAPTURE`` queue until a buffer marked with the
+    ``V4L2_BUF_FLAG_LAST`` flag is dequeued.
+
+8.  **If the** ``CAPTURE`` **queue is streaming,** call :c:func:`VIDIOC_STREAMOFF`
+    on the ``CAPTURE`` queue to stop streaming.
+
+    .. warning::
+
+       The ``OUTPUT`` queue must remain streaming. Calling
+       :c:func:`VIDIOC_STREAMOFF` on it would abort the sequence and trigger a
+       seek.
+
+9.  **If the** ``CAPTURE`` **queue has buffers allocated,** free the ``CAPTURE``
+    buffers using :c:func:`VIDIOC_REQBUFS`.
+
+    * **Required fields:**
+
+      ``count``
+          set to 0
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+      ``memory``
+          follows standard semantics
+
+10. Allocate ``CAPTURE`` buffers via :c:func:`VIDIOC_REQBUFS` on the
+    ``CAPTURE`` queue.
+
+    * **Required fields:**
+
+      ``count``
+          requested number of buffers to allocate; greater than zero
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+      ``memory``
+          follows standard semantics
+
+    * **Return fields:**
+
+      ``count``
+          actual number of buffers allocated
+
+    .. warning::
+
+       The actual number of allocated buffers may differ from the ``count``
+       given. The client must check the updated value of ``count`` after the
+       call returns.
+
+    .. note::
+
+       To allocate more than the minimum number of buffers (for pipeline
+       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
+       control to get the minimum number of buffers required, and pass the
+       obtained value plus the number of additional buffers needed in the
+       ``count`` field to :c:func:`VIDIOC_REQBUFS`.
+
+    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``CAPTURE`` queue can be
+    used to have more control over buffer allocation. For example, by
+    allocating buffers larger than the current ``CAPTURE`` format, future
+    resolution changes can be accommodated.
+
+    * **Required fields:**
+
+      ``count``
+          requested number of buffers to allocate; greater than zero
+
+      ``type``
+          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+      ``memory``
+          follows standard semantics
+
+      ``format``
+          a format representing the maximum framebuffer resolution to be
+          accommodated by newly allocated buffers
+
+    * **Return fields:**
+
+      ``count``
+          adjusted to the number of allocated buffers
+
+    .. warning::
+
+       The actual number of allocated buffers may differ from the ``count``
+       given. The client must check the updated value of ``count`` after the
+       call returns.
+
+    .. note::
+
+       To allocate buffers for a format different than parsed from the stream
+       metadata, the client must proceed as follows, before the metadata
+       parsing is initiated:
+
+       * set width and height of the ``OUTPUT`` format to desired coded resolution to
+         let the decoder configure the ``CAPTURE`` format appropriately,
+
+       * query the ``CAPTURE`` format using :c:func:`VIDIOC_G_FMT` and save it
+         until this step.
+
+       The format obtained in the query may be then used with
+       :c:func:`VIDIOC_CREATE_BUFS` in this step to allocate the buffers.
+
+11. Call :c:func:`VIDIOC_STREAMON` on the ``CAPTURE`` queue to start decoding
+    frames.
+
+Decoding
+========
+
+This state is reached after the `Capture setup` sequence finishes succesfully.
+In this state, the client queues and dequeues buffers to both queues via
+:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
+semantics.
+
+The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
+format and may be affected by codec-specific extended controls, as stated in
+the documentation of each format.
+
+Both queues operate independently, following the standard behavior of V4L2
+buffer queues and memory-to-memory devices. In addition, the order of decoded
+frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
+coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
+format, e.g. frame reordering.
+
+The client must not assume any direct relationship between ``CAPTURE``
+and ``OUTPUT`` buffers and any specific timing of buffers becoming
+available to dequeue. Specifically,
+
+* a buffer queued to ``OUTPUT`` may result in no buffers being produced
+  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
+  metadata syntax structures are present in it),
+
+* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
+  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
+  returning a decoded frame allowed the decoder to return a frame that
+  preceded it in decode, but succeeded it in the display order),
+
+* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
+  ``CAPTURE`` later into decode process, and/or after processing further
+  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
+  reordering is used,
+
+* buffers may become available on the ``CAPTURE`` queue without additional
+  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
+  ``OUTPUT`` buffers queued in the past whose decoding results are only
+  available at later time, due to specifics of the decoding process.
+
+.. note::
+
+   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
+   originated from, the client can set the ``timestamp`` field of the
+   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
+   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
+   will have their ``timestamp`` field set to the same value when dequeued.
+
+   In addition to the straighforward case of one ``OUTPUT`` buffer producing
+   one ``CAPTURE`` buffer, the following cases are defined:
+
+   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
+     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
+
+   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
+     the ``OUTPUT`` buffer queued last will be copied,
+
+   * the decoding order differs from the display order (i.e. the
+     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
+     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
+     and thus monotonicity of the timestamps cannot be guaranteed.
+
+During the decoding, the decoder may initiate one of the special sequences, as
+listed below. The sequences will result in the decoder returning all the
+``CAPTURE`` buffers that originated from all the ``OUTPUT`` buffers processed
+before the sequence started. Last of the buffers will have the
+``V4L2_BUF_FLAG_LAST`` flag set. To determine the sequence to follow, the client
+must check if there is any pending event and,
+
+* if a ``V4L2_EVENT_SOURCE_CHANGE`` event is pending, the `Dynamic resolution
+  change` sequence needs to be followed,
+
+* if a ``V4L2_EVENT_EOS`` event is pending, the `End of stream` sequence needs
+  to be followed.
+
+Some of the sequences can be intermixed with each other and need to be handled
+as they happen. The exact operation is documented for each sequence.
+
+Seek
+====
+
+Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
+The seek does not require any specific operation on the ``CAPTURE`` queue, but
+it may be affected as per normal decoder operation.
+
+1. Stop the ``OUTPUT`` queue to begin the seek sequence via
+   :c:func:`VIDIOC_STREAMOFF`.
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
+     treated as returned to the client (following standard semantics).
+
+2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+   * The decoder will start accepting new source bitstream buffers after the
+     call returns.
+
+3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
+   queue until a suitable resume point is found.
+
+   .. note::
+
+      There is no requirement to begin queuing coded data starting exactly
+      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
+      buffers will be processed and returned to the client until a suitable
+      resume point is found.  While looking for a resume point, the decoder
+      should not produce any decoded frames into ``CAPTURE`` buffers.
+
+      Some hardware is known to mishandle seeks to a non-resume point. Such an
+      operation may result in an unspecified number of corrupted decoded frames
+      being made available on the ``CAPTURE`` queue. Drivers must ensure that
+      no fatal decoding errors or crashes occur, and implement any necessary
+      handling and workarounds for hardware issues related to seek operations.
+
+   .. warning::
+
+      In case of the H.264 codec, the client must take care not to seek over a
+      change of SPS/PPS. Even though the target frame could be a keyframe, the
+      stale SPS/PPS inside decoder state would lead to undefined results when
+      decoding. Although the decoder must handle such case without a crash or a
+      fatal decode error, the client must not expect a sensible decode output.
+
+4. After a resume point is found, the decoder will start returning ``CAPTURE``
+   buffers containing decoded frames.
+
+.. important::
+
+   A seek may result in the `Dynamic resolution change` sequence being
+   iniitated, due to the seek target having decoding parameters different from
+   the part of the stream decoded before the seek. The sequence must be handled
+   as per normal decoder operation.
+
+.. warning::
+
+   It is not specified when the ``CAPTURE`` queue starts producing buffers
+   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
+   as it operates independently from the ``OUTPUT`` queue.
+
+   The decoder may return a number of remaining ``CAPTURE`` buffers containing
+   decoded frames originating from the ``OUTPUT`` buffers queued before the
+   seek sequence is performed.
+
+   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
+   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
+   queued before the seek sequence may have matching ``CAPTURE`` buffers
+   produced.  For example, given the sequence of operations on the
+   ``OUTPUT`` queue:
+
+     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
+
+   any of the following results on the ``CAPTURE`` queue is allowed:
+
+     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
+
+.. note::
+
+   To achieve instantaneous seek, the client may restart streaming on the
+   ``CAPTURE`` queue too to discard decoded, but not yet dequeued buffers.
+
+Dynamic resolution change
+=========================
+
+Streams that include resolution metadata in the bitstream may require switching
+to a different resolution during the decoding.
+
+The sequence starts when the decoder detects a coded frame with one or more of
+the following parameters different from previously established (and reflected
+by corresponding queries):
+
+* coded resolution (``OUTPUT`` width and height),
+
+* visible resolution (selection rectangles),
+
+* the minimum number of buffers needed for decoding.
+
+Whenever that happens, the decoder must proceed as follows:
+
+1.  After encountering a resolution change in the stream, the decoder sends a
+    ``V4L2_EVENT_SOURCE_CHANGE`` event with source change type set to
+    ``V4L2_EVENT_SRC_CH_RESOLUTION``.
+
+    .. important::
+
+       Any client query issued after the decoder queues the event will return
+       values applying to the stream after the resolution change, including
+       queue formats, selection rectangles and controls.
+
+2.  The decoder will then process and decode all remaining buffers from before
+    the resolution change point.
+
+    * The last buffer from before the change must be marked with the
+      ``V4L2_BUF_FLAG_LAST`` flag, similarly to the `Drain` sequence above.
+
+    .. warning::
+
+       The last buffer may be empty (with :c:type:`v4l2_buffer` ``bytesused``
+       = 0) and in such case it must be ignored by the client, as it does not
+       contain a decoded frame.
+
+    .. note::
+
+       Any attempt to dequeue more buffers beyond the buffer marked with
+       ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
+       :c:func:`VIDIOC_DQBUF`.
+
+The client must continue the sequence as described below to continue the
+decoding process.
+
+1.  Dequeue the source change event.
+
+    .. important::
+
+       A source change triggers an implicit decoder drain, similar to the
+       explicit `Drain` sequence. The decoder is stopped after it completes.
+       The decoding process must be resumed with either a pair of calls to
+       :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
+       ``CAPTURE`` queue, or a call to :c:func:`VIDIOC_DECODER_CMD` with the
+       ``V4L2_DEC_CMD_START`` command.
+
+2.  Continue with the `Capture setup` sequence.
+
+.. note::
+
+   During the resolution change sequence, the ``OUTPUT`` queue must remain
+   streaming. Calling :c:func:`VIDIOC_STREAMOFF` on the ``OUTPUT`` queue would
+   abort the sequence and initiate a seek.
+
+   In principle, the ``OUTPUT`` queue operates separately from the ``CAPTURE``
+   queue and this remains true for the duration of the entire resolution change
+   sequence as well.
+
+   The client should, for best performance and simplicity, keep queuing/dequeuing
+   buffers to/from the ``OUTPUT`` queue even while processing this sequence.
+
+Drain
+=====
+
+To ensure that all queued ``OUTPUT`` buffers have been processed and related
+``CAPTURE`` buffers output to the client, the client must follow the drain
+sequence described below. After the drain sequence ends, the client has
+received all decoded frames for all ``OUTPUT`` buffers queued before the
+sequence was started.
+
+1. Begin drain by issuing :c:func:`VIDIOC_DECODER_CMD`.
+
+   * **Required fields:**
+
+     ``cmd``
+         set to ``V4L2_DEC_CMD_STOP``
+
+     ``flags``
+         set to 0
+
+     ``pts``
+         set to 0
+
+   .. warning::
+
+   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
+   are streaming. For compatibility reasons, the call to
+   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
+   streaming, but at the same time it will not initiate the `Drain` sequence
+   and so the steps described below would not be applicable.
+
+2. Any ``OUTPUT`` buffers queued by the client before the
+   :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
+   normal. The client must continue to handle both queues independently,
+   similarly to normal decode operation. This includes,
+
+   * handling any operations triggered as a result of processing those buffers,
+     such as the `Dynamic resolution change` sequence, before continuing with
+     the drain sequence,
+
+   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
+     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
+
+     .. warning::
+
+        The last buffer may be empty (with :c:type:`v4l2_buffer`
+        ``bytesused`` = 0) and in such case it must be ignored by the client,
+        as it does not contain a decoded frame.
+
+     .. note::
+
+        Any attempt to dequeue more buffers beyond the buffer marked with
+        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
+        :c:func:`VIDIOC_DQBUF`.
+
+   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
+     before the ``V4L2_DEC_CMD_STOP`` command are dequeued.
+
+   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribed to it.
+
+   .. note::
+
+      For backwards compatibility, the decoder will signal a ``V4L2_EVENT_EOS``
+      event when the last the last frame has been decoded and all frames are
+      ready to be dequeued. It is a deprecated behavior and the client must not
+      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
+      instead.
+
+3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
+   and the last ``CAPTURE`` buffer are dequeued, the decoder is stopped and it
+   will accept, but not process any newly queued ``OUTPUT`` buffers until the
+   client issues any of the following operations:
+
+   * ``V4L2_DEC_CMD_START`` - the decoder will resume the operation normally,
+
+   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
+     ``CAPTURE`` queue - the decoder will resume the operation normally,
+     however any ``CAPTURE`` buffers still in the queue will be returned to the
+     client,
+
+   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
+     ``OUTPUT`` queue - any pending source buffers will be returned to the
+     client and the `Seek` sequence will be triggered.
+
+.. note::
+
+   Once the drain sequence is initiated, the client needs to drive it to
+   completion, as described by the steps above, unless it aborts the process by
+   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
+   queues.  The client is not allowed to issue ``V4L2_DEC_CMD_START`` or
+   ``V4L2_DEC_CMD_STOP`` again while the drain sequence is in progress and they
+   will fail with -EBUSY error code if attempted.
+
+   Although mandatory, the availability of decoder commands may be queried
+   using :c:func:`VIDIOC_TRY_DECODER_CMD`.
+
+End of stream
+=============
+
+If the decoder encounters an end of stream marking in the stream, the decoder
+will initiate the `Drain` sequence, which the client must handle as described
+above, skipping the initial :c:func:`VIDIOC_DECODER_CMD`.
+
+Commit points
+=============
+
+Setting formats and allocating buffers trigger changes in the behavior of the
+decoder.
+
+1. Setting the format on the ``OUTPUT`` queue may change the set of formats
+   supported/advertised on the ``CAPTURE`` queue. In particular, it also means
+   that the ``CAPTURE`` format may be reset and the client must not rely on the
+   previously set format being preserved.
+
+2. Enumerating formats on the ``CAPTURE`` queue always returns only formats
+   supported for the current ``OUTPUT`` format.
+
+3. Setting the format on the ``CAPTURE`` queue does not change the list of
+   formats available on the ``OUTPUT`` queue. An attempt to set the ``CAPTURE``
+   format that is not supported for the currently selected ``OUTPUT`` format
+   will result in the decoder adjusting the requested ``CAPTURE`` format to a
+   supported one.
+
+4. Enumerating formats on the ``OUTPUT`` queue always returns the full set of
+   supported coded formats, irrespectively of the current ``CAPTURE`` format.
+
+5. While buffers are allocated on the ``OUTPUT`` queue, the client must not
+   change the format on the queue. Drivers will return the -EBUSY error code
+   for any such format change attempt.
+
+To summarize, setting formats and allocation must always start with the
+``OUTPUT`` queue and the ``OUTPUT`` queue is the master that governs the
+set of supported formats for the ``CAPTURE`` queue.
diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
index fb7f8c26cf09..12d43fe711cf 100644
--- a/Documentation/media/uapi/v4l/devices.rst
+++ b/Documentation/media/uapi/v4l/devices.rst
@@ -15,6 +15,7 @@ Interfaces
     dev-output
     dev-osd
     dev-codec
+    dev-decoder
     dev-effect
     dev-raw-vbi
     dev-sliced-vbi
diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
index 826f2305da01..ca5f2270a829 100644
--- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
+++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
@@ -32,6 +32,11 @@ Single-planar format structure
 	to a multiple of the scale factor of any smaller planes. For
 	example when the image format is YUV 4:2:0, ``width`` and
 	``height`` must be multiples of two.
+
+	For compressed formats that contain the resolution information encoded
+	inside the stream, when fed to a stateful mem2mem decoder, the fields
+	may be zero to rely on the decoder to detect the right values. For more
+	details see :ref:`decoder` and format descriptions.
     * - __u32
       - ``pixelformat``
       - The pixel format or type of compression, set by the application.
diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
index b89e5621ae69..65dc096199ad 100644
--- a/Documentation/media/uapi/v4l/v4l2.rst
+++ b/Documentation/media/uapi/v4l/v4l2.rst
@@ -53,6 +53,10 @@ Authors, in alphabetical order:
 
   - Original author of the V4L2 API and documentation.
 
+- Figa, Tomasz <tfiga@chromium.org>
+
+  - Documented the memory-to-memory decoder interface.
+
 - H Schimek, Michael <mschimek@gmx.at>
 
   - Original author of the V4L2 API and documentation.
@@ -61,6 +65,10 @@ Authors, in alphabetical order:
 
   - Documented the Digital Video timings API.
 
+- Osciak, Pawel <posciak@chromium.org>
+
+  - Documented the memory-to-memory decoder interface.
+
 - Osciak, Pawel <pawel@osciak.com>
 
   - Designed and documented the multi-planar API.
@@ -85,7 +93,7 @@ Authors, in alphabetical order:
 
   - Designed and documented the VIDIOC_LOG_STATUS ioctl, the extended control ioctls, major parts of the sliced VBI API, the MPEG encoder and decoder APIs and the DV Timings API.
 
-**Copyright** |copy| 1999-2016: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari.
+**Copyright** |copy| 1999-2018: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari, Tomasz Figa
 
 Except when explicitly stated as GPL, programming examples within this
 part can be used and distributed without restrictions.
diff --git a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
index 85c916b0ce07..2f73fe22a9cd 100644
--- a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
+++ b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
@@ -49,14 +49,16 @@ The ``cmd`` field must contain the command code. Some commands use the
 
 A :ref:`write() <func-write>` or :ref:`VIDIOC_STREAMON`
 call sends an implicit START command to the decoder if it has not been
-started yet.
+started yet. Applies to both queues of mem2mem decoders.
 
 A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
 call of a streaming file descriptor sends an implicit immediate STOP
-command to the decoder, and all buffered data is discarded.
+command to the decoder, and all buffered data is discarded. Applies to both
+queues of mem2mem decoders.
 
-These ioctls are optional, not all drivers may support them. They were
-introduced in Linux 3.3.
+In principle, these ioctls are optional, not all drivers may support them. They were
+introduced in Linux 3.3. They are, however, mandatory for stateful mem2mem decoders
+(as further documented in :ref:`decoder`).
 
 
 .. tabularcolumns:: |p{1.1cm}|p{2.4cm}|p{1.2cm}|p{1.6cm}|p{10.6cm}|
@@ -160,26 +162,36 @@ introduced in Linux 3.3.
 	``V4L2_DEC_CMD_RESUME`` for that. This command has one flag:
 	``V4L2_DEC_CMD_START_MUTE_AUDIO``. If set, then audio will be
 	muted when playing back at a non-standard speed.
+
+	For stateful mem2mem decoders, the command may be also used to restart
+	the decoder in case of an implicit stop initiated by the decoder
+	itself, without the ``V4L2_DEC_CMD_STOP`` being called explicitly.
+	No flags or other arguments are accepted in case of mem2mem decoders.
+	See :ref:`decoder` for more details.
     * - ``V4L2_DEC_CMD_STOP``
       - 1
       - Stop the decoder. When the decoder is already stopped, this
 	command does nothing. This command has two flags: if
 	``V4L2_DEC_CMD_STOP_TO_BLACK`` is set, then the decoder will set
 	the picture to black after it stopped decoding. Otherwise the last
-	image will repeat. mem2mem decoders will stop producing new frames
-	altogether. They will send a ``V4L2_EVENT_EOS`` event when the
-	last frame has been decoded and all frames are ready to be
-	dequeued and will set the ``V4L2_BUF_FLAG_LAST`` buffer flag on
-	the last buffer of the capture queue to indicate there will be no
-	new buffers produced to dequeue. This buffer may be empty,
-	indicated by the driver setting the ``bytesused`` field to 0. Once
-	the ``V4L2_BUF_FLAG_LAST`` flag was set, the
-	:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
-	but return an ``EPIPE`` error code. If
+	image will repeat. If
 	``V4L2_DEC_CMD_STOP_IMMEDIATELY`` is set, then the decoder stops
 	immediately (ignoring the ``pts`` value), otherwise it will keep
 	decoding until timestamp >= pts or until the last of the pending
 	data from its internal buffers was decoded.
+
+	A stateful mem2mem decoder will proceed with decoding the source
+	buffers pending before the command is issued and then stop producing
+	new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
+	has been decoded and all frames are ready to be dequeued and will set
+	the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
+	capture queue to indicate there will be no new buffers produced to
+	dequeue. This buffer may be empty, indicated by the driver setting the
+	``bytesused`` field to 0. Once the buffer with the
+	``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
+	<VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
+	error code. No flags or other arguments are accepted in case of mem2mem
+	decoders.  See :ref:`decoder` for more details.
     * - ``V4L2_DEC_CMD_PAUSE``
       - 2
       - Pause the decoder. When the decoder has not been started yet, the
diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
index 3ead350e099f..0fc0b78a943e 100644
--- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
+++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
@@ -53,6 +53,13 @@ devices that is either the struct
 member. When the requested buffer type is not supported drivers return
 an ``EINVAL`` error code.
 
+A stateful mem2mem decoder will not allow operations on the
+``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
+buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
+``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
+operation is attempted, drivers return an ``EACCES`` error code. Refer to
+:ref:`decoder` for more details.
+
 To change the current format parameters applications initialize the
 ``type`` field and all fields of the respective ``fmt`` union member.
 For details see the documentation of the various devices types in
@@ -145,6 +152,13 @@ On success 0 is returned, on error -1 and the ``errno`` variable is set
 appropriately. The generic error codes are described at the
 :ref:`Generic Error Codes <gen-errors>` chapter.
 
+EACCES
+    The format is not accessible until another buffer type is configured.
+    Relevant for the V4L2_BUF_TYPE_VIDEO_CAPTURE and
+    V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE buffer types of mem2mem decoders, which
+    require the format of V4L2_BUF_TYPE_VIDEO_OUTPUT or
+    V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE buffer type to be configured first.
+
 EINVAL
     The struct :c:type:`v4l2_format` ``type`` field is
     invalid or the requested buffer type not supported.
-- 
2.19.1.568.g152ad8e336-goog


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-10-22 14:48 [PATCH v2 0/2] Document memory-to-memory video codec interfaces Tomasz Figa
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
@ 2018-10-22 14:49 ` Tomasz Figa
  2018-11-12 13:23   ` Hans Verkuil
  2018-10-22 15:41 ` [PATCH v2 0/2] Document memory-to-memory video codec interfaces Hans Verkuil
  2 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2018-10-22 14:49 UTC (permalink / raw)
  To: linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Hans Verkuil,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Nicolas Dufresne, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan, Tomasz Figa

Due to complexity of the video encoding process, the V4L2 drivers of
stateful encoder hardware require specific sequences of V4L2 API calls
to be followed. These include capability enumeration, initialization,
encoding, encode parameters change, drain and reset.

Specifics of the above have been discussed during Media Workshops at
LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
Conference Europe 2014 in Düsseldorf. The de facto Codec API that
originated at those events was later implemented by the drivers we already
have merged in mainline, such as s5p-mfc or coda.

The only thing missing was the real specification included as a part of
Linux Media documentation. Fix it now and document the encoder part of
the Codec API.

Signed-off-by: Tomasz Figa <tfiga@chromium.org>
---
 Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
 Documentation/media/uapi/v4l/devices.rst      |   1 +
 Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
 Documentation/media/uapi/v4l/v4l2.rst         |   2 +
 .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
 5 files changed, 610 insertions(+), 15 deletions(-)
 create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst

diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
new file mode 100644
index 000000000000..41139e5e48eb
--- /dev/null
+++ b/Documentation/media/uapi/v4l/dev-encoder.rst
@@ -0,0 +1,579 @@
+.. -*- coding: utf-8; mode: rst -*-
+
+.. _encoder:
+
+*************************************************
+Memory-to-memory Stateful Video Encoder Interface
+*************************************************
+
+A stateful video encoder takes raw video frames in display order and encodes
+them into a bitstream. It generates complete chunks of the bitstream, including
+all metadata, headers, etc. The resulting bitstream does not require any
+further post-processing by the client.
+
+Performing software stream processing, header generation etc. in the driver
+in order to support this interface is strongly discouraged. In case such
+operations are needed, use of the Stateless Video Encoder Interface (in
+development) is strongly advised.
+
+Conventions and notation used in this document
+==============================================
+
+1. The general V4L2 API rules apply if not specified in this document
+   otherwise.
+
+2. The meaning of words "must", "may", "should", etc. is as per RFC
+   2119.
+
+3. All steps not marked "optional" are required.
+
+4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
+   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
+   unless specified otherwise.
+
+5. Single-plane API (see spec) and applicable structures may be used
+   interchangeably with Multi-plane API, unless specified otherwise,
+   depending on encoder capabilities and following the general V4L2
+   guidelines.
+
+6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
+   [0..2]: i = 0, 1, 2.
+
+7. Given an ``OUTPUT`` buffer A, A' represents a buffer on the ``CAPTURE``
+   queue containing data (encoded frame/stream) that resulted from processing
+   buffer A.
+
+Glossary
+========
+
+Refer to :ref:`decoder-glossary`.
+
+State machine
+=============
+
+.. kernel-render:: DOT
+   :alt: DOT digraph of encoder state machine
+   :caption: Encoder state machine
+
+   digraph encoder_state_machine {
+       node [shape = doublecircle, label="Encoding"] Encoding;
+
+       node [shape = circle, label="Initialization"] Initialization;
+       node [shape = circle, label="Stopped"] Stopped;
+       node [shape = circle, label="Drain"] Drain;
+       node [shape = circle, label="Reset"] Reset;
+
+       node [shape = point]; qi
+       qi -> Initialization [ label = "open()" ];
+
+       Initialization -> Encoding [ label = "Both queues streaming" ];
+
+       Encoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
+       Encoding -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
+       Encoding -> Stopped [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
+       Encoding -> Encoding;
+
+       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
+       Drain -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
+
+       Reset -> Encoding [ label = "VIDIOC_STREAMON(CAPTURE)" ];
+       Reset -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
+
+       Stopped -> Encoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(OUTPUT)" ];
+       Stopped -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
+   }
+
+Querying capabilities
+=====================
+
+1. To enumerate the set of coded formats supported by the encoder, the
+   client may call :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
+
+   * The full set of supported formats will be returned, regardless of the
+     format set on ``OUTPUT``.
+
+2. To enumerate the set of supported raw formats, the client may call
+   :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
+
+   * Only the formats supported for the format currently active on ``CAPTURE``
+     will be returned.
+
+   * In order to enumerate raw formats supported by a given coded format,
+     the client must first set that coded format on ``CAPTURE`` and then
+     enumerate the formats on ``OUTPUT``.
+
+3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
+   resolutions for a given format, passing desired pixel format in
+   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
+
+   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
+     format will include all possible coded resolutions supported by the
+     encoder for given coded pixel format.
+
+   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
+     will include all possible frame buffer resolutions supported by the
+     encoder for given raw pixel format and coded format currently set on
+     ``CAPTURE``.
+
+4. Supported profiles and levels for given format, if applicable, may be
+   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
+
+5. Any additional encoder capabilities may be discovered by querying
+   their respective controls.
+
+Initialization
+==============
+
+1. **Optional.** Enumerate supported formats and resolutions. See
+   `Querying capabilities` above.
+
+2. Set a coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
+
+     ``pixelformat``
+         the coded format to be produced
+
+     ``sizeimage``
+         desired size of ``CAPTURE`` buffers; the encoder may adjust it to
+         match hardware requirements
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``sizeimage``
+         adjusted size of ``CAPTURE`` buffers
+
+   .. warning::
+
+      Changing the ``CAPTURE`` format may change the currently set ``OUTPUT``
+      format. The encoder will derive a new ``OUTPUT`` format from the
+      ``CAPTURE`` format being set, including resolution, colorimetry
+      parameters, etc. If the client needs a specific ``OUTPUT`` format, it
+      must adjust it afterwards.
+
+3. **Optional.** Enumerate supported ``OUTPUT`` formats (raw formats for
+   source) for the selected coded format via :c:func:`VIDIOC_ENUM_FMT`.
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``pixelformat``
+         raw format supported for the coded format currently selected on
+         the ``OUTPUT`` queue.
+
+     other fields
+         follow standard semantics
+
+4. Set the raw source format on the ``OUTPUT`` queue via
+   :c:func:`VIDIOC_S_FMT`.
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+     ``pixelformat``
+         raw format of the source
+
+     ``width``, ``height``
+         source resolution
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``width``, ``height``
+         may be adjusted by encoder to match alignment requirements, as
+         required by the currently selected formats
+
+     other fields
+         follow standard semantics
+
+   * Setting the source resolution will reset the selection rectangles to their
+     default values, based on the new resolution, as described in the step 5
+     below.
+
+5. **Optional.** Set the visible resolution for the stream metadata via
+   :c:func:`VIDIOC_S_SELECTION` on the ``OUTPUT`` queue.
+
+   * **Required fields:**
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+     ``target``
+         set to ``V4L2_SEL_TGT_CROP``
+
+     ``r.left``, ``r.top``, ``r.width``, ``r.height``
+         visible rectangle; this must fit within the `V4L2_SEL_TGT_CROP_BOUNDS`
+         rectangle and may be subject to adjustment to match codec and
+         hardware constraints
+
+   * **Return fields:**
+
+     ``r.left``, ``r.top``, ``r.width``, ``r.height``
+         visible rectangle adjusted by the encoder
+
+   * The following selection targets are supported on ``OUTPUT``:
+
+     ``V4L2_SEL_TGT_CROP_BOUNDS``
+         equal to the full source frame, matching the active ``OUTPUT``
+         format
+
+     ``V4L2_SEL_TGT_CROP_DEFAULT``
+         equal to ``V4L2_SEL_TGT_CROP_BOUNDS``
+
+     ``V4L2_SEL_TGT_CROP``
+         rectangle within the source buffer to be encoded into the
+         ``CAPTURE`` stream; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``
+
+     ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
+         maximum rectangle within the coded resolution, which the cropped
+         source frame can be output into; if the hardware does not support
+         composition or scaling, then this is always equal to the rectangle of
+         width and height matching ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
+
+     ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
+         equal to a rectangle of width and height matching
+         ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
+
+     ``V4L2_SEL_TGT_COMPOSE``
+         rectangle within the coded frame, which the cropped source frame
+         is to be output into; defaults to
+         ``V4L2_SEL_TGT_COMPOSE_DEFAULT``; read-only on hardware without
+         additional compose/scaling capabilities; resulting stream will
+         have this rectangle encoded as the visible rectangle in its
+         metadata
+
+   .. warning::
+
+      The encoder may adjust the crop/compose rectangles to the nearest
+      supported ones to meet codec and hardware requirements. The client needs
+      to check the adjusted rectangle returned by :c:func:`VIDIOC_S_SELECTION`.
+
+6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
+   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
+
+   * **Required fields:**
+
+     ``count``
+         requested number of buffers to allocate; greater than zero
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
+         ``CAPTURE``
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``count``
+          actual number of buffers allocated
+
+   .. warning::
+
+      The actual number of allocated buffers may differ from the ``count``
+      given. The client must check the updated value of ``count`` after the
+      call returns.
+
+   .. note::
+
+      To allocate more than the minimum number of buffers (for pipeline depth),
+      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
+      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
+      minimum number of buffers required by the encoder/format, and pass the
+      obtained value plus the number of additional buffers needed in the
+      ``count`` field to :c:func:`VIDIOC_REQBUFS`.
+
+   Alternatively, :c:func:`VIDIOC_CREATE_BUFS` can be used to have more
+   control over buffer allocation.
+
+   * **Required fields:**
+
+     ``count``
+         requested number of buffers to allocate; greater than zero
+
+     ``type``
+         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
+
+     other fields
+         follow standard semantics
+
+   * **Return fields:**
+
+     ``count``
+         adjusted to the number of allocated buffers
+
+7. Begin streaming on both ``OUTPUT`` and ``CAPTURE`` queues via
+   :c:func:`VIDIOC_STREAMON`. This may be performed in any order. The actual
+   encoding process starts when both queues start streaming.
+
+.. note::
+
+   If the client stops the ``CAPTURE`` queue during the encode process and then
+   restarts it again, the encoder will begin generating a stream independent
+   from the stream generated before the stop. The exact constraints depend
+   on the coded format, but may include the following implications:
+
+   * encoded frames produced after the restart must not reference any
+     frames produced before the stop, e.g. no long term references for
+     H.264,
+
+   * any headers that must be included in a standalone stream must be
+     produced again, e.g. SPS and PPS for H.264.
+
+Encoding
+========
+
+This state is reached after the `Initialization` sequence finishes succesfully.
+In this state, client queues and dequeues buffers to both queues via
+:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
+semantics.
+
+The contents of encoded ``CAPTURE`` buffers depend on the active coded pixel
+format and may be affected by codec-specific extended controls, as stated
+in the documentation of each format.
+
+Both queues operate independently, following standard behavior of V4L2 buffer
+queues and memory-to-memory devices. In addition, the order of encoded frames
+dequeued from the ``CAPTURE`` queue may differ from the order of queuing raw
+frames to the ``OUTPUT`` queue, due to properties of the selected coded format,
+e.g. frame reordering.
+
+The client must not assume any direct relationship between ``CAPTURE`` and
+``OUTPUT`` buffers and any specific timing of buffers becoming
+available to dequeue. Specifically,
+
+* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced on
+  ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
+  frame that preceded it in display, but succeeded it in the decode order),
+
+* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
+  ``CAPTURE`` later into encode process, and/or after processing further
+  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
+  reordering is used,
+
+* buffers may become available on the ``CAPTURE`` queue without additional
+  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
+  ``OUTPUT`` buffers queued in the past whose decoding results are only
+  available at later time, due to specifics of the decoding process,
+
+* buffers queued to ``OUTPUT`` may not become available to dequeue instantly
+  after being encoded into a corresponding ``CATPURE`` buffer, e.g. if the
+  encoder needs to use the frame as a reference for encoding further frames.
+
+.. note::
+
+   To allow matching encoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
+   originated from, the client can set the ``timestamp`` field of the
+   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
+   ``CAPTURE`` buffer(s), which resulted from encoding that ``OUTPUT`` buffer
+   will have their ``timestamp`` field set to the same value when dequeued.
+
+   In addition to the straighforward case of one ``OUTPUT`` buffer producing
+   one ``CAPTURE`` buffer, the following cases are defined:
+
+   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
+     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
+
+   * the encoding order differs from the presentation order (i.e. the
+     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
+     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
+     and thus monotonicity of the timestamps cannot be guaranteed.
+
+.. note::
+
+   To let the client distinguish between frame types (keyframes, intermediate
+   frames; the exact list of types depends on the coded format), the
+   ``CAPTURE`` buffers will have corresponding flag bits set in their
+   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
+   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
+   and their meanings.
+
+Encoding parameter changes
+==========================
+
+The client is allowed to use :c:func:`VIDIOC_S_CTRL` to change encoder
+parameters at any time. The availability of parameters is encoder-specific
+and the client must query the encoder to find the set of available controls.
+
+The ability to change each parameter during encoding is encoder-specific, as per
+the standard semantics of the V4L2 control interface. The client may attempt
+setting a control of its interest during encoding and if the operation fails
+with the -EBUSY error code, the ``CAPTURE`` queue needs to be stopped for the
+configuration change to be allowed (following the `Drain` sequence will be
+needed to avoid losing the already queued/encoded frames).
+
+The timing of parameter updates is encoder-specific, as per the standard
+semantics of the V4L2 control interface. If the client needs to apply the
+parameters exactly at specific frame, using the Request API should be
+considered, if supported by the encoder.
+
+Drain
+=====
+
+To ensure that all the queued ``OUTPUT`` buffers have been processed and the
+related ``CAPTURE`` buffers output to the client, the client must follow the
+drain sequence described below. After the drain sequence ends, the client has
+received all encoded frames for all ``OUTPUT`` buffers queued before the
+sequence was started.
+
+1. Begin the drain sequence by issuing :c:func:`VIDIOC_ENCODER_CMD`.
+
+   * **Required fields:**
+
+     ``cmd``
+         set to ``V4L2_ENC_CMD_STOP``
+
+     ``flags``
+         set to 0
+
+     ``pts``
+         set to 0
+
+   .. warning::
+
+   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
+   are streaming. For compatibility reasons, the call to
+   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
+   streaming, but at the same time it will not initiate the `Drain` sequence
+   and so the steps described below would not be applicable.
+
+2. Any ``OUTPUT`` buffers queued by the client before the
+   :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as
+   normal. The client must continue to handle both queues independently,
+   similarly to normal encode operation. This includes,
+
+   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
+     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
+
+     .. warning::
+
+        The last buffer may be empty (with :c:type:`v4l2_buffer`
+        ``bytesused`` = 0) and in such case it must be ignored by the client,
+        as it does not contain an encoded frame.
+
+     .. note::
+
+        Any attempt to dequeue more buffers beyond the buffer marked with
+        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
+        :c:func:`VIDIOC_DQBUF`.
+
+   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
+     before the ``V4L2_ENC_CMD_STOP`` command are dequeued,
+
+   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribes to it.
+
+   .. note::
+
+      For backwards compatibility, the encoder will signal a ``V4L2_EVENT_EOS``
+      event when the last the last frame has been decoded and all frames are
+      ready to be dequeued. It is a deprecated behavior and the client must not
+      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
+      instead.
+
+3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
+   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
+   accept, but not process any newly queued ``OUTPUT`` buffers until the client
+   issues any of the following operations:
+
+   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
+
+   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
+     ``CAPTURE`` queue - the encoder will be reset (see the `Reset` sequence)
+     and then resume encoding,
+
+   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
+     ``OUTPUT`` queue - the encoder will resume operation normally, however any
+     source frames queued to the ``OUTPUT`` queue between ``V4L2_ENC_CMD_STOP``
+     and :c:func:`VIDIOC_STREAMOFF` will be discarded.
+
+.. note::
+
+   Once the drain sequence is initiated, the client needs to drive it to
+   completion, as described by the steps above, unless it aborts the process by
+   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
+   queues.  The client is not allowed to issue ``V4L2_ENC_CMD_START`` or
+   ``V4L2_ENC_CMD_STOP`` again while the drain sequence is in progress and they
+   will fail with -EBUSY error code if attempted.
+
+   Although mandatory, the availability of encoder commands may be queried
+   using :c:func:`VIDIOC_TRY_ENCODER_CMD`.
+
+Reset
+=====
+
+The client may want to request the encoder to reinitialize the encoding, so
+that the following stream data becomes independent from the stream data
+generated before. Depending on the coded format, that may imply that,
+
+* encoded frames produced after the restart must not reference any frames
+  produced before the stop, e.g. no long term references for H.264,
+
+* any headers that must be included in a standalone stream must be produced
+  again, e.g. SPS and PPS for H.264.
+
+This can be achieved by performing the reset sequence.
+
+1. Perform the `Drain` sequence to ensure all the in-flight encoding finishes
+   and respective buffers are dequeued.
+
+2. Stop streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMOFF`. This
+   will return all currently queued ``CAPTURE`` buffers to the client, without
+   valid frame data.
+
+3. Start streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMON` and
+   continue with regular encoding sequence. The encoded frames produced into
+   ``CAPTURE`` buffers from now on will contain a standalone stream that can be
+   decoded without the need for frames encoded before the reset sequence,
+   starting at the first ``OUTPUT`` buffer queued after issuing the
+   `V4L2_ENC_CMD_STOP` of the `Drain` sequence.
+
+This sequence may be also used to change encoding parameters for encoders
+without the ability to change the parameters on the fly.
+
+Commit points
+=============
+
+Setting formats and allocating buffers triggers changes in the behavior of the
+encoder.
+
+1. Setting the format on the ``CAPTURE`` queue may change the set of formats
+   supported/advertised on the ``OUTPUT`` queue. In particular, it also means
+   that the ``OUTPUT`` format may be reset and the client must not rely on the
+   previously set format being preserved.
+
+2. Enumerating formats on the ``OUTPUT`` queue always returns only formats
+   supported for the current ``CAPTURE`` format.
+
+3. Setting the format on the ``OUTPUT`` queue does not change the list of
+   formats available on the ``CAPTURE`` queue. An attempt to set the ``OUTPUT``
+   format that is not supported for the currently selected ``CAPTURE`` format
+   will result in the encoder adjusting the requested ``OUTPUT`` format to a
+   supported one.
+
+4. Enumerating formats on the ``CAPTURE`` queue always returns the full set of
+   supported coded formats, irrespectively of the current ``OUTPUT`` format.
+
+5. While buffers are allocated on the ``CAPTURE`` queue, the client must not
+   change the format on the queue. Drivers will return the -EBUSY error code
+   for any such format change attempt.
+
+To summarize, setting formats and allocation must always start with the
+``CAPTURE`` queue and the ``CAPTURE`` queue is the master that governs the
+set of supported formats for the ``OUTPUT`` queue.
diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
index 12d43fe711cf..1822c66c2154 100644
--- a/Documentation/media/uapi/v4l/devices.rst
+++ b/Documentation/media/uapi/v4l/devices.rst
@@ -16,6 +16,7 @@ Interfaces
     dev-osd
     dev-codec
     dev-decoder
+    dev-encoder
     dev-effect
     dev-raw-vbi
     dev-sliced-vbi
diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
index ca5f2270a829..085089cd9577 100644
--- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
+++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
@@ -37,6 +37,11 @@ Single-planar format structure
 	inside the stream, when fed to a stateful mem2mem decoder, the fields
 	may be zero to rely on the decoder to detect the right values. For more
 	details see :ref:`decoder` and format descriptions.
+
+	For compressed formats on the CAPTURE side of a stateful mem2mem
+	encoder, the fields must be zero, since the coded size is expected to
+	be calculated internally by the encoder itself, based on the OUTPUT
+	side. For more details see :ref:`encoder` and format descriptions.
     * - __u32
       - ``pixelformat``
       - The pixel format or type of compression, set by the application.
diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
index 65dc096199ad..2ef6693b9499 100644
--- a/Documentation/media/uapi/v4l/v4l2.rst
+++ b/Documentation/media/uapi/v4l/v4l2.rst
@@ -56,6 +56,7 @@ Authors, in alphabetical order:
 - Figa, Tomasz <tfiga@chromium.org>
 
   - Documented the memory-to-memory decoder interface.
+  - Documented the memory-to-memory encoder interface.
 
 - H Schimek, Michael <mschimek@gmx.at>
 
@@ -68,6 +69,7 @@ Authors, in alphabetical order:
 - Osciak, Pawel <posciak@chromium.org>
 
   - Documented the memory-to-memory decoder interface.
+  - Documented the memory-to-memory encoder interface.
 
 - Osciak, Pawel <pawel@osciak.com>
 
diff --git a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
index 5ae8c933b1b9..d571c53e761a 100644
--- a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
+++ b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
@@ -50,19 +50,23 @@ currently only used by the STOP command and contains one bit: If the
 until the end of the current *Group Of Pictures*, otherwise it will stop
 immediately.
 
-A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
-call sends an implicit START command to the encoder if it has not been
-started yet. After a STOP command, :ref:`read() <func-read>` calls will read
+After a STOP command, :ref:`read() <func-read>` calls will read
 the remaining data buffered by the driver. When the buffer is empty,
 :ref:`read() <func-read>` will return zero and the next :ref:`read() <func-read>`
 call will restart the encoder.
 
+A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
+call sends an implicit START command to the encoder if it has not been
+started yet. Applies to both queues of mem2mem encoders.
+
 A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
 call of a streaming file descriptor sends an implicit immediate STOP to
-the encoder, and all buffered data is discarded.
+the encoder, and all buffered data is discarded. Applies to both queues of
+mem2mem encoders.
 
 These ioctls are optional, not all drivers may support them. They were
-introduced in Linux 2.6.21.
+introduced in Linux 2.6.21. They are, however, mandatory for stateful mem2mem
+encoders (as further documented in :ref:`encoder`).
 
 
 .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
@@ -107,16 +111,20 @@ introduced in Linux 2.6.21.
       - Stop the encoder. When the ``V4L2_ENC_CMD_STOP_AT_GOP_END`` flag
 	is set, encoding will continue until the end of the current *Group
 	Of Pictures*, otherwise encoding will stop immediately. When the
-	encoder is already stopped, this command does nothing. mem2mem
-	encoders will send a ``V4L2_EVENT_EOS`` event when the last frame
-	has been encoded and all frames are ready to be dequeued and will
-	set the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of
-	the capture queue to indicate there will be no new buffers
-	produced to dequeue. This buffer may be empty, indicated by the
-	driver setting the ``bytesused`` field to 0. Once the
-	``V4L2_BUF_FLAG_LAST`` flag was set, the
-	:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
-	but return an ``EPIPE`` error code.
+	encoder is already stopped, this command does nothing.
+
+	A stateful mem2mem encoder will proceed with encoding the source
+	buffers pending before the command is issued and then stop producing
+	new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
+	has been encoded and all frames are ready to be dequeued and will set
+	the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
+	capture queue to indicate there will be no new buffers produced to
+	dequeue. This buffer may be empty, indicated by the driver setting the
+	``bytesused`` field to 0. Once the buffer with the
+	``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
+	<VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
+	error code. No flags or other arguments are accepted in case of mem2mem
+	encoders.  See :ref:`encoder` for more details.
     * - ``V4L2_ENC_CMD_PAUSE``
       - 2
       - Pause the encoder. When the encoder has not been started yet, the
-- 
2.19.1.568.g152ad8e336-goog


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 0/2] Document memory-to-memory video codec interfaces
  2018-10-22 14:48 [PATCH v2 0/2] Document memory-to-memory video codec interfaces Tomasz Figa
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
  2018-10-22 14:49 ` [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface Tomasz Figa
@ 2018-10-22 15:41 ` Hans Verkuil
  2018-10-23  0:54   ` Tomasz Figa
  2 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2018-10-22 15:41 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Nicolas Dufresne,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Hi Tomasz, Alexandre,

Thank you for all your work! Much appreciated.

I've applied both the stateful and stateless patches on top of the request_api branch
and made the final result available here:

https://hverkuil.home.xs4all.nl/request-api/

Tomasz, I got two warnings when building the doc tree, the patch below fixes it.

Regards,

	Hans

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>

diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
index 09c7a6621b8e..5522453ac39f 100644
--- a/Documentation/media/uapi/v4l/dev-decoder.rst
+++ b/Documentation/media/uapi/v4l/dev-decoder.rst
@@ -972,11 +972,11 @@ sequence was started.

    .. warning::

-   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
-   are streaming. For compatibility reasons, the call to
-   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
-   streaming, but at the same time it will not initiate the `Drain` sequence
-   and so the steps described below would not be applicable.
+      The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
+      are streaming. For compatibility reasons, the call to
+      :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
+      streaming, but at the same time it will not initiate the `Drain` sequence
+      and so the steps described below would not be applicable.

 2. Any ``OUTPUT`` buffers queued by the client before the
    :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
index 41139e5e48eb..7f49a7149067 100644
--- a/Documentation/media/uapi/v4l/dev-encoder.rst
+++ b/Documentation/media/uapi/v4l/dev-encoder.rst
@@ -448,11 +448,11 @@ sequence was started.

    .. warning::

-   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
-   are streaming. For compatibility reasons, the call to
-   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
-   streaming, but at the same time it will not initiate the `Drain` sequence
-   and so the steps described below would not be applicable.
+      The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
+      are streaming. For compatibility reasons, the call to
+      :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
+      streaming, but at the same time it will not initiate the `Drain` sequence
+      and so the steps described below would not be applicable.

 2. Any ``OUTPUT`` buffers queued by the client before the
    :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as

^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 0/2] Document memory-to-memory video codec interfaces
  2018-10-22 15:41 ` [PATCH v2 0/2] Document memory-to-memory video codec interfaces Hans Verkuil
@ 2018-10-23  0:54   ` Tomasz Figa
  0 siblings, 0 replies; 41+ messages in thread
From: Tomasz Figa @ 2018-10-23  0:54 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot, kamil,
	a.hajda, Kyungmin Park, jtp.park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, todor.tomov, nicolas, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Hi Hans,

On Tue, Oct 23, 2018 at 12:41 AM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> Hi Tomasz, Alexandre,
>
> Thank you for all your work! Much appreciated.
>
> I've applied both the stateful and stateless patches on top of the request_api branch
> and made the final result available here:
>
> https://hverkuil.home.xs4all.nl/request-api/
>
> Tomasz, I got two warnings when building the doc tree, the patch below fixes it.
>
> Regards,
>
>         Hans
>
> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
>
> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> index 09c7a6621b8e..5522453ac39f 100644
> --- a/Documentation/media/uapi/v4l/dev-decoder.rst
> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
> @@ -972,11 +972,11 @@ sequence was started.
>
>     .. warning::
>
> -   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues

This should also have been s/sentence/sequence/.

> -   are streaming. For compatibility reasons, the call to
> -   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
> -   streaming, but at the same time it will not initiate the `Drain` sequence
> -   and so the steps described below would not be applicable.
> +      The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
> +      are streaming. For compatibility reasons, the call to
> +      :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
> +      streaming, but at the same time it will not initiate the `Drain` sequence
> +      and so the steps described below would not be applicable.
>
>  2. Any ``OUTPUT`` buffers queued by the client before the
>     :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
> diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
> index 41139e5e48eb..7f49a7149067 100644
> --- a/Documentation/media/uapi/v4l/dev-encoder.rst
> +++ b/Documentation/media/uapi/v4l/dev-encoder.rst
> @@ -448,11 +448,11 @@ sequence was started.
>
>     .. warning::
>
> -   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues

Ditto.

> -   are streaming. For compatibility reasons, the call to
> -   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
> -   streaming, but at the same time it will not initiate the `Drain` sequence
> -   and so the steps described below would not be applicable.
> +      The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
> +      are streaming. For compatibility reasons, the call to
> +      :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
> +      streaming, but at the same time it will not initiate the `Drain` sequence
> +      and so the steps described below would not be applicable.

Last minute changes after proof reading...

Thanks for fixing up and uploading the html version!

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
@ 2018-10-29  9:45   ` Stanimir Varbanov
  2018-10-29 10:06     ` Tomasz Figa
  2018-11-12 11:37   ` Hans Verkuil
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 41+ messages in thread
From: Stanimir Varbanov @ 2018-10-29  9:45 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Hans Verkuil,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Nicolas Dufresne, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan

Hi Tomasz,

On 10/22/2018 05:48 PM, Tomasz Figa wrote:
> Due to complexity of the video decoding process, the V4L2 drivers of
> stateful decoder hardware require specific sequences of V4L2 API calls
> to be followed. These include capability enumeration, initialization,
> decoding, seek, pause, dynamic resolution change, drain and end of
> stream.
> 
> Specifics of the above have been discussed during Media Workshops at
> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> originated at those events was later implemented by the drivers we already
> have merged in mainline, such as s5p-mfc or coda.
> 
> The only thing missing was the real specification included as a part of
> Linux Media documentation. Fix it now and document the decoder part of
> the Codec API.
> 
> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> ---
>  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
>  Documentation/media/uapi/v4l/devices.rst      |    1 +
>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
>  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
>  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
>  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
>  6 files changed, 1137 insertions(+), 15 deletions(-)
>  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> 
> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> new file mode 100644
> index 000000000000..09c7a6621b8e
> --- /dev/null
> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst

<cut>

> +Capture setup
> +=============
> +

<cut>

> +
> +2.  **Optional.** Acquire the visible resolution via
> +    :c:func:`VIDIOC_G_SELECTION`.
> +
> +    * **Required fields:**
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``target``
> +          set to ``V4L2_SEL_TGT_COMPOSE``
> +
> +    * **Return fields:**
> +
> +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
> +          the visible rectangle; it must fit within the frame buffer resolution
> +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
> +
> +    * The following selection targets are supported on ``CAPTURE``:
> +
> +      ``V4L2_SEL_TGT_CROP_BOUNDS``
> +          corresponds to the coded resolution of the stream
> +
> +      ``V4L2_SEL_TGT_CROP_DEFAULT``
> +          the rectangle covering the part of the ``CAPTURE`` buffer that
> +          contains meaningful picture data (visible area); width and height
> +          will be equal to the visible resolution of the stream
> +
> +      ``V4L2_SEL_TGT_CROP``
> +          the rectangle within the coded resolution to be output to
> +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
> +          hardware without additional compose/scaling capabilities

Hans should correct me if I'm wrong but V4L2_SEL_TGT_CROP_xxx are
applicable over OUTPUT queue type?

> +
> +      ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> +          the maximum rectangle within a ``CAPTURE`` buffer, which the cropped
> +          frame can be output into; equal to ``V4L2_SEL_TGT_CROP`` if the
> +          hardware does not support compose/scaling
> +
> +      ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> +          equal to ``V4L2_SEL_TGT_CROP``
> +
> +      ``V4L2_SEL_TGT_COMPOSE``
> +          the rectangle inside a ``CAPTURE`` buffer into which the cropped
> +          frame is written; defaults to ``V4L2_SEL_TGT_COMPOSE_DEFAULT``;
> +          read-only on hardware without additional compose/scaling capabilities
> +
> +      ``V4L2_SEL_TGT_COMPOSE_PADDED``
> +          the rectangle inside a ``CAPTURE`` buffer which is overwritten by the
> +          hardware; equal to ``V4L2_SEL_TGT_COMPOSE`` if the hardware does not
> +          write padding pixels
> +
> +    .. warning::
> +
> +       The values are guaranteed to be meaningful only after the decoder
> +       successfully parses the stream metadata. The client must not rely on the
> +       query before that happens.
> +

-- 
regards,
Stan

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-29  9:45   ` Stanimir Varbanov
@ 2018-10-29 10:06     ` Tomasz Figa
  2018-10-29 10:07       ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2018-10-29 10:06 UTC (permalink / raw)
  To: Stanimir Varbanov
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Hans Verkuil, Pawel Osciak,
	Alexandre Courbot, kamil, a.hajda, Kyungmin Park, jtp.park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	todor.tomov, nicolas, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan

Hi Stanimir,

On Mon, Oct 29, 2018 at 6:45 PM Stanimir Varbanov
<stanimir.varbanov@linaro.org> wrote:
>
> Hi Tomasz,
>
> On 10/22/2018 05:48 PM, Tomasz Figa wrote:
> > Due to complexity of the video decoding process, the V4L2 drivers of
> > stateful decoder hardware require specific sequences of V4L2 API calls
> > to be followed. These include capability enumeration, initialization,
> > decoding, seek, pause, dynamic resolution change, drain and end of
> > stream.
> >
> > Specifics of the above have been discussed during Media Workshops at
> > LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> > Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> > originated at those events was later implemented by the drivers we already
> > have merged in mainline, such as s5p-mfc or coda.
> >
> > The only thing missing was the real specification included as a part of
> > Linux Media documentation. Fix it now and document the decoder part of
> > the Codec API.
> >
> > Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> > ---
> >  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
> >  Documentation/media/uapi/v4l/devices.rst      |    1 +
> >  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
> >  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
> >  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
> >  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
> >  6 files changed, 1137 insertions(+), 15 deletions(-)
> >  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> >
> > diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> > new file mode 100644
> > index 000000000000..09c7a6621b8e
> > --- /dev/null
> > +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
>
> <cut>
>
> > +Capture setup
> > +=============
> > +
>
> <cut>
>
> > +
> > +2.  **Optional.** Acquire the visible resolution via
> > +    :c:func:`VIDIOC_G_SELECTION`.
> > +
> > +    * **Required fields:**
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``target``
> > +          set to ``V4L2_SEL_TGT_COMPOSE``
> > +
> > +    * **Return fields:**
> > +
> > +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +          the visible rectangle; it must fit within the frame buffer resolution
> > +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
> > +
> > +    * The following selection targets are supported on ``CAPTURE``:
> > +
> > +      ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +          corresponds to the coded resolution of the stream
> > +
> > +      ``V4L2_SEL_TGT_CROP_DEFAULT``
> > +          the rectangle covering the part of the ``CAPTURE`` buffer that
> > +          contains meaningful picture data (visible area); width and height
> > +          will be equal to the visible resolution of the stream
> > +
> > +      ``V4L2_SEL_TGT_CROP``
> > +          the rectangle within the coded resolution to be output to
> > +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
> > +          hardware without additional compose/scaling capabilities
>
> Hans should correct me if I'm wrong but V4L2_SEL_TGT_CROP_xxx are
> applicable over OUTPUT queue type?

There is no such restriction. CROP selection targets of an OUTPUT
queue apply to the video stream read from the buffers, COMPOSE targets
of an OUTPUT queue apply to the output of the queue and input to the
processing block (hardware) in case of mem2mem devices, then CROP
targets of a CAPTURE queue apply to the output of the processing and
SELECTION targets of a CAPTURE queue apply to the stream written to
the buffers.

For a decoder, the OUTPUT stream is just a sequence of bytes, so
selection API doesn't apply to it. The processing (decoding) produces
a video stream and so the necessary selection capabilities are exposed
on the CAPTURE queue.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-29 10:06     ` Tomasz Figa
@ 2018-10-29 10:07       ` Tomasz Figa
  0 siblings, 0 replies; 41+ messages in thread
From: Tomasz Figa @ 2018-10-29 10:07 UTC (permalink / raw)
  To: Stanimir Varbanov
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Hans Verkuil, Pawel Osciak,
	Alexandre Courbot, kamil, a.hajda, Kyungmin Park, jtp.park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	todor.tomov, nicolas, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan

On Mon, Oct 29, 2018 at 7:06 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> Hi Stanimir,
>
> On Mon, Oct 29, 2018 at 6:45 PM Stanimir Varbanov
> <stanimir.varbanov@linaro.org> wrote:
> >
> > Hi Tomasz,
> >
> > On 10/22/2018 05:48 PM, Tomasz Figa wrote:
> > > Due to complexity of the video decoding process, the V4L2 drivers of
> > > stateful decoder hardware require specific sequences of V4L2 API calls
> > > to be followed. These include capability enumeration, initialization,
> > > decoding, seek, pause, dynamic resolution change, drain and end of
> > > stream.
> > >
> > > Specifics of the above have been discussed during Media Workshops at
> > > LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> > > Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> > > originated at those events was later implemented by the drivers we already
> > > have merged in mainline, such as s5p-mfc or coda.
> > >
> > > The only thing missing was the real specification included as a part of
> > > Linux Media documentation. Fix it now and document the decoder part of
> > > the Codec API.
> > >
> > > Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> > > ---
> > >  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
> > >  Documentation/media/uapi/v4l/devices.rst      |    1 +
> > >  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
> > >  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
> > >  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
> > >  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
> > >  6 files changed, 1137 insertions(+), 15 deletions(-)
> > >  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> > >
> > > diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> > > new file mode 100644
> > > index 000000000000..09c7a6621b8e
> > > --- /dev/null
> > > +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
> >
> > <cut>
> >
> > > +Capture setup
> > > +=============
> > > +
> >
> > <cut>
> >
> > > +
> > > +2.  **Optional.** Acquire the visible resolution via
> > > +    :c:func:`VIDIOC_G_SELECTION`.
> > > +
> > > +    * **Required fields:**
> > > +
> > > +      ``type``
> > > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > > +
> > > +      ``target``
> > > +          set to ``V4L2_SEL_TGT_COMPOSE``
> > > +
> > > +    * **Return fields:**
> > > +
> > > +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > > +          the visible rectangle; it must fit within the frame buffer resolution
> > > +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
> > > +
> > > +    * The following selection targets are supported on ``CAPTURE``:
> > > +
> > > +      ``V4L2_SEL_TGT_CROP_BOUNDS``
> > > +          corresponds to the coded resolution of the stream
> > > +
> > > +      ``V4L2_SEL_TGT_CROP_DEFAULT``
> > > +          the rectangle covering the part of the ``CAPTURE`` buffer that
> > > +          contains meaningful picture data (visible area); width and height
> > > +          will be equal to the visible resolution of the stream
> > > +
> > > +      ``V4L2_SEL_TGT_CROP``
> > > +          the rectangle within the coded resolution to be output to
> > > +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
> > > +          hardware without additional compose/scaling capabilities
> >
> > Hans should correct me if I'm wrong but V4L2_SEL_TGT_CROP_xxx are
> > applicable over OUTPUT queue type?
>
> There is no such restriction. CROP selection targets of an OUTPUT
> queue apply to the video stream read from the buffers, COMPOSE targets
> of an OUTPUT queue apply to the output of the queue and input to the
> processing block (hardware) in case of mem2mem devices, then CROP
> targets of a CAPTURE queue apply to the output of the processing and
> SELECTION targets of a CAPTURE queue apply to the stream written to

I mean, COMPOSE targets. Sorry for the noise.

> the buffers.
>
> For a decoder, the OUTPUT stream is just a sequence of bytes, so
> selection API doesn't apply to it. The processing (decoding) produces
> a video stream and so the necessary selection capabilities are exposed
> on the CAPTURE queue.
>
> Best regards,
> Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
  2018-10-29  9:45   ` Stanimir Varbanov
@ 2018-11-12 11:37   ` Hans Verkuil
  2019-01-22 10:02     ` Tomasz Figa
  2018-11-12 15:04   ` Stanimir Varbanov
  2018-11-15 14:34   ` Hans Verkuil
  3 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2018-11-12 11:37 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Nicolas Dufresne,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Hi Tomasz,

A general note for the stateful and stateless patches: they describe specific
use-cases of the more generic Codec Interface, and as such should be one
level deeper in the section hierarchy.

I.e. instead of being section 4.6/7/8:

https://hverkuil.home.xs4all.nl/request-api/uapi/v4l/devices.html

they should be 4.5.1/2/3.

On 10/22/2018 04:48 PM, Tomasz Figa wrote:
> Due to complexity of the video decoding process, the V4L2 drivers of
> stateful decoder hardware require specific sequences of V4L2 API calls
> to be followed. These include capability enumeration, initialization,
> decoding, seek, pause, dynamic resolution change, drain and end of
> stream.
> 
> Specifics of the above have been discussed during Media Workshops at
> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> originated at those events was later implemented by the drivers we already
> have merged in mainline, such as s5p-mfc or coda.
> 
> The only thing missing was the real specification included as a part of
> Linux Media documentation. Fix it now and document the decoder part of
> the Codec API.
> 
> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> ---
>  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
>  Documentation/media/uapi/v4l/devices.rst      |    1 +
>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
>  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
>  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
>  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
>  6 files changed, 1137 insertions(+), 15 deletions(-)
>  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> 
> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> new file mode 100644
> index 000000000000..09c7a6621b8e
> --- /dev/null
> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
> @@ -0,0 +1,1082 @@
> +.. -*- coding: utf-8; mode: rst -*-
> +
> +.. _decoder:
> +
> +*************************************************
> +Memory-to-memory Stateful Video Decoder Interface
> +*************************************************
> +
> +A stateful video decoder takes complete chunks of the bitstream (e.g. Annex-B
> +H.264/HEVC stream, raw VP8/9 stream) and decodes them into raw video frames in
> +display order. The decoder is expected not to require any additional information
> +from the client to process these buffers.
> +
> +Performing software parsing, processing etc. of the stream in the driver in
> +order to support this interface is strongly discouraged. In case such
> +operations are needed, use of the Stateless Video Decoder Interface (in
> +development) is strongly advised.
> +
> +Conventions and notation used in this document
> +==============================================
> +
> +1. The general V4L2 API rules apply if not specified in this document
> +   otherwise.
> +
> +2. The meaning of words “must”, “may”, “should”, etc. is as per RFC
> +   2119.

Make this a link to https://tools.ietf.org/html/rfc2119

> +
> +3. All steps not marked “optional” are required.
> +
> +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used

, -> and

> +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,

, -> and

> +   unless specified otherwise.
> +
> +5. Single-plane API (see spec) and applicable structures may be used

plane -> planar

Replace 'spec' with a reference to
https://hverkuil.home.xs4all.nl/spec/uapi/v4l/planar-apis.html#planar-apis

> +   interchangeably with Multi-plane API, unless specified otherwise,

plane -> planar

> +   depending on decoder capabilities and following the general V4L2
> +   guidelines.
> +
> +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> +   [0..2]: i = 0, 1, 2.
> +
> +7. Given an ``OUTPUT`` buffer A, A’ represents a buffer on the ``CAPTURE``

A, A' -> A, then A'

> +   queue containing data (decoded frame/stream) that resulted from processing

'(decoded frame/stream)': not clear what is meant here. Can't you just drop it?
I think the meaning is clear without that text.

> +   buffer A.
> +
> +.. _decoder-glossary:
> +
> +Glossary
> +========
> +
> +CAPTURE
> +   the destination buffer queue; for decoder, the queue of buffers containing

decoder -> decoders

> +   decoded frames; for encoder, the queue of buffers containing encoded

encoder -> encoders

> +   bitstream; ``V4L2_BUF_TYPE_VIDEO_CAPTURE```` or

```` -> ``

> +   ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``; data are captured from the hardware

are -> is

> +   into ``CAPTURE`` buffers
> +
> +client
> +   application client communicating with the decoder or encoder implementing
> +   this interface
> +
> +coded format
> +   encoded/compressed video bitstream format (e.g. H.264, VP8, etc.); see
> +   also: raw format
> +
> +coded height
> +   height for given coded resolution
> +
> +coded resolution
> +   stream resolution in pixels aligned to codec and hardware requirements;
> +   typically visible resolution rounded up to full macroblocks;
> +   see also: visible resolution
> +
> +coded width
> +   width for given coded resolution
> +
> +decode order
> +   the order in which frames are decoded; may differ from display order if the
> +   coded format includes a feature of frame reordering; for decoders,
> +   ``OUTPUT`` buffers must be queued by the client in decode order; for
> +   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order
> +
> +destination
> +   data resulting from the decode process; ``CAPTURE``

Do you mean:

see ``CAPTURE``

Right now there is just a 'CAPTURE' word after the ';', which looks weird.

> +
> +display order
> +   the order in which frames must be displayed; for encoders, ``OUTPUT``
> +   buffers must be queued by the client in display order; for decoders,
> +   ``CAPTURE`` buffers must be returned by the decoder in display order
> +
> +DPB
> +   Decoded Picture Buffer; an H.264 term for a buffer that stores a decoded
> +   raw frame available for reference in further decoding steps.
> +
> +EOS
> +   end of stream
> +
> +IDR
> +   Instantaneous Decoder Refresh; a type of a keyframe in H.264-encoded stream,
> +   which clears the list of earlier reference frames (DPBs)
> +
> +keyframe
> +   an encoded frame that does not reference frames decoded earlier, i.e.
> +   can be decoded fully on its own.
> +
> +macroblock
> +   a processing unit in image and video compression formats based on linear
> +   block transforms (e.g. H.264, VP8, VP9); codec-specific, but for most of
> +   popular codecs the size is 16x16 samples (pixels)
> +
> +OUTPUT
> +   the source buffer queue; for decoders, the queue of buffers containing
> +   encoded bitstream; for encoders, the queue of buffers containing raw frames;
> +   ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or ``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE``; the
> +   hardware is fed with data from ``OUTPUT`` buffers
> +
> +PPS
> +   Picture Parameter Set; a type of metadata entity in H.264 bitstream
> +
> +raw format
> +   uncompressed format containing raw pixel data (e.g. YUV, RGB formats)
> +
> +resume point
> +   a point in the bitstream from which decoding may start/continue, without
> +   any previous state/data present, e.g.: a keyframe (VP8/VP9) or
> +   SPS/PPS/IDR sequence (H.264); a resume point is required to start decode
> +   of a new stream, or to resume decoding after a seek
> +
> +source
> +   data fed to the decoder or encoder; ``OUTPUT``

Should be: 'see ``OUTPUT``', I think.

> +
> +source height
> +   height in pixels for given source resolution; relevant to encoders only
> +
> +source resolution
> +   resolution in pixels of source frames being source to the encoder and
> +   subject to further cropping to the bounds of visible resolution; relevant to
> +   encoders only
> +
> +source width
> +   width in pixels for given source resolution; relevant to encoders only

I would drop these three terms: they are not used in this document since this
describes a decoder and not an encoder.

> +
> +SPS
> +   Sequence Parameter Set; a type of metadata entity in H.264 bitstream
> +
> +stream metadata
> +   additional (non-visual) information contained inside encoded bitstream;
> +   for example: coded resolution, visible resolution, codec profile
> +
> +visible height
> +   height for given visible resolution; display height
> +
> +visible resolution
> +   stream resolution of the visible picture, in pixels, to be used for
> +   display purposes; must be smaller or equal to coded resolution;
> +   display resolution
> +
> +visible width
> +   width for given visible resolution; display width
> +
> +State machine
> +=============
> +
> +.. kernel-render:: DOT
> +   :alt: DOT digraph of decoder state machine
> +   :caption: Decoder state machine
> +
> +   digraph decoder_state_machine {
> +       node [shape = doublecircle, label="Decoding"] Decoding;
> +
> +       node [shape = circle, label="Initialization"] Initialization;
> +       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
> +       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
> +       node [shape = circle, label="Stopped"] Stopped;
> +       node [shape = circle, label="Drain"] Drain;
> +       node [shape = circle, label="Seek"] Seek;
> +       node [shape = circle, label="End of stream"] EoS;
> +
> +       node [shape = point]; qi
> +       qi -> Initialization [ label = "open()" ];
> +
> +       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +
> +       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
> +
> +       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
> +       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> +       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
> +       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +       Decoding -> Decoding;
> +
> +       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       EoS -> Drain [ label = "Implicit\ndrain" ];
> +
> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> +       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
> +       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> +
> +       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
> +       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +   }
> +
> +Querying capabilities
> +=====================
> +
> +1. To enumerate the set of coded formats supported by the decoder, the
> +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> +
> +   * The full set of supported formats will be returned, regardless of the
> +     format set on ``CAPTURE``.
> +
> +2. To enumerate the set of supported raw formats, the client may call
> +   :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> +
> +   * Only the formats supported for the format currently active on ``OUTPUT``
> +     will be returned.
> +
> +   * In order to enumerate raw formats supported by a given coded format,
> +     the client must first set that coded format on ``OUTPUT`` and then
> +     enumerate formats on ``CAPTURE``.
> +
> +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> +   resolutions for a given format, passing desired pixel format in
> +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> +     formats will include all possible coded resolutions supported by the

formats -> format

> +     decoder for given coded pixel format.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> +     will include all possible frame buffer resolutions supported by the
> +     decoder for given raw pixel format and the coded format currently set on
> +     ``OUTPUT``.
> +
> +4. Supported profiles and levels for given format, if applicable, may be

given format -> the coded format currently set on ``OUTPUT``

> +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> +
> +Initialization
> +==============
> +
> +1. **Optional.** Enumerate supported ``OUTPUT`` formats and resolutions. See
> +   `Querying capabilities` above.

I would drop this first step. It's obvious and it actually has nothing to do
with initialization as such.

> +
> +2. Set the coded format on ``OUTPUT`` via :c:func:`VIDIOC_S_FMT`
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     ``pixelformat``
> +         a coded pixel format
> +
> +     ``width``, ``height``
> +         required only if cannot be parsed from the stream for the given
> +         coded format; optional otherwise - set to zero to ignore
> +
> +     ``sizeimage``
> +         desired size of ``OUTPUT`` buffers; the decoder may adjust it to
> +         match hardware requirements
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``sizeimage``
> +         adjusted size of ``CAPTURE`` buffers

CAPTURE -> OUTPUT

> +
> +   * If width and height are set to non-zero values, the ``CAPTURE`` format
> +     will be updated with an appropriate frame buffer resolution instantly.
> +     However, for coded formats that include stream resolution information,
> +     after the decoder is done parsing the information from the stream, it will
> +     update the ``CAPTURE`` format with new values and signal a source change
> +     event.

What if the initial width and height specified by userspace matches the parsed
width and height? Do you still get a source change event? I think you should
always get this event since there are other parameters that depend on the parsing
of the meta data.

But that should be made explicit here.

> +
> +   .. warning::

I'd call this a note rather than a warning.

> +
> +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
> +      format. The decoder will derive a new ``CAPTURE`` format from the
> +      ``OUTPUT`` format being set, including resolution, colorimetry
> +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
> +      must adjust it afterwards.
> +
> +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
> +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
> +    use more buffers than the minimum required by hardware/format.

Why is this useful? As far as I can tell only the s5p-mfc *encoder* supports
this control, so this seems pointless. And since the output queue gets a bitstream
I don't see any reason for reading this control in a decoder.

> +
> +    * **Required fields:**
> +
> +      ``id``
> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> +
> +    * **Return fields:**
> +
> +      ``value``
> +          the minimum number of ``OUTPUT`` buffers required for the currently
> +          set format
> +
> +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
> +    ``OUTPUT``.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          the actual number of buffers allocated
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +    .. note::
> +
> +       To allocate more than the minimum number of buffers (for pipeline
> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> +       control to get the minimum number of buffers required by the
> +       decoder/format, and pass the obtained value plus the number of
> +       additional buffers needed in the ``count`` field to
> +       :c:func:`VIDIOC_REQBUFS`.

As mentioned above, this makes no sense for stateful decoders IMHO.

> +
> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
> +    used to have more control over buffer allocation.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +      ``format``
> +          follows standard semantics
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          adjusted to the number of allocated buffers
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
> +
> +6.  **This step only applies to coded formats that contain resolution information
> +    in the stream.** Continue queuing/dequeuing bitstream buffers to/from the
> +    ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The
> +    buffers will be processed and returned to the client in order, until
> +    required metadata to configure the ``CAPTURE`` queue are found. This is
> +    indicated by the decoder sending a ``V4L2_EVENT_SOURCE_CHANGE`` event with
> +    ``V4L2_EVENT_SRC_CH_RESOLUTION`` source change type.
> +
> +    * It is not an error if the first buffer does not contain enough data for
> +      this to occur. Processing of the buffers will continue as long as more
> +      data is needed.
> +
> +    * If data in a buffer that triggers the event is required to decode the
> +      first frame, it will not be returned to the client, until the
> +      initialization sequence completes and the frame is decoded.
> +
> +    * If the client sets width and height of the ``OUTPUT`` format to 0,
> +      calling :c:func:`VIDIOC_G_FMT`, :c:func:`VIDIOC_S_FMT` or
> +      :c:func:`VIDIOC_TRY_FMT` on the ``CAPTURE`` queue will return the
> +      ``-EACCES`` error code, until the decoder configures ``CAPTURE`` format
> +      according to stream metadata.
> +
> +    .. important::
> +
> +       Any client query issued after the decoder queues the event will return
> +       values applying to the just parsed stream, including queue formats,
> +       selection rectangles and controls.
> +
> +    .. note::
> +
> +       A client capable of acquiring stream parameters from the bitstream on
> +       its own may attempt to set the width and height of the ``OUTPUT`` format
> +       to non-zero values matching the coded size of the stream, skip this step
> +       and continue with the `Capture setup` sequence. However, it must not
> +       rely on any driver queries regarding stream parameters, such as
> +       selection rectangles and controls, since the decoder has not parsed them
> +       from the stream yet. If the values configured by the client do not match
> +       those parsed by the decoder, a `Dynamic resolution change` will be
> +       triggered to reconfigure them.
> +
> +    .. note::
> +
> +       No decoded frames are produced during this phase.
> +
> +7.  Continue with the `Capture setup` sequence.
> +
> +Capture setup
> +=============
> +
> +1.  Call :c:func:`VIDIOC_G_FMT` on the ``CAPTURE`` queue to get format for the
> +    destination buffers parsed/decoded from the bitstream.
> +
> +    * **Required fields:**
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +    * **Return fields:**
> +
> +      ``width``, ``height``
> +          frame buffer resolution for the decoded frames
> +
> +      ``pixelformat``
> +          pixel format for decoded frames
> +
> +      ``num_planes`` (for _MPLANE ``type`` only)
> +          number of planes for pixelformat
> +
> +      ``sizeimage``, ``bytesperline``
> +          as per standard semantics; matching frame buffer format
> +
> +    .. note::
> +
> +       The value of ``pixelformat`` may be any pixel format supported by the
> +       decoder for the current stream. The decoder should choose a
> +       preferred/optimal format for the default configuration. For example, a
> +       YUV format may be preferred over an RGB format if an additional
> +       conversion step would be required for the latter.
> +
> +2.  **Optional.** Acquire the visible resolution via
> +    :c:func:`VIDIOC_G_SELECTION`.
> +
> +    * **Required fields:**
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``target``
> +          set to ``V4L2_SEL_TGT_COMPOSE``
> +
> +    * **Return fields:**
> +
> +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
> +          the visible rectangle; it must fit within the frame buffer resolution
> +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
> +
> +    * The following selection targets are supported on ``CAPTURE``:
> +
> +      ``V4L2_SEL_TGT_CROP_BOUNDS``
> +          corresponds to the coded resolution of the stream
> +
> +      ``V4L2_SEL_TGT_CROP_DEFAULT``
> +          the rectangle covering the part of the ``CAPTURE`` buffer that
> +          contains meaningful picture data (visible area); width and height
> +          will be equal to the visible resolution of the stream
> +
> +      ``V4L2_SEL_TGT_CROP``
> +          the rectangle within the coded resolution to be output to
> +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
> +          hardware without additional compose/scaling capabilities
> +
> +      ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> +          the maximum rectangle within a ``CAPTURE`` buffer, which the cropped
> +          frame can be output into; equal to ``V4L2_SEL_TGT_CROP`` if the
> +          hardware does not support compose/scaling
> +
> +      ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> +          equal to ``V4L2_SEL_TGT_CROP``
> +
> +      ``V4L2_SEL_TGT_COMPOSE``
> +          the rectangle inside a ``CAPTURE`` buffer into which the cropped
> +          frame is written; defaults to ``V4L2_SEL_TGT_COMPOSE_DEFAULT``;
> +          read-only on hardware without additional compose/scaling capabilities
> +
> +      ``V4L2_SEL_TGT_COMPOSE_PADDED``
> +          the rectangle inside a ``CAPTURE`` buffer which is overwritten by the
> +          hardware; equal to ``V4L2_SEL_TGT_COMPOSE`` if the hardware does not
> +          write padding pixels
> +
> +    .. warning::
> +
> +       The values are guaranteed to be meaningful only after the decoder
> +       successfully parses the stream metadata. The client must not rely on the
> +       query before that happens.
> +
> +3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
> +    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
> +    buffers than the minimum required by hardware/format.

Is this step optional or required? Can it change when a resolution change occurs?
How does this relate to the checks for the minimum number of buffers that REQBUFS
does?

The 'This is useful if' sentence suggests that it is optional, but I think that
sentence just confuses the issue.

> +
> +    * **Required fields:**
> +
> +      ``id``
> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
> +
> +    * **Return fields:**
> +
> +      ``value``
> +          minimum number of buffers required to decode the stream parsed in
> +          this initialization sequence.
> +
> +    .. note::
> +
> +       The minimum number of buffers must be at least the number required to
> +       successfully decode the current stream. This may for example be the
> +       required DPB size for an H.264 stream given the parsed stream
> +       configuration (resolution, level).
> +
> +    .. warning::
> +
> +       The value is guaranteed to be meaningful only after the decoder
> +       successfully parses the stream metadata. The client must not rely on the
> +       query before that happens.
> +
> +4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
> +    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
> +    client may use this ioctl to discover which raw formats are supported for
> +    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.

Can the list returned here differ from the list returned in the 'Querying capabilities'
step? If so, then I assume it will always be a subset of what was returned in
the 'Querying' step?

> +
> +    .. important::
> +
> +       The decoder will return only formats supported for the currently
> +       established coded format, as per the ``OUTPUT`` format and/or stream
> +       metadata parsed in this initialization sequence, even if more formats
> +       may be supported by the decoder in general.
> +
> +       For example, a decoder may support YUV and RGB formats for resolutions
> +       1920x1088 and lower, but only YUV for higher resolutions (due to
> +       hardware limitations). After parsing a resolution of 1920x1088 or lower,
> +       :c:func:`VIDIOC_ENUM_FMT` may return a set of YUV and RGB pixel formats,
> +       but after parsing resolution higher than 1920x1088, the decoder will not
> +       return RGB, unsupported for this resolution.
> +
> +       However, subsequent resolution change event triggered after
> +       discovering a resolution change within the same stream may switch
> +       the stream into a lower resolution and :c:func:`VIDIOC_ENUM_FMT`
> +       would return RGB formats again in that case.
> +
> +5.  **Optional.** Set the ``CAPTURE`` format via :c:func:`VIDIOC_S_FMT` on the
> +    ``CAPTURE`` queue. The client may choose a different format than
> +    selected/suggested by the decoder in :c:func:`VIDIOC_G_FMT`.
> +
> +    * **Required fields:**
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``pixelformat``
> +          a raw pixel format
> +
> +    .. note::
> +
> +       The client may use :c:func:`VIDIOC_ENUM_FMT` after receiving the
> +       ``V4L2_EVENT_SOURCE_CHANGE`` event to find out the set of raw formats
> +       supported for the stream.

Isn't this a duplicate of step 4? I think this note can be dropped.

> +
> +6.  If all the following conditions are met, the client may resume the decoding
> +    instantly:
> +
> +    * ``sizeimage`` of the new format (determined in previous steps) is less
> +      than or equal to the size of currently allocated buffers,
> +
> +    * the number of buffers currently allocated is greater than or equal to the
> +      minimum number of buffers acquired in previous steps. To fulfill this
> +      requirement, the client may use :c:func:`VIDIOC_CREATE_BUFS` to add new
> +      buffers.
> +
> +    In such case, the remaining steps do not apply and the client may resume
> +    the decoding by one of the following actions:
> +
> +    * if the ``CAPTURE`` queue is streaming, call :c:func:`VIDIOC_DECODER_CMD`
> +      with the ``V4L2_DEC_CMD_START`` command,
> +
> +    * if the ``CAPTURE`` queue is not streaming, call :c:func:`VIDIOC_STREAMON`
> +      on the ``CAPTURE`` queue.
> +
> +    However, if the client intends to change the buffer set, to lower
> +    memory usage or for any other reasons, it may be achieved by following
> +    the steps below.
> +
> +7.  **If the** ``CAPTURE`` **queue is streaming,** keep queuing and dequeuing
> +    buffers on the ``CAPTURE`` queue until a buffer marked with the
> +    ``V4L2_BUF_FLAG_LAST`` flag is dequeued.
> +
> +8.  **If the** ``CAPTURE`` **queue is streaming,** call :c:func:`VIDIOC_STREAMOFF`
> +    on the ``CAPTURE`` queue to stop streaming.
> +
> +    .. warning::
> +
> +       The ``OUTPUT`` queue must remain streaming. Calling
> +       :c:func:`VIDIOC_STREAMOFF` on it would abort the sequence and trigger a
> +       seek.
> +
> +9.  **If the** ``CAPTURE`` **queue has buffers allocated,** free the ``CAPTURE``
> +    buffers using :c:func:`VIDIOC_REQBUFS`.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          set to 0
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +10. Allocate ``CAPTURE`` buffers via :c:func:`VIDIOC_REQBUFS` on the
> +    ``CAPTURE`` queue.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          actual number of buffers allocated
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +    .. note::
> +
> +       To allocate more than the minimum number of buffers (for pipeline
> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
> +       control to get the minimum number of buffers required, and pass the
> +       obtained value plus the number of additional buffers needed in the
> +       ``count`` field to :c:func:`VIDIOC_REQBUFS`.

Same question as before: is it optional or required to obtain the value of this
control? And can't the driver just set the min_buffers_needed field in the capture
vb2_queue to the minimum number of buffers that are required?

Should you be allowed to allocate buffers at all if the capture format isn't
known? I.e. width/height is still 0. It makes no sense to call REQBUFS since
there is no format size known that REQBUFS can use.

> +
> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``CAPTURE`` queue can be
> +    used to have more control over buffer allocation. For example, by
> +    allocating buffers larger than the current ``CAPTURE`` format, future
> +    resolution changes can be accommodated.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +      ``format``
> +          a format representing the maximum framebuffer resolution to be
> +          accommodated by newly allocated buffers
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          adjusted to the number of allocated buffers
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +    .. note::
> +
> +       To allocate buffers for a format different than parsed from the stream
> +       metadata, the client must proceed as follows, before the metadata
> +       parsing is initiated:
> +
> +       * set width and height of the ``OUTPUT`` format to desired coded resolution to
> +         let the decoder configure the ``CAPTURE`` format appropriately,
> +
> +       * query the ``CAPTURE`` format using :c:func:`VIDIOC_G_FMT` and save it
> +         until this step.
> +
> +       The format obtained in the query may be then used with
> +       :c:func:`VIDIOC_CREATE_BUFS` in this step to allocate the buffers.
> +
> +11. Call :c:func:`VIDIOC_STREAMON` on the ``CAPTURE`` queue to start decoding
> +    frames.
> +
> +Decoding
> +========
> +
> +This state is reached after the `Capture setup` sequence finishes succesfully.
> +In this state, the client queues and dequeues buffers to both queues via
> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> +semantics.
> +
> +The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
> +format and may be affected by codec-specific extended controls, as stated in
> +the documentation of each format.
> +
> +Both queues operate independently, following the standard behavior of V4L2
> +buffer queues and memory-to-memory devices. In addition, the order of decoded
> +frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
> +coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
> +format, e.g. frame reordering.
> +
> +The client must not assume any direct relationship between ``CAPTURE``
> +and ``OUTPUT`` buffers and any specific timing of buffers becoming
> +available to dequeue. Specifically,
> +
> +* a buffer queued to ``OUTPUT`` may result in no buffers being produced
> +  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
> +  metadata syntax structures are present in it),
> +
> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
> +  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
> +  returning a decoded frame allowed the decoder to return a frame that
> +  preceded it in decode, but succeeded it in the display order),
> +
> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> +  ``CAPTURE`` later into decode process, and/or after processing further
> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> +  reordering is used,
> +
> +* buffers may become available on the ``CAPTURE`` queue without additional
> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> +  available at later time, due to specifics of the decoding process.
> +
> +.. note::
> +
> +   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> +   originated from, the client can set the ``timestamp`` field of the
> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> +   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
> +   will have their ``timestamp`` field set to the same value when dequeued.
> +
> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
> +   one ``CAPTURE`` buffer, the following cases are defined:
> +
> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> +
> +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
> +     the ``OUTPUT`` buffer queued last will be copied,
> +
> +   * the decoding order differs from the display order (i.e. the
> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> +     and thus monotonicity of the timestamps cannot be guaranteed.

Should stateful codecs be required to support 'tags'? See:

https://www.mail-archive.com/linux-media@vger.kernel.org/msg136314.html

To be honest, I'm inclined to require this for all m2m devices eventually.

> +
> +During the decoding, the decoder may initiate one of the special sequences, as
> +listed below. The sequences will result in the decoder returning all the
> +``CAPTURE`` buffers that originated from all the ``OUTPUT`` buffers processed
> +before the sequence started. Last of the buffers will have the
> +``V4L2_BUF_FLAG_LAST`` flag set. To determine the sequence to follow, the client
> +must check if there is any pending event and,
> +
> +* if a ``V4L2_EVENT_SOURCE_CHANGE`` event is pending, the `Dynamic resolution
> +  change` sequence needs to be followed,
> +
> +* if a ``V4L2_EVENT_EOS`` event is pending, the `End of stream` sequence needs
> +  to be followed.
> +
> +Some of the sequences can be intermixed with each other and need to be handled
> +as they happen. The exact operation is documented for each sequence.
> +
> +Seek
> +====
> +
> +Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
> +The seek does not require any specific operation on the ``CAPTURE`` queue, but
> +it may be affected as per normal decoder operation.
> +
> +1. Stop the ``OUTPUT`` queue to begin the seek sequence via
> +   :c:func:`VIDIOC_STREAMOFF`.
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
> +     treated as returned to the client (following standard semantics).
> +
> +2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +   * The decoder will start accepting new source bitstream buffers after the
> +     call returns.
> +
> +3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
> +   queue until a suitable resume point is found.
> +
> +   .. note::
> +
> +      There is no requirement to begin queuing coded data starting exactly
> +      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
> +      buffers will be processed and returned to the client until a suitable
> +      resume point is found.  While looking for a resume point, the decoder
> +      should not produce any decoded frames into ``CAPTURE`` buffers.
> +
> +      Some hardware is known to mishandle seeks to a non-resume point. Such an
> +      operation may result in an unspecified number of corrupted decoded frames
> +      being made available on the ``CAPTURE`` queue. Drivers must ensure that
> +      no fatal decoding errors or crashes occur, and implement any necessary
> +      handling and workarounds for hardware issues related to seek operations.

Is there a requirement that those corrupted frames have V4L2_BUF_FLAG_ERROR set?
I.e., can userspace detect those currupted frames?

> +
> +   .. warning::
> +
> +      In case of the H.264 codec, the client must take care not to seek over a
> +      change of SPS/PPS. Even though the target frame could be a keyframe, the
> +      stale SPS/PPS inside decoder state would lead to undefined results when
> +      decoding. Although the decoder must handle such case without a crash or a
> +      fatal decode error, the client must not expect a sensible decode output.
> +
> +4. After a resume point is found, the decoder will start returning ``CAPTURE``
> +   buffers containing decoded frames.
> +
> +.. important::
> +
> +   A seek may result in the `Dynamic resolution change` sequence being
> +   iniitated, due to the seek target having decoding parameters different from
> +   the part of the stream decoded before the seek. The sequence must be handled
> +   as per normal decoder operation.
> +
> +.. warning::
> +
> +   It is not specified when the ``CAPTURE`` queue starts producing buffers
> +   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
> +   as it operates independently from the ``OUTPUT`` queue.
> +
> +   The decoder may return a number of remaining ``CAPTURE`` buffers containing
> +   decoded frames originating from the ``OUTPUT`` buffers queued before the
> +   seek sequence is performed.
> +
> +   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
> +   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
> +   queued before the seek sequence may have matching ``CAPTURE`` buffers
> +   produced.  For example, given the sequence of operations on the
> +   ``OUTPUT`` queue:
> +
> +     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
> +
> +   any of the following results on the ``CAPTURE`` queue is allowed:
> +
> +     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.

Isn't it the case that if you would want to avoid that, then you should call
DECODER_STOP, wait for the last buffer on the CAPTURE queue, then seek and
call DECODER_START. If you do that, then you should always get {A’, B’, G’, H’}.
(basically following the Drain sequence).

Admittedly, you typically want to do an instantaneous seek, so this is probably
not what you want to do normally.

It might help to have this documented in a separate note.

> +
> +.. note::
> +
> +   To achieve instantaneous seek, the client may restart streaming on the
> +   ``CAPTURE`` queue too to discard decoded, but not yet dequeued buffers.
> +
> +Dynamic resolution change
> +=========================
> +
> +Streams that include resolution metadata in the bitstream may require switching
> +to a different resolution during the decoding.
> +
> +The sequence starts when the decoder detects a coded frame with one or more of
> +the following parameters different from previously established (and reflected
> +by corresponding queries):
> +
> +* coded resolution (``OUTPUT`` width and height),
> +
> +* visible resolution (selection rectangles),
> +
> +* the minimum number of buffers needed for decoding.
> +
> +Whenever that happens, the decoder must proceed as follows:
> +
> +1.  After encountering a resolution change in the stream, the decoder sends a
> +    ``V4L2_EVENT_SOURCE_CHANGE`` event with source change type set to
> +    ``V4L2_EVENT_SRC_CH_RESOLUTION``.
> +
> +    .. important::
> +
> +       Any client query issued after the decoder queues the event will return
> +       values applying to the stream after the resolution change, including
> +       queue formats, selection rectangles and controls.
> +
> +2.  The decoder will then process and decode all remaining buffers from before
> +    the resolution change point.
> +
> +    * The last buffer from before the change must be marked with the
> +      ``V4L2_BUF_FLAG_LAST`` flag, similarly to the `Drain` sequence above.
> +
> +    .. warning::
> +
> +       The last buffer may be empty (with :c:type:`v4l2_buffer` ``bytesused``
> +       = 0) and in such case it must be ignored by the client, as it does not
> +       contain a decoded frame.
> +
> +    .. note::
> +
> +       Any attempt to dequeue more buffers beyond the buffer marked with
> +       ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> +       :c:func:`VIDIOC_DQBUF`.
> +
> +The client must continue the sequence as described below to continue the
> +decoding process.
> +
> +1.  Dequeue the source change event.
> +
> +    .. important::
> +
> +       A source change triggers an implicit decoder drain, similar to the
> +       explicit `Drain` sequence. The decoder is stopped after it completes.
> +       The decoding process must be resumed with either a pair of calls to
> +       :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> +       ``CAPTURE`` queue, or a call to :c:func:`VIDIOC_DECODER_CMD` with the
> +       ``V4L2_DEC_CMD_START`` command.
> +
> +2.  Continue with the `Capture setup` sequence.
> +
> +.. note::
> +
> +   During the resolution change sequence, the ``OUTPUT`` queue must remain
> +   streaming. Calling :c:func:`VIDIOC_STREAMOFF` on the ``OUTPUT`` queue would
> +   abort the sequence and initiate a seek.
> +
> +   In principle, the ``OUTPUT`` queue operates separately from the ``CAPTURE``
> +   queue and this remains true for the duration of the entire resolution change
> +   sequence as well.
> +
> +   The client should, for best performance and simplicity, keep queuing/dequeuing
> +   buffers to/from the ``OUTPUT`` queue even while processing this sequence.
> +
> +Drain
> +=====
> +
> +To ensure that all queued ``OUTPUT`` buffers have been processed and related
> +``CAPTURE`` buffers output to the client, the client must follow the drain
> +sequence described below. After the drain sequence ends, the client has
> +received all decoded frames for all ``OUTPUT`` buffers queued before the
> +sequence was started.
> +
> +1. Begin drain by issuing :c:func:`VIDIOC_DECODER_CMD`.
> +
> +   * **Required fields:**
> +
> +     ``cmd``
> +         set to ``V4L2_DEC_CMD_STOP``
> +
> +     ``flags``
> +         set to 0
> +
> +     ``pts``
> +         set to 0
> +
> +   .. warning::
> +
> +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues

'sentence'? You mean 'decoder command'?

> +   are streaming. For compatibility reasons, the call to
> +   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
> +   streaming, but at the same time it will not initiate the `Drain` sequence
> +   and so the steps described below would not be applicable.
> +
> +2. Any ``OUTPUT`` buffers queued by the client before the
> +   :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
> +   normal. The client must continue to handle both queues independently,
> +   similarly to normal decode operation. This includes,
> +
> +   * handling any operations triggered as a result of processing those buffers,
> +     such as the `Dynamic resolution change` sequence, before continuing with
> +     the drain sequence,
> +
> +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
> +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
> +
> +     .. warning::
> +
> +        The last buffer may be empty (with :c:type:`v4l2_buffer`
> +        ``bytesused`` = 0) and in such case it must be ignored by the client,
> +        as it does not contain a decoded frame.
> +
> +     .. note::
> +
> +        Any attempt to dequeue more buffers beyond the buffer marked with
> +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> +        :c:func:`VIDIOC_DQBUF`.
> +
> +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
> +     before the ``V4L2_DEC_CMD_STOP`` command are dequeued.
> +
> +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribed to it.
> +
> +   .. note::
> +
> +      For backwards compatibility, the decoder will signal a ``V4L2_EVENT_EOS``
> +      event when the last the last frame has been decoded and all frames are

'the last the last' -> the last

> +      ready to be dequeued. It is a deprecated behavior and the client must not
> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> +      instead.
> +
> +3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
> +   and the last ``CAPTURE`` buffer are dequeued, the decoder is stopped and it

This sentence is a bit confusing. This is better IMHO:

3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
   are dequeued and the last ``CAPTURE`` buffer is dequeued, the decoder is stopped and it

> +   will accept, but not process any newly queued ``OUTPUT`` buffers until the

process any -> process, any

> +   client issues any of the following operations:
> +
> +   * ``V4L2_DEC_CMD_START`` - the decoder will resume the operation normally,
> +
> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> +     ``CAPTURE`` queue - the decoder will resume the operation normally,
> +     however any ``CAPTURE`` buffers still in the queue will be returned to the
> +     client,
> +
> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> +     ``OUTPUT`` queue - any pending source buffers will be returned to the
> +     client and the `Seek` sequence will be triggered.
> +
> +.. note::
> +
> +   Once the drain sequence is initiated, the client needs to drive it to
> +   completion, as described by the steps above, unless it aborts the process by
> +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
> +   queues.  The client is not allowed to issue ``V4L2_DEC_CMD_START`` or
> +   ``V4L2_DEC_CMD_STOP`` again while the drain sequence is in progress and they
> +   will fail with -EBUSY error code if attempted.
> +
> +   Although mandatory, the availability of decoder commands may be queried
> +   using :c:func:`VIDIOC_TRY_DECODER_CMD`.
> +
> +End of stream
> +=============
> +
> +If the decoder encounters an end of stream marking in the stream, the decoder
> +will initiate the `Drain` sequence, which the client must handle as described
> +above, skipping the initial :c:func:`VIDIOC_DECODER_CMD`.
> +
> +Commit points
> +=============
> +
> +Setting formats and allocating buffers trigger changes in the behavior of the
> +decoder.
> +
> +1. Setting the format on the ``OUTPUT`` queue may change the set of formats
> +   supported/advertised on the ``CAPTURE`` queue. In particular, it also means
> +   that the ``CAPTURE`` format may be reset and the client must not rely on the
> +   previously set format being preserved.
> +
> +2. Enumerating formats on the ``CAPTURE`` queue always returns only formats
> +   supported for the current ``OUTPUT`` format.
> +
> +3. Setting the format on the ``CAPTURE`` queue does not change the list of
> +   formats available on the ``OUTPUT`` queue. An attempt to set the ``CAPTURE``
> +   format that is not supported for the currently selected ``OUTPUT`` format
> +   will result in the decoder adjusting the requested ``CAPTURE`` format to a
> +   supported one.
> +
> +4. Enumerating formats on the ``OUTPUT`` queue always returns the full set of
> +   supported coded formats, irrespectively of the current ``CAPTURE`` format.
> +
> +5. While buffers are allocated on the ``OUTPUT`` queue, the client must not
> +   change the format on the queue. Drivers will return the -EBUSY error code
> +   for any such format change attempt.
> +
> +To summarize, setting formats and allocation must always start with the
> +``OUTPUT`` queue and the ``OUTPUT`` queue is the master that governs the
> +set of supported formats for the ``CAPTURE`` queue.
> diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
> index fb7f8c26cf09..12d43fe711cf 100644
> --- a/Documentation/media/uapi/v4l/devices.rst
> +++ b/Documentation/media/uapi/v4l/devices.rst
> @@ -15,6 +15,7 @@ Interfaces
>      dev-output
>      dev-osd
>      dev-codec
> +    dev-decoder
>      dev-effect
>      dev-raw-vbi
>      dev-sliced-vbi
> diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> index 826f2305da01..ca5f2270a829 100644
> --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> @@ -32,6 +32,11 @@ Single-planar format structure
>  	to a multiple of the scale factor of any smaller planes. For
>  	example when the image format is YUV 4:2:0, ``width`` and
>  	``height`` must be multiples of two.
> +
> +	For compressed formats that contain the resolution information encoded
> +	inside the stream, when fed to a stateful mem2mem decoder, the fields
> +	may be zero to rely on the decoder to detect the right values. For more
> +	details see :ref:`decoder` and format descriptions.
>      * - __u32
>        - ``pixelformat``
>        - The pixel format or type of compression, set by the application.
> diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
> index b89e5621ae69..65dc096199ad 100644
> --- a/Documentation/media/uapi/v4l/v4l2.rst
> +++ b/Documentation/media/uapi/v4l/v4l2.rst
> @@ -53,6 +53,10 @@ Authors, in alphabetical order:
>  
>    - Original author of the V4L2 API and documentation.
>  
> +- Figa, Tomasz <tfiga@chromium.org>
> +
> +  - Documented the memory-to-memory decoder interface.
> +
>  - H Schimek, Michael <mschimek@gmx.at>
>  
>    - Original author of the V4L2 API and documentation.
> @@ -61,6 +65,10 @@ Authors, in alphabetical order:
>  
>    - Documented the Digital Video timings API.
>  
> +- Osciak, Pawel <posciak@chromium.org>
> +
> +  - Documented the memory-to-memory decoder interface.
> +
>  - Osciak, Pawel <pawel@osciak.com>
>  
>    - Designed and documented the multi-planar API.
> @@ -85,7 +93,7 @@ Authors, in alphabetical order:
>  
>    - Designed and documented the VIDIOC_LOG_STATUS ioctl, the extended control ioctls, major parts of the sliced VBI API, the MPEG encoder and decoder APIs and the DV Timings API.
>  
> -**Copyright** |copy| 1999-2016: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari.
> +**Copyright** |copy| 1999-2018: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari, Tomasz Figa
>  
>  Except when explicitly stated as GPL, programming examples within this
>  part can be used and distributed without restrictions.
> diff --git a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> index 85c916b0ce07..2f73fe22a9cd 100644
> --- a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> +++ b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> @@ -49,14 +49,16 @@ The ``cmd`` field must contain the command code. Some commands use the
>  
>  A :ref:`write() <func-write>` or :ref:`VIDIOC_STREAMON`
>  call sends an implicit START command to the decoder if it has not been
> -started yet.
> +started yet. Applies to both queues of mem2mem decoders.
>  
>  A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
>  call of a streaming file descriptor sends an implicit immediate STOP
> -command to the decoder, and all buffered data is discarded.
> +command to the decoder, and all buffered data is discarded. Applies to both
> +queues of mem2mem decoders.
>  
> -These ioctls are optional, not all drivers may support them. They were
> -introduced in Linux 3.3.
> +In principle, these ioctls are optional, not all drivers may support them. They were
> +introduced in Linux 3.3. They are, however, mandatory for stateful mem2mem decoders
> +(as further documented in :ref:`decoder`).
>  
>  
>  .. tabularcolumns:: |p{1.1cm}|p{2.4cm}|p{1.2cm}|p{1.6cm}|p{10.6cm}|
> @@ -160,26 +162,36 @@ introduced in Linux 3.3.
>  	``V4L2_DEC_CMD_RESUME`` for that. This command has one flag:
>  	``V4L2_DEC_CMD_START_MUTE_AUDIO``. If set, then audio will be
>  	muted when playing back at a non-standard speed.
> +
> +	For stateful mem2mem decoders, the command may be also used to restart
> +	the decoder in case of an implicit stop initiated by the decoder
> +	itself, without the ``V4L2_DEC_CMD_STOP`` being called explicitly.
> +	No flags or other arguments are accepted in case of mem2mem decoders.
> +	See :ref:`decoder` for more details.
>      * - ``V4L2_DEC_CMD_STOP``
>        - 1
>        - Stop the decoder. When the decoder is already stopped, this
>  	command does nothing. This command has two flags: if
>  	``V4L2_DEC_CMD_STOP_TO_BLACK`` is set, then the decoder will set
>  	the picture to black after it stopped decoding. Otherwise the last
> -	image will repeat. mem2mem decoders will stop producing new frames
> -	altogether. They will send a ``V4L2_EVENT_EOS`` event when the
> -	last frame has been decoded and all frames are ready to be
> -	dequeued and will set the ``V4L2_BUF_FLAG_LAST`` buffer flag on
> -	the last buffer of the capture queue to indicate there will be no
> -	new buffers produced to dequeue. This buffer may be empty,
> -	indicated by the driver setting the ``bytesused`` field to 0. Once
> -	the ``V4L2_BUF_FLAG_LAST`` flag was set, the
> -	:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
> -	but return an ``EPIPE`` error code. If
> +	image will repeat. If
>  	``V4L2_DEC_CMD_STOP_IMMEDIATELY`` is set, then the decoder stops
>  	immediately (ignoring the ``pts`` value), otherwise it will keep
>  	decoding until timestamp >= pts or until the last of the pending
>  	data from its internal buffers was decoded.
> +
> +	A stateful mem2mem decoder will proceed with decoding the source
> +	buffers pending before the command is issued and then stop producing
> +	new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
> +	has been decoded and all frames are ready to be dequeued and will set
> +	the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
> +	capture queue to indicate there will be no new buffers produced to
> +	dequeue. This buffer may be empty, indicated by the driver setting the
> +	``bytesused`` field to 0. Once the buffer with the
> +	``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
> +	<VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
> +	error code. No flags or other arguments are accepted in case of mem2mem
> +	decoders.  See :ref:`decoder` for more details.
>      * - ``V4L2_DEC_CMD_PAUSE``
>        - 2
>        - Pause the decoder. When the decoder has not been started yet, the
> diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> index 3ead350e099f..0fc0b78a943e 100644
> --- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> +++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> @@ -53,6 +53,13 @@ devices that is either the struct
>  member. When the requested buffer type is not supported drivers return
>  an ``EINVAL`` error code.
>  
> +A stateful mem2mem decoder will not allow operations on the
> +``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
> +buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
> +``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
> +operation is attempted, drivers return an ``EACCES`` error code. Refer to
> +:ref:`decoder` for more details.

This isn't right. EACCES is returned as long as the output format resolution is
unknown. If it is set explicitly, then this will work without an error.

> +
>  To change the current format parameters applications initialize the
>  ``type`` field and all fields of the respective ``fmt`` union member.
>  For details see the documentation of the various devices types in
> @@ -145,6 +152,13 @@ On success 0 is returned, on error -1 and the ``errno`` variable is set
>  appropriately. The generic error codes are described at the
>  :ref:`Generic Error Codes <gen-errors>` chapter.
>  
> +EACCES
> +    The format is not accessible until another buffer type is configured.
> +    Relevant for the V4L2_BUF_TYPE_VIDEO_CAPTURE and
> +    V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE buffer types of mem2mem decoders, which
> +    require the format of V4L2_BUF_TYPE_VIDEO_OUTPUT or
> +    V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE buffer type to be configured first.

Ditto.

> +
>  EINVAL
>      The struct :c:type:`v4l2_format` ``type`` field is
>      invalid or the requested buffer type not supported.
> 

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-10-22 14:49 ` [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface Tomasz Figa
@ 2018-11-12 13:23   ` Hans Verkuil
  2018-11-17  4:18     ` Nicolas Dufresne
  2019-01-23  9:52     ` Tomasz Figa
  0 siblings, 2 replies; 41+ messages in thread
From: Hans Verkuil @ 2018-11-12 13:23 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Nicolas Dufresne,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> Due to complexity of the video encoding process, the V4L2 drivers of
> stateful encoder hardware require specific sequences of V4L2 API calls
> to be followed. These include capability enumeration, initialization,
> encoding, encode parameters change, drain and reset.
> 
> Specifics of the above have been discussed during Media Workshops at
> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> originated at those events was later implemented by the drivers we already
> have merged in mainline, such as s5p-mfc or coda.
> 
> The only thing missing was the real specification included as a part of
> Linux Media documentation. Fix it now and document the encoder part of
> the Codec API.
> 
> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> ---
>  Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
>  Documentation/media/uapi/v4l/devices.rst      |   1 +
>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
>  Documentation/media/uapi/v4l/v4l2.rst         |   2 +
>  .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
>  5 files changed, 610 insertions(+), 15 deletions(-)
>  create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst
> 
> diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
> new file mode 100644
> index 000000000000..41139e5e48eb
> --- /dev/null
> +++ b/Documentation/media/uapi/v4l/dev-encoder.rst
> @@ -0,0 +1,579 @@
> +.. -*- coding: utf-8; mode: rst -*-
> +
> +.. _encoder:
> +
> +*************************************************
> +Memory-to-memory Stateful Video Encoder Interface
> +*************************************************
> +
> +A stateful video encoder takes raw video frames in display order and encodes
> +them into a bitstream. It generates complete chunks of the bitstream, including
> +all metadata, headers, etc. The resulting bitstream does not require any
> +further post-processing by the client.
> +
> +Performing software stream processing, header generation etc. in the driver
> +in order to support this interface is strongly discouraged. In case such
> +operations are needed, use of the Stateless Video Encoder Interface (in
> +development) is strongly advised.
> +
> +Conventions and notation used in this document
> +==============================================
> +
> +1. The general V4L2 API rules apply if not specified in this document
> +   otherwise.
> +
> +2. The meaning of words "must", "may", "should", etc. is as per RFC
> +   2119.
> +
> +3. All steps not marked "optional" are required.
> +
> +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
> +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
> +   unless specified otherwise.
> +
> +5. Single-plane API (see spec) and applicable structures may be used
> +   interchangeably with Multi-plane API, unless specified otherwise,
> +   depending on encoder capabilities and following the general V4L2
> +   guidelines.
> +
> +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> +   [0..2]: i = 0, 1, 2.
> +
> +7. Given an ``OUTPUT`` buffer A, A' represents a buffer on the ``CAPTURE``
> +   queue containing data (encoded frame/stream) that resulted from processing
> +   buffer A.

The same comments as I mentioned for the previous patch apply to this section.

> +
> +Glossary
> +========
> +
> +Refer to :ref:`decoder-glossary`.

Ah, you refer to the same glossary. Then my comment about the source resolution
terms is obviously wrong.

I wonder if it wouldn't be better to split off the sections above into a separate
HW codec intro section where you explain the differences between stateful/stateless
encoders and decoders, and add the conventions and glossary.

After that you have the three documents for each variant (later four when we get
stateless encoders).

Up to you, and it can be done later in a follow-up patch.

> +
> +State machine
> +=============
> +
> +.. kernel-render:: DOT
> +   :alt: DOT digraph of encoder state machine
> +   :caption: Encoder state machine
> +
> +   digraph encoder_state_machine {
> +       node [shape = doublecircle, label="Encoding"] Encoding;
> +
> +       node [shape = circle, label="Initialization"] Initialization;
> +       node [shape = circle, label="Stopped"] Stopped;
> +       node [shape = circle, label="Drain"] Drain;
> +       node [shape = circle, label="Reset"] Reset;
> +
> +       node [shape = point]; qi
> +       qi -> Initialization [ label = "open()" ];
> +
> +       Initialization -> Encoding [ label = "Both queues streaming" ];
> +
> +       Encoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> +       Encoding -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +       Encoding -> Stopped [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +       Encoding -> Encoding;
> +
> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> +       Drain -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +
> +       Reset -> Encoding [ label = "VIDIOC_STREAMON(CAPTURE)" ];
> +       Reset -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> +
> +       Stopped -> Encoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(OUTPUT)" ];
> +       Stopped -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +   }
> +
> +Querying capabilities
> +=====================
> +
> +1. To enumerate the set of coded formats supported by the encoder, the
> +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> +
> +   * The full set of supported formats will be returned, regardless of the
> +     format set on ``OUTPUT``.
> +
> +2. To enumerate the set of supported raw formats, the client may call
> +   :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> +
> +   * Only the formats supported for the format currently active on ``CAPTURE``
> +     will be returned.
> +
> +   * In order to enumerate raw formats supported by a given coded format,
> +     the client must first set that coded format on ``CAPTURE`` and then
> +     enumerate the formats on ``OUTPUT``.
> +
> +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> +   resolutions for a given format, passing desired pixel format in
> +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> +     format will include all possible coded resolutions supported by the
> +     encoder for given coded pixel format.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> +     will include all possible frame buffer resolutions supported by the
> +     encoder for given raw pixel format and coded format currently set on
> +     ``CAPTURE``.
> +
> +4. Supported profiles and levels for given format, if applicable, may be

format -> the coded format currently set on ``CAPTURE``

> +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> +
> +5. Any additional encoder capabilities may be discovered by querying
> +   their respective controls.
> +
> +Initialization
> +==============
> +
> +1. **Optional.** Enumerate supported formats and resolutions. See
> +   `Querying capabilities` above.

Can be dropped IMHO.

> +
> +2. Set a coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> +
> +     ``pixelformat``
> +         the coded format to be produced
> +
> +     ``sizeimage``
> +         desired size of ``CAPTURE`` buffers; the encoder may adjust it to
> +         match hardware requirements
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``sizeimage``
> +         adjusted size of ``CAPTURE`` buffers
> +
> +   .. warning::
> +
> +      Changing the ``CAPTURE`` format may change the currently set ``OUTPUT``
> +      format. The encoder will derive a new ``OUTPUT`` format from the
> +      ``CAPTURE`` format being set, including resolution, colorimetry
> +      parameters, etc. If the client needs a specific ``OUTPUT`` format, it
> +      must adjust it afterwards.
> +
> +3. **Optional.** Enumerate supported ``OUTPUT`` formats (raw formats for
> +   source) for the selected coded format via :c:func:`VIDIOC_ENUM_FMT`.

Does this return the same set of formats as in the 'Querying Capabilities' phase?

> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``pixelformat``
> +         raw format supported for the coded format currently selected on
> +         the ``OUTPUT`` queue.

OUTPUT -> CAPTURE

> +
> +     other fields
> +         follow standard semantics
> +
> +4. Set the raw source format on the ``OUTPUT`` queue via
> +   :c:func:`VIDIOC_S_FMT`.
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     ``pixelformat``
> +         raw format of the source
> +
> +     ``width``, ``height``
> +         source resolution
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``width``, ``height``
> +         may be adjusted by encoder to match alignment requirements, as
> +         required by the currently selected formats
> +
> +     other fields
> +         follow standard semantics
> +
> +   * Setting the source resolution will reset the selection rectangles to their
> +     default values, based on the new resolution, as described in the step 5
> +     below.
> +
> +5. **Optional.** Set the visible resolution for the stream metadata via
> +   :c:func:`VIDIOC_S_SELECTION` on the ``OUTPUT`` queue.
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     ``target``
> +         set to ``V4L2_SEL_TGT_CROP``
> +
> +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> +         visible rectangle; this must fit within the `V4L2_SEL_TGT_CROP_BOUNDS`
> +         rectangle and may be subject to adjustment to match codec and
> +         hardware constraints
> +
> +   * **Return fields:**
> +
> +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> +         visible rectangle adjusted by the encoder
> +
> +   * The following selection targets are supported on ``OUTPUT``:
> +
> +     ``V4L2_SEL_TGT_CROP_BOUNDS``
> +         equal to the full source frame, matching the active ``OUTPUT``
> +         format
> +
> +     ``V4L2_SEL_TGT_CROP_DEFAULT``
> +         equal to ``V4L2_SEL_TGT_CROP_BOUNDS``
> +
> +     ``V4L2_SEL_TGT_CROP``
> +         rectangle within the source buffer to be encoded into the
> +         ``CAPTURE`` stream; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``

Since this defaults to the CROP_DEFAULT rectangle this means that if you have
a 16x16 macroblock size and you want to encode 1080p, you will always have to
explicitly set the CROP rectangle to 1920x1080, right? Since the default will
be 1088 instead of 1080.

It is probably wise to explicitly mention this.

> +
> +     ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> +         maximum rectangle within the coded resolution, which the cropped
> +         source frame can be output into; if the hardware does not support

output -> composed

> +         composition or scaling, then this is always equal to the rectangle of
> +         width and height matching ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> +
> +     ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> +         equal to a rectangle of width and height matching
> +         ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> +
> +     ``V4L2_SEL_TGT_COMPOSE``
> +         rectangle within the coded frame, which the cropped source frame
> +         is to be output into; defaults to

output -> composed

> +         ``V4L2_SEL_TGT_COMPOSE_DEFAULT``; read-only on hardware without
> +         additional compose/scaling capabilities; resulting stream will
> +         have this rectangle encoded as the visible rectangle in its
> +         metadata
> +
> +   .. warning::
> +
> +      The encoder may adjust the crop/compose rectangles to the nearest
> +      supported ones to meet codec and hardware requirements. The client needs
> +      to check the adjusted rectangle returned by :c:func:`VIDIOC_S_SELECTION`.
> +
> +6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
> +   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
> +
> +   * **Required fields:**
> +
> +     ``count``
> +         requested number of buffers to allocate; greater than zero
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
> +         ``CAPTURE``
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``count``
> +          actual number of buffers allocated
> +
> +   .. warning::
> +
> +      The actual number of allocated buffers may differ from the ``count``
> +      given. The client must check the updated value of ``count`` after the
> +      call returns.
> +
> +   .. note::
> +
> +      To allocate more than the minimum number of buffers (for pipeline depth),
> +      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
> +      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
> +      minimum number of buffers required by the encoder/format, and pass the
> +      obtained value plus the number of additional buffers needed in the
> +      ``count`` field to :c:func:`VIDIOC_REQBUFS`.

Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?

V4L2_CID_MIN_BUFFERS_FOR_OUTPUT can make sense depending on GOP size etc.

> +
> +   Alternatively, :c:func:`VIDIOC_CREATE_BUFS` can be used to have more
> +   control over buffer allocation.
> +
> +   * **Required fields:**
> +
> +     ``count``
> +         requested number of buffers to allocate; greater than zero
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``count``
> +         adjusted to the number of allocated buffers
> +
> +7. Begin streaming on both ``OUTPUT`` and ``CAPTURE`` queues via
> +   :c:func:`VIDIOC_STREAMON`. This may be performed in any order. The actual
> +   encoding process starts when both queues start streaming.
> +
> +.. note::
> +
> +   If the client stops the ``CAPTURE`` queue during the encode process and then
> +   restarts it again, the encoder will begin generating a stream independent
> +   from the stream generated before the stop. The exact constraints depend
> +   on the coded format, but may include the following implications:
> +
> +   * encoded frames produced after the restart must not reference any
> +     frames produced before the stop, e.g. no long term references for
> +     H.264,
> +
> +   * any headers that must be included in a standalone stream must be
> +     produced again, e.g. SPS and PPS for H.264.
> +
> +Encoding
> +========
> +
> +This state is reached after the `Initialization` sequence finishes succesfully.

successfully

> +In this state, client queues and dequeues buffers to both queues via

client -> the client

> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> +semantics.
> +
> +The contents of encoded ``CAPTURE`` buffers depend on the active coded pixel

contents ... depend -> content ... depends

> +format and may be affected by codec-specific extended controls, as stated
> +in the documentation of each format.
> +
> +Both queues operate independently, following standard behavior of V4L2 buffer
> +queues and memory-to-memory devices. In addition, the order of encoded frames
> +dequeued from the ``CAPTURE`` queue may differ from the order of queuing raw
> +frames to the ``OUTPUT`` queue, due to properties of the selected coded format,
> +e.g. frame reordering.
> +
> +The client must not assume any direct relationship between ``CAPTURE`` and
> +``OUTPUT`` buffers and any specific timing of buffers becoming
> +available to dequeue. Specifically,
> +
> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced on
> +  ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
> +  frame that preceded it in display, but succeeded it in the decode order),
> +
> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> +  ``CAPTURE`` later into encode process, and/or after processing further
> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> +  reordering is used,
> +
> +* buffers may become available on the ``CAPTURE`` queue without additional
> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> +  available at later time, due to specifics of the decoding process,
> +
> +* buffers queued to ``OUTPUT`` may not become available to dequeue instantly
> +  after being encoded into a corresponding ``CATPURE`` buffer, e.g. if the
> +  encoder needs to use the frame as a reference for encoding further frames.
> +
> +.. note::
> +
> +   To allow matching encoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> +   originated from, the client can set the ``timestamp`` field of the
> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> +   ``CAPTURE`` buffer(s), which resulted from encoding that ``OUTPUT`` buffer
> +   will have their ``timestamp`` field set to the same value when dequeued.
> +
> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
> +   one ``CAPTURE`` buffer, the following cases are defined:
> +
> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> +
> +   * the encoding order differs from the presentation order (i.e. the
> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> +     and thus monotonicity of the timestamps cannot be guaranteed.
> +
> +.. note::
> +
> +   To let the client distinguish between frame types (keyframes, intermediate
> +   frames; the exact list of types depends on the coded format), the
> +   ``CAPTURE`` buffers will have corresponding flag bits set in their
> +   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
> +   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
> +   and their meanings.

Is this required? (I think it should, but it isn't the case today).

Is the current set of buffer flags (Key/B/P frame) sufficient for the current
set of codecs?

> +
> +Encoding parameter changes
> +==========================
> +
> +The client is allowed to use :c:func:`VIDIOC_S_CTRL` to change encoder
> +parameters at any time. The availability of parameters is encoder-specific
> +and the client must query the encoder to find the set of available controls.
> +
> +The ability to change each parameter during encoding is encoder-specific, as per
> +the standard semantics of the V4L2 control interface. The client may attempt
> +setting a control of its interest during encoding and if the operation fails

I'd simplify this:

The client may attempt to set a control during encoding...

> +with the -EBUSY error code, the ``CAPTURE`` queue needs to be stopped for the
> +configuration change to be allowed (following the `Drain` sequence will be
> +needed to avoid losing the already queued/encoded frames).

Rephrase:

...to be allowed. To do this follow the `Drain` sequence to avoid losing the
already queued/encoded frames.

> +
> +The timing of parameter updates is encoder-specific, as per the standard
> +semantics of the V4L2 control interface. If the client needs to apply the
> +parameters exactly at specific frame, using the Request API should be

Change this to a reference to the Request API section.

> +considered, if supported by the encoder.
> +
> +Drain
> +=====
> +
> +To ensure that all the queued ``OUTPUT`` buffers have been processed and the
> +related ``CAPTURE`` buffers output to the client, the client must follow the

output -> are output

or perhaps better (up to you): are given

> +drain sequence described below. After the drain sequence ends, the client has
> +received all encoded frames for all ``OUTPUT`` buffers queued before the
> +sequence was started.
> +
> +1. Begin the drain sequence by issuing :c:func:`VIDIOC_ENCODER_CMD`.
> +
> +   * **Required fields:**
> +
> +     ``cmd``
> +         set to ``V4L2_ENC_CMD_STOP``
> +
> +     ``flags``
> +         set to 0
> +
> +     ``pts``
> +         set to 0
> +
> +   .. warning::
> +
> +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
> +   are streaming. For compatibility reasons, the call to
> +   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
> +   streaming, but at the same time it will not initiate the `Drain` sequence
> +   and so the steps described below would not be applicable.
> +
> +2. Any ``OUTPUT`` buffers queued by the client before the
> +   :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as
> +   normal. The client must continue to handle both queues independently,
> +   similarly to normal encode operation. This includes,
> +
> +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
> +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
> +
> +     .. warning::
> +
> +        The last buffer may be empty (with :c:type:`v4l2_buffer`
> +        ``bytesused`` = 0) and in such case it must be ignored by the client,

such -> that

Check the previous patch as well if you used the phrase 'such case' and replace
it with 'that case'.

> +        as it does not contain an encoded frame.
> +
> +     .. note::
> +
> +        Any attempt to dequeue more buffers beyond the buffer marked with
> +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> +        :c:func:`VIDIOC_DQBUF`.
> +
> +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
> +     before the ``V4L2_ENC_CMD_STOP`` command are dequeued,
> +
> +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribes to it.
> +
> +   .. note::
> +
> +      For backwards compatibility, the encoder will signal a ``V4L2_EVENT_EOS``
> +      event when the last the last frame has been decoded and all frames are

the last the last -> the last

> +      ready to be dequeued. It is a deprecated behavior and the client must not

is a -> is

> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> +      instead.

Question: should new codec drivers still implement the EOS event?

> +
> +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> +   issues any of the following operations:
> +
> +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,

Perhaps mention that this does not reset the encoder? It's not immediately clear
when reading this.

> +
> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> +     ``CAPTURE`` queue - the encoder will be reset (see the `Reset` sequence)
> +     and then resume encoding,
> +
> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> +     ``OUTPUT`` queue - the encoder will resume operation normally, however any
> +     source frames queued to the ``OUTPUT`` queue between ``V4L2_ENC_CMD_STOP``
> +     and :c:func:`VIDIOC_STREAMOFF` will be discarded.
> +
> +.. note::
> +
> +   Once the drain sequence is initiated, the client needs to drive it to
> +   completion, as described by the steps above, unless it aborts the process by
> +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
> +   queues.  The client is not allowed to issue ``V4L2_ENC_CMD_START`` or
> +   ``V4L2_ENC_CMD_STOP`` again while the drain sequence is in progress and they
> +   will fail with -EBUSY error code if attempted.
> +
> +   Although mandatory, the availability of encoder commands may be queried
> +   using :c:func:`VIDIOC_TRY_ENCODER_CMD`.
> +
> +Reset
> +=====
> +
> +The client may want to request the encoder to reinitialize the encoding, so
> +that the following stream data becomes independent from the stream data
> +generated before. Depending on the coded format, that may imply that,

that, -> that:

> +
> +* encoded frames produced after the restart must not reference any frames
> +  produced before the stop, e.g. no long term references for H.264,
> +
> +* any headers that must be included in a standalone stream must be produced
> +  again, e.g. SPS and PPS for H.264.
> +
> +This can be achieved by performing the reset sequence.
> +
> +1. Perform the `Drain` sequence to ensure all the in-flight encoding finishes
> +   and respective buffers are dequeued.
> +
> +2. Stop streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMOFF`. This
> +   will return all currently queued ``CAPTURE`` buffers to the client, without
> +   valid frame data.
> +
> +3. Start streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMON` and
> +   continue with regular encoding sequence. The encoded frames produced into
> +   ``CAPTURE`` buffers from now on will contain a standalone stream that can be
> +   decoded without the need for frames encoded before the reset sequence,
> +   starting at the first ``OUTPUT`` buffer queued after issuing the
> +   `V4L2_ENC_CMD_STOP` of the `Drain` sequence.
> +
> +This sequence may be also used to change encoding parameters for encoders
> +without the ability to change the parameters on the fly.
> +
> +Commit points
> +=============
> +
> +Setting formats and allocating buffers triggers changes in the behavior of the
> +encoder.
> +
> +1. Setting the format on the ``CAPTURE`` queue may change the set of formats
> +   supported/advertised on the ``OUTPUT`` queue. In particular, it also means
> +   that the ``OUTPUT`` format may be reset and the client must not rely on the
> +   previously set format being preserved.
> +
> +2. Enumerating formats on the ``OUTPUT`` queue always returns only formats
> +   supported for the current ``CAPTURE`` format.
> +
> +3. Setting the format on the ``OUTPUT`` queue does not change the list of
> +   formats available on the ``CAPTURE`` queue. An attempt to set the ``OUTPUT``
> +   format that is not supported for the currently selected ``CAPTURE`` format
> +   will result in the encoder adjusting the requested ``OUTPUT`` format to a
> +   supported one.
> +
> +4. Enumerating formats on the ``CAPTURE`` queue always returns the full set of
> +   supported coded formats, irrespectively of the current ``OUTPUT`` format.
> +
> +5. While buffers are allocated on the ``CAPTURE`` queue, the client must not
> +   change the format on the queue. Drivers will return the -EBUSY error code
> +   for any such format change attempt.
> +
> +To summarize, setting formats and allocation must always start with the
> +``CAPTURE`` queue and the ``CAPTURE`` queue is the master that governs the
> +set of supported formats for the ``OUTPUT`` queue.
> diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
> index 12d43fe711cf..1822c66c2154 100644
> --- a/Documentation/media/uapi/v4l/devices.rst
> +++ b/Documentation/media/uapi/v4l/devices.rst
> @@ -16,6 +16,7 @@ Interfaces
>      dev-osd
>      dev-codec
>      dev-decoder
> +    dev-encoder
>      dev-effect
>      dev-raw-vbi
>      dev-sliced-vbi
> diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> index ca5f2270a829..085089cd9577 100644
> --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> @@ -37,6 +37,11 @@ Single-planar format structure
>  	inside the stream, when fed to a stateful mem2mem decoder, the fields
>  	may be zero to rely on the decoder to detect the right values. For more
>  	details see :ref:`decoder` and format descriptions.
> +
> +	For compressed formats on the CAPTURE side of a stateful mem2mem
> +	encoder, the fields must be zero, since the coded size is expected to
> +	be calculated internally by the encoder itself, based on the OUTPUT
> +	side. For more details see :ref:`encoder` and format descriptions.

The encoder document doesn't actually mention this. I think it should, though.

I'm a bit uncertain about this: the expected resolution might impact the
sizeimage value: i.e. encoding 640x480 requires much less memory then
encoding 4k video. If this is required to be 0x0, then the driver has to
fill in a worst-case sizeimage value. It might make more sense to say that
if a non-zero resolution is given, then the driver will attempt to
calculate a sensible sizeimage value.

>      * - __u32
>        - ``pixelformat``
>        - The pixel format or type of compression, set by the application.
> diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
> index 65dc096199ad..2ef6693b9499 100644
> --- a/Documentation/media/uapi/v4l/v4l2.rst
> +++ b/Documentation/media/uapi/v4l/v4l2.rst
> @@ -56,6 +56,7 @@ Authors, in alphabetical order:
>  - Figa, Tomasz <tfiga@chromium.org>
>  
>    - Documented the memory-to-memory decoder interface.
> +  - Documented the memory-to-memory encoder interface.
>  
>  - H Schimek, Michael <mschimek@gmx.at>
>  
> @@ -68,6 +69,7 @@ Authors, in alphabetical order:
>  - Osciak, Pawel <posciak@chromium.org>
>  
>    - Documented the memory-to-memory decoder interface.
> +  - Documented the memory-to-memory encoder interface.
>  
>  - Osciak, Pawel <pawel@osciak.com>
>  
> diff --git a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> index 5ae8c933b1b9..d571c53e761a 100644
> --- a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> +++ b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> @@ -50,19 +50,23 @@ currently only used by the STOP command and contains one bit: If the
>  until the end of the current *Group Of Pictures*, otherwise it will stop
>  immediately.
>  
> -A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
> -call sends an implicit START command to the encoder if it has not been
> -started yet. After a STOP command, :ref:`read() <func-read>` calls will read
> +After a STOP command, :ref:`read() <func-read>` calls will read
>  the remaining data buffered by the driver. When the buffer is empty,
>  :ref:`read() <func-read>` will return zero and the next :ref:`read() <func-read>`
>  call will restart the encoder.
>  
> +A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
> +call sends an implicit START command to the encoder if it has not been
> +started yet. Applies to both queues of mem2mem encoders.
> +
>  A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
>  call of a streaming file descriptor sends an implicit immediate STOP to
> -the encoder, and all buffered data is discarded.
> +the encoder, and all buffered data is discarded. Applies to both queues of
> +mem2mem encoders.
>  
>  These ioctls are optional, not all drivers may support them. They were
> -introduced in Linux 2.6.21.
> +introduced in Linux 2.6.21. They are, however, mandatory for stateful mem2mem
> +encoders (as further documented in :ref:`encoder`).
>  
>  
>  .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
> @@ -107,16 +111,20 @@ introduced in Linux 2.6.21.
>        - Stop the encoder. When the ``V4L2_ENC_CMD_STOP_AT_GOP_END`` flag
>  	is set, encoding will continue until the end of the current *Group
>  	Of Pictures*, otherwise encoding will stop immediately. When the
> -	encoder is already stopped, this command does nothing. mem2mem
> -	encoders will send a ``V4L2_EVENT_EOS`` event when the last frame
> -	has been encoded and all frames are ready to be dequeued and will
> -	set the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of
> -	the capture queue to indicate there will be no new buffers
> -	produced to dequeue. This buffer may be empty, indicated by the
> -	driver setting the ``bytesused`` field to 0. Once the
> -	``V4L2_BUF_FLAG_LAST`` flag was set, the
> -	:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
> -	but return an ``EPIPE`` error code.
> +	encoder is already stopped, this command does nothing.
> +
> +	A stateful mem2mem encoder will proceed with encoding the source
> +	buffers pending before the command is issued and then stop producing
> +	new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
> +	has been encoded and all frames are ready to be dequeued and will set
> +	the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
> +	capture queue to indicate there will be no new buffers produced to
> +	dequeue. This buffer may be empty, indicated by the driver setting the
> +	``bytesused`` field to 0. Once the buffer with the
> +	``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
> +	<VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
> +	error code. No flags or other arguments are accepted in case of mem2mem
> +	encoders.  See :ref:`encoder` for more details.
>      * - ``V4L2_ENC_CMD_PAUSE``
>        - 2
>        - Pause the encoder. When the encoder has not been started yet, the
> 

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
  2018-10-29  9:45   ` Stanimir Varbanov
  2018-11-12 11:37   ` Hans Verkuil
@ 2018-11-12 15:04   ` Stanimir Varbanov
  2018-11-15 14:34   ` Hans Verkuil
  3 siblings, 0 replies; 41+ messages in thread
From: Stanimir Varbanov @ 2018-11-12 15:04 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Hans Verkuil,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Nicolas Dufresne, Paul Kocialkowski, Laurent Pinchart,
	dave.stevenson, Ezequiel Garcia, Maxime Jourdan

Hi Tomasz,

On 10/22/18 5:48 PM, Tomasz Figa wrote:
> Due to complexity of the video decoding process, the V4L2 drivers of
> stateful decoder hardware require specific sequences of V4L2 API calls
> to be followed. These include capability enumeration, initialization,
> decoding, seek, pause, dynamic resolution change, drain and end of
> stream.
> 
> Specifics of the above have been discussed during Media Workshops at
> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> originated at those events was later implemented by the drivers we already
> have merged in mainline, such as s5p-mfc or coda.
> 
> The only thing missing was the real specification included as a part of
> Linux Media documentation. Fix it now and document the decoder part of
> the Codec API.
> 
> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> ---
>  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
>  Documentation/media/uapi/v4l/devices.rst      |    1 +
>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
>  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
>  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
>  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
>  6 files changed, 1137 insertions(+), 15 deletions(-)
>  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> 
> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> new file mode 100644
> index 000000000000..09c7a6621b8e
> --- /dev/null
> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst


> +State machine
> +=============
> +
> +.. kernel-render:: DOT
> +   :alt: DOT digraph of decoder state machine
> +   :caption: Decoder state machine
> +
> +   digraph decoder_state_machine {
> +       node [shape = doublecircle, label="Decoding"] Decoding;
> +
> +       node [shape = circle, label="Initialization"] Initialization;
> +       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
> +       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
> +       node [shape = circle, label="Stopped"] Stopped;
> +       node [shape = circle, label="Drain"] Drain;
> +       node [shape = circle, label="Seek"] Seek;
> +       node [shape = circle, label="End of stream"] EoS;
> +
> +       node [shape = point]; qi
> +       qi -> Initialization [ label = "open()" ];
> +
> +       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +
> +       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
> +
> +       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
> +       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> +       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
> +       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +       Decoding -> Decoding;
> +
> +       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       EoS -> Drain [ label = "Implicit\ndrain" ];
> +
> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> +       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
> +       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];

Shouldn't this be [ label = "VIDIOC_STREAMOFF(CAPTURE)" ], for me it is
looks more natural for v4l2?

For example I want to exit immediately from decoding state with calls to
streamoff(OUTPUT) and streamoff(CAPTURE). This could be when you press
ctrl-c while playing video, in this case I don't expect EoS nor buffers
draining.

> +
> +       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
> +       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +   }
> +


-- 
regards,
Stan

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
                     ` (2 preceding siblings ...)
  2018-11-12 15:04   ` Stanimir Varbanov
@ 2018-11-15 14:34   ` Hans Verkuil
  2018-11-17  4:31     ` Nicolas Dufresne
  3 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2018-11-15 14:34 UTC (permalink / raw)
  To: Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Nicolas Dufresne,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On 10/22/2018 04:48 PM, Tomasz Figa wrote:
> Due to complexity of the video decoding process, the V4L2 drivers of
> stateful decoder hardware require specific sequences of V4L2 API calls
> to be followed. These include capability enumeration, initialization,
> decoding, seek, pause, dynamic resolution change, drain and end of
> stream.
> 
> Specifics of the above have been discussed during Media Workshops at
> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> originated at those events was later implemented by the drivers we already
> have merged in mainline, such as s5p-mfc or coda.
> 
> The only thing missing was the real specification included as a part of
> Linux Media documentation. Fix it now and document the decoder part of
> the Codec API.
> 
> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> ---
>  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
>  Documentation/media/uapi/v4l/devices.rst      |    1 +
>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
>  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
>  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
>  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
>  6 files changed, 1137 insertions(+), 15 deletions(-)
>  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> 
> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> new file mode 100644
> index 000000000000..09c7a6621b8e
> --- /dev/null
> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
> @@ -0,0 +1,1082 @@
> +.. -*- coding: utf-8; mode: rst -*-
> +
> +.. _decoder:
> +
> +*************************************************
> +Memory-to-memory Stateful Video Decoder Interface
> +*************************************************
> +
> +A stateful video decoder takes complete chunks of the bitstream (e.g. Annex-B
> +H.264/HEVC stream, raw VP8/9 stream) and decodes them into raw video frames in
> +display order. The decoder is expected not to require any additional information
> +from the client to process these buffers.
> +
> +Performing software parsing, processing etc. of the stream in the driver in
> +order to support this interface is strongly discouraged. In case such
> +operations are needed, use of the Stateless Video Decoder Interface (in
> +development) is strongly advised.
> +
> +Conventions and notation used in this document
> +==============================================
> +
> +1. The general V4L2 API rules apply if not specified in this document
> +   otherwise.
> +
> +2. The meaning of words “must”, “may”, “should”, etc. is as per RFC
> +   2119.
> +
> +3. All steps not marked “optional” are required.
> +
> +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
> +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
> +   unless specified otherwise.
> +
> +5. Single-plane API (see spec) and applicable structures may be used
> +   interchangeably with Multi-plane API, unless specified otherwise,
> +   depending on decoder capabilities and following the general V4L2
> +   guidelines.
> +
> +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> +   [0..2]: i = 0, 1, 2.
> +
> +7. Given an ``OUTPUT`` buffer A, A’ represents a buffer on the ``CAPTURE``
> +   queue containing data (decoded frame/stream) that resulted from processing
> +   buffer A.
> +
> +.. _decoder-glossary:
> +
> +Glossary
> +========
> +
> +CAPTURE
> +   the destination buffer queue; for decoder, the queue of buffers containing
> +   decoded frames; for encoder, the queue of buffers containing encoded
> +   bitstream; ``V4L2_BUF_TYPE_VIDEO_CAPTURE```` or
> +   ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``; data are captured from the hardware
> +   into ``CAPTURE`` buffers
> +
> +client
> +   application client communicating with the decoder or encoder implementing
> +   this interface
> +
> +coded format
> +   encoded/compressed video bitstream format (e.g. H.264, VP8, etc.); see
> +   also: raw format
> +
> +coded height
> +   height for given coded resolution
> +
> +coded resolution
> +   stream resolution in pixels aligned to codec and hardware requirements;
> +   typically visible resolution rounded up to full macroblocks;
> +   see also: visible resolution
> +
> +coded width
> +   width for given coded resolution
> +
> +decode order
> +   the order in which frames are decoded; may differ from display order if the
> +   coded format includes a feature of frame reordering; for decoders,
> +   ``OUTPUT`` buffers must be queued by the client in decode order; for
> +   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order
> +
> +destination
> +   data resulting from the decode process; ``CAPTURE``
> +
> +display order
> +   the order in which frames must be displayed; for encoders, ``OUTPUT``
> +   buffers must be queued by the client in display order; for decoders,
> +   ``CAPTURE`` buffers must be returned by the decoder in display order
> +
> +DPB
> +   Decoded Picture Buffer; an H.264 term for a buffer that stores a decoded
> +   raw frame available for reference in further decoding steps.
> +
> +EOS
> +   end of stream
> +
> +IDR
> +   Instantaneous Decoder Refresh; a type of a keyframe in H.264-encoded stream,
> +   which clears the list of earlier reference frames (DPBs)
> +
> +keyframe
> +   an encoded frame that does not reference frames decoded earlier, i.e.
> +   can be decoded fully on its own.
> +
> +macroblock
> +   a processing unit in image and video compression formats based on linear
> +   block transforms (e.g. H.264, VP8, VP9); codec-specific, but for most of
> +   popular codecs the size is 16x16 samples (pixels)
> +
> +OUTPUT
> +   the source buffer queue; for decoders, the queue of buffers containing
> +   encoded bitstream; for encoders, the queue of buffers containing raw frames;
> +   ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or ``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE``; the
> +   hardware is fed with data from ``OUTPUT`` buffers
> +
> +PPS
> +   Picture Parameter Set; a type of metadata entity in H.264 bitstream
> +
> +raw format
> +   uncompressed format containing raw pixel data (e.g. YUV, RGB formats)
> +
> +resume point
> +   a point in the bitstream from which decoding may start/continue, without
> +   any previous state/data present, e.g.: a keyframe (VP8/VP9) or
> +   SPS/PPS/IDR sequence (H.264); a resume point is required to start decode
> +   of a new stream, or to resume decoding after a seek
> +
> +source
> +   data fed to the decoder or encoder; ``OUTPUT``
> +
> +source height
> +   height in pixels for given source resolution; relevant to encoders only
> +
> +source resolution
> +   resolution in pixels of source frames being source to the encoder and
> +   subject to further cropping to the bounds of visible resolution; relevant to
> +   encoders only
> +
> +source width
> +   width in pixels for given source resolution; relevant to encoders only
> +
> +SPS
> +   Sequence Parameter Set; a type of metadata entity in H.264 bitstream
> +
> +stream metadata
> +   additional (non-visual) information contained inside encoded bitstream;
> +   for example: coded resolution, visible resolution, codec profile
> +
> +visible height
> +   height for given visible resolution; display height
> +
> +visible resolution
> +   stream resolution of the visible picture, in pixels, to be used for
> +   display purposes; must be smaller or equal to coded resolution;
> +   display resolution
> +
> +visible width
> +   width for given visible resolution; display width
> +
> +State machine
> +=============
> +
> +.. kernel-render:: DOT
> +   :alt: DOT digraph of decoder state machine
> +   :caption: Decoder state machine
> +
> +   digraph decoder_state_machine {
> +       node [shape = doublecircle, label="Decoding"] Decoding;
> +
> +       node [shape = circle, label="Initialization"] Initialization;
> +       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
> +       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
> +       node [shape = circle, label="Stopped"] Stopped;
> +       node [shape = circle, label="Drain"] Drain;
> +       node [shape = circle, label="Seek"] Seek;
> +       node [shape = circle, label="End of stream"] EoS;
> +
> +       node [shape = point]; qi
> +       qi -> Initialization [ label = "open()" ];
> +
> +       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +
> +       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
> +
> +       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
> +       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> +       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
> +       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> +       Decoding -> Decoding;
> +
> +       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> +       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       EoS -> Drain [ label = "Implicit\ndrain" ];
> +
> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> +       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +
> +       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
> +       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> +
> +       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
> +       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> +   }
> +
> +Querying capabilities
> +=====================
> +
> +1. To enumerate the set of coded formats supported by the decoder, the
> +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> +
> +   * The full set of supported formats will be returned, regardless of the
> +     format set on ``CAPTURE``.
> +
> +2. To enumerate the set of supported raw formats, the client may call
> +   :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> +
> +   * Only the formats supported for the format currently active on ``OUTPUT``
> +     will be returned.
> +
> +   * In order to enumerate raw formats supported by a given coded format,
> +     the client must first set that coded format on ``OUTPUT`` and then
> +     enumerate formats on ``CAPTURE``.
> +
> +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> +   resolutions for a given format, passing desired pixel format in
> +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> +     formats will include all possible coded resolutions supported by the
> +     decoder for given coded pixel format.
> +
> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> +     will include all possible frame buffer resolutions supported by the
> +     decoder for given raw pixel format and the coded format currently set on
> +     ``OUTPUT``.
> +
> +4. Supported profiles and levels for given format, if applicable, may be
> +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> +
> +Initialization
> +==============
> +
> +1. **Optional.** Enumerate supported ``OUTPUT`` formats and resolutions. See
> +   `Querying capabilities` above.
> +
> +2. Set the coded format on ``OUTPUT`` via :c:func:`VIDIOC_S_FMT`
> +
> +   * **Required fields:**
> +
> +     ``type``
> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +     ``pixelformat``
> +         a coded pixel format
> +
> +     ``width``, ``height``
> +         required only if cannot be parsed from the stream for the given
> +         coded format; optional otherwise - set to zero to ignore
> +
> +     ``sizeimage``
> +         desired size of ``OUTPUT`` buffers; the decoder may adjust it to
> +         match hardware requirements
> +
> +     other fields
> +         follow standard semantics
> +
> +   * **Return fields:**
> +
> +     ``sizeimage``
> +         adjusted size of ``CAPTURE`` buffers
> +
> +   * If width and height are set to non-zero values, the ``CAPTURE`` format
> +     will be updated with an appropriate frame buffer resolution instantly.
> +     However, for coded formats that include stream resolution information,
> +     after the decoder is done parsing the information from the stream, it will
> +     update the ``CAPTURE`` format with new values and signal a source change
> +     event.
> +
> +   .. warning::
> +
> +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
> +      format. The decoder will derive a new ``CAPTURE`` format from the
> +      ``OUTPUT`` format being set, including resolution, colorimetry
> +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
> +      must adjust it afterwards.
> +
> +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
> +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
> +    use more buffers than the minimum required by hardware/format.
> +
> +    * **Required fields:**
> +
> +      ``id``
> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> +
> +    * **Return fields:**
> +
> +      ``value``
> +          the minimum number of ``OUTPUT`` buffers required for the currently
> +          set format
> +
> +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
> +    ``OUTPUT``.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          the actual number of buffers allocated
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +    .. note::
> +
> +       To allocate more than the minimum number of buffers (for pipeline
> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> +       control to get the minimum number of buffers required by the
> +       decoder/format, and pass the obtained value plus the number of
> +       additional buffers needed in the ``count`` field to
> +       :c:func:`VIDIOC_REQBUFS`.
> +
> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
> +    used to have more control over buffer allocation.
> +
> +    * **Required fields:**
> +
> +      ``count``
> +          requested number of buffers to allocate; greater than zero
> +
> +      ``type``
> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> +
> +      ``memory``
> +          follows standard semantics
> +
> +      ``format``
> +          follows standard semantics
> +
> +    * **Return fields:**
> +
> +      ``count``
> +          adjusted to the number of allocated buffers
> +
> +    .. warning::
> +
> +       The actual number of allocated buffers may differ from the ``count``
> +       given. The client must check the updated value of ``count`` after the
> +       call returns.
> +
> +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
> +
> +6.  **This step only applies to coded formats that contain resolution information
> +    in the stream.**

As far as I know all codecs have resolution/metadata in the stream.

As discussed in the "[PATCH vicodec v4 0/3] Add support to more pixel formats in
vicodec" thread, it is easiest to assume that there is always metadata.

Perhaps there should be a single mention somewhere that such codecs are not
supported at the moment, but to be frank how can you decode a stream without
it containing such essential information? You are much more likely to implement
such a codec as a stateless codec.

So I would just drop this sentence here (and perhaps at other places in this
document or the encoder document as well).

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-12 13:23   ` Hans Verkuil
@ 2018-11-17  4:18     ` Nicolas Dufresne
  2018-11-17 11:37       ` Hans Verkuil
  2019-01-23  9:52     ` Tomasz Figa
  1 sibling, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2018-11-17  4:18 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> > Due to complexity of the video encoding process, the V4L2 drivers of
> > stateful encoder hardware require specific sequences of V4L2 API calls
> > to be followed. These include capability enumeration, initialization,
> > encoding, encode parameters change, drain and reset.
> > 
> > Specifics of the above have been discussed during Media Workshops at
> > LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> > Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> > originated at those events was later implemented by the drivers we already
> > have merged in mainline, such as s5p-mfc or coda.
> > 
> > The only thing missing was the real specification included as a part of
> > Linux Media documentation. Fix it now and document the encoder part of
> > the Codec API.
> > 
> > Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> > ---
> >  Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
> >  Documentation/media/uapi/v4l/devices.rst      |   1 +
> >  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
> >  Documentation/media/uapi/v4l/v4l2.rst         |   2 +
> >  .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
> >  5 files changed, 610 insertions(+), 15 deletions(-)
> >  create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst
> > 
> > diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
> > new file mode 100644
> > index 000000000000..41139e5e48eb
> > --- /dev/null
> > +++ b/Documentation/media/uapi/v4l/dev-encoder.rst
> > @@ -0,0 +1,579 @@
> > +.. -*- coding: utf-8; mode: rst -*-
> > +
> > +.. _encoder:
> > +
> > +*************************************************
> > +Memory-to-memory Stateful Video Encoder Interface
> > +*************************************************
> > +
> > +A stateful video encoder takes raw video frames in display order and encodes
> > +them into a bitstream. It generates complete chunks of the bitstream, including
> > +all metadata, headers, etc. The resulting bitstream does not require any
> > +further post-processing by the client.
> > +
> > +Performing software stream processing, header generation etc. in the driver
> > +in order to support this interface is strongly discouraged. In case such
> > +operations are needed, use of the Stateless Video Encoder Interface (in
> > +development) is strongly advised.
> > +
> > +Conventions and notation used in this document
> > +==============================================
> > +
> > +1. The general V4L2 API rules apply if not specified in this document
> > +   otherwise.
> > +
> > +2. The meaning of words "must", "may", "should", etc. is as per RFC
> > +   2119.
> > +
> > +3. All steps not marked "optional" are required.
> > +
> > +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
> > +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
> > +   unless specified otherwise.
> > +
> > +5. Single-plane API (see spec) and applicable structures may be used
> > +   interchangeably with Multi-plane API, unless specified otherwise,
> > +   depending on encoder capabilities and following the general V4L2
> > +   guidelines.
> > +
> > +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> > +   [0..2]: i = 0, 1, 2.
> > +
> > +7. Given an ``OUTPUT`` buffer A, A' represents a buffer on the ``CAPTURE``
> > +   queue containing data (encoded frame/stream) that resulted from processing
> > +   buffer A.
> 
> The same comments as I mentioned for the previous patch apply to this section.
> 
> > +
> > +Glossary
> > +========
> > +
> > +Refer to :ref:`decoder-glossary`.
> 
> Ah, you refer to the same glossary. Then my comment about the source resolution
> terms is obviously wrong.
> 
> I wonder if it wouldn't be better to split off the sections above into a separate
> HW codec intro section where you explain the differences between stateful/stateless
> encoders and decoders, and add the conventions and glossary.
> 
> After that you have the three documents for each variant (later four when we get
> stateless encoders).
> 
> Up to you, and it can be done later in a follow-up patch.
> 
> > +
> > +State machine
> > +=============
> > +
> > +.. kernel-render:: DOT
> > +   :alt: DOT digraph of encoder state machine
> > +   :caption: Encoder state machine
> > +
> > +   digraph encoder_state_machine {
> > +       node [shape = doublecircle, label="Encoding"] Encoding;
> > +
> > +       node [shape = circle, label="Initialization"] Initialization;
> > +       node [shape = circle, label="Stopped"] Stopped;
> > +       node [shape = circle, label="Drain"] Drain;
> > +       node [shape = circle, label="Reset"] Reset;
> > +
> > +       node [shape = point]; qi
> > +       qi -> Initialization [ label = "open()" ];
> > +
> > +       Initialization -> Encoding [ label = "Both queues streaming" ];
> > +
> > +       Encoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> > +       Encoding -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Encoding -> Stopped [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +       Encoding -> Encoding;
> > +
> > +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Drain -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +
> > +       Reset -> Encoding [ label = "VIDIOC_STREAMON(CAPTURE)" ];
> > +       Reset -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> > +
> > +       Stopped -> Encoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(OUTPUT)" ];
> > +       Stopped -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +   }
> > +
> > +Querying capabilities
> > +=====================
> > +
> > +1. To enumerate the set of coded formats supported by the encoder, the
> > +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> > +
> > +   * The full set of supported formats will be returned, regardless of the
> > +     format set on ``OUTPUT``.
> > +
> > +2. To enumerate the set of supported raw formats, the client may call
> > +   :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> > +
> > +   * Only the formats supported for the format currently active on ``CAPTURE``
> > +     will be returned.
> > +
> > +   * In order to enumerate raw formats supported by a given coded format,
> > +     the client must first set that coded format on ``CAPTURE`` and then
> > +     enumerate the formats on ``OUTPUT``.
> > +
> > +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> > +   resolutions for a given format, passing desired pixel format in
> > +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> > +     format will include all possible coded resolutions supported by the
> > +     encoder for given coded pixel format.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> > +     will include all possible frame buffer resolutions supported by the
> > +     encoder for given raw pixel format and coded format currently set on
> > +     ``CAPTURE``.
> > +
> > +4. Supported profiles and levels for given format, if applicable, may be
> 
> format -> the coded format currently set on ``CAPTURE``
> 
> > +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> > +
> > +5. Any additional encoder capabilities may be discovered by querying
> > +   their respective controls.
> > +
> > +Initialization
> > +==============
> > +
> > +1. **Optional.** Enumerate supported formats and resolutions. See
> > +   `Querying capabilities` above.
> 
> Can be dropped IMHO.
> 
> > +
> > +2. Set a coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +     ``pixelformat``
> > +         the coded format to be produced
> > +
> > +     ``sizeimage``
> > +         desired size of ``CAPTURE`` buffers; the encoder may adjust it to
> > +         match hardware requirements
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``sizeimage``
> > +         adjusted size of ``CAPTURE`` buffers
> > +
> > +   .. warning::
> > +
> > +      Changing the ``CAPTURE`` format may change the currently set ``OUTPUT``
> > +      format. The encoder will derive a new ``OUTPUT`` format from the
> > +      ``CAPTURE`` format being set, including resolution, colorimetry
> > +      parameters, etc. If the client needs a specific ``OUTPUT`` format, it
> > +      must adjust it afterwards.
> > +
> > +3. **Optional.** Enumerate supported ``OUTPUT`` formats (raw formats for
> > +   source) for the selected coded format via :c:func:`VIDIOC_ENUM_FMT`.
> 
> Does this return the same set of formats as in the 'Querying Capabilities' phase?
> 
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``pixelformat``
> > +         raw format supported for the coded format currently selected on
> > +         the ``OUTPUT`` queue.
> 
> OUTPUT -> CAPTURE
> 
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +4. Set the raw source format on the ``OUTPUT`` queue via
> > +   :c:func:`VIDIOC_S_FMT`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     ``pixelformat``
> > +         raw format of the source
> > +
> > +     ``width``, ``height``
> > +         source resolution
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``width``, ``height``
> > +         may be adjusted by encoder to match alignment requirements, as
> > +         required by the currently selected formats
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * Setting the source resolution will reset the selection rectangles to their
> > +     default values, based on the new resolution, as described in the step 5
> > +     below.
> > +
> > +5. **Optional.** Set the visible resolution for the stream metadata via
> > +   :c:func:`VIDIOC_S_SELECTION` on the ``OUTPUT`` queue.
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     ``target``
> > +         set to ``V4L2_SEL_TGT_CROP``
> > +
> > +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +         visible rectangle; this must fit within the `V4L2_SEL_TGT_CROP_BOUNDS`
> > +         rectangle and may be subject to adjustment to match codec and
> > +         hardware constraints
> > +
> > +   * **Return fields:**
> > +
> > +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +         visible rectangle adjusted by the encoder
> > +
> > +   * The following selection targets are supported on ``OUTPUT``:
> > +
> > +     ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +         equal to the full source frame, matching the active ``OUTPUT``
> > +         format
> > +
> > +     ``V4L2_SEL_TGT_CROP_DEFAULT``
> > +         equal to ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +
> > +     ``V4L2_SEL_TGT_CROP``
> > +         rectangle within the source buffer to be encoded into the
> > +         ``CAPTURE`` stream; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``
> 
> Since this defaults to the CROP_DEFAULT rectangle this means that if you have
> a 16x16 macroblock size and you want to encode 1080p, you will always have to
> explicitly set the CROP rectangle to 1920x1080, right? Since the default will
> be 1088 instead of 1080.
> 
> It is probably wise to explicitly mention this.
> 
> > +
> > +     ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> > +         maximum rectangle within the coded resolution, which the cropped
> > +         source frame can be output into; if the hardware does not support
> 
> output -> composed
> 
> > +         composition or scaling, then this is always equal to the rectangle of
> > +         width and height matching ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> > +
> > +     ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> > +         equal to a rectangle of width and height matching
> > +         ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> > +
> > +     ``V4L2_SEL_TGT_COMPOSE``
> > +         rectangle within the coded frame, which the cropped source frame
> > +         is to be output into; defaults to
> 
> output -> composed
> 
> > +         ``V4L2_SEL_TGT_COMPOSE_DEFAULT``; read-only on hardware without
> > +         additional compose/scaling capabilities; resulting stream will
> > +         have this rectangle encoded as the visible rectangle in its
> > +         metadata
> > +
> > +   .. warning::
> > +
> > +      The encoder may adjust the crop/compose rectangles to the nearest
> > +      supported ones to meet codec and hardware requirements. The client needs
> > +      to check the adjusted rectangle returned by :c:func:`VIDIOC_S_SELECTION`.
> > +
> > +6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
> > +   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
> > +
> > +   * **Required fields:**
> > +
> > +     ``count``
> > +         requested number of buffers to allocate; greater than zero
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
> > +         ``CAPTURE``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``count``
> > +          actual number of buffers allocated
> > +
> > +   .. warning::
> > +
> > +      The actual number of allocated buffers may differ from the ``count``
> > +      given. The client must check the updated value of ``count`` after the
> > +      call returns.
> > +
> > +   .. note::
> > +
> > +      To allocate more than the minimum number of buffers (for pipeline depth),
> > +      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
> > +      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
> > +      minimum number of buffers required by the encoder/format, and pass the
> > +      obtained value plus the number of additional buffers needed in the
> > +      ``count`` field to :c:func:`VIDIOC_REQBUFS`.
> 
> Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?

We do account for it in GStreamer (the capture/output handling is
generic), but I don't know if it's being used anywhere. 

> 
> V4L2_CID_MIN_BUFFERS_FOR_OUTPUT can make sense depending on GOP size etc.

Not really the GOP size. In video conference we often run the encoder
with an open GOP and do key frame request on demand. Mostly the DPB
size. DPB is specific term use in certain CODEC, but they nearly all
keep some backlog, except for key frame only codecs (jpeg, png, etc.).

> 
> > +
> > +   Alternatively, :c:func:`VIDIOC_CREATE_BUFS` can be used to have more
> > +   control over buffer allocation.
> > +
> > +   * **Required fields:**
> > +
> > +     ``count``
> > +         requested number of buffers to allocate; greater than zero
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``count``
> > +         adjusted to the number of allocated buffers
> > +
> > +7. Begin streaming on both ``OUTPUT`` and ``CAPTURE`` queues via
> > +   :c:func:`VIDIOC_STREAMON`. This may be performed in any order. The actual
> > +   encoding process starts when both queues start streaming.
> > +
> > +.. note::
> > +
> > +   If the client stops the ``CAPTURE`` queue during the encode process and then
> > +   restarts it again, the encoder will begin generating a stream independent
> > +   from the stream generated before the stop. The exact constraints depend
> > +   on the coded format, but may include the following implications:
> > +
> > +   * encoded frames produced after the restart must not reference any
> > +     frames produced before the stop, e.g. no long term references for
> > +     H.264,
> > +
> > +   * any headers that must be included in a standalone stream must be
> > +     produced again, e.g. SPS and PPS for H.264.
> > +
> > +Encoding
> > +========
> > +
> > +This state is reached after the `Initialization` sequence finishes succesfully.
> 
> successfully
> 
> > +In this state, client queues and dequeues buffers to both queues via
> 
> client -> the client
> 
> > +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> > +semantics.
> > +
> > +The contents of encoded ``CAPTURE`` buffers depend on the active coded pixel
> 
> contents ... depend -> content ... depends
> 
> > +format and may be affected by codec-specific extended controls, as stated
> > +in the documentation of each format.
> > +
> > +Both queues operate independently, following standard behavior of V4L2 buffer
> > +queues and memory-to-memory devices. In addition, the order of encoded frames
> > +dequeued from the ``CAPTURE`` queue may differ from the order of queuing raw
> > +frames to the ``OUTPUT`` queue, due to properties of the selected coded format,
> > +e.g. frame reordering.
> > +
> > +The client must not assume any direct relationship between ``CAPTURE`` and
> > +``OUTPUT`` buffers and any specific timing of buffers becoming
> > +available to dequeue. Specifically,
> > +
> > +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced on
> > +  ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
> > +  frame that preceded it in display, but succeeded it in the decode order),
> > +
> > +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> > +  ``CAPTURE`` later into encode process, and/or after processing further
> > +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> > +  reordering is used,
> > +
> > +* buffers may become available on the ``CAPTURE`` queue without additional
> > +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> > +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> > +  available at later time, due to specifics of the decoding process,
> > +
> > +* buffers queued to ``OUTPUT`` may not become available to dequeue instantly
> > +  after being encoded into a corresponding ``CATPURE`` buffer, e.g. if the
> > +  encoder needs to use the frame as a reference for encoding further frames.
> > +
> > +.. note::
> > +
> > +   To allow matching encoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> > +   originated from, the client can set the ``timestamp`` field of the
> > +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> > +   ``CAPTURE`` buffer(s), which resulted from encoding that ``OUTPUT`` buffer
> > +   will have their ``timestamp`` field set to the same value when dequeued.
> > +
> > +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
> > +   one ``CAPTURE`` buffer, the following cases are defined:
> > +
> > +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> > +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> > +
> > +   * the encoding order differs from the presentation order (i.e. the
> > +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> > +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> > +     and thus monotonicity of the timestamps cannot be guaranteed.
> > +
> > +.. note::
> > +
> > +   To let the client distinguish between frame types (keyframes, intermediate
> > +   frames; the exact list of types depends on the coded format), the
> > +   ``CAPTURE`` buffers will have corresponding flag bits set in their
> > +   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
> > +   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
> > +   and their meanings.
> 
> Is this required? (I think it should, but it isn't the case today).

Most CODEC do provide this metadata, if it's absent, placing the stream
into a container may require more parsing.

> 
> Is the current set of buffer flags (Key/B/P frame) sufficient for the current
> set of codecs?

Doe it matter ? It can be extended when new codec get added. There is a
lot things in AV1 as an example, but we can add these when HW and
drivers starts shipping.

> 
> > +
> > +Encoding parameter changes
> > +==========================
> > +
> > +The client is allowed to use :c:func:`VIDIOC_S_CTRL` to change encoder
> > +parameters at any time. The availability of parameters is encoder-specific
> > +and the client must query the encoder to find the set of available controls.
> > +
> > +The ability to change each parameter during encoding is encoder-specific, as per
> > +the standard semantics of the V4L2 control interface. The client may attempt
> > +setting a control of its interest during encoding and if the operation fails
> 
> I'd simplify this:
> 
> The client may attempt to set a control during encoding...
> 
> > +with the -EBUSY error code, the ``CAPTURE`` queue needs to be stopped for the
> > +configuration change to be allowed (following the `Drain` sequence will be
> > +needed to avoid losing the already queued/encoded frames).
> 
> Rephrase:
> 
> ...to be allowed. To do this follow the `Drain` sequence to avoid losing the
> already queued/encoded frames.
> 
> > +
> > +The timing of parameter updates is encoder-specific, as per the standard
> > +semantics of the V4L2 control interface. If the client needs to apply the
> > +parameters exactly at specific frame, using the Request API should be
> 
> Change this to a reference to the Request API section.
> 
> > +considered, if supported by the encoder.
> > +
> > +Drain
> > +=====
> > +
> > +To ensure that all the queued ``OUTPUT`` buffers have been processed and the
> > +related ``CAPTURE`` buffers output to the client, the client must follow the
> 
> output -> are output
> 
> or perhaps better (up to you): are given
> 
> > +drain sequence described below. After the drain sequence ends, the client has
> > +received all encoded frames for all ``OUTPUT`` buffers queued before the
> > +sequence was started.
> > +
> > +1. Begin the drain sequence by issuing :c:func:`VIDIOC_ENCODER_CMD`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``cmd``
> > +         set to ``V4L2_ENC_CMD_STOP``
> > +
> > +     ``flags``
> > +         set to 0
> > +
> > +     ``pts``
> > +         set to 0
> > +
> > +   .. warning::
> > +
> > +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
> > +   are streaming. For compatibility reasons, the call to
> > +   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
> > +   streaming, but at the same time it will not initiate the `Drain` sequence
> > +   and so the steps described below would not be applicable.
> > +
> > +2. Any ``OUTPUT`` buffers queued by the client before the
> > +   :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as
> > +   normal. The client must continue to handle both queues independently,
> > +   similarly to normal encode operation. This includes,
> > +
> > +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
> > +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
> > +
> > +     .. warning::
> > +
> > +        The last buffer may be empty (with :c:type:`v4l2_buffer`
> > +        ``bytesused`` = 0) and in such case it must be ignored by the client,
> 
> such -> that
> 
> Check the previous patch as well if you used the phrase 'such case' and replace
> it with 'that case'.
> 
> > +        as it does not contain an encoded frame.
> > +
> > +     .. note::
> > +
> > +        Any attempt to dequeue more buffers beyond the buffer marked with
> > +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> > +        :c:func:`VIDIOC_DQBUF`.
> > +
> > +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
> > +     before the ``V4L2_ENC_CMD_STOP`` command are dequeued,
> > +
> > +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribes to it.
> > +
> > +   .. note::
> > +
> > +      For backwards compatibility, the encoder will signal a ``V4L2_EVENT_EOS``
> > +      event when the last the last frame has been decoded and all frames are
> 
> the last the last -> the last
> 
> > +      ready to be dequeued. It is a deprecated behavior and the client must not
> 
> is a -> is
> 
> > +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> > +      instead.
> 
> Question: should new codec drivers still implement the EOS event?

I'm been asking around, but I think here is a good place. Do we really
need the FLAG_LAST in userspace ? Userspace can also wait for the first
EPIPE return from DQBUF.

> 
> > +
> > +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> > +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> > +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> > +   issues any of the following operations:
> > +
> > +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
> 
> Perhaps mention that this does not reset the encoder? It's not immediately clear
> when reading this.

Which drivers supports this ? I believe I tried with Exynos in the
past, and that didn't work. How do we know if a driver supports this or
not. Do we make it mandatory ? When it's not supported, it basically
mean userspace need to cache and resend the header in userspace, and
also need to skip to some sync point.

> 
> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``CAPTURE`` queue - the encoder will be reset (see the `Reset` sequence)
> > +     and then resume encoding,
> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``OUTPUT`` queue - the encoder will resume operation normally, however any
> > +     source frames queued to the ``OUTPUT`` queue between ``V4L2_ENC_CMD_STOP``
> > +     and :c:func:`VIDIOC_STREAMOFF` will be discarded.
> > +
> > +.. note::
> > +
> > +   Once the drain sequence is initiated, the client needs to drive it to
> > +   completion, as described by the steps above, unless it aborts the process by
> > +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
> > +   queues.  The client is not allowed to issue ``V4L2_ENC_CMD_START`` or
> > +   ``V4L2_ENC_CMD_STOP`` again while the drain sequence is in progress and they
> > +   will fail with -EBUSY error code if attempted.
> > +
> > +   Although mandatory, the availability of encoder commands may be queried
> > +   using :c:func:`VIDIOC_TRY_ENCODER_CMD`.
> > +
> > +Reset
> > +=====
> > +
> > +The client may want to request the encoder to reinitialize the encoding, so
> > +that the following stream data becomes independent from the stream data
> > +generated before. Depending on the coded format, that may imply that,
> 
> that, -> that:
> 
> > +
> > +* encoded frames produced after the restart must not reference any frames
> > +  produced before the stop, e.g. no long term references for H.264,
> > +
> > +* any headers that must be included in a standalone stream must be produced
> > +  again, e.g. SPS and PPS for H.264.
> > +
> > +This can be achieved by performing the reset sequence.
> > +
> > +1. Perform the `Drain` sequence to ensure all the in-flight encoding finishes
> > +   and respective buffers are dequeued.
> > +
> > +2. Stop streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMOFF`. This
> > +   will return all currently queued ``CAPTURE`` buffers to the client, without
> > +   valid frame data.
> > +
> > +3. Start streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMON` and
> > +   continue with regular encoding sequence. The encoded frames produced into
> > +   ``CAPTURE`` buffers from now on will contain a standalone stream that can be
> > +   decoded without the need for frames encoded before the reset sequence,
> > +   starting at the first ``OUTPUT`` buffer queued after issuing the
> > +   `V4L2_ENC_CMD_STOP` of the `Drain` sequence.
> > +
> > +This sequence may be also used to change encoding parameters for encoders
> > +without the ability to change the parameters on the fly.
> > +
> > +Commit points
> > +=============
> > +
> > +Setting formats and allocating buffers triggers changes in the behavior of the
> > +encoder.
> > +
> > +1. Setting the format on the ``CAPTURE`` queue may change the set of formats
> > +   supported/advertised on the ``OUTPUT`` queue. In particular, it also means
> > +   that the ``OUTPUT`` format may be reset and the client must not rely on the
> > +   previously set format being preserved.
> > +
> > +2. Enumerating formats on the ``OUTPUT`` queue always returns only formats
> > +   supported for the current ``CAPTURE`` format.
> > +
> > +3. Setting the format on the ``OUTPUT`` queue does not change the list of
> > +   formats available on the ``CAPTURE`` queue. An attempt to set the ``OUTPUT``
> > +   format that is not supported for the currently selected ``CAPTURE`` format
> > +   will result in the encoder adjusting the requested ``OUTPUT`` format to a
> > +   supported one.
> > +
> > +4. Enumerating formats on the ``CAPTURE`` queue always returns the full set of
> > +   supported coded formats, irrespectively of the current ``OUTPUT`` format.
> > +
> > +5. While buffers are allocated on the ``CAPTURE`` queue, the client must not
> > +   change the format on the queue. Drivers will return the -EBUSY error code
> > +   for any such format change attempt.
> > +
> > +To summarize, setting formats and allocation must always start with the
> > +``CAPTURE`` queue and the ``CAPTURE`` queue is the master that governs the
> > +set of supported formats for the ``OUTPUT`` queue.
> > diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
> > index 12d43fe711cf..1822c66c2154 100644
> > --- a/Documentation/media/uapi/v4l/devices.rst
> > +++ b/Documentation/media/uapi/v4l/devices.rst
> > @@ -16,6 +16,7 @@ Interfaces
> >      dev-osd
> >      dev-codec
> >      dev-decoder
> > +    dev-encoder
> >      dev-effect
> >      dev-raw-vbi
> >      dev-sliced-vbi
> > diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > index ca5f2270a829..085089cd9577 100644
> > --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > @@ -37,6 +37,11 @@ Single-planar format structure
> >  	inside the stream, when fed to a stateful mem2mem decoder, the fields
> >  	may be zero to rely on the decoder to detect the right values. For more
> >  	details see :ref:`decoder` and format descriptions.
> > +
> > +	For compressed formats on the CAPTURE side of a stateful mem2mem
> > +	encoder, the fields must be zero, since the coded size is expected to
> > +	be calculated internally by the encoder itself, based on the OUTPUT
> > +	side. For more details see :ref:`encoder` and format descriptions.
> 
> The encoder document doesn't actually mention this. I think it should, though.
> 
> I'm a bit uncertain about this: the expected resolution might impact the
> sizeimage value: i.e. encoding 640x480 requires much less memory then
> encoding 4k video. If this is required to be 0x0, then the driver has to
> fill in a worst-case sizeimage value. It might make more sense to say that
> if a non-zero resolution is given, then the driver will attempt to
> calculate a sensible sizeimage value.
> 
> >      * - __u32
> >        - ``pixelformat``
> >        - The pixel format or type of compression, set by the application.
> > diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
> > index 65dc096199ad..2ef6693b9499 100644
> > --- a/Documentation/media/uapi/v4l/v4l2.rst
> > +++ b/Documentation/media/uapi/v4l/v4l2.rst
> > @@ -56,6 +56,7 @@ Authors, in alphabetical order:
> >  - Figa, Tomasz <tfiga@chromium.org>
> >  
> >    - Documented the memory-to-memory decoder interface.
> > +  - Documented the memory-to-memory encoder interface.
> >  
> >  - H Schimek, Michael <mschimek@gmx.at>
> >  
> > @@ -68,6 +69,7 @@ Authors, in alphabetical order:
> >  - Osciak, Pawel <posciak@chromium.org>
> >  
> >    - Documented the memory-to-memory decoder interface.
> > +  - Documented the memory-to-memory encoder interface.
> >  
> >  - Osciak, Pawel <pawel@osciak.com>
> >  
> > diff --git a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> > index 5ae8c933b1b9..d571c53e761a 100644
> > --- a/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> > +++ b/Documentation/media/uapi/v4l/vidioc-encoder-cmd.rst
> > @@ -50,19 +50,23 @@ currently only used by the STOP command and contains one bit: If the
> >  until the end of the current *Group Of Pictures*, otherwise it will stop
> >  immediately.
> >  
> > -A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
> > -call sends an implicit START command to the encoder if it has not been
> > -started yet. After a STOP command, :ref:`read() <func-read>` calls will read
> > +After a STOP command, :ref:`read() <func-read>` calls will read
> >  the remaining data buffered by the driver. When the buffer is empty,
> >  :ref:`read() <func-read>` will return zero and the next :ref:`read() <func-read>`
> >  call will restart the encoder.
> >  
> > +A :ref:`read() <func-read>` or :ref:`VIDIOC_STREAMON <VIDIOC_STREAMON>`
> > +call sends an implicit START command to the encoder if it has not been
> > +started yet. Applies to both queues of mem2mem encoders.
> > +
> >  A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
> >  call of a streaming file descriptor sends an implicit immediate STOP to
> > -the encoder, and all buffered data is discarded.
> > +the encoder, and all buffered data is discarded. Applies to both queues of
> > +mem2mem encoders.
> >  
> >  These ioctls are optional, not all drivers may support them. They were
> > -introduced in Linux 2.6.21.
> > +introduced in Linux 2.6.21. They are, however, mandatory for stateful mem2mem
> > +encoders (as further documented in :ref:`encoder`).
> >  
> >  
> >  .. tabularcolumns:: |p{4.4cm}|p{4.4cm}|p{8.7cm}|
> > @@ -107,16 +111,20 @@ introduced in Linux 2.6.21.
> >        - Stop the encoder. When the ``V4L2_ENC_CMD_STOP_AT_GOP_END`` flag
> >  	is set, encoding will continue until the end of the current *Group
> >  	Of Pictures*, otherwise encoding will stop immediately. When the
> > -	encoder is already stopped, this command does nothing. mem2mem
> > -	encoders will send a ``V4L2_EVENT_EOS`` event when the last frame
> > -	has been encoded and all frames are ready to be dequeued and will
> > -	set the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of
> > -	the capture queue to indicate there will be no new buffers
> > -	produced to dequeue. This buffer may be empty, indicated by the
> > -	driver setting the ``bytesused`` field to 0. Once the
> > -	``V4L2_BUF_FLAG_LAST`` flag was set, the
> > -	:ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
> > -	but return an ``EPIPE`` error code.
> > +	encoder is already stopped, this command does nothing.
> > +
> > +	A stateful mem2mem encoder will proceed with encoding the source
> > +	buffers pending before the command is issued and then stop producing
> > +	new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
> > +	has been encoded and all frames are ready to be dequeued and will set
> > +	the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
> > +	capture queue to indicate there will be no new buffers produced to
> > +	dequeue. This buffer may be empty, indicated by the driver setting the
> > +	``bytesused`` field to 0. Once the buffer with the
> > +	``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
> > +	<VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
> > +	error code. No flags or other arguments are accepted in case of mem2mem
> > +	encoders.  See :ref:`encoder` for more details.
> >      * - ``V4L2_ENC_CMD_PAUSE``
> >        - 2
> >        - Pause the encoder. When the encoder has not been started yet, the
> > 
> 
> Regards,
> 
> 	Hans


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-11-15 14:34   ` Hans Verkuil
@ 2018-11-17  4:31     ` Nicolas Dufresne
  2018-11-17 11:43       ` Hans Verkuil
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2018-11-17  4:31 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le jeudi 15 novembre 2018 à 15:34 +0100, Hans Verkuil a écrit :
> On 10/22/2018 04:48 PM, Tomasz Figa wrote:
> > Due to complexity of the video decoding process, the V4L2 drivers of
> > stateful decoder hardware require specific sequences of V4L2 API calls
> > to be followed. These include capability enumeration, initialization,
> > decoding, seek, pause, dynamic resolution change, drain and end of
> > stream.
> > 
> > Specifics of the above have been discussed during Media Workshops at
> > LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> > Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> > originated at those events was later implemented by the drivers we already
> > have merged in mainline, such as s5p-mfc or coda.
> > 
> > The only thing missing was the real specification included as a part of
> > Linux Media documentation. Fix it now and document the decoder part of
> > the Codec API.
> > 
> > Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> > ---
> >  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
> >  Documentation/media/uapi/v4l/devices.rst      |    1 +
> >  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
> >  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
> >  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
> >  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
> >  6 files changed, 1137 insertions(+), 15 deletions(-)
> >  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
> > 
> > diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
> > new file mode 100644
> > index 000000000000..09c7a6621b8e
> > --- /dev/null
> > +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
> > @@ -0,0 +1,1082 @@
> > +.. -*- coding: utf-8; mode: rst -*-
> > +
> > +.. _decoder:
> > +
> > +*************************************************
> > +Memory-to-memory Stateful Video Decoder Interface
> > +*************************************************
> > +
> > +A stateful video decoder takes complete chunks of the bitstream (e.g. Annex-B
> > +H.264/HEVC stream, raw VP8/9 stream) and decodes them into raw video frames in
> > +display order. The decoder is expected not to require any additional information
> > +from the client to process these buffers.
> > +
> > +Performing software parsing, processing etc. of the stream in the driver in
> > +order to support this interface is strongly discouraged. In case such
> > +operations are needed, use of the Stateless Video Decoder Interface (in
> > +development) is strongly advised.
> > +
> > +Conventions and notation used in this document
> > +==============================================
> > +
> > +1. The general V4L2 API rules apply if not specified in this document
> > +   otherwise.
> > +
> > +2. The meaning of words “must”, “may”, “should”, etc. is as per RFC
> > +   2119.
> > +
> > +3. All steps not marked “optional” are required.
> > +
> > +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
> > +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
> > +   unless specified otherwise.
> > +
> > +5. Single-plane API (see spec) and applicable structures may be used
> > +   interchangeably with Multi-plane API, unless specified otherwise,
> > +   depending on decoder capabilities and following the general V4L2
> > +   guidelines.
> > +
> > +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> > +   [0..2]: i = 0, 1, 2.
> > +
> > +7. Given an ``OUTPUT`` buffer A, A’ represents a buffer on the ``CAPTURE``
> > +   queue containing data (decoded frame/stream) that resulted from processing
> > +   buffer A.
> > +
> > +.. _decoder-glossary:
> > +
> > +Glossary
> > +========
> > +
> > +CAPTURE
> > +   the destination buffer queue; for decoder, the queue of buffers containing
> > +   decoded frames; for encoder, the queue of buffers containing encoded
> > +   bitstream; ``V4L2_BUF_TYPE_VIDEO_CAPTURE```` or
> > +   ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``; data are captured from the hardware
> > +   into ``CAPTURE`` buffers
> > +
> > +client
> > +   application client communicating with the decoder or encoder implementing
> > +   this interface
> > +
> > +coded format
> > +   encoded/compressed video bitstream format (e.g. H.264, VP8, etc.); see
> > +   also: raw format
> > +
> > +coded height
> > +   height for given coded resolution
> > +
> > +coded resolution
> > +   stream resolution in pixels aligned to codec and hardware requirements;
> > +   typically visible resolution rounded up to full macroblocks;
> > +   see also: visible resolution
> > +
> > +coded width
> > +   width for given coded resolution
> > +
> > +decode order
> > +   the order in which frames are decoded; may differ from display order if the
> > +   coded format includes a feature of frame reordering; for decoders,
> > +   ``OUTPUT`` buffers must be queued by the client in decode order; for
> > +   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order
> > +
> > +destination
> > +   data resulting from the decode process; ``CAPTURE``
> > +
> > +display order
> > +   the order in which frames must be displayed; for encoders, ``OUTPUT``
> > +   buffers must be queued by the client in display order; for decoders,
> > +   ``CAPTURE`` buffers must be returned by the decoder in display order
> > +
> > +DPB
> > +   Decoded Picture Buffer; an H.264 term for a buffer that stores a decoded
> > +   raw frame available for reference in further decoding steps.
> > +
> > +EOS
> > +   end of stream
> > +
> > +IDR
> > +   Instantaneous Decoder Refresh; a type of a keyframe in H.264-encoded stream,
> > +   which clears the list of earlier reference frames (DPBs)
> > +
> > +keyframe
> > +   an encoded frame that does not reference frames decoded earlier, i.e.
> > +   can be decoded fully on its own.
> > +
> > +macroblock
> > +   a processing unit in image and video compression formats based on linear
> > +   block transforms (e.g. H.264, VP8, VP9); codec-specific, but for most of
> > +   popular codecs the size is 16x16 samples (pixels)
> > +
> > +OUTPUT
> > +   the source buffer queue; for decoders, the queue of buffers containing
> > +   encoded bitstream; for encoders, the queue of buffers containing raw frames;
> > +   ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or ``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE``; the
> > +   hardware is fed with data from ``OUTPUT`` buffers
> > +
> > +PPS
> > +   Picture Parameter Set; a type of metadata entity in H.264 bitstream
> > +
> > +raw format
> > +   uncompressed format containing raw pixel data (e.g. YUV, RGB formats)
> > +
> > +resume point
> > +   a point in the bitstream from which decoding may start/continue, without
> > +   any previous state/data present, e.g.: a keyframe (VP8/VP9) or
> > +   SPS/PPS/IDR sequence (H.264); a resume point is required to start decode
> > +   of a new stream, or to resume decoding after a seek
> > +
> > +source
> > +   data fed to the decoder or encoder; ``OUTPUT``
> > +
> > +source height
> > +   height in pixels for given source resolution; relevant to encoders only
> > +
> > +source resolution
> > +   resolution in pixels of source frames being source to the encoder and
> > +   subject to further cropping to the bounds of visible resolution; relevant to
> > +   encoders only
> > +
> > +source width
> > +   width in pixels for given source resolution; relevant to encoders only
> > +
> > +SPS
> > +   Sequence Parameter Set; a type of metadata entity in H.264 bitstream
> > +
> > +stream metadata
> > +   additional (non-visual) information contained inside encoded bitstream;
> > +   for example: coded resolution, visible resolution, codec profile
> > +
> > +visible height
> > +   height for given visible resolution; display height
> > +
> > +visible resolution
> > +   stream resolution of the visible picture, in pixels, to be used for
> > +   display purposes; must be smaller or equal to coded resolution;
> > +   display resolution
> > +
> > +visible width
> > +   width for given visible resolution; display width
> > +
> > +State machine
> > +=============
> > +
> > +.. kernel-render:: DOT
> > +   :alt: DOT digraph of decoder state machine
> > +   :caption: Decoder state machine
> > +
> > +   digraph decoder_state_machine {
> > +       node [shape = doublecircle, label="Decoding"] Decoding;
> > +
> > +       node [shape = circle, label="Initialization"] Initialization;
> > +       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
> > +       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
> > +       node [shape = circle, label="Stopped"] Stopped;
> > +       node [shape = circle, label="Drain"] Drain;
> > +       node [shape = circle, label="Seek"] Seek;
> > +       node [shape = circle, label="End of stream"] EoS;
> > +
> > +       node [shape = point]; qi
> > +       qi -> Initialization [ label = "open()" ];
> > +
> > +       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> > +
> > +       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
> > +
> > +       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
> > +       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> > +       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
> > +       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Decoding -> Decoding;
> > +
> > +       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
> > +       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +
> > +       EoS -> Drain [ label = "Implicit\ndrain" ];
> > +
> > +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +
> > +       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
> > +       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> > +
> > +       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
> > +       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +   }
> > +
> > +Querying capabilities
> > +=====================
> > +
> > +1. To enumerate the set of coded formats supported by the decoder, the
> > +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> > +
> > +   * The full set of supported formats will be returned, regardless of the
> > +     format set on ``CAPTURE``.
> > +
> > +2. To enumerate the set of supported raw formats, the client may call
> > +   :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> > +
> > +   * Only the formats supported for the format currently active on ``OUTPUT``
> > +     will be returned.
> > +
> > +   * In order to enumerate raw formats supported by a given coded format,
> > +     the client must first set that coded format on ``OUTPUT`` and then
> > +     enumerate formats on ``CAPTURE``.
> > +
> > +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> > +   resolutions for a given format, passing desired pixel format in
> > +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> > +     formats will include all possible coded resolutions supported by the
> > +     decoder for given coded pixel format.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> > +     will include all possible frame buffer resolutions supported by the
> > +     decoder for given raw pixel format and the coded format currently set on
> > +     ``OUTPUT``.
> > +
> > +4. Supported profiles and levels for given format, if applicable, may be
> > +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> > +
> > +Initialization
> > +==============
> > +
> > +1. **Optional.** Enumerate supported ``OUTPUT`` formats and resolutions. See
> > +   `Querying capabilities` above.
> > +
> > +2. Set the coded format on ``OUTPUT`` via :c:func:`VIDIOC_S_FMT`
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     ``pixelformat``
> > +         a coded pixel format
> > +
> > +     ``width``, ``height``
> > +         required only if cannot be parsed from the stream for the given
> > +         coded format; optional otherwise - set to zero to ignore
> > +
> > +     ``sizeimage``
> > +         desired size of ``OUTPUT`` buffers; the decoder may adjust it to
> > +         match hardware requirements
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``sizeimage``
> > +         adjusted size of ``CAPTURE`` buffers
> > +
> > +   * If width and height are set to non-zero values, the ``CAPTURE`` format
> > +     will be updated with an appropriate frame buffer resolution instantly.
> > +     However, for coded formats that include stream resolution information,
> > +     after the decoder is done parsing the information from the stream, it will
> > +     update the ``CAPTURE`` format with new values and signal a source change
> > +     event.
> > +
> > +   .. warning::
> > +
> > +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
> > +      format. The decoder will derive a new ``CAPTURE`` format from the
> > +      ``OUTPUT`` format being set, including resolution, colorimetry
> > +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
> > +      must adjust it afterwards.
> > +
> > +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
> > +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
> > +    use more buffers than the minimum required by hardware/format.
> > +
> > +    * **Required fields:**
> > +
> > +      ``id``
> > +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> > +
> > +    * **Return fields:**
> > +
> > +      ``value``
> > +          the minimum number of ``OUTPUT`` buffers required for the currently
> > +          set format
> > +
> > +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
> > +    ``OUTPUT``.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          the actual number of buffers allocated
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +    .. note::
> > +
> > +       To allocate more than the minimum number of buffers (for pipeline
> > +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> > +       control to get the minimum number of buffers required by the
> > +       decoder/format, and pass the obtained value plus the number of
> > +       additional buffers needed in the ``count`` field to
> > +       :c:func:`VIDIOC_REQBUFS`.
> > +
> > +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
> > +    used to have more control over buffer allocation.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +      ``format``
> > +          follows standard semantics
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          adjusted to the number of allocated buffers
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
> > +
> > +6.  **This step only applies to coded formats that contain resolution information
> > +    in the stream.**
> 
> As far as I know all codecs have resolution/metadata in the stream.

Was this comment about what we currently support in V4L2 interface ? In
real life, there is CODEC that works only with out-of-band codec data.
A well known one is AVC1 (and HVC1). In this mode, the AVC H264 does
not have start code, and the headers are not allowed in the bitstream
itself. This format is much more efficient to process then AVC Annex B,
since you can just read the NAL size and jump over instead of scanning
for start code. This is the format used in the very popular ISOMP4
container.

The other sign that these codecs do exist is the recurence of the
notion of codec data in pretty much every codec abstraction there is
(ffmpeg, GStreamer, Android Media Codec)
> 
> As discussed in the "[PATCH vicodec v4 0/3] Add support to more pixel formats in
> vicodec" thread, it is easiest to assume that there is always metadata.
> 
> Perhaps there should be a single mention somewhere that such codecs are not
> supported at the moment, but to be frank how can you decode a stream without
> it containing such essential information? You are much more likely to implement
> such a codec as a stateless codec.

That is I believe a miss-interpretation of what a stateless codec is.
It's not because you have to set one blob of CODEC data after S_FMT on
a specific control that this CODEC becomes stateless. The fact is not
supported now is just become we didn't have come accross HW that
supports these.

FFMPEG offers a state full software codec, and you still have this
codec_data blob for many of the format. They also only supports AVC1,
they parser will always convert the stream to that, because it's just
more efficient format. Android Media Codec works similarly. What keeps
them stateful, is that you don't need to parse it, you don't even need
to know what these data contains. They are blobs placed in the
container that you pass as-is to the decoder.

> 
> So I would just drop this sentence here (and perhaps at other places in this
> document or the encoder document as well).
> 
> Regards,
> 
> 	Hans


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-17  4:18     ` Nicolas Dufresne
@ 2018-11-17 11:37       ` Hans Verkuil
  2018-11-18  1:34         ` Nicolas Dufresne
  2019-01-23 10:00         ` Tomasz Figa
  0 siblings, 2 replies; 41+ messages in thread
From: Hans Verkuil @ 2018-11-17 11:37 UTC (permalink / raw)
  To: Nicolas Dufresne, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 11/17/2018 05:18 AM, Nicolas Dufresne wrote:
> Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
>> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
>>> Due to complexity of the video encoding process, the V4L2 drivers of
>>> stateful encoder hardware require specific sequences of V4L2 API calls
>>> to be followed. These include capability enumeration, initialization,
>>> encoding, encode parameters change, drain and reset.
>>>
>>> Specifics of the above have been discussed during Media Workshops at
>>> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
>>> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
>>> originated at those events was later implemented by the drivers we already
>>> have merged in mainline, such as s5p-mfc or coda.
>>>
>>> The only thing missing was the real specification included as a part of
>>> Linux Media documentation. Fix it now and document the encoder part of
>>> the Codec API.
>>>
>>> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
>>> ---
>>>  Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
>>>  Documentation/media/uapi/v4l/devices.rst      |   1 +
>>>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
>>>  Documentation/media/uapi/v4l/v4l2.rst         |   2 +
>>>  .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
>>>  5 files changed, 610 insertions(+), 15 deletions(-)
>>>  create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst
>>>
>>> diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
>>> new file mode 100644
>>> index 000000000000..41139e5e48eb
>>> --- /dev/null
>>> +++ b/Documentation/media/uapi/v4l/dev-encoder.rst

<snip>

>>> +6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
>>> +   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``count``
>>> +         requested number of buffers to allocate; greater than zero
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
>>> +         ``CAPTURE``
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``count``
>>> +          actual number of buffers allocated
>>> +
>>> +   .. warning::
>>> +
>>> +      The actual number of allocated buffers may differ from the ``count``
>>> +      given. The client must check the updated value of ``count`` after the
>>> +      call returns.
>>> +
>>> +   .. note::
>>> +
>>> +      To allocate more than the minimum number of buffers (for pipeline depth),
>>> +      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
>>> +      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
>>> +      minimum number of buffers required by the encoder/format, and pass the
>>> +      obtained value plus the number of additional buffers needed in the
>>> +      ``count`` field to :c:func:`VIDIOC_REQBUFS`.
>>
>> Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
> 
> We do account for it in GStreamer (the capture/output handling is
> generic), but I don't know if it's being used anywhere. 

Do you use this value directly for REQBUFS, or do you use it as the minimum
value but in practice use more buffers?

> 
>>
>> V4L2_CID_MIN_BUFFERS_FOR_OUTPUT can make sense depending on GOP size etc.
> 
> Not really the GOP size. In video conference we often run the encoder
> with an open GOP and do key frame request on demand. Mostly the DPB
> size. DPB is specific term use in certain CODEC, but they nearly all
> keep some backlog, except for key frame only codecs (jpeg, png, etc.).
> 
>>

<snip>

>>> +.. note::
>>> +
>>> +   To let the client distinguish between frame types (keyframes, intermediate
>>> +   frames; the exact list of types depends on the coded format), the
>>> +   ``CAPTURE`` buffers will have corresponding flag bits set in their
>>> +   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
>>> +   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
>>> +   and their meanings.
>>
>> Is this required? (I think it should, but it isn't the case today).
> 
> Most CODEC do provide this metadata, if it's absent, placing the stream
> into a container may require more parsing.
> 
>>
>> Is the current set of buffer flags (Key/B/P frame) sufficient for the current
>> set of codecs?
> 
> Doe it matter ? It can be extended when new codec get added. There is a
> lot things in AV1 as an example, but we can add these when HW and
> drivers starts shipping.

I was just curious :-) It doesn't really matter, I agree.

<snip>

>>> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
>>> +      instead.
>>
>> Question: should new codec drivers still implement the EOS event?
> 
> I'm been asking around, but I think here is a good place. Do we really
> need the FLAG_LAST in userspace ? Userspace can also wait for the first
> EPIPE return from DQBUF.

I'm interested in hearing Tomasz' opinion. This flag is used already, so there
definitely is a backwards compatibility issue here.

> 
>>
>>> +
>>> +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
>>> +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
>>> +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
>>> +   issues any of the following operations:
>>> +
>>> +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
>>
>> Perhaps mention that this does not reset the encoder? It's not immediately clear
>> when reading this.
> 
> Which drivers supports this ? I believe I tried with Exynos in the
> past, and that didn't work. How do we know if a driver supports this or
> not. Do we make it mandatory ? When it's not supported, it basically
> mean userspace need to cache and resend the header in userspace, and
> also need to skip to some sync point.

Once we agree on the spec, then the next step will be to add good compliance
checks and update drivers that fail the tests.

To check if the driver support this ioctl you can call VIDIOC_TRY_ENCODER_CMD
to see if the functionality is supported.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-11-17  4:31     ` Nicolas Dufresne
@ 2018-11-17 11:43       ` Hans Verkuil
  2018-11-18  1:25         ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2018-11-17 11:43 UTC (permalink / raw)
  To: Nicolas Dufresne, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 11/17/2018 05:31 AM, Nicolas Dufresne wrote:
> Le jeudi 15 novembre 2018 à 15:34 +0100, Hans Verkuil a écrit :
>> On 10/22/2018 04:48 PM, Tomasz Figa wrote:
>>> Due to complexity of the video decoding process, the V4L2 drivers of
>>> stateful decoder hardware require specific sequences of V4L2 API calls
>>> to be followed. These include capability enumeration, initialization,
>>> decoding, seek, pause, dynamic resolution change, drain and end of
>>> stream.
>>>
>>> Specifics of the above have been discussed during Media Workshops at
>>> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
>>> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
>>> originated at those events was later implemented by the drivers we already
>>> have merged in mainline, such as s5p-mfc or coda.
>>>
>>> The only thing missing was the real specification included as a part of
>>> Linux Media documentation. Fix it now and document the decoder part of
>>> the Codec API.
>>>
>>> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
>>> ---
>>>  Documentation/media/uapi/v4l/dev-decoder.rst  | 1082 +++++++++++++++++
>>>  Documentation/media/uapi/v4l/devices.rst      |    1 +
>>>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |    5 +
>>>  Documentation/media/uapi/v4l/v4l2.rst         |   10 +-
>>>  .../media/uapi/v4l/vidioc-decoder-cmd.rst     |   40 +-
>>>  Documentation/media/uapi/v4l/vidioc-g-fmt.rst |   14 +
>>>  6 files changed, 1137 insertions(+), 15 deletions(-)
>>>  create mode 100644 Documentation/media/uapi/v4l/dev-decoder.rst
>>>
>>> diff --git a/Documentation/media/uapi/v4l/dev-decoder.rst b/Documentation/media/uapi/v4l/dev-decoder.rst
>>> new file mode 100644
>>> index 000000000000..09c7a6621b8e
>>> --- /dev/null
>>> +++ b/Documentation/media/uapi/v4l/dev-decoder.rst
>>> @@ -0,0 +1,1082 @@
>>> +.. -*- coding: utf-8; mode: rst -*-
>>> +
>>> +.. _decoder:
>>> +
>>> +*************************************************
>>> +Memory-to-memory Stateful Video Decoder Interface
>>> +*************************************************
>>> +
>>> +A stateful video decoder takes complete chunks of the bitstream (e.g. Annex-B
>>> +H.264/HEVC stream, raw VP8/9 stream) and decodes them into raw video frames in
>>> +display order. The decoder is expected not to require any additional information
>>> +from the client to process these buffers.
>>> +
>>> +Performing software parsing, processing etc. of the stream in the driver in
>>> +order to support this interface is strongly discouraged. In case such
>>> +operations are needed, use of the Stateless Video Decoder Interface (in
>>> +development) is strongly advised.
>>> +
>>> +Conventions and notation used in this document
>>> +==============================================
>>> +
>>> +1. The general V4L2 API rules apply if not specified in this document
>>> +   otherwise.
>>> +
>>> +2. The meaning of words “must”, “may”, “should”, etc. is as per RFC
>>> +   2119.
>>> +
>>> +3. All steps not marked “optional” are required.
>>> +
>>> +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
>>> +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
>>> +   unless specified otherwise.
>>> +
>>> +5. Single-plane API (see spec) and applicable structures may be used
>>> +   interchangeably with Multi-plane API, unless specified otherwise,
>>> +   depending on decoder capabilities and following the general V4L2
>>> +   guidelines.
>>> +
>>> +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
>>> +   [0..2]: i = 0, 1, 2.
>>> +
>>> +7. Given an ``OUTPUT`` buffer A, A’ represents a buffer on the ``CAPTURE``
>>> +   queue containing data (decoded frame/stream) that resulted from processing
>>> +   buffer A.
>>> +
>>> +.. _decoder-glossary:
>>> +
>>> +Glossary
>>> +========
>>> +
>>> +CAPTURE
>>> +   the destination buffer queue; for decoder, the queue of buffers containing
>>> +   decoded frames; for encoder, the queue of buffers containing encoded
>>> +   bitstream; ``V4L2_BUF_TYPE_VIDEO_CAPTURE```` or
>>> +   ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``; data are captured from the hardware
>>> +   into ``CAPTURE`` buffers
>>> +
>>> +client
>>> +   application client communicating with the decoder or encoder implementing
>>> +   this interface
>>> +
>>> +coded format
>>> +   encoded/compressed video bitstream format (e.g. H.264, VP8, etc.); see
>>> +   also: raw format
>>> +
>>> +coded height
>>> +   height for given coded resolution
>>> +
>>> +coded resolution
>>> +   stream resolution in pixels aligned to codec and hardware requirements;
>>> +   typically visible resolution rounded up to full macroblocks;
>>> +   see also: visible resolution
>>> +
>>> +coded width
>>> +   width for given coded resolution
>>> +
>>> +decode order
>>> +   the order in which frames are decoded; may differ from display order if the
>>> +   coded format includes a feature of frame reordering; for decoders,
>>> +   ``OUTPUT`` buffers must be queued by the client in decode order; for
>>> +   encoders ``CAPTURE`` buffers must be returned by the encoder in decode order
>>> +
>>> +destination
>>> +   data resulting from the decode process; ``CAPTURE``
>>> +
>>> +display order
>>> +   the order in which frames must be displayed; for encoders, ``OUTPUT``
>>> +   buffers must be queued by the client in display order; for decoders,
>>> +   ``CAPTURE`` buffers must be returned by the decoder in display order
>>> +
>>> +DPB
>>> +   Decoded Picture Buffer; an H.264 term for a buffer that stores a decoded
>>> +   raw frame available for reference in further decoding steps.
>>> +
>>> +EOS
>>> +   end of stream
>>> +
>>> +IDR
>>> +   Instantaneous Decoder Refresh; a type of a keyframe in H.264-encoded stream,
>>> +   which clears the list of earlier reference frames (DPBs)
>>> +
>>> +keyframe
>>> +   an encoded frame that does not reference frames decoded earlier, i.e.
>>> +   can be decoded fully on its own.
>>> +
>>> +macroblock
>>> +   a processing unit in image and video compression formats based on linear
>>> +   block transforms (e.g. H.264, VP8, VP9); codec-specific, but for most of
>>> +   popular codecs the size is 16x16 samples (pixels)
>>> +
>>> +OUTPUT
>>> +   the source buffer queue; for decoders, the queue of buffers containing
>>> +   encoded bitstream; for encoders, the queue of buffers containing raw frames;
>>> +   ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or ``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE``; the
>>> +   hardware is fed with data from ``OUTPUT`` buffers
>>> +
>>> +PPS
>>> +   Picture Parameter Set; a type of metadata entity in H.264 bitstream
>>> +
>>> +raw format
>>> +   uncompressed format containing raw pixel data (e.g. YUV, RGB formats)
>>> +
>>> +resume point
>>> +   a point in the bitstream from which decoding may start/continue, without
>>> +   any previous state/data present, e.g.: a keyframe (VP8/VP9) or
>>> +   SPS/PPS/IDR sequence (H.264); a resume point is required to start decode
>>> +   of a new stream, or to resume decoding after a seek
>>> +
>>> +source
>>> +   data fed to the decoder or encoder; ``OUTPUT``
>>> +
>>> +source height
>>> +   height in pixels for given source resolution; relevant to encoders only
>>> +
>>> +source resolution
>>> +   resolution in pixels of source frames being source to the encoder and
>>> +   subject to further cropping to the bounds of visible resolution; relevant to
>>> +   encoders only
>>> +
>>> +source width
>>> +   width in pixels for given source resolution; relevant to encoders only
>>> +
>>> +SPS
>>> +   Sequence Parameter Set; a type of metadata entity in H.264 bitstream
>>> +
>>> +stream metadata
>>> +   additional (non-visual) information contained inside encoded bitstream;
>>> +   for example: coded resolution, visible resolution, codec profile
>>> +
>>> +visible height
>>> +   height for given visible resolution; display height
>>> +
>>> +visible resolution
>>> +   stream resolution of the visible picture, in pixels, to be used for
>>> +   display purposes; must be smaller or equal to coded resolution;
>>> +   display resolution
>>> +
>>> +visible width
>>> +   width for given visible resolution; display width
>>> +
>>> +State machine
>>> +=============
>>> +
>>> +.. kernel-render:: DOT
>>> +   :alt: DOT digraph of decoder state machine
>>> +   :caption: Decoder state machine
>>> +
>>> +   digraph decoder_state_machine {
>>> +       node [shape = doublecircle, label="Decoding"] Decoding;
>>> +
>>> +       node [shape = circle, label="Initialization"] Initialization;
>>> +       node [shape = circle, label="Capture\nsetup"] CaptureSetup;
>>> +       node [shape = circle, label="Dynamic\nresolution\nchange"] ResChange;
>>> +       node [shape = circle, label="Stopped"] Stopped;
>>> +       node [shape = circle, label="Drain"] Drain;
>>> +       node [shape = circle, label="Seek"] Seek;
>>> +       node [shape = circle, label="End of stream"] EoS;
>>> +
>>> +       node [shape = point]; qi
>>> +       qi -> Initialization [ label = "open()" ];
>>> +
>>> +       Initialization -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
>>> +
>>> +       CaptureSetup -> Stopped [ label = "CAPTURE\nbuffers\nready" ];
>>> +
>>> +       Decoding -> ResChange [ label = "Stream\nresolution\nchange" ];
>>> +       Decoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
>>> +       Decoding -> EoS [ label = "EoS mark\nin the stream" ];
>>> +       Decoding -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
>>> +       Decoding -> Stopped [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
>>> +       Decoding -> Decoding;
>>> +
>>> +       ResChange -> CaptureSetup [ label = "CAPTURE\nformat\nestablished" ];
>>> +       ResChange -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
>>> +
>>> +       EoS -> Drain [ label = "Implicit\ndrain" ];
>>> +
>>> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
>>> +       Drain -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
>>> +
>>> +       Seek -> Decoding [ label = "VIDIOC_STREAMON(OUTPUT)" ];
>>> +       Seek -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
>>> +
>>> +       Stopped -> Decoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(CAPTURE)" ];
>>> +       Stopped -> Seek [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
>>> +   }
>>> +
>>> +Querying capabilities
>>> +=====================
>>> +
>>> +1. To enumerate the set of coded formats supported by the decoder, the
>>> +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
>>> +
>>> +   * The full set of supported formats will be returned, regardless of the
>>> +     format set on ``CAPTURE``.
>>> +
>>> +2. To enumerate the set of supported raw formats, the client may call
>>> +   :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
>>> +
>>> +   * Only the formats supported for the format currently active on ``OUTPUT``
>>> +     will be returned.
>>> +
>>> +   * In order to enumerate raw formats supported by a given coded format,
>>> +     the client must first set that coded format on ``OUTPUT`` and then
>>> +     enumerate formats on ``CAPTURE``.
>>> +
>>> +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
>>> +   resolutions for a given format, passing desired pixel format in
>>> +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
>>> +
>>> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
>>> +     formats will include all possible coded resolutions supported by the
>>> +     decoder for given coded pixel format.
>>> +
>>> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
>>> +     will include all possible frame buffer resolutions supported by the
>>> +     decoder for given raw pixel format and the coded format currently set on
>>> +     ``OUTPUT``.
>>> +
>>> +4. Supported profiles and levels for given format, if applicable, may be
>>> +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
>>> +
>>> +Initialization
>>> +==============
>>> +
>>> +1. **Optional.** Enumerate supported ``OUTPUT`` formats and resolutions. See
>>> +   `Querying capabilities` above.
>>> +
>>> +2. Set the coded format on ``OUTPUT`` via :c:func:`VIDIOC_S_FMT`
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +     ``pixelformat``
>>> +         a coded pixel format
>>> +
>>> +     ``width``, ``height``
>>> +         required only if cannot be parsed from the stream for the given
>>> +         coded format; optional otherwise - set to zero to ignore
>>> +
>>> +     ``sizeimage``
>>> +         desired size of ``OUTPUT`` buffers; the decoder may adjust it to
>>> +         match hardware requirements
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``sizeimage``
>>> +         adjusted size of ``CAPTURE`` buffers
>>> +
>>> +   * If width and height are set to non-zero values, the ``CAPTURE`` format
>>> +     will be updated with an appropriate frame buffer resolution instantly.
>>> +     However, for coded formats that include stream resolution information,
>>> +     after the decoder is done parsing the information from the stream, it will
>>> +     update the ``CAPTURE`` format with new values and signal a source change
>>> +     event.
>>> +
>>> +   .. warning::
>>> +
>>> +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
>>> +      format. The decoder will derive a new ``CAPTURE`` format from the
>>> +      ``OUTPUT`` format being set, including resolution, colorimetry
>>> +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
>>> +      must adjust it afterwards.
>>> +
>>> +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
>>> +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
>>> +    use more buffers than the minimum required by hardware/format.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``id``
>>> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``value``
>>> +          the minimum number of ``OUTPUT`` buffers required for the currently
>>> +          set format
>>> +
>>> +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
>>> +    ``OUTPUT``.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          the actual number of buffers allocated
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +    .. note::
>>> +
>>> +       To allocate more than the minimum number of buffers (for pipeline
>>> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
>>> +       control to get the minimum number of buffers required by the
>>> +       decoder/format, and pass the obtained value plus the number of
>>> +       additional buffers needed in the ``count`` field to
>>> +       :c:func:`VIDIOC_REQBUFS`.
>>> +
>>> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
>>> +    used to have more control over buffer allocation.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +      ``format``
>>> +          follows standard semantics
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          adjusted to the number of allocated buffers
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
>>> +
>>> +6.  **This step only applies to coded formats that contain resolution information
>>> +    in the stream.**
>>
>> As far as I know all codecs have resolution/metadata in the stream.
> 
> Was this comment about what we currently support in V4L2 interface ? In

Yes, I was talking about all V4L2 codecs.

> real life, there is CODEC that works only with out-of-band codec data.
> A well known one is AVC1 (and HVC1). In this mode, the AVC H264 does
> not have start code, and the headers are not allowed in the bitstream
> itself. This format is much more efficient to process then AVC Annex B,
> since you can just read the NAL size and jump over instead of scanning
> for start code. This is the format used in the very popular ISOMP4
> container.

How would such a codec handle resolution changes? Or is that not allowed?

> 
> The other sign that these codecs do exist is the recurence of the
> notion of codec data in pretty much every codec abstraction there is
> (ffmpeg, GStreamer, Android Media Codec)
>>
>> As discussed in the "[PATCH vicodec v4 0/3] Add support to more pixel formats in
>> vicodec" thread, it is easiest to assume that there is always metadata.
>>
>> Perhaps there should be a single mention somewhere that such codecs are not
>> supported at the moment, but to be frank how can you decode a stream without
>> it containing such essential information? You are much more likely to implement
>> such a codec as a stateless codec.
> 
> That is I believe a miss-interpretation of what a stateless codec is.
> It's not because you have to set one blob of CODEC data after S_FMT on
> a specific control that this CODEC becomes stateless. The fact is not
> supported now is just become we didn't have come accross HW that
> supports these.
> 
> FFMPEG offers a state full software codec, and you still have this
> codec_data blob for many of the format. They also only supports AVC1,
> they parser will always convert the stream to that, because it's just
> more efficient format. Android Media Codec works similarly. What keeps
> them stateful, is that you don't need to parse it, you don't even need
> to know what these data contains. They are blobs placed in the
> container that you pass as-is to the decoder.

This is all useful information, thank you.

I think I stand by my recommendation to add a statement that we don't
support such codecs at the moment. When we add support for such a codec,
then we will need to update the spec with the right rules.

I think it is difficult to make assumptions without having seen how a
typical stateful HW codec would handle this.

Regards,

	Hans

> 
>>
>> So I would just drop this sentence here (and perhaps at other places in this
>> document or the encoder document as well).
>>
>> Regards,
>>
>> 	Hans
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-11-17 11:43       ` Hans Verkuil
@ 2018-11-18  1:25         ` Nicolas Dufresne
  0 siblings, 0 replies; 41+ messages in thread
From: Nicolas Dufresne @ 2018-11-18  1:25 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le samedi 17 novembre 2018 à 12:43 +0100, Hans Verkuil a écrit :
> > > As far as I know all codecs have resolution/metadata in the stream.
> > 
> > Was this comment about what we currently support in V4L2 interface ? In
> 
> Yes, I was talking about all V4L2 codecs.
> 
> > real life, there is CODEC that works only with out-of-band codec data.
> > A well known one is AVC1 (and HVC1). In this mode, the AVC H264 does
> > not have start code, and the headers are not allowed in the bitstream
> > itself. This format is much more efficient to process then AVC Annex B,
> > since you can just read the NAL size and jump over instead of scanning
> > for start code. This is the format used in the very popular ISOMP4
> > container.
> 
> How would such a codec handle resolution changes? Or is that not allowed?

That's a good question. It is of course allowed, but you'd need the
request API if you want to queue this change (i.e. if you want a
resolution change using the events). Meanwhile, one can always  do
CMD_STOP, wait for the decoder to be drained, pass the new codec data,
push frames / restart streaming.

The former is the only mode implemented in GStreamer, so I'll let
Tomasz comment more on his thought how this could work.

One of the main issue with the resolution change mechanism (even
without codec data) is that you have no guaranty that the input buffer
(buffers on V4L2 OUTPUT queue) are large enough to fit a full encoded
frame of the new resolution. So userspace should have some formula to
calculate the required size, and use the CMD_STOP / drain method to be
able to re-allocate these buffers. So the newest resolution change
method, to be implemented correctly require more work (but is all
doable).

Nicolas


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-17 11:37       ` Hans Verkuil
@ 2018-11-18  1:34         ` Nicolas Dufresne
  2019-01-23 10:02           ` Tomasz Figa
  2019-01-23 10:00         ` Tomasz Figa
  1 sibling, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2018-11-18  1:34 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa, linux-media
  Cc: linux-kernel, Mauro Carvalho Chehab, Paweł Ościak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel, Tiffany Lin, Andrew-CT Chen,
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le samedi 17 novembre 2018 à 12:37 +0100, Hans Verkuil a écrit :
> > > Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
> > 
> > We do account for it in GStreamer (the capture/output handling is
> > generic), but I don't know if it's being used anywhere. 
> 
> Do you use this value directly for REQBUFS, or do you use it as the minimum
> value but in practice use more buffers?

We add more buffers to that value. We assume this value is what will be
held by the driver, hence without adding some buffers, the driver would
go idle as soon as one is dequeued. We also need to allocate for the
importing driver.

In general, if we have a pipeline with Driver A sending to Driver B,
both driver will require a certain amount of buffers to operate. E.g.
with DRM display, the driver will hold on 1 buffer (the scannout
buffer).

In GStreamer, it's implemented generically, so we do:

  MIN_BUFFERS_FOR + remote_min + 1

If only MIN_BUFFERS_FOR was allocated, ignoring remote driver
requirement, the streaming will likely get stuck.

regards,
Nicolas


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2018-11-12 11:37   ` Hans Verkuil
@ 2019-01-22 10:02     ` Tomasz Figa
  2019-01-22 14:47       ` Hans Verkuil
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-22 10:02 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot, kamil,
	a.hajda, Kyungmin Park, jtp.park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, todor.tomov, nicolas, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Mon, Nov 12, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> Hi Tomasz,
>
> A general note for the stateful and stateless patches: they describe specific
> use-cases of the more generic Codec Interface, and as such should be one
> level deeper in the section hierarchy.

I wonder what exactly this Codec Interface is. Is it a historical name
for mem2mem? If so, perhaps it would make sense to rename it?

>
> I.e. instead of being section 4.6/7/8:
>
> https://hverkuil.home.xs4all.nl/request-api/uapi/v4l/devices.html
>
> they should be 4.5.1/2/3.
>

FYI, the first RFC started like that, but it only made the spec
difficult to navigate and the section numbers too long.

Still, no strong opinion. I'm okay moving it there, if you think it's better.

> On 10/22/2018 04:48 PM, Tomasz Figa wrote:
> > Due to complexity of the video decoding process, the V4L2 drivers of
> > stateful decoder hardware require specific sequences of V4L2 API calls
> > to be followed. These include capability enumeration, initialization,
> > decoding, seek, pause, dynamic resolution change, drain and end of
> > stream.
[snipping any comments that I agree with]
> > +
> > +source height
> > +   height in pixels for given source resolution; relevant to encoders only
> > +
> > +source resolution
> > +   resolution in pixels of source frames being source to the encoder and
> > +   subject to further cropping to the bounds of visible resolution; relevant to
> > +   encoders only
> > +
> > +source width
> > +   width in pixels for given source resolution; relevant to encoders only
>
> I would drop these three terms: they are not used in this document since this
> describes a decoder and not an encoder.
>

The glossary is shared between encoder and decoder, as suggested in
previous round of review.

[snip]
> > +
> > +   * If width and height are set to non-zero values, the ``CAPTURE`` format
> > +     will be updated with an appropriate frame buffer resolution instantly.
> > +     However, for coded formats that include stream resolution information,
> > +     after the decoder is done parsing the information from the stream, it will
> > +     update the ``CAPTURE`` format with new values and signal a source change
> > +     event.
>
> What if the initial width and height specified by userspace matches the parsed
> width and height? Do you still get a source change event? I think you should
> always get this event since there are other parameters that depend on the parsing
> of the meta data.
>
> But that should be made explicit here.
>

Yes, the change event should happen always after the driver determines
the format of the stream. Will specify it explicitly.

> > +
> > +   .. warning::
>
> I'd call this a note rather than a warning.
>

I think it deserves at least the "important" level, since it informs
about the side effects of the call, affecting any actions that the
client might have done before.


> > +
> > +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
> > +      format. The decoder will derive a new ``CAPTURE`` format from the
> > +      ``OUTPUT`` format being set, including resolution, colorimetry
> > +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
> > +      must adjust it afterwards.
> > +
> > +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
> > +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
> > +    use more buffers than the minimum required by hardware/format.
>
> Why is this useful? As far as I can tell only the s5p-mfc *encoder* supports
> this control, so this seems pointless. And since the output queue gets a bitstream
> I don't see any reason for reading this control in a decoder.
>

Indeed, querying this for bitstream buffers probably doesn't make much
sense. I'll remove it.

> > +
> > +    * **Required fields:**
> > +
> > +      ``id``
> > +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> > +
> > +    * **Return fields:**
> > +
> > +      ``value``
> > +          the minimum number of ``OUTPUT`` buffers required for the currently
> > +          set format
> > +
> > +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
> > +    ``OUTPUT``.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          the actual number of buffers allocated
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +    .. note::
> > +
> > +       To allocate more than the minimum number of buffers (for pipeline
> > +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
> > +       control to get the minimum number of buffers required by the
> > +       decoder/format, and pass the obtained value plus the number of
> > +       additional buffers needed in the ``count`` field to
> > +       :c:func:`VIDIOC_REQBUFS`.
>
> As mentioned above, this makes no sense for stateful decoders IMHO.
>

Ack.

> > +
> > +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
> > +    used to have more control over buffer allocation.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +      ``format``
> > +          follows standard semantics
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          adjusted to the number of allocated buffers
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
> > +
> > +6.  **This step only applies to coded formats that contain resolution information
> > +    in the stream.** Continue queuing/dequeuing bitstream buffers to/from the
> > +    ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The
> > +    buffers will be processed and returned to the client in order, until
> > +    required metadata to configure the ``CAPTURE`` queue are found. This is
> > +    indicated by the decoder sending a ``V4L2_EVENT_SOURCE_CHANGE`` event with
> > +    ``V4L2_EVENT_SRC_CH_RESOLUTION`` source change type.
> > +
> > +    * It is not an error if the first buffer does not contain enough data for
> > +      this to occur. Processing of the buffers will continue as long as more
> > +      data is needed.
> > +
> > +    * If data in a buffer that triggers the event is required to decode the
> > +      first frame, it will not be returned to the client, until the
> > +      initialization sequence completes and the frame is decoded.
> > +
> > +    * If the client sets width and height of the ``OUTPUT`` format to 0,
> > +      calling :c:func:`VIDIOC_G_FMT`, :c:func:`VIDIOC_S_FMT` or
> > +      :c:func:`VIDIOC_TRY_FMT` on the ``CAPTURE`` queue will return the
> > +      ``-EACCES`` error code, until the decoder configures ``CAPTURE`` format
> > +      according to stream metadata.
> > +
> > +    .. important::
> > +
> > +       Any client query issued after the decoder queues the event will return
> > +       values applying to the just parsed stream, including queue formats,
> > +       selection rectangles and controls.
> > +
> > +    .. note::
> > +
> > +       A client capable of acquiring stream parameters from the bitstream on
> > +       its own may attempt to set the width and height of the ``OUTPUT`` format
> > +       to non-zero values matching the coded size of the stream, skip this step
> > +       and continue with the `Capture setup` sequence. However, it must not
> > +       rely on any driver queries regarding stream parameters, such as
> > +       selection rectangles and controls, since the decoder has not parsed them
> > +       from the stream yet. If the values configured by the client do not match
> > +       those parsed by the decoder, a `Dynamic resolution change` will be
> > +       triggered to reconfigure them.
> > +
> > +    .. note::
> > +
> > +       No decoded frames are produced during this phase.
> > +
> > +7.  Continue with the `Capture setup` sequence.
> > +
> > +Capture setup
> > +=============
> > +
> > +1.  Call :c:func:`VIDIOC_G_FMT` on the ``CAPTURE`` queue to get format for the
> > +    destination buffers parsed/decoded from the bitstream.
> > +
> > +    * **Required fields:**
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +    * **Return fields:**
> > +
> > +      ``width``, ``height``
> > +          frame buffer resolution for the decoded frames
> > +
> > +      ``pixelformat``
> > +          pixel format for decoded frames
> > +
> > +      ``num_planes`` (for _MPLANE ``type`` only)
> > +          number of planes for pixelformat
> > +
> > +      ``sizeimage``, ``bytesperline``
> > +          as per standard semantics; matching frame buffer format
> > +
> > +    .. note::
> > +
> > +       The value of ``pixelformat`` may be any pixel format supported by the
> > +       decoder for the current stream. The decoder should choose a
> > +       preferred/optimal format for the default configuration. For example, a
> > +       YUV format may be preferred over an RGB format if an additional
> > +       conversion step would be required for the latter.
> > +
> > +2.  **Optional.** Acquire the visible resolution via
> > +    :c:func:`VIDIOC_G_SELECTION`.
> > +
> > +    * **Required fields:**
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``target``
> > +          set to ``V4L2_SEL_TGT_COMPOSE``
> > +
> > +    * **Return fields:**
> > +
> > +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +          the visible rectangle; it must fit within the frame buffer resolution
> > +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
> > +
> > +    * The following selection targets are supported on ``CAPTURE``:
> > +
> > +      ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +          corresponds to the coded resolution of the stream
> > +
> > +      ``V4L2_SEL_TGT_CROP_DEFAULT``
> > +          the rectangle covering the part of the ``CAPTURE`` buffer that
> > +          contains meaningful picture data (visible area); width and height
> > +          will be equal to the visible resolution of the stream
> > +
> > +      ``V4L2_SEL_TGT_CROP``
> > +          the rectangle within the coded resolution to be output to
> > +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
> > +          hardware without additional compose/scaling capabilities
> > +
> > +      ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> > +          the maximum rectangle within a ``CAPTURE`` buffer, which the cropped
> > +          frame can be output into; equal to ``V4L2_SEL_TGT_CROP`` if the
> > +          hardware does not support compose/scaling
> > +
> > +      ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> > +          equal to ``V4L2_SEL_TGT_CROP``
> > +
> > +      ``V4L2_SEL_TGT_COMPOSE``
> > +          the rectangle inside a ``CAPTURE`` buffer into which the cropped
> > +          frame is written; defaults to ``V4L2_SEL_TGT_COMPOSE_DEFAULT``;
> > +          read-only on hardware without additional compose/scaling capabilities
> > +
> > +      ``V4L2_SEL_TGT_COMPOSE_PADDED``
> > +          the rectangle inside a ``CAPTURE`` buffer which is overwritten by the
> > +          hardware; equal to ``V4L2_SEL_TGT_COMPOSE`` if the hardware does not
> > +          write padding pixels
> > +
> > +    .. warning::
> > +
> > +       The values are guaranteed to be meaningful only after the decoder
> > +       successfully parses the stream metadata. The client must not rely on the
> > +       query before that happens.
> > +
> > +3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
> > +    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
> > +    buffers than the minimum required by hardware/format.
>
> Is this step optional or required? Can it change when a resolution change occurs?

Probably not with a simple resolution change, but a case when a stream
is changed on the fly would trigger what we call "resolution change"
here, but what would effectively be a "source change" and that could
include a change in the number of required CAPTURE buffers.

> How does this relate to the checks for the minimum number of buffers that REQBUFS
> does?

The control returns the minimum that REQBUFS would allow, so the
application can add few more buffers on top of that and improve the
pipelining.

>
> The 'This is useful if' sentence suggests that it is optional, but I think that
> sentence just confuses the issue.
>

It used to be optional and I didn't rephrase it after turning it into
mandatory. How about:

    This enables the client to request more buffers
    than the minimum required by hardware/format and achieve better pipelining.

> > +
> > +    * **Required fields:**
> > +
> > +      ``id``
> > +          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
> > +
> > +    * **Return fields:**
> > +
> > +      ``value``
> > +          minimum number of buffers required to decode the stream parsed in
> > +          this initialization sequence.
> > +
> > +    .. note::
> > +
> > +       The minimum number of buffers must be at least the number required to
> > +       successfully decode the current stream. This may for example be the
> > +       required DPB size for an H.264 stream given the parsed stream
> > +       configuration (resolution, level).
> > +
> > +    .. warning::
> > +
> > +       The value is guaranteed to be meaningful only after the decoder
> > +       successfully parses the stream metadata. The client must not rely on the
> > +       query before that happens.
> > +
> > +4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
> > +    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
> > +    client may use this ioctl to discover which raw formats are supported for
> > +    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.
>
> Can the list returned here differ from the list returned in the 'Querying capabilities'
> step? If so, then I assume it will always be a subset of what was returned in
> the 'Querying' step?

Depends on whether you're considering just VIDIOC_ENUM_FMT or also
VIDIOC_G_FMT and VIDIOC_ENUM_FRAMESIZES.

The initial VIDIOC_ENUM_FMT has no way to account for any resolution
constraints of the formats, so the list would include all raw pixel
formats that the decoder can handle with selected coded pixel format.
However, the list can be further narrowed down by using
VIDIOC_ENUM_FRAMESIZES, to restrict each raw format only to the
resolutions it can handle.

The VIDIOC_ENUM_FMT call in this sequence (after getting the stream
information) would have the knowledge about the resolution, so the
list returned here would only include the formats that can be actually
handled. It should match the result of the initial query using both
VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES.

>
> > +
> > +    .. important::
> > +
> > +       The decoder will return only formats supported for the currently
> > +       established coded format, as per the ``OUTPUT`` format and/or stream
> > +       metadata parsed in this initialization sequence, even if more formats
> > +       may be supported by the decoder in general.
> > +
> > +       For example, a decoder may support YUV and RGB formats for resolutions
> > +       1920x1088 and lower, but only YUV for higher resolutions (due to
> > +       hardware limitations). After parsing a resolution of 1920x1088 or lower,
> > +       :c:func:`VIDIOC_ENUM_FMT` may return a set of YUV and RGB pixel formats,
> > +       but after parsing resolution higher than 1920x1088, the decoder will not
> > +       return RGB, unsupported for this resolution.
> > +
> > +       However, subsequent resolution change event triggered after
> > +       discovering a resolution change within the same stream may switch
> > +       the stream into a lower resolution and :c:func:`VIDIOC_ENUM_FMT`
> > +       would return RGB formats again in that case.
> > +
> > +5.  **Optional.** Set the ``CAPTURE`` format via :c:func:`VIDIOC_S_FMT` on the
> > +    ``CAPTURE`` queue. The client may choose a different format than
> > +    selected/suggested by the decoder in :c:func:`VIDIOC_G_FMT`.
> > +
> > +    * **Required fields:**
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``pixelformat``
> > +          a raw pixel format
> > +
> > +    .. note::
> > +
> > +       The client may use :c:func:`VIDIOC_ENUM_FMT` after receiving the
> > +       ``V4L2_EVENT_SOURCE_CHANGE`` event to find out the set of raw formats
> > +       supported for the stream.
>
> Isn't this a duplicate of step 4? I think this note can be dropped.
>

Ack.

> > +
> > +6.  If all the following conditions are met, the client may resume the decoding
> > +    instantly:
> > +
> > +    * ``sizeimage`` of the new format (determined in previous steps) is less
> > +      than or equal to the size of currently allocated buffers,
> > +
> > +    * the number of buffers currently allocated is greater than or equal to the
> > +      minimum number of buffers acquired in previous steps. To fulfill this
> > +      requirement, the client may use :c:func:`VIDIOC_CREATE_BUFS` to add new
> > +      buffers.
> > +
> > +    In such case, the remaining steps do not apply and the client may resume
> > +    the decoding by one of the following actions:
> > +
> > +    * if the ``CAPTURE`` queue is streaming, call :c:func:`VIDIOC_DECODER_CMD`
> > +      with the ``V4L2_DEC_CMD_START`` command,
> > +
> > +    * if the ``CAPTURE`` queue is not streaming, call :c:func:`VIDIOC_STREAMON`
> > +      on the ``CAPTURE`` queue.
> > +
> > +    However, if the client intends to change the buffer set, to lower
> > +    memory usage or for any other reasons, it may be achieved by following
> > +    the steps below.
> > +
> > +7.  **If the** ``CAPTURE`` **queue is streaming,** keep queuing and dequeuing
> > +    buffers on the ``CAPTURE`` queue until a buffer marked with the
> > +    ``V4L2_BUF_FLAG_LAST`` flag is dequeued.
> > +
> > +8.  **If the** ``CAPTURE`` **queue is streaming,** call :c:func:`VIDIOC_STREAMOFF`
> > +    on the ``CAPTURE`` queue to stop streaming.
> > +
> > +    .. warning::
> > +
> > +       The ``OUTPUT`` queue must remain streaming. Calling
> > +       :c:func:`VIDIOC_STREAMOFF` on it would abort the sequence and trigger a
> > +       seek.
> > +
> > +9.  **If the** ``CAPTURE`` **queue has buffers allocated,** free the ``CAPTURE``
> > +    buffers using :c:func:`VIDIOC_REQBUFS`.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          set to 0
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +10. Allocate ``CAPTURE`` buffers via :c:func:`VIDIOC_REQBUFS` on the
> > +    ``CAPTURE`` queue.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          actual number of buffers allocated
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +    .. note::
> > +
> > +       To allocate more than the minimum number of buffers (for pipeline
> > +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
> > +       control to get the minimum number of buffers required, and pass the
> > +       obtained value plus the number of additional buffers needed in the
> > +       ``count`` field to :c:func:`VIDIOC_REQBUFS`.
>
> Same question as before: is it optional or required to obtain the value of this
> control? And can't the driver just set the min_buffers_needed field in the capture
> vb2_queue to the minimum number of buffers that are required?

min_buffers_needed is the number of buffers that must be queued before
start_streaming() can be called, so not relevant here. The control is
about the number of buffers to be allocated. Although the drivers must
ensure that REQBUFS allocates the absolute minimum number of buffers
for the decoding to be able to progress, the pipeline depth is
something that the applications should control (e.g. depending on the
consumers of the decoded buffers) and this is allowed by this control.

>
> Should you be allowed to allocate buffers at all if the capture format isn't
> known? I.e. width/height is still 0. It makes no sense to call REQBUFS since
> there is no format size known that REQBUFS can use.
>

Indeed, REQBUFS(CAPTURE) must not be allowed before the stream
information is known (regardless of whether it comes from the OUTPUT
format or is parsed from the stream). Let me add this to the related
note in the Initialization sequence, which already includes
VIDIOC_*_FMT.

For the Capture setup sequence, though, it's expected to happen when
the stream information is already known, so I wouldn't change the
description here.

> > +
> > +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``CAPTURE`` queue can be
> > +    used to have more control over buffer allocation. For example, by
> > +    allocating buffers larger than the current ``CAPTURE`` format, future
> > +    resolution changes can be accommodated.
> > +
> > +    * **Required fields:**
> > +
> > +      ``count``
> > +          requested number of buffers to allocate; greater than zero
> > +
> > +      ``type``
> > +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +      ``memory``
> > +          follows standard semantics
> > +
> > +      ``format``
> > +          a format representing the maximum framebuffer resolution to be
> > +          accommodated by newly allocated buffers
> > +
> > +    * **Return fields:**
> > +
> > +      ``count``
> > +          adjusted to the number of allocated buffers
> > +
> > +    .. warning::
> > +
> > +       The actual number of allocated buffers may differ from the ``count``
> > +       given. The client must check the updated value of ``count`` after the
> > +       call returns.
> > +
> > +    .. note::
> > +
> > +       To allocate buffers for a format different than parsed from the stream
> > +       metadata, the client must proceed as follows, before the metadata
> > +       parsing is initiated:
> > +
> > +       * set width and height of the ``OUTPUT`` format to desired coded resolution to
> > +         let the decoder configure the ``CAPTURE`` format appropriately,
> > +
> > +       * query the ``CAPTURE`` format using :c:func:`VIDIOC_G_FMT` and save it
> > +         until this step.
> > +
> > +       The format obtained in the query may be then used with
> > +       :c:func:`VIDIOC_CREATE_BUFS` in this step to allocate the buffers.
> > +
> > +11. Call :c:func:`VIDIOC_STREAMON` on the ``CAPTURE`` queue to start decoding
> > +    frames.
> > +
> > +Decoding
> > +========
> > +
> > +This state is reached after the `Capture setup` sequence finishes succesfully.
> > +In this state, the client queues and dequeues buffers to both queues via
> > +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> > +semantics.
> > +
> > +The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
> > +format and may be affected by codec-specific extended controls, as stated in
> > +the documentation of each format.
> > +
> > +Both queues operate independently, following the standard behavior of V4L2
> > +buffer queues and memory-to-memory devices. In addition, the order of decoded
> > +frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
> > +coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
> > +format, e.g. frame reordering.
> > +
> > +The client must not assume any direct relationship between ``CAPTURE``
> > +and ``OUTPUT`` buffers and any specific timing of buffers becoming
> > +available to dequeue. Specifically,
> > +
> > +* a buffer queued to ``OUTPUT`` may result in no buffers being produced
> > +  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
> > +  metadata syntax structures are present in it),
> > +
> > +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
> > +  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
> > +  returning a decoded frame allowed the decoder to return a frame that
> > +  preceded it in decode, but succeeded it in the display order),
> > +
> > +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> > +  ``CAPTURE`` later into decode process, and/or after processing further
> > +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> > +  reordering is used,
> > +
> > +* buffers may become available on the ``CAPTURE`` queue without additional
> > +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> > +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> > +  available at later time, due to specifics of the decoding process.
> > +
> > +.. note::
> > +
> > +   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> > +   originated from, the client can set the ``timestamp`` field of the
> > +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> > +   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
> > +   will have their ``timestamp`` field set to the same value when dequeued.
> > +
> > +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
> > +   one ``CAPTURE`` buffer, the following cases are defined:
> > +
> > +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> > +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> > +
> > +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
> > +     the ``OUTPUT`` buffer queued last will be copied,
> > +
> > +   * the decoding order differs from the display order (i.e. the
> > +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> > +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> > +     and thus monotonicity of the timestamps cannot be guaranteed.
>
> Should stateful codecs be required to support 'tags'? See:
>
> https://www.mail-archive.com/linux-media@vger.kernel.org/msg136314.html
>
> To be honest, I'm inclined to require this for all m2m devices eventually.
>

I guess this goes outside of the scope now, since we deferred tags.

Other than that, I would indeed make all m2m devices support tags,
since it shouldn't differ from the timestamp copy feature as we have
now.

> > +
> > +During the decoding, the decoder may initiate one of the special sequences, as
> > +listed below. The sequences will result in the decoder returning all the
> > +``CAPTURE`` buffers that originated from all the ``OUTPUT`` buffers processed
> > +before the sequence started. Last of the buffers will have the
> > +``V4L2_BUF_FLAG_LAST`` flag set. To determine the sequence to follow, the client
> > +must check if there is any pending event and,
> > +
> > +* if a ``V4L2_EVENT_SOURCE_CHANGE`` event is pending, the `Dynamic resolution
> > +  change` sequence needs to be followed,
> > +
> > +* if a ``V4L2_EVENT_EOS`` event is pending, the `End of stream` sequence needs
> > +  to be followed.
> > +
> > +Some of the sequences can be intermixed with each other and need to be handled
> > +as they happen. The exact operation is documented for each sequence.
> > +
> > +Seek
> > +====
> > +
> > +Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
> > +The seek does not require any specific operation on the ``CAPTURE`` queue, but
> > +it may be affected as per normal decoder operation.
> > +
> > +1. Stop the ``OUTPUT`` queue to begin the seek sequence via
> > +   :c:func:`VIDIOC_STREAMOFF`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
> > +     treated as returned to the client (following standard semantics).
> > +
> > +2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +   * The decoder will start accepting new source bitstream buffers after the
> > +     call returns.
> > +
> > +3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
> > +   queue until a suitable resume point is found.
> > +
> > +   .. note::
> > +
> > +      There is no requirement to begin queuing coded data starting exactly
> > +      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
> > +      buffers will be processed and returned to the client until a suitable
> > +      resume point is found.  While looking for a resume point, the decoder
> > +      should not produce any decoded frames into ``CAPTURE`` buffers.
> > +
> > +      Some hardware is known to mishandle seeks to a non-resume point. Such an
> > +      operation may result in an unspecified number of corrupted decoded frames
> > +      being made available on the ``CAPTURE`` queue. Drivers must ensure that
> > +      no fatal decoding errors or crashes occur, and implement any necessary
> > +      handling and workarounds for hardware issues related to seek operations.
>
> Is there a requirement that those corrupted frames have V4L2_BUF_FLAG_ERROR set?
> I.e., can userspace detect those currupted frames?
>

I think the question is whether the kernel driver can actually detect
those corrupted frames. We can't guarantee reporting errors to the
userspace, if the hardware doesn't actually report them.

Could we perhaps keep this an open question and possibly address with
some extension that could be an opt in for the decoders that can
report errors?

> > +
> > +   .. warning::
> > +
> > +      In case of the H.264 codec, the client must take care not to seek over a
> > +      change of SPS/PPS. Even though the target frame could be a keyframe, the
> > +      stale SPS/PPS inside decoder state would lead to undefined results when
> > +      decoding. Although the decoder must handle such case without a crash or a
> > +      fatal decode error, the client must not expect a sensible decode output.
> > +
> > +4. After a resume point is found, the decoder will start returning ``CAPTURE``
> > +   buffers containing decoded frames.
> > +
> > +.. important::
> > +
> > +   A seek may result in the `Dynamic resolution change` sequence being
> > +   iniitated, due to the seek target having decoding parameters different from
> > +   the part of the stream decoded before the seek. The sequence must be handled
> > +   as per normal decoder operation.
> > +
> > +.. warning::
> > +
> > +   It is not specified when the ``CAPTURE`` queue starts producing buffers
> > +   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
> > +   as it operates independently from the ``OUTPUT`` queue.
> > +
> > +   The decoder may return a number of remaining ``CAPTURE`` buffers containing
> > +   decoded frames originating from the ``OUTPUT`` buffers queued before the
> > +   seek sequence is performed.
> > +
> > +   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
> > +   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
> > +   queued before the seek sequence may have matching ``CAPTURE`` buffers
> > +   produced.  For example, given the sequence of operations on the
> > +   ``OUTPUT`` queue:
> > +
> > +     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
> > +
> > +   any of the following results on the ``CAPTURE`` queue is allowed:
> > +
> > +     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
>
> Isn't it the case that if you would want to avoid that, then you should call
> DECODER_STOP, wait for the last buffer on the CAPTURE queue, then seek and
> call DECODER_START. If you do that, then you should always get {A’, B’, G’, H’}.
> (basically following the Drain sequence).

Yes, it is, but I think it depends on the application needs. Here we
just give a primitive to change the place in the stream that's being
decoded (or change the stream on the fly).

Actually, with the timestamp copy, I guess we wouldn't even need to do
the DECODER_STOP, as we could just discard the CAPTURE buffers until
we get one that matches the timestamp of the first OUTPUT buffer after
the seek.

>
> Admittedly, you typically want to do an instantaneous seek, so this is probably
> not what you want to do normally.
>
> It might help to have this documented in a separate note.
>

The instantaneous seek is documented below. I'm not sure if there is
any practical need to document the other case, but I could add a
sentence like below to the warning above. What do you think?

   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
   queued before the seek sequence may have matching ``CAPTURE`` buffers
   produced.  For example, given the sequence of operations on the
   ``OUTPUT`` queue:

     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),

   any of the following results on the ``CAPTURE`` queue is allowed:

     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.

   To determine the CAPTURE buffer containing the first decoded frame
after the seek,
   the client may observe the timestamps to match the CAPTURE and OUTPUT buffers
   or use V4L2_DEC_CMD_STOP and V4L2_DEC_CMD_START to drain the decoder.

> > +
> > +.. note::
> > +
> > +   To achieve instantaneous seek, the client may restart streaming on the
> > +   ``CAPTURE`` queue too to discard decoded, but not yet dequeued buffers.
> > +
> > +Dynamic resolution change
> > +=========================
> > +
> > +Streams that include resolution metadata in the bitstream may require switching
> > +to a different resolution during the decoding.
> > +
> > +The sequence starts when the decoder detects a coded frame with one or more of
> > +the following parameters different from previously established (and reflected
> > +by corresponding queries):
> > +
> > +* coded resolution (``OUTPUT`` width and height),
> > +
> > +* visible resolution (selection rectangles),
> > +
> > +* the minimum number of buffers needed for decoding.
> > +
> > +Whenever that happens, the decoder must proceed as follows:
> > +
> > +1.  After encountering a resolution change in the stream, the decoder sends a
> > +    ``V4L2_EVENT_SOURCE_CHANGE`` event with source change type set to
> > +    ``V4L2_EVENT_SRC_CH_RESOLUTION``.
> > +
> > +    .. important::
> > +
> > +       Any client query issued after the decoder queues the event will return
> > +       values applying to the stream after the resolution change, including
> > +       queue formats, selection rectangles and controls.
> > +
> > +2.  The decoder will then process and decode all remaining buffers from before
> > +    the resolution change point.
> > +
> > +    * The last buffer from before the change must be marked with the
> > +      ``V4L2_BUF_FLAG_LAST`` flag, similarly to the `Drain` sequence above.
> > +
> > +    .. warning::
> > +
> > +       The last buffer may be empty (with :c:type:`v4l2_buffer` ``bytesused``
> > +       = 0) and in such case it must be ignored by the client, as it does not
> > +       contain a decoded frame.
> > +
> > +    .. note::
> > +
> > +       Any attempt to dequeue more buffers beyond the buffer marked with
> > +       ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> > +       :c:func:`VIDIOC_DQBUF`.
> > +
> > +The client must continue the sequence as described below to continue the
> > +decoding process.
> > +
> > +1.  Dequeue the source change event.
> > +
> > +    .. important::
> > +
> > +       A source change triggers an implicit decoder drain, similar to the
> > +       explicit `Drain` sequence. The decoder is stopped after it completes.
> > +       The decoding process must be resumed with either a pair of calls to
> > +       :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +       ``CAPTURE`` queue, or a call to :c:func:`VIDIOC_DECODER_CMD` with the
> > +       ``V4L2_DEC_CMD_START`` command.
> > +
> > +2.  Continue with the `Capture setup` sequence.
> > +
> > +.. note::
> > +
> > +   During the resolution change sequence, the ``OUTPUT`` queue must remain
> > +   streaming. Calling :c:func:`VIDIOC_STREAMOFF` on the ``OUTPUT`` queue would
> > +   abort the sequence and initiate a seek.
> > +
> > +   In principle, the ``OUTPUT`` queue operates separately from the ``CAPTURE``
> > +   queue and this remains true for the duration of the entire resolution change
> > +   sequence as well.
> > +
> > +   The client should, for best performance and simplicity, keep queuing/dequeuing
> > +   buffers to/from the ``OUTPUT`` queue even while processing this sequence.
> > +
> > +Drain
> > +=====
> > +
> > +To ensure that all queued ``OUTPUT`` buffers have been processed and related
> > +``CAPTURE`` buffers output to the client, the client must follow the drain
> > +sequence described below. After the drain sequence ends, the client has
> > +received all decoded frames for all ``OUTPUT`` buffers queued before the
> > +sequence was started.
> > +
> > +1. Begin drain by issuing :c:func:`VIDIOC_DECODER_CMD`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``cmd``
> > +         set to ``V4L2_DEC_CMD_STOP``
> > +
> > +     ``flags``
> > +         set to 0
> > +
> > +     ``pts``
> > +         set to 0
> > +
> > +   .. warning::
> > +
> > +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
>
> 'sentence'? You mean 'decoder command'?

Sequence. :)

>
> > +   are streaming. For compatibility reasons, the call to
> > +   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
> > +   streaming, but at the same time it will not initiate the `Drain` sequence
> > +   and so the steps described below would not be applicable.
> > +
> > +2. Any ``OUTPUT`` buffers queued by the client before the
> > +   :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
> > +   normal. The client must continue to handle both queues independently,
> > +   similarly to normal decode operation. This includes,
> > +
> > +   * handling any operations triggered as a result of processing those buffers,
> > +     such as the `Dynamic resolution change` sequence, before continuing with
> > +     the drain sequence,
> > +
> > +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
> > +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
> > +
> > +     .. warning::
> > +
> > +        The last buffer may be empty (with :c:type:`v4l2_buffer`
> > +        ``bytesused`` = 0) and in such case it must be ignored by the client,
> > +        as it does not contain a decoded frame.
> > +
> > +     .. note::
> > +
> > +        Any attempt to dequeue more buffers beyond the buffer marked with
> > +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> > +        :c:func:`VIDIOC_DQBUF`.
> > +
> > +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
> > +     before the ``V4L2_DEC_CMD_STOP`` command are dequeued.
> > +
> > +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribed to it.
> > +
> > +   .. note::
> > +
> > +      For backwards compatibility, the decoder will signal a ``V4L2_EVENT_EOS``
> > +      event when the last the last frame has been decoded and all frames are
>
> 'the last the last' -> the last

Ack.

>
> > +      ready to be dequeued. It is a deprecated behavior and the client must not
> > +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> > +      instead.
> > +
> > +3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
> > +   and the last ``CAPTURE`` buffer are dequeued, the decoder is stopped and it
>
> This sentence is a bit confusing. This is better IMHO:
>
> 3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
>    are dequeued and the last ``CAPTURE`` buffer is dequeued, the decoder is stopped and it
>

Ack.

> > +   will accept, but not process any newly queued ``OUTPUT`` buffers until the
>
> process any -> process, any
>

Ack.

> > +   client issues any of the following operations:
> > +
> > +   * ``V4L2_DEC_CMD_START`` - the decoder will resume the operation normally,
> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``CAPTURE`` queue - the decoder will resume the operation normally,
> > +     however any ``CAPTURE`` buffers still in the queue will be returned to the
> > +     client,
> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``OUTPUT`` queue - any pending source buffers will be returned to the
> > +     client and the `Seek` sequence will be triggered.
> > +
> > +.. note::
> > +
> > +   Once the drain sequence is initiated, the client needs to drive it to
> > +   completion, as described by the steps above, unless it aborts the process by
> > +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
> > +   queues.  The client is not allowed to issue ``V4L2_DEC_CMD_START`` or
> > +   ``V4L2_DEC_CMD_STOP`` again while the drain sequence is in progress and they
> > +   will fail with -EBUSY error code if attempted.
> > +
> > +   Although mandatory, the availability of decoder commands may be queried
> > +   using :c:func:`VIDIOC_TRY_DECODER_CMD`.
> > +
> > +End of stream
> > +=============
> > +
> > +If the decoder encounters an end of stream marking in the stream, the decoder
> > +will initiate the `Drain` sequence, which the client must handle as described
> > +above, skipping the initial :c:func:`VIDIOC_DECODER_CMD`.
> > +
> > +Commit points
> > +=============
> > +
> > +Setting formats and allocating buffers trigger changes in the behavior of the
> > +decoder.
> > +
> > +1. Setting the format on the ``OUTPUT`` queue may change the set of formats
> > +   supported/advertised on the ``CAPTURE`` queue. In particular, it also means
> > +   that the ``CAPTURE`` format may be reset and the client must not rely on the
> > +   previously set format being preserved.
> > +
> > +2. Enumerating formats on the ``CAPTURE`` queue always returns only formats
> > +   supported for the current ``OUTPUT`` format.
> > +
> > +3. Setting the format on the ``CAPTURE`` queue does not change the list of
> > +   formats available on the ``OUTPUT`` queue. An attempt to set the ``CAPTURE``
> > +   format that is not supported for the currently selected ``OUTPUT`` format
> > +   will result in the decoder adjusting the requested ``CAPTURE`` format to a
> > +   supported one.
> > +
> > +4. Enumerating formats on the ``OUTPUT`` queue always returns the full set of
> > +   supported coded formats, irrespectively of the current ``CAPTURE`` format.
> > +
> > +5. While buffers are allocated on the ``OUTPUT`` queue, the client must not
> > +   change the format on the queue. Drivers will return the -EBUSY error code
> > +   for any such format change attempt.
> > +
> > +To summarize, setting formats and allocation must always start with the
> > +``OUTPUT`` queue and the ``OUTPUT`` queue is the master that governs the
> > +set of supported formats for the ``CAPTURE`` queue.
> > diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
> > index fb7f8c26cf09..12d43fe711cf 100644
> > --- a/Documentation/media/uapi/v4l/devices.rst
> > +++ b/Documentation/media/uapi/v4l/devices.rst
> > @@ -15,6 +15,7 @@ Interfaces
> >      dev-output
> >      dev-osd
> >      dev-codec
> > +    dev-decoder
> >      dev-effect
> >      dev-raw-vbi
> >      dev-sliced-vbi
> > diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > index 826f2305da01..ca5f2270a829 100644
> > --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > @@ -32,6 +32,11 @@ Single-planar format structure
> >       to a multiple of the scale factor of any smaller planes. For
> >       example when the image format is YUV 4:2:0, ``width`` and
> >       ``height`` must be multiples of two.
> > +
> > +     For compressed formats that contain the resolution information encoded
> > +     inside the stream, when fed to a stateful mem2mem decoder, the fields
> > +     may be zero to rely on the decoder to detect the right values. For more
> > +     details see :ref:`decoder` and format descriptions.
> >      * - __u32
> >        - ``pixelformat``
> >        - The pixel format or type of compression, set by the application.
> > diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
> > index b89e5621ae69..65dc096199ad 100644
> > --- a/Documentation/media/uapi/v4l/v4l2.rst
> > +++ b/Documentation/media/uapi/v4l/v4l2.rst
> > @@ -53,6 +53,10 @@ Authors, in alphabetical order:
> >
> >    - Original author of the V4L2 API and documentation.
> >
> > +- Figa, Tomasz <tfiga@chromium.org>
> > +
> > +  - Documented the memory-to-memory decoder interface.
> > +
> >  - H Schimek, Michael <mschimek@gmx.at>
> >
> >    - Original author of the V4L2 API and documentation.
> > @@ -61,6 +65,10 @@ Authors, in alphabetical order:
> >
> >    - Documented the Digital Video timings API.
> >
> > +- Osciak, Pawel <posciak@chromium.org>
> > +
> > +  - Documented the memory-to-memory decoder interface.
> > +
> >  - Osciak, Pawel <pawel@osciak.com>
> >
> >    - Designed and documented the multi-planar API.
> > @@ -85,7 +93,7 @@ Authors, in alphabetical order:
> >
> >    - Designed and documented the VIDIOC_LOG_STATUS ioctl, the extended control ioctls, major parts of the sliced VBI API, the MPEG encoder and decoder APIs and the DV Timings API.
> >
> > -**Copyright** |copy| 1999-2016: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari.
> > +**Copyright** |copy| 1999-2018: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari, Tomasz Figa
> >
> >  Except when explicitly stated as GPL, programming examples within this
> >  part can be used and distributed without restrictions.
> > diff --git a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> > index 85c916b0ce07..2f73fe22a9cd 100644
> > --- a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> > +++ b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
> > @@ -49,14 +49,16 @@ The ``cmd`` field must contain the command code. Some commands use the
> >
> >  A :ref:`write() <func-write>` or :ref:`VIDIOC_STREAMON`
> >  call sends an implicit START command to the decoder if it has not been
> > -started yet.
> > +started yet. Applies to both queues of mem2mem decoders.
> >
> >  A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
> >  call of a streaming file descriptor sends an implicit immediate STOP
> > -command to the decoder, and all buffered data is discarded.
> > +command to the decoder, and all buffered data is discarded. Applies to both
> > +queues of mem2mem decoders.
> >
> > -These ioctls are optional, not all drivers may support them. They were
> > -introduced in Linux 3.3.
> > +In principle, these ioctls are optional, not all drivers may support them. They were
> > +introduced in Linux 3.3. They are, however, mandatory for stateful mem2mem decoders
> > +(as further documented in :ref:`decoder`).
> >
> >
> >  .. tabularcolumns:: |p{1.1cm}|p{2.4cm}|p{1.2cm}|p{1.6cm}|p{10.6cm}|
> > @@ -160,26 +162,36 @@ introduced in Linux 3.3.
> >       ``V4L2_DEC_CMD_RESUME`` for that. This command has one flag:
> >       ``V4L2_DEC_CMD_START_MUTE_AUDIO``. If set, then audio will be
> >       muted when playing back at a non-standard speed.
> > +
> > +     For stateful mem2mem decoders, the command may be also used to restart
> > +     the decoder in case of an implicit stop initiated by the decoder
> > +     itself, without the ``V4L2_DEC_CMD_STOP`` being called explicitly.
> > +     No flags or other arguments are accepted in case of mem2mem decoders.
> > +     See :ref:`decoder` for more details.
> >      * - ``V4L2_DEC_CMD_STOP``
> >        - 1
> >        - Stop the decoder. When the decoder is already stopped, this
> >       command does nothing. This command has two flags: if
> >       ``V4L2_DEC_CMD_STOP_TO_BLACK`` is set, then the decoder will set
> >       the picture to black after it stopped decoding. Otherwise the last
> > -     image will repeat. mem2mem decoders will stop producing new frames
> > -     altogether. They will send a ``V4L2_EVENT_EOS`` event when the
> > -     last frame has been decoded and all frames are ready to be
> > -     dequeued and will set the ``V4L2_BUF_FLAG_LAST`` buffer flag on
> > -     the last buffer of the capture queue to indicate there will be no
> > -     new buffers produced to dequeue. This buffer may be empty,
> > -     indicated by the driver setting the ``bytesused`` field to 0. Once
> > -     the ``V4L2_BUF_FLAG_LAST`` flag was set, the
> > -     :ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
> > -     but return an ``EPIPE`` error code. If
> > +     image will repeat. If
> >       ``V4L2_DEC_CMD_STOP_IMMEDIATELY`` is set, then the decoder stops
> >       immediately (ignoring the ``pts`` value), otherwise it will keep
> >       decoding until timestamp >= pts or until the last of the pending
> >       data from its internal buffers was decoded.
> > +
> > +     A stateful mem2mem decoder will proceed with decoding the source
> > +     buffers pending before the command is issued and then stop producing
> > +     new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
> > +     has been decoded and all frames are ready to be dequeued and will set
> > +     the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
> > +     capture queue to indicate there will be no new buffers produced to
> > +     dequeue. This buffer may be empty, indicated by the driver setting the
> > +     ``bytesused`` field to 0. Once the buffer with the
> > +     ``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
> > +     <VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
> > +     error code. No flags or other arguments are accepted in case of mem2mem
> > +     decoders.  See :ref:`decoder` for more details.
> >      * - ``V4L2_DEC_CMD_PAUSE``
> >        - 2
> >        - Pause the decoder. When the decoder has not been started yet, the
> > diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> > index 3ead350e099f..0fc0b78a943e 100644
> > --- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> > +++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> > @@ -53,6 +53,13 @@ devices that is either the struct
> >  member. When the requested buffer type is not supported drivers return
> >  an ``EINVAL`` error code.
> >
> > +A stateful mem2mem decoder will not allow operations on the
> > +``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
> > +buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
> > +``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
> > +operation is attempted, drivers return an ``EACCES`` error code. Refer to
> > +:ref:`decoder` for more details.
>
> This isn't right. EACCES is returned as long as the output format resolution is
> unknown. If it is set explicitly, then this will work without an error.

I think that's what is written above. The stream resolution is known
when the applicable OUTPUT queue is configured, which lets the driver
determine the format constraints on the applicable CAPTURE queue. If
it's not clear, could you help rephrasing?

>
> > +
> >  To change the current format parameters applications initialize the
> >  ``type`` field and all fields of the respective ``fmt`` union member.
> >  For details see the documentation of the various devices types in
> > @@ -145,6 +152,13 @@ On success 0 is returned, on error -1 and the ``errno`` variable is set
> >  appropriately. The generic error codes are described at the
> >  :ref:`Generic Error Codes <gen-errors>` chapter.
> >
> > +EACCES
> > +    The format is not accessible until another buffer type is configured.
> > +    Relevant for the V4L2_BUF_TYPE_VIDEO_CAPTURE and
> > +    V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE buffer types of mem2mem decoders, which
> > +    require the format of V4L2_BUF_TYPE_VIDEO_OUTPUT or
> > +    V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE buffer type to be configured first.
>
> Ditto.

Ditto.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-22 10:02     ` Tomasz Figa
@ 2019-01-22 14:47       ` Hans Verkuil
  2019-01-23  5:27         ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2019-01-22 14:47 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot, kamil,
	a.hajda, Kyungmin Park, jtp.park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, todor.tomov, nicolas, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 01/22/19 11:02, Tomasz Figa wrote:
> On Mon, Nov 12, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>
>> Hi Tomasz,
>>
>> A general note for the stateful and stateless patches: they describe specific
>> use-cases of the more generic Codec Interface, and as such should be one
>> level deeper in the section hierarchy.
> 
> I wonder what exactly this Codec Interface is. Is it a historical name
> for mem2mem? If so, perhaps it would make sense to rename it?

Yeah, it should be renamed to "Video Memory-to-Memory Interface", and the
codecs are just specific instances of such an interface.

> 
>>
>> I.e. instead of being section 4.6/7/8:
>>
>> https://hverkuil.home.xs4all.nl/request-api/uapi/v4l/devices.html
>>
>> they should be 4.5.1/2/3.
>>
> 
> FYI, the first RFC started like that, but it only made the spec
> difficult to navigate and the section numbers too long.
> 
> Still, no strong opinion. I'm okay moving it there, if you think it's better.

It should be moved and the interface name should be renamed. It makes a lot
more sense with those changes.

I've posted a patch for this.

> 
>> On 10/22/2018 04:48 PM, Tomasz Figa wrote:
>>> Due to complexity of the video decoding process, the V4L2 drivers of
>>> stateful decoder hardware require specific sequences of V4L2 API calls
>>> to be followed. These include capability enumeration, initialization,
>>> decoding, seek, pause, dynamic resolution change, drain and end of
>>> stream.
> [snipping any comments that I agree with]
>>> +
>>> +source height
>>> +   height in pixels for given source resolution; relevant to encoders only
>>> +
>>> +source resolution
>>> +   resolution in pixels of source frames being source to the encoder and
>>> +   subject to further cropping to the bounds of visible resolution; relevant to
>>> +   encoders only
>>> +
>>> +source width
>>> +   width in pixels for given source resolution; relevant to encoders only
>>
>> I would drop these three terms: they are not used in this document since this
>> describes a decoder and not an encoder.
>>
> 
> The glossary is shared between encoder and decoder, as suggested in
> previous round of review.
> 
> [snip]
>>> +
>>> +   * If width and height are set to non-zero values, the ``CAPTURE`` format
>>> +     will be updated with an appropriate frame buffer resolution instantly.
>>> +     However, for coded formats that include stream resolution information,
>>> +     after the decoder is done parsing the information from the stream, it will
>>> +     update the ``CAPTURE`` format with new values and signal a source change
>>> +     event.
>>
>> What if the initial width and height specified by userspace matches the parsed
>> width and height? Do you still get a source change event? I think you should
>> always get this event since there are other parameters that depend on the parsing
>> of the meta data.
>>
>> But that should be made explicit here.
>>
> 
> Yes, the change event should happen always after the driver determines
> the format of the stream. Will specify it explicitly.
> 
>>> +
>>> +   .. warning::
>>
>> I'd call this a note rather than a warning.
>>
> 
> I think it deserves at least the "important" level, since it informs
> about the side effects of the call, affecting any actions that the
> client might have done before.
> 
> 
>>> +
>>> +      Changing the ``OUTPUT`` format may change the currently set ``CAPTURE``
>>> +      format. The decoder will derive a new ``CAPTURE`` format from the
>>> +      ``OUTPUT`` format being set, including resolution, colorimetry
>>> +      parameters, etc. If the client needs a specific ``CAPTURE`` format, it
>>> +      must adjust it afterwards.
>>> +
>>> +3.  **Optional.** Query the minimum number of buffers required for ``OUTPUT``
>>> +    queue via :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to
>>> +    use more buffers than the minimum required by hardware/format.
>>
>> Why is this useful? As far as I can tell only the s5p-mfc *encoder* supports
>> this control, so this seems pointless. And since the output queue gets a bitstream
>> I don't see any reason for reading this control in a decoder.
>>
> 
> Indeed, querying this for bitstream buffers probably doesn't make much
> sense. I'll remove it.
> 
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``id``
>>> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``value``
>>> +          the minimum number of ``OUTPUT`` buffers required for the currently
>>> +          set format
>>> +
>>> +4.  Allocate source (bitstream) buffers via :c:func:`VIDIOC_REQBUFS` on
>>> +    ``OUTPUT``.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          the actual number of buffers allocated
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +    .. note::
>>> +
>>> +       To allocate more than the minimum number of buffers (for pipeline
>>> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT``
>>> +       control to get the minimum number of buffers required by the
>>> +       decoder/format, and pass the obtained value plus the number of
>>> +       additional buffers needed in the ``count`` field to
>>> +       :c:func:`VIDIOC_REQBUFS`.
>>
>> As mentioned above, this makes no sense for stateful decoders IMHO.
>>
> 
> Ack.
> 
>>> +
>>> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``OUTPUT`` queue can be
>>> +    used to have more control over buffer allocation.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +      ``format``
>>> +          follows standard semantics
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          adjusted to the number of allocated buffers
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +5.  Start streaming on the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`.
>>> +
>>> +6.  **This step only applies to coded formats that contain resolution information
>>> +    in the stream.** Continue queuing/dequeuing bitstream buffers to/from the
>>> +    ``OUTPUT`` queue via :c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`. The
>>> +    buffers will be processed and returned to the client in order, until
>>> +    required metadata to configure the ``CAPTURE`` queue are found. This is
>>> +    indicated by the decoder sending a ``V4L2_EVENT_SOURCE_CHANGE`` event with
>>> +    ``V4L2_EVENT_SRC_CH_RESOLUTION`` source change type.
>>> +
>>> +    * It is not an error if the first buffer does not contain enough data for
>>> +      this to occur. Processing of the buffers will continue as long as more
>>> +      data is needed.
>>> +
>>> +    * If data in a buffer that triggers the event is required to decode the
>>> +      first frame, it will not be returned to the client, until the
>>> +      initialization sequence completes and the frame is decoded.
>>> +
>>> +    * If the client sets width and height of the ``OUTPUT`` format to 0,
>>> +      calling :c:func:`VIDIOC_G_FMT`, :c:func:`VIDIOC_S_FMT` or
>>> +      :c:func:`VIDIOC_TRY_FMT` on the ``CAPTURE`` queue will return the
>>> +      ``-EACCES`` error code, until the decoder configures ``CAPTURE`` format
>>> +      according to stream metadata.
>>> +
>>> +    .. important::
>>> +
>>> +       Any client query issued after the decoder queues the event will return
>>> +       values applying to the just parsed stream, including queue formats,
>>> +       selection rectangles and controls.
>>> +
>>> +    .. note::
>>> +
>>> +       A client capable of acquiring stream parameters from the bitstream on
>>> +       its own may attempt to set the width and height of the ``OUTPUT`` format
>>> +       to non-zero values matching the coded size of the stream, skip this step
>>> +       and continue with the `Capture setup` sequence. However, it must not
>>> +       rely on any driver queries regarding stream parameters, such as
>>> +       selection rectangles and controls, since the decoder has not parsed them
>>> +       from the stream yet. If the values configured by the client do not match
>>> +       those parsed by the decoder, a `Dynamic resolution change` will be
>>> +       triggered to reconfigure them.
>>> +
>>> +    .. note::
>>> +
>>> +       No decoded frames are produced during this phase.
>>> +
>>> +7.  Continue with the `Capture setup` sequence.
>>> +
>>> +Capture setup
>>> +=============
>>> +
>>> +1.  Call :c:func:`VIDIOC_G_FMT` on the ``CAPTURE`` queue to get format for the
>>> +    destination buffers parsed/decoded from the bitstream.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``width``, ``height``
>>> +          frame buffer resolution for the decoded frames
>>> +
>>> +      ``pixelformat``
>>> +          pixel format for decoded frames
>>> +
>>> +      ``num_planes`` (for _MPLANE ``type`` only)
>>> +          number of planes for pixelformat
>>> +
>>> +      ``sizeimage``, ``bytesperline``
>>> +          as per standard semantics; matching frame buffer format
>>> +
>>> +    .. note::
>>> +
>>> +       The value of ``pixelformat`` may be any pixel format supported by the
>>> +       decoder for the current stream. The decoder should choose a
>>> +       preferred/optimal format for the default configuration. For example, a
>>> +       YUV format may be preferred over an RGB format if an additional
>>> +       conversion step would be required for the latter.
>>> +
>>> +2.  **Optional.** Acquire the visible resolution via
>>> +    :c:func:`VIDIOC_G_SELECTION`.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +      ``target``
>>> +          set to ``V4L2_SEL_TGT_COMPOSE``
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``r.left``, ``r.top``, ``r.width``, ``r.height``
>>> +          the visible rectangle; it must fit within the frame buffer resolution
>>> +          returned by :c:func:`VIDIOC_G_FMT` on ``CAPTURE``.
>>> +
>>> +    * The following selection targets are supported on ``CAPTURE``:
>>> +
>>> +      ``V4L2_SEL_TGT_CROP_BOUNDS``
>>> +          corresponds to the coded resolution of the stream
>>> +
>>> +      ``V4L2_SEL_TGT_CROP_DEFAULT``
>>> +          the rectangle covering the part of the ``CAPTURE`` buffer that
>>> +          contains meaningful picture data (visible area); width and height
>>> +          will be equal to the visible resolution of the stream
>>> +
>>> +      ``V4L2_SEL_TGT_CROP``
>>> +          the rectangle within the coded resolution to be output to
>>> +          ``CAPTURE``; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``; read-only on
>>> +          hardware without additional compose/scaling capabilities
>>> +
>>> +      ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
>>> +          the maximum rectangle within a ``CAPTURE`` buffer, which the cropped
>>> +          frame can be output into; equal to ``V4L2_SEL_TGT_CROP`` if the
>>> +          hardware does not support compose/scaling
>>> +
>>> +      ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
>>> +          equal to ``V4L2_SEL_TGT_CROP``
>>> +
>>> +      ``V4L2_SEL_TGT_COMPOSE``
>>> +          the rectangle inside a ``CAPTURE`` buffer into which the cropped
>>> +          frame is written; defaults to ``V4L2_SEL_TGT_COMPOSE_DEFAULT``;
>>> +          read-only on hardware without additional compose/scaling capabilities
>>> +
>>> +      ``V4L2_SEL_TGT_COMPOSE_PADDED``
>>> +          the rectangle inside a ``CAPTURE`` buffer which is overwritten by the
>>> +          hardware; equal to ``V4L2_SEL_TGT_COMPOSE`` if the hardware does not
>>> +          write padding pixels
>>> +
>>> +    .. warning::
>>> +
>>> +       The values are guaranteed to be meaningful only after the decoder
>>> +       successfully parses the stream metadata. The client must not rely on the
>>> +       query before that happens.
>>> +
>>> +3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
>>> +    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
>>> +    buffers than the minimum required by hardware/format.
>>
>> Is this step optional or required? Can it change when a resolution change occurs?
> 
> Probably not with a simple resolution change, but a case when a stream
> is changed on the fly would trigger what we call "resolution change"
> here, but what would effectively be a "source change" and that could
> include a change in the number of required CAPTURE buffers.
> 
>> How does this relate to the checks for the minimum number of buffers that REQBUFS
>> does?
> 
> The control returns the minimum that REQBUFS would allow, so the
> application can add few more buffers on top of that and improve the
> pipelining.
> 
>>
>> The 'This is useful if' sentence suggests that it is optional, but I think that
>> sentence just confuses the issue.
>>
> 
> It used to be optional and I didn't rephrase it after turning it into
> mandatory. How about:
> 
>     This enables the client to request more buffers
>     than the minimum required by hardware/format and achieve better pipelining.

Hmm, OK. It'll do, I guess. I never liked these MIN_BUFFERS controls, I wish they
would return something like the recommended number of buffers that will give you
decent performance.

> 
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``id``
>>> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``value``
>>> +          minimum number of buffers required to decode the stream parsed in
>>> +          this initialization sequence.
>>> +
>>> +    .. note::
>>> +
>>> +       The minimum number of buffers must be at least the number required to
>>> +       successfully decode the current stream. This may for example be the
>>> +       required DPB size for an H.264 stream given the parsed stream
>>> +       configuration (resolution, level).
>>> +
>>> +    .. warning::
>>> +
>>> +       The value is guaranteed to be meaningful only after the decoder
>>> +       successfully parses the stream metadata. The client must not rely on the
>>> +       query before that happens.
>>> +
>>> +4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
>>> +    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
>>> +    client may use this ioctl to discover which raw formats are supported for
>>> +    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.
>>
>> Can the list returned here differ from the list returned in the 'Querying capabilities'
>> step? If so, then I assume it will always be a subset of what was returned in
>> the 'Querying' step?
> 
> Depends on whether you're considering just VIDIOC_ENUM_FMT or also
> VIDIOC_G_FMT and VIDIOC_ENUM_FRAMESIZES.
> 
> The initial VIDIOC_ENUM_FMT has no way to account for any resolution
> constraints of the formats, so the list would include all raw pixel
> formats that the decoder can handle with selected coded pixel format.
> However, the list can be further narrowed down by using
> VIDIOC_ENUM_FRAMESIZES, to restrict each raw format only to the
> resolutions it can handle.
> 
> The VIDIOC_ENUM_FMT call in this sequence (after getting the stream
> information) would have the knowledge about the resolution, so the
> list returned here would only include the formats that can be actually
> handled. It should match the result of the initial query using both
> VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES.

Right, so this will be a subset of the initial query taking the resolution
into account.

> 
>>
>>> +
>>> +    .. important::
>>> +
>>> +       The decoder will return only formats supported for the currently
>>> +       established coded format, as per the ``OUTPUT`` format and/or stream
>>> +       metadata parsed in this initialization sequence, even if more formats
>>> +       may be supported by the decoder in general.
>>> +
>>> +       For example, a decoder may support YUV and RGB formats for resolutions
>>> +       1920x1088 and lower, but only YUV for higher resolutions (due to
>>> +       hardware limitations). After parsing a resolution of 1920x1088 or lower,
>>> +       :c:func:`VIDIOC_ENUM_FMT` may return a set of YUV and RGB pixel formats,
>>> +       but after parsing resolution higher than 1920x1088, the decoder will not
>>> +       return RGB, unsupported for this resolution.
>>> +
>>> +       However, subsequent resolution change event triggered after
>>> +       discovering a resolution change within the same stream may switch
>>> +       the stream into a lower resolution and :c:func:`VIDIOC_ENUM_FMT`
>>> +       would return RGB formats again in that case.
>>> +
>>> +5.  **Optional.** Set the ``CAPTURE`` format via :c:func:`VIDIOC_S_FMT` on the
>>> +    ``CAPTURE`` queue. The client may choose a different format than
>>> +    selected/suggested by the decoder in :c:func:`VIDIOC_G_FMT`.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +      ``pixelformat``
>>> +          a raw pixel format
>>> +
>>> +    .. note::
>>> +
>>> +       The client may use :c:func:`VIDIOC_ENUM_FMT` after receiving the
>>> +       ``V4L2_EVENT_SOURCE_CHANGE`` event to find out the set of raw formats
>>> +       supported for the stream.
>>
>> Isn't this a duplicate of step 4? I think this note can be dropped.
>>
> 
> Ack.
> 
>>> +
>>> +6.  If all the following conditions are met, the client may resume the decoding
>>> +    instantly:
>>> +
>>> +    * ``sizeimage`` of the new format (determined in previous steps) is less
>>> +      than or equal to the size of currently allocated buffers,
>>> +
>>> +    * the number of buffers currently allocated is greater than or equal to the
>>> +      minimum number of buffers acquired in previous steps. To fulfill this
>>> +      requirement, the client may use :c:func:`VIDIOC_CREATE_BUFS` to add new
>>> +      buffers.
>>> +
>>> +    In such case, the remaining steps do not apply and the client may resume
>>> +    the decoding by one of the following actions:
>>> +
>>> +    * if the ``CAPTURE`` queue is streaming, call :c:func:`VIDIOC_DECODER_CMD`
>>> +      with the ``V4L2_DEC_CMD_START`` command,
>>> +
>>> +    * if the ``CAPTURE`` queue is not streaming, call :c:func:`VIDIOC_STREAMON`
>>> +      on the ``CAPTURE`` queue.
>>> +
>>> +    However, if the client intends to change the buffer set, to lower
>>> +    memory usage or for any other reasons, it may be achieved by following
>>> +    the steps below.
>>> +
>>> +7.  **If the** ``CAPTURE`` **queue is streaming,** keep queuing and dequeuing
>>> +    buffers on the ``CAPTURE`` queue until a buffer marked with the
>>> +    ``V4L2_BUF_FLAG_LAST`` flag is dequeued.
>>> +
>>> +8.  **If the** ``CAPTURE`` **queue is streaming,** call :c:func:`VIDIOC_STREAMOFF`
>>> +    on the ``CAPTURE`` queue to stop streaming.
>>> +
>>> +    .. warning::
>>> +
>>> +       The ``OUTPUT`` queue must remain streaming. Calling
>>> +       :c:func:`VIDIOC_STREAMOFF` on it would abort the sequence and trigger a
>>> +       seek.
>>> +
>>> +9.  **If the** ``CAPTURE`` **queue has buffers allocated,** free the ``CAPTURE``
>>> +    buffers using :c:func:`VIDIOC_REQBUFS`.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          set to 0
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +10. Allocate ``CAPTURE`` buffers via :c:func:`VIDIOC_REQBUFS` on the
>>> +    ``CAPTURE`` queue.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          actual number of buffers allocated
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +    .. note::
>>> +
>>> +       To allocate more than the minimum number of buffers (for pipeline
>>> +       depth), the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
>>> +       control to get the minimum number of buffers required, and pass the
>>> +       obtained value plus the number of additional buffers needed in the
>>> +       ``count`` field to :c:func:`VIDIOC_REQBUFS`.
>>
>> Same question as before: is it optional or required to obtain the value of this
>> control? And can't the driver just set the min_buffers_needed field in the capture
>> vb2_queue to the minimum number of buffers that are required?
> 
> min_buffers_needed is the number of buffers that must be queued before
> start_streaming() can be called, so not relevant here.

Yeah, sorry about that. I clearly misunderstood this.

 The control is
> about the number of buffers to be allocated. Although the drivers must
> ensure that REQBUFS allocates the absolute minimum number of buffers
> for the decoding to be able to progress, the pipeline depth is
> something that the applications should control (e.g. depending on the
> consumers of the decoded buffers) and this is allowed by this control.
> 
>>
>> Should you be allowed to allocate buffers at all if the capture format isn't
>> known? I.e. width/height is still 0. It makes no sense to call REQBUFS since
>> there is no format size known that REQBUFS can use.
>>
> 
> Indeed, REQBUFS(CAPTURE) must not be allowed before the stream
> information is known (regardless of whether it comes from the OUTPUT
> format or is parsed from the stream). Let me add this to the related
> note in the Initialization sequence, which already includes
> VIDIOC_*_FMT.
> 
> For the Capture setup sequence, though, it's expected to happen when
> the stream information is already known, so I wouldn't change the
> description here.

Makes sense.

> 
>>> +
>>> +    Alternatively, :c:func:`VIDIOC_CREATE_BUFS` on the ``CAPTURE`` queue can be
>>> +    used to have more control over buffer allocation. For example, by
>>> +    allocating buffers larger than the current ``CAPTURE`` format, future
>>> +    resolution changes can be accommodated.
>>> +
>>> +    * **Required fields:**
>>> +
>>> +      ``count``
>>> +          requested number of buffers to allocate; greater than zero
>>> +
>>> +      ``type``
>>> +          a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +      ``memory``
>>> +          follows standard semantics
>>> +
>>> +      ``format``
>>> +          a format representing the maximum framebuffer resolution to be
>>> +          accommodated by newly allocated buffers
>>> +
>>> +    * **Return fields:**
>>> +
>>> +      ``count``
>>> +          adjusted to the number of allocated buffers
>>> +
>>> +    .. warning::
>>> +
>>> +       The actual number of allocated buffers may differ from the ``count``
>>> +       given. The client must check the updated value of ``count`` after the
>>> +       call returns.
>>> +
>>> +    .. note::
>>> +
>>> +       To allocate buffers for a format different than parsed from the stream
>>> +       metadata, the client must proceed as follows, before the metadata
>>> +       parsing is initiated:
>>> +
>>> +       * set width and height of the ``OUTPUT`` format to desired coded resolution to
>>> +         let the decoder configure the ``CAPTURE`` format appropriately,
>>> +
>>> +       * query the ``CAPTURE`` format using :c:func:`VIDIOC_G_FMT` and save it
>>> +         until this step.
>>> +
>>> +       The format obtained in the query may be then used with
>>> +       :c:func:`VIDIOC_CREATE_BUFS` in this step to allocate the buffers.
>>> +
>>> +11. Call :c:func:`VIDIOC_STREAMON` on the ``CAPTURE`` queue to start decoding
>>> +    frames.
>>> +
>>> +Decoding
>>> +========
>>> +
>>> +This state is reached after the `Capture setup` sequence finishes succesfully.
>>> +In this state, the client queues and dequeues buffers to both queues via
>>> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
>>> +semantics.
>>> +
>>> +The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
>>> +format and may be affected by codec-specific extended controls, as stated in
>>> +the documentation of each format.
>>> +
>>> +Both queues operate independently, following the standard behavior of V4L2
>>> +buffer queues and memory-to-memory devices. In addition, the order of decoded
>>> +frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
>>> +coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
>>> +format, e.g. frame reordering.
>>> +
>>> +The client must not assume any direct relationship between ``CAPTURE``
>>> +and ``OUTPUT`` buffers and any specific timing of buffers becoming
>>> +available to dequeue. Specifically,
>>> +
>>> +* a buffer queued to ``OUTPUT`` may result in no buffers being produced
>>> +  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
>>> +  metadata syntax structures are present in it),
>>> +
>>> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
>>> +  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
>>> +  returning a decoded frame allowed the decoder to return a frame that
>>> +  preceded it in decode, but succeeded it in the display order),
>>> +
>>> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
>>> +  ``CAPTURE`` later into decode process, and/or after processing further
>>> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
>>> +  reordering is used,
>>> +
>>> +* buffers may become available on the ``CAPTURE`` queue without additional
>>> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
>>> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
>>> +  available at later time, due to specifics of the decoding process.
>>> +
>>> +.. note::
>>> +
>>> +   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
>>> +   originated from, the client can set the ``timestamp`` field of the
>>> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
>>> +   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
>>> +   will have their ``timestamp`` field set to the same value when dequeued.
>>> +
>>> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing

straighforward -> straightforward

>>> +   one ``CAPTURE`` buffer, the following cases are defined:
>>> +
>>> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
>>> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
>>> +
>>> +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
>>> +     the ``OUTPUT`` buffer queued last will be copied,
>>> +
>>> +   * the decoding order differs from the display order (i.e. the
>>> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
>>> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
>>> +     and thus monotonicity of the timestamps cannot be guaranteed.

I think this last point should be rewritten. The timestamp is just a value that
is copied, there are no monotonicity requirements for m2m devices in general.

>>
>> Should stateful codecs be required to support 'tags'? See:
>>
>> https://www.mail-archive.com/linux-media@vger.kernel.org/msg136314.html
>>
>> To be honest, I'm inclined to require this for all m2m devices eventually.
>>
> 
> I guess this goes outside of the scope now, since we deferred tags.
> 
> Other than that, I would indeed make all m2m devices support tags,
> since it shouldn't differ from the timestamp copy feature as we have
> now.

I agree.

> 
>>> +
>>> +During the decoding, the decoder may initiate one of the special sequences, as
>>> +listed below. The sequences will result in the decoder returning all the
>>> +``CAPTURE`` buffers that originated from all the ``OUTPUT`` buffers processed
>>> +before the sequence started. Last of the buffers will have the
>>> +``V4L2_BUF_FLAG_LAST`` flag set. To determine the sequence to follow, the client
>>> +must check if there is any pending event and,
>>> +
>>> +* if a ``V4L2_EVENT_SOURCE_CHANGE`` event is pending, the `Dynamic resolution
>>> +  change` sequence needs to be followed,
>>> +
>>> +* if a ``V4L2_EVENT_EOS`` event is pending, the `End of stream` sequence needs
>>> +  to be followed.
>>> +
>>> +Some of the sequences can be intermixed with each other and need to be handled
>>> +as they happen. The exact operation is documented for each sequence.
>>> +
>>> +Seek
>>> +====
>>> +
>>> +Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
>>> +The seek does not require any specific operation on the ``CAPTURE`` queue, but
>>> +it may be affected as per normal decoder operation.
>>> +
>>> +1. Stop the ``OUTPUT`` queue to begin the seek sequence via
>>> +   :c:func:`VIDIOC_STREAMOFF`.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
>>> +     treated as returned to the client (following standard semantics).
>>> +
>>> +2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +   * The decoder will start accepting new source bitstream buffers after the
>>> +     call returns.
>>> +
>>> +3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
>>> +   queue until a suitable resume point is found.
>>> +
>>> +   .. note::
>>> +
>>> +      There is no requirement to begin queuing coded data starting exactly
>>> +      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
>>> +      buffers will be processed and returned to the client until a suitable
>>> +      resume point is found.  While looking for a resume point, the decoder
>>> +      should not produce any decoded frames into ``CAPTURE`` buffers.
>>> +
>>> +      Some hardware is known to mishandle seeks to a non-resume point. Such an
>>> +      operation may result in an unspecified number of corrupted decoded frames
>>> +      being made available on the ``CAPTURE`` queue. Drivers must ensure that
>>> +      no fatal decoding errors or crashes occur, and implement any necessary
>>> +      handling and workarounds for hardware issues related to seek operations.
>>
>> Is there a requirement that those corrupted frames have V4L2_BUF_FLAG_ERROR set?
>> I.e., can userspace detect those currupted frames?
>>
> 
> I think the question is whether the kernel driver can actually detect
> those corrupted frames. We can't guarantee reporting errors to the
> userspace, if the hardware doesn't actually report them.
> 
> Could we perhaps keep this an open question and possibly address with
> some extension that could be an opt in for the decoders that can
> report errors?

Hmm, how about: If the hardware can detect such corrupted decoded frames, then
it shall set V4L2_BUF_FLAG_ERROR.

> 
>>> +
>>> +   .. warning::
>>> +
>>> +      In case of the H.264 codec, the client must take care not to seek over a
>>> +      change of SPS/PPS. Even though the target frame could be a keyframe, the
>>> +      stale SPS/PPS inside decoder state would lead to undefined results when
>>> +      decoding. Although the decoder must handle such case without a crash or a
>>> +      fatal decode error, the client must not expect a sensible decode output.
>>> +
>>> +4. After a resume point is found, the decoder will start returning ``CAPTURE``
>>> +   buffers containing decoded frames.
>>> +
>>> +.. important::
>>> +
>>> +   A seek may result in the `Dynamic resolution change` sequence being
>>> +   iniitated, due to the seek target having decoding parameters different from
>>> +   the part of the stream decoded before the seek. The sequence must be handled
>>> +   as per normal decoder operation.
>>> +
>>> +.. warning::
>>> +
>>> +   It is not specified when the ``CAPTURE`` queue starts producing buffers
>>> +   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
>>> +   as it operates independently from the ``OUTPUT`` queue.
>>> +
>>> +   The decoder may return a number of remaining ``CAPTURE`` buffers containing
>>> +   decoded frames originating from the ``OUTPUT`` buffers queued before the
>>> +   seek sequence is performed.
>>> +
>>> +   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
>>> +   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
>>> +   queued before the seek sequence may have matching ``CAPTURE`` buffers
>>> +   produced.  For example, given the sequence of operations on the
>>> +   ``OUTPUT`` queue:
>>> +
>>> +     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
>>> +
>>> +   any of the following results on the ``CAPTURE`` queue is allowed:
>>> +
>>> +     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
>>
>> Isn't it the case that if you would want to avoid that, then you should call
>> DECODER_STOP, wait for the last buffer on the CAPTURE queue, then seek and
>> call DECODER_START. If you do that, then you should always get {A’, B’, G’, H’}.
>> (basically following the Drain sequence).
> 
> Yes, it is, but I think it depends on the application needs. Here we
> just give a primitive to change the place in the stream that's being
> decoded (or change the stream on the fly).
> 
> Actually, with the timestamp copy, I guess we wouldn't even need to do
> the DECODER_STOP, as we could just discard the CAPTURE buffers until
> we get one that matches the timestamp of the first OUTPUT buffer after
> the seek.
> 
>>
>> Admittedly, you typically want to do an instantaneous seek, so this is probably
>> not what you want to do normally.
>>
>> It might help to have this documented in a separate note.
>>
> 
> The instantaneous seek is documented below. I'm not sure if there is
> any practical need to document the other case, but I could add a
> sentence like below to the warning above. What do you think?
> 
>    The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
>    ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
>    queued before the seek sequence may have matching ``CAPTURE`` buffers
>    produced.  For example, given the sequence of operations on the
>    ``OUTPUT`` queue:
> 
>      QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
> 
>    any of the following results on the ``CAPTURE`` queue is allowed:

is allowed -> are allowed

> 
>      {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
> 
>    To determine the CAPTURE buffer containing the first decoded frame
> after the seek,
>    the client may observe the timestamps to match the CAPTURE and OUTPUT buffers
>    or use V4L2_DEC_CMD_STOP and V4L2_DEC_CMD_START to drain the decoder.

Ack.

> 
>>> +
>>> +.. note::
>>> +
>>> +   To achieve instantaneous seek, the client may restart streaming on the
>>> +   ``CAPTURE`` queue too to discard decoded, but not yet dequeued buffers.
>>> +
>>> +Dynamic resolution change
>>> +=========================
>>> +
>>> +Streams that include resolution metadata in the bitstream may require switching
>>> +to a different resolution during the decoding.
>>> +
>>> +The sequence starts when the decoder detects a coded frame with one or more of
>>> +the following parameters different from previously established (and reflected
>>> +by corresponding queries):
>>> +
>>> +* coded resolution (``OUTPUT`` width and height),
>>> +
>>> +* visible resolution (selection rectangles),
>>> +
>>> +* the minimum number of buffers needed for decoding.
>>> +
>>> +Whenever that happens, the decoder must proceed as follows:
>>> +
>>> +1.  After encountering a resolution change in the stream, the decoder sends a
>>> +    ``V4L2_EVENT_SOURCE_CHANGE`` event with source change type set to
>>> +    ``V4L2_EVENT_SRC_CH_RESOLUTION``.
>>> +
>>> +    .. important::
>>> +
>>> +       Any client query issued after the decoder queues the event will return
>>> +       values applying to the stream after the resolution change, including
>>> +       queue formats, selection rectangles and controls.
>>> +
>>> +2.  The decoder will then process and decode all remaining buffers from before
>>> +    the resolution change point.
>>> +
>>> +    * The last buffer from before the change must be marked with the
>>> +      ``V4L2_BUF_FLAG_LAST`` flag, similarly to the `Drain` sequence above.
>>> +
>>> +    .. warning::
>>> +
>>> +       The last buffer may be empty (with :c:type:`v4l2_buffer` ``bytesused``
>>> +       = 0) and in such case it must be ignored by the client, as it does not
>>> +       contain a decoded frame.
>>> +
>>> +    .. note::
>>> +
>>> +       Any attempt to dequeue more buffers beyond the buffer marked with
>>> +       ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
>>> +       :c:func:`VIDIOC_DQBUF`.
>>> +
>>> +The client must continue the sequence as described below to continue the
>>> +decoding process.
>>> +
>>> +1.  Dequeue the source change event.
>>> +
>>> +    .. important::
>>> +
>>> +       A source change triggers an implicit decoder drain, similar to the
>>> +       explicit `Drain` sequence. The decoder is stopped after it completes.
>>> +       The decoding process must be resumed with either a pair of calls to
>>> +       :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
>>> +       ``CAPTURE`` queue, or a call to :c:func:`VIDIOC_DECODER_CMD` with the
>>> +       ``V4L2_DEC_CMD_START`` command.
>>> +
>>> +2.  Continue with the `Capture setup` sequence.
>>> +
>>> +.. note::
>>> +
>>> +   During the resolution change sequence, the ``OUTPUT`` queue must remain
>>> +   streaming. Calling :c:func:`VIDIOC_STREAMOFF` on the ``OUTPUT`` queue would
>>> +   abort the sequence and initiate a seek.
>>> +
>>> +   In principle, the ``OUTPUT`` queue operates separately from the ``CAPTURE``
>>> +   queue and this remains true for the duration of the entire resolution change
>>> +   sequence as well.
>>> +
>>> +   The client should, for best performance and simplicity, keep queuing/dequeuing
>>> +   buffers to/from the ``OUTPUT`` queue even while processing this sequence.
>>> +
>>> +Drain
>>> +=====
>>> +
>>> +To ensure that all queued ``OUTPUT`` buffers have been processed and related
>>> +``CAPTURE`` buffers output to the client, the client must follow the drain
>>> +sequence described below. After the drain sequence ends, the client has
>>> +received all decoded frames for all ``OUTPUT`` buffers queued before the
>>> +sequence was started.
>>> +
>>> +1. Begin drain by issuing :c:func:`VIDIOC_DECODER_CMD`.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``cmd``
>>> +         set to ``V4L2_DEC_CMD_STOP``
>>> +
>>> +     ``flags``
>>> +         set to 0
>>> +
>>> +     ``pts``
>>> +         set to 0
>>> +
>>> +   .. warning::
>>> +
>>> +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
>>
>> 'sentence'? You mean 'decoder command'?
> 
> Sequence. :)

Ah, that makes a lot more sense!

> 
>>
>>> +   are streaming. For compatibility reasons, the call to
>>> +   :c:func:`VIDIOC_DECODER_CMD` will not fail even if any of the queues is not
>>> +   streaming, but at the same time it will not initiate the `Drain` sequence
>>> +   and so the steps described below would not be applicable.
>>> +
>>> +2. Any ``OUTPUT`` buffers queued by the client before the
>>> +   :c:func:`VIDIOC_DECODER_CMD` was issued will be processed and decoded as
>>> +   normal. The client must continue to handle both queues independently,
>>> +   similarly to normal decode operation. This includes,
>>> +
>>> +   * handling any operations triggered as a result of processing those buffers,
>>> +     such as the `Dynamic resolution change` sequence, before continuing with
>>> +     the drain sequence,
>>> +
>>> +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
>>> +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
>>> +
>>> +     .. warning::
>>> +
>>> +        The last buffer may be empty (with :c:type:`v4l2_buffer`
>>> +        ``bytesused`` = 0) and in such case it must be ignored by the client,
>>> +        as it does not contain a decoded frame.
>>> +
>>> +     .. note::
>>> +
>>> +        Any attempt to dequeue more buffers beyond the buffer marked with
>>> +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
>>> +        :c:func:`VIDIOC_DQBUF`.
>>> +
>>> +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
>>> +     before the ``V4L2_DEC_CMD_STOP`` command are dequeued.
>>> +
>>> +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribed to it.
>>> +
>>> +   .. note::
>>> +
>>> +      For backwards compatibility, the decoder will signal a ``V4L2_EVENT_EOS``
>>> +      event when the last the last frame has been decoded and all frames are
>>
>> 'the last the last' -> the last
> 
> Ack.
> 
>>
>>> +      ready to be dequeued. It is a deprecated behavior and the client must not
>>> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
>>> +      instead.
>>> +
>>> +3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
>>> +   and the last ``CAPTURE`` buffer are dequeued, the decoder is stopped and it
>>
>> This sentence is a bit confusing. This is better IMHO:
>>
>> 3. Once all the ``OUTPUT`` buffers queued before the ``V4L2_DEC_CMD_STOP`` call
>>    are dequeued and the last ``CAPTURE`` buffer is dequeued, the decoder is stopped and it
>>
> 
> Ack.
> 
>>> +   will accept, but not process any newly queued ``OUTPUT`` buffers until the
>>
>> process any -> process, any
>>
> 
> Ack.
> 
>>> +   client issues any of the following operations:
>>> +
>>> +   * ``V4L2_DEC_CMD_START`` - the decoder will resume the operation normally,
>>> +
>>> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
>>> +     ``CAPTURE`` queue - the decoder will resume the operation normally,
>>> +     however any ``CAPTURE`` buffers still in the queue will be returned to the
>>> +     client,
>>> +
>>> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
>>> +     ``OUTPUT`` queue - any pending source buffers will be returned to the
>>> +     client and the `Seek` sequence will be triggered.
>>> +
>>> +.. note::
>>> +
>>> +   Once the drain sequence is initiated, the client needs to drive it to
>>> +   completion, as described by the steps above, unless it aborts the process by
>>> +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
>>> +   queues.  The client is not allowed to issue ``V4L2_DEC_CMD_START`` or
>>> +   ``V4L2_DEC_CMD_STOP`` again while the drain sequence is in progress and they
>>> +   will fail with -EBUSY error code if attempted.
>>> +
>>> +   Although mandatory, the availability of decoder commands may be queried
>>> +   using :c:func:`VIDIOC_TRY_DECODER_CMD`.
>>> +
>>> +End of stream
>>> +=============
>>> +
>>> +If the decoder encounters an end of stream marking in the stream, the decoder
>>> +will initiate the `Drain` sequence, which the client must handle as described
>>> +above, skipping the initial :c:func:`VIDIOC_DECODER_CMD`.
>>> +
>>> +Commit points
>>> +=============
>>> +
>>> +Setting formats and allocating buffers trigger changes in the behavior of the
>>> +decoder.
>>> +
>>> +1. Setting the format on the ``OUTPUT`` queue may change the set of formats
>>> +   supported/advertised on the ``CAPTURE`` queue. In particular, it also means
>>> +   that the ``CAPTURE`` format may be reset and the client must not rely on the
>>> +   previously set format being preserved.
>>> +
>>> +2. Enumerating formats on the ``CAPTURE`` queue always returns only formats
>>> +   supported for the current ``OUTPUT`` format.
>>> +
>>> +3. Setting the format on the ``CAPTURE`` queue does not change the list of
>>> +   formats available on the ``OUTPUT`` queue. An attempt to set the ``CAPTURE``
>>> +   format that is not supported for the currently selected ``OUTPUT`` format
>>> +   will result in the decoder adjusting the requested ``CAPTURE`` format to a
>>> +   supported one.
>>> +
>>> +4. Enumerating formats on the ``OUTPUT`` queue always returns the full set of
>>> +   supported coded formats, irrespectively of the current ``CAPTURE`` format.
>>> +
>>> +5. While buffers are allocated on the ``OUTPUT`` queue, the client must not
>>> +   change the format on the queue. Drivers will return the -EBUSY error code
>>> +   for any such format change attempt.
>>> +
>>> +To summarize, setting formats and allocation must always start with the
>>> +``OUTPUT`` queue and the ``OUTPUT`` queue is the master that governs the
>>> +set of supported formats for the ``CAPTURE`` queue.
>>> diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
>>> index fb7f8c26cf09..12d43fe711cf 100644
>>> --- a/Documentation/media/uapi/v4l/devices.rst
>>> +++ b/Documentation/media/uapi/v4l/devices.rst
>>> @@ -15,6 +15,7 @@ Interfaces
>>>      dev-output
>>>      dev-osd
>>>      dev-codec
>>> +    dev-decoder
>>>      dev-effect
>>>      dev-raw-vbi
>>>      dev-sliced-vbi
>>> diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> index 826f2305da01..ca5f2270a829 100644
>>> --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> @@ -32,6 +32,11 @@ Single-planar format structure
>>>       to a multiple of the scale factor of any smaller planes. For
>>>       example when the image format is YUV 4:2:0, ``width`` and
>>>       ``height`` must be multiples of two.
>>> +
>>> +     For compressed formats that contain the resolution information encoded
>>> +     inside the stream, when fed to a stateful mem2mem decoder, the fields
>>> +     may be zero to rely on the decoder to detect the right values. For more
>>> +     details see :ref:`decoder` and format descriptions.
>>>      * - __u32
>>>        - ``pixelformat``
>>>        - The pixel format or type of compression, set by the application.
>>> diff --git a/Documentation/media/uapi/v4l/v4l2.rst b/Documentation/media/uapi/v4l/v4l2.rst
>>> index b89e5621ae69..65dc096199ad 100644
>>> --- a/Documentation/media/uapi/v4l/v4l2.rst
>>> +++ b/Documentation/media/uapi/v4l/v4l2.rst
>>> @@ -53,6 +53,10 @@ Authors, in alphabetical order:
>>>
>>>    - Original author of the V4L2 API and documentation.
>>>
>>> +- Figa, Tomasz <tfiga@chromium.org>
>>> +
>>> +  - Documented the memory-to-memory decoder interface.
>>> +
>>>  - H Schimek, Michael <mschimek@gmx.at>
>>>
>>>    - Original author of the V4L2 API and documentation.
>>> @@ -61,6 +65,10 @@ Authors, in alphabetical order:
>>>
>>>    - Documented the Digital Video timings API.
>>>
>>> +- Osciak, Pawel <posciak@chromium.org>
>>> +
>>> +  - Documented the memory-to-memory decoder interface.
>>> +
>>>  - Osciak, Pawel <pawel@osciak.com>
>>>
>>>    - Designed and documented the multi-planar API.
>>> @@ -85,7 +93,7 @@ Authors, in alphabetical order:
>>>
>>>    - Designed and documented the VIDIOC_LOG_STATUS ioctl, the extended control ioctls, major parts of the sliced VBI API, the MPEG encoder and decoder APIs and the DV Timings API.
>>>
>>> -**Copyright** |copy| 1999-2016: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari.
>>> +**Copyright** |copy| 1999-2018: Bill Dirks, Michael H. Schimek, Hans Verkuil, Martin Rubli, Andy Walls, Muralidharan Karicheri, Mauro Carvalho Chehab, Pawel Osciak, Sakari Ailus & Antti Palosaari, Tomasz Figa
>>>
>>>  Except when explicitly stated as GPL, programming examples within this
>>>  part can be used and distributed without restrictions.
>>> diff --git a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
>>> index 85c916b0ce07..2f73fe22a9cd 100644
>>> --- a/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
>>> +++ b/Documentation/media/uapi/v4l/vidioc-decoder-cmd.rst
>>> @@ -49,14 +49,16 @@ The ``cmd`` field must contain the command code. Some commands use the
>>>
>>>  A :ref:`write() <func-write>` or :ref:`VIDIOC_STREAMON`
>>>  call sends an implicit START command to the decoder if it has not been
>>> -started yet.
>>> +started yet. Applies to both queues of mem2mem decoders.
>>>
>>>  A :ref:`close() <func-close>` or :ref:`VIDIOC_STREAMOFF <VIDIOC_STREAMON>`
>>>  call of a streaming file descriptor sends an implicit immediate STOP
>>> -command to the decoder, and all buffered data is discarded.
>>> +command to the decoder, and all buffered data is discarded. Applies to both
>>> +queues of mem2mem decoders.
>>>
>>> -These ioctls are optional, not all drivers may support them. They were
>>> -introduced in Linux 3.3.
>>> +In principle, these ioctls are optional, not all drivers may support them. They were
>>> +introduced in Linux 3.3. They are, however, mandatory for stateful mem2mem decoders
>>> +(as further documented in :ref:`decoder`).
>>>
>>>
>>>  .. tabularcolumns:: |p{1.1cm}|p{2.4cm}|p{1.2cm}|p{1.6cm}|p{10.6cm}|
>>> @@ -160,26 +162,36 @@ introduced in Linux 3.3.
>>>       ``V4L2_DEC_CMD_RESUME`` for that. This command has one flag:
>>>       ``V4L2_DEC_CMD_START_MUTE_AUDIO``. If set, then audio will be
>>>       muted when playing back at a non-standard speed.
>>> +
>>> +     For stateful mem2mem decoders, the command may be also used to restart
>>> +     the decoder in case of an implicit stop initiated by the decoder
>>> +     itself, without the ``V4L2_DEC_CMD_STOP`` being called explicitly.
>>> +     No flags or other arguments are accepted in case of mem2mem decoders.
>>> +     See :ref:`decoder` for more details.
>>>      * - ``V4L2_DEC_CMD_STOP``
>>>        - 1
>>>        - Stop the decoder. When the decoder is already stopped, this
>>>       command does nothing. This command has two flags: if
>>>       ``V4L2_DEC_CMD_STOP_TO_BLACK`` is set, then the decoder will set
>>>       the picture to black after it stopped decoding. Otherwise the last
>>> -     image will repeat. mem2mem decoders will stop producing new frames
>>> -     altogether. They will send a ``V4L2_EVENT_EOS`` event when the
>>> -     last frame has been decoded and all frames are ready to be
>>> -     dequeued and will set the ``V4L2_BUF_FLAG_LAST`` buffer flag on
>>> -     the last buffer of the capture queue to indicate there will be no
>>> -     new buffers produced to dequeue. This buffer may be empty,
>>> -     indicated by the driver setting the ``bytesused`` field to 0. Once
>>> -     the ``V4L2_BUF_FLAG_LAST`` flag was set, the
>>> -     :ref:`VIDIOC_DQBUF <VIDIOC_QBUF>` ioctl will not block anymore,
>>> -     but return an ``EPIPE`` error code. If
>>> +     image will repeat. If
>>>       ``V4L2_DEC_CMD_STOP_IMMEDIATELY`` is set, then the decoder stops
>>>       immediately (ignoring the ``pts`` value), otherwise it will keep
>>>       decoding until timestamp >= pts or until the last of the pending
>>>       data from its internal buffers was decoded.
>>> +
>>> +     A stateful mem2mem decoder will proceed with decoding the source
>>> +     buffers pending before the command is issued and then stop producing
>>> +     new frames. It will send a ``V4L2_EVENT_EOS`` event when the last frame
>>> +     has been decoded and all frames are ready to be dequeued and will set
>>> +     the ``V4L2_BUF_FLAG_LAST`` buffer flag on the last buffer of the
>>> +     capture queue to indicate there will be no new buffers produced to
>>> +     dequeue. This buffer may be empty, indicated by the driver setting the
>>> +     ``bytesused`` field to 0. Once the buffer with the
>>> +     ``V4L2_BUF_FLAG_LAST`` flag set was dequeued, the :ref:`VIDIOC_DQBUF
>>> +     <VIDIOC_QBUF>` ioctl will not block anymore, but return an ``EPIPE``
>>> +     error code. No flags or other arguments are accepted in case of mem2mem
>>> +     decoders.  See :ref:`decoder` for more details.
>>>      * - ``V4L2_DEC_CMD_PAUSE``
>>>        - 2
>>>        - Pause the decoder. When the decoder has not been started yet, the
>>> diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>> index 3ead350e099f..0fc0b78a943e 100644
>>> --- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>> +++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>> @@ -53,6 +53,13 @@ devices that is either the struct
>>>  member. When the requested buffer type is not supported drivers return
>>>  an ``EINVAL`` error code.
>>>
>>> +A stateful mem2mem decoder will not allow operations on the
>>> +``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
>>> +buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
>>> +``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
>>> +operation is attempted, drivers return an ``EACCES`` error code. Refer to
>>> +:ref:`decoder` for more details.
>>
>> This isn't right. EACCES is returned as long as the output format resolution is
>> unknown. If it is set explicitly, then this will work without an error.

Ah, sorry, I phrased that poorly. Let me try again:

This isn't right. EACCES is returned for CAPTURE operations as long as the
output format resolution is unknown or the CAPTURE format is explicitly set.
If the CAPTURE format is set explicitly, then this will work without an error.

> 
> I think that's what is written above. The stream resolution is known
> when the applicable OUTPUT queue is configured, which lets the driver
> determine the format constraints on the applicable CAPTURE queue. If
> it's not clear, could you help rephrasing?
> 
>>
>>> +
>>>  To change the current format parameters applications initialize the
>>>  ``type`` field and all fields of the respective ``fmt`` union member.
>>>  For details see the documentation of the various devices types in
>>> @@ -145,6 +152,13 @@ On success 0 is returned, on error -1 and the ``errno`` variable is set
>>>  appropriately. The generic error codes are described at the
>>>  :ref:`Generic Error Codes <gen-errors>` chapter.
>>>
>>> +EACCES
>>> +    The format is not accessible until another buffer type is configured.
>>> +    Relevant for the V4L2_BUF_TYPE_VIDEO_CAPTURE and
>>> +    V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE buffer types of mem2mem decoders, which
>>> +    require the format of V4L2_BUF_TYPE_VIDEO_OUTPUT or
>>> +    V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE buffer type to be configured first.
>>
>> Ditto.
> 
> Ditto.
> 
> Best regards,
> Tomasz
> 

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-22 14:47       ` Hans Verkuil
@ 2019-01-23  5:27         ` Tomasz Figa
  2019-01-23  8:10           ` Hans Verkuil
  2019-01-24  9:06           ` Tomasz Figa
  0 siblings, 2 replies; 41+ messages in thread
From: Tomasz Figa @ 2019-01-23  5:27 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot, kamil,
	a.hajda, Kyungmin Park, jtp.park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, todor.tomov, nicolas, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Tue, Jan 22, 2019 at 11:47 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> On 01/22/19 11:02, Tomasz Figa wrote:
> > On Mon, Nov 12, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> >>
> >> Hi Tomasz,
> >>
> >> A general note for the stateful and stateless patches: they describe specific
> >> use-cases of the more generic Codec Interface, and as such should be one
> >> level deeper in the section hierarchy.
> >
> > I wonder what exactly this Codec Interface is. Is it a historical name
> > for mem2mem? If so, perhaps it would make sense to rename it?
>
> Yeah, it should be renamed to "Video Memory-to-Memory Interface", and the
> codecs are just specific instances of such an interface.
>

Ack.

> >
> >>
> >> I.e. instead of being section 4.6/7/8:
> >>
> >> https://hverkuil.home.xs4all.nl/request-api/uapi/v4l/devices.html
> >>
> >> they should be 4.5.1/2/3.
> >>
> >
> > FYI, the first RFC started like that, but it only made the spec
> > difficult to navigate and the section numbers too long.
> >
> > Still, no strong opinion. I'm okay moving it there, if you think it's better.
>
> It should be moved and the interface name should be renamed. It makes a lot
> more sense with those changes.
>
> I've posted a patch for this.
>

Thanks. I've rebased on top of it.

[snip]
> >>> +3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
> >>> +    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
> >>> +    buffers than the minimum required by hardware/format.
> >>
> >> Is this step optional or required? Can it change when a resolution change occurs?
> >
> > Probably not with a simple resolution change, but a case when a stream
> > is changed on the fly would trigger what we call "resolution change"
> > here, but what would effectively be a "source change" and that could
> > include a change in the number of required CAPTURE buffers.
> >
> >> How does this relate to the checks for the minimum number of buffers that REQBUFS
> >> does?
> >
> > The control returns the minimum that REQBUFS would allow, so the
> > application can add few more buffers on top of that and improve the
> > pipelining.
> >
> >>
> >> The 'This is useful if' sentence suggests that it is optional, but I think that
> >> sentence just confuses the issue.
> >>
> >
> > It used to be optional and I didn't rephrase it after turning it into
> > mandatory. How about:
> >
> >     This enables the client to request more buffers
> >     than the minimum required by hardware/format and achieve better pipelining.
>
> Hmm, OK. It'll do, I guess. I never liked these MIN_BUFFERS controls, I wish they
> would return something like the recommended number of buffers that will give you
> decent performance.
>

The problem here is that the kernel doesn't know what is decent for
the application, since it doesn't know how the results of the decoding
are used. Over-allocating would result to a waste of memory, which
could then make it less than decent for memory-constrained
applications.

> >
> >>> +
> >>> +    * **Required fields:**
> >>> +
> >>> +      ``id``
> >>> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
> >>> +
> >>> +    * **Return fields:**
> >>> +
> >>> +      ``value``
> >>> +          minimum number of buffers required to decode the stream parsed in
> >>> +          this initialization sequence.
> >>> +
> >>> +    .. note::
> >>> +
> >>> +       The minimum number of buffers must be at least the number required to
> >>> +       successfully decode the current stream. This may for example be the
> >>> +       required DPB size for an H.264 stream given the parsed stream
> >>> +       configuration (resolution, level).
> >>> +
> >>> +    .. warning::
> >>> +
> >>> +       The value is guaranteed to be meaningful only after the decoder
> >>> +       successfully parses the stream metadata. The client must not rely on the
> >>> +       query before that happens.
> >>> +
> >>> +4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
> >>> +    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
> >>> +    client may use this ioctl to discover which raw formats are supported for
> >>> +    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.
> >>
> >> Can the list returned here differ from the list returned in the 'Querying capabilities'
> >> step? If so, then I assume it will always be a subset of what was returned in
> >> the 'Querying' step?
> >
> > Depends on whether you're considering just VIDIOC_ENUM_FMT or also
> > VIDIOC_G_FMT and VIDIOC_ENUM_FRAMESIZES.
> >
> > The initial VIDIOC_ENUM_FMT has no way to account for any resolution
> > constraints of the formats, so the list would include all raw pixel
> > formats that the decoder can handle with selected coded pixel format.
> > However, the list can be further narrowed down by using
> > VIDIOC_ENUM_FRAMESIZES, to restrict each raw format only to the
> > resolutions it can handle.
> >
> > The VIDIOC_ENUM_FMT call in this sequence (after getting the stream
> > information) would have the knowledge about the resolution, so the
> > list returned here would only include the formats that can be actually
> > handled. It should match the result of the initial query using both
> > VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES.
>
> Right, so this will be a subset of the initial query taking the resolution
> into account.
>

Do you think it could be worth adding a note about this?

[snip]
> >>> +Decoding
> >>> +========
> >>> +
> >>> +This state is reached after the `Capture setup` sequence finishes succesfully.
> >>> +In this state, the client queues and dequeues buffers to both queues via
> >>> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> >>> +semantics.
> >>> +
> >>> +The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
> >>> +format and may be affected by codec-specific extended controls, as stated in
> >>> +the documentation of each format.
> >>> +
> >>> +Both queues operate independently, following the standard behavior of V4L2
> >>> +buffer queues and memory-to-memory devices. In addition, the order of decoded
> >>> +frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
> >>> +coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
> >>> +format, e.g. frame reordering.
> >>> +
> >>> +The client must not assume any direct relationship between ``CAPTURE``
> >>> +and ``OUTPUT`` buffers and any specific timing of buffers becoming
> >>> +available to dequeue. Specifically,
> >>> +
> >>> +* a buffer queued to ``OUTPUT`` may result in no buffers being produced
> >>> +  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
> >>> +  metadata syntax structures are present in it),
> >>> +
> >>> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
> >>> +  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
> >>> +  returning a decoded frame allowed the decoder to return a frame that
> >>> +  preceded it in decode, but succeeded it in the display order),
> >>> +
> >>> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> >>> +  ``CAPTURE`` later into decode process, and/or after processing further
> >>> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> >>> +  reordering is used,
> >>> +
> >>> +* buffers may become available on the ``CAPTURE`` queue without additional
> >>> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> >>> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> >>> +  available at later time, due to specifics of the decoding process.
> >>> +
> >>> +.. note::
> >>> +
> >>> +   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> >>> +   originated from, the client can set the ``timestamp`` field of the
> >>> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> >>> +   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
> >>> +   will have their ``timestamp`` field set to the same value when dequeued.
> >>> +
> >>> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
>
> straighforward -> straightforward
>

Ack.

> >>> +   one ``CAPTURE`` buffer, the following cases are defined:
> >>> +
> >>> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> >>> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> >>> +
> >>> +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
> >>> +     the ``OUTPUT`` buffer queued last will be copied,
> >>> +
> >>> +   * the decoding order differs from the display order (i.e. the
> >>> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> >>> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> >>> +     and thus monotonicity of the timestamps cannot be guaranteed.
>
> I think this last point should be rewritten. The timestamp is just a value that
> is copied, there are no monotonicity requirements for m2m devices in general.
>

Actually I just realized the last point might not even be achievable
for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
which frame originates from which bitstream buffer and the driver just
picks the most recently consumed OUTPUT buffer to copy the timestamp
from. (s5p-mfc actually "forgets" to set the timestamp in some cases
too...)

I need to think a bit more about this.

[snip]
> >>> +Seek
> >>> +====
> >>> +
> >>> +Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
> >>> +The seek does not require any specific operation on the ``CAPTURE`` queue, but
> >>> +it may be affected as per normal decoder operation.
> >>> +
> >>> +1. Stop the ``OUTPUT`` queue to begin the seek sequence via
> >>> +   :c:func:`VIDIOC_STREAMOFF`.
> >>> +
> >>> +   * **Required fields:**
> >>> +
> >>> +     ``type``
> >>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> >>> +
> >>> +   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
> >>> +     treated as returned to the client (following standard semantics).
> >>> +
> >>> +2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
> >>> +
> >>> +   * **Required fields:**
> >>> +
> >>> +     ``type``
> >>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> >>> +
> >>> +   * The decoder will start accepting new source bitstream buffers after the
> >>> +     call returns.
> >>> +
> >>> +3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
> >>> +   queue until a suitable resume point is found.
> >>> +
> >>> +   .. note::
> >>> +
> >>> +      There is no requirement to begin queuing coded data starting exactly
> >>> +      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
> >>> +      buffers will be processed and returned to the client until a suitable
> >>> +      resume point is found.  While looking for a resume point, the decoder
> >>> +      should not produce any decoded frames into ``CAPTURE`` buffers.
> >>> +
> >>> +      Some hardware is known to mishandle seeks to a non-resume point. Such an
> >>> +      operation may result in an unspecified number of corrupted decoded frames
> >>> +      being made available on the ``CAPTURE`` queue. Drivers must ensure that
> >>> +      no fatal decoding errors or crashes occur, and implement any necessary
> >>> +      handling and workarounds for hardware issues related to seek operations.
> >>
> >> Is there a requirement that those corrupted frames have V4L2_BUF_FLAG_ERROR set?
> >> I.e., can userspace detect those currupted frames?
> >>
> >
> > I think the question is whether the kernel driver can actually detect
> > those corrupted frames. We can't guarantee reporting errors to the
> > userspace, if the hardware doesn't actually report them.
> >
> > Could we perhaps keep this an open question and possibly address with
> > some extension that could be an opt in for the decoders that can
> > report errors?
>
> Hmm, how about: If the hardware can detect such corrupted decoded frames, then
> it shall set V4L2_BUF_FLAG_ERROR.
>

Sounds good to me.

Actually, let me add a paragraph about error handling in the decoding
section, since it's a general problem, not limited to seeking. I can
then refer to it from the Seek sequence.

> >
> >>> +
> >>> +   .. warning::
> >>> +
> >>> +      In case of the H.264 codec, the client must take care not to seek over a
> >>> +      change of SPS/PPS. Even though the target frame could be a keyframe, the
> >>> +      stale SPS/PPS inside decoder state would lead to undefined results when
> >>> +      decoding. Although the decoder must handle such case without a crash or a
> >>> +      fatal decode error, the client must not expect a sensible decode output.
> >>> +
> >>> +4. After a resume point is found, the decoder will start returning ``CAPTURE``
> >>> +   buffers containing decoded frames.
> >>> +
> >>> +.. important::
> >>> +
> >>> +   A seek may result in the `Dynamic resolution change` sequence being
> >>> +   iniitated, due to the seek target having decoding parameters different from
> >>> +   the part of the stream decoded before the seek. The sequence must be handled
> >>> +   as per normal decoder operation.
> >>> +
> >>> +.. warning::
> >>> +
> >>> +   It is not specified when the ``CAPTURE`` queue starts producing buffers
> >>> +   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
> >>> +   as it operates independently from the ``OUTPUT`` queue.
> >>> +
> >>> +   The decoder may return a number of remaining ``CAPTURE`` buffers containing
> >>> +   decoded frames originating from the ``OUTPUT`` buffers queued before the
> >>> +   seek sequence is performed.
> >>> +
> >>> +   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
> >>> +   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
> >>> +   queued before the seek sequence may have matching ``CAPTURE`` buffers
> >>> +   produced.  For example, given the sequence of operations on the
> >>> +   ``OUTPUT`` queue:
> >>> +
> >>> +     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
> >>> +
> >>> +   any of the following results on the ``CAPTURE`` queue is allowed:
> >>> +
> >>> +     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
> >>
> >> Isn't it the case that if you would want to avoid that, then you should call
> >> DECODER_STOP, wait for the last buffer on the CAPTURE queue, then seek and
> >> call DECODER_START. If you do that, then you should always get {A’, B’, G’, H’}.
> >> (basically following the Drain sequence).
> >
> > Yes, it is, but I think it depends on the application needs. Here we
> > just give a primitive to change the place in the stream that's being
> > decoded (or change the stream on the fly).
> >
> > Actually, with the timestamp copy, I guess we wouldn't even need to do
> > the DECODER_STOP, as we could just discard the CAPTURE buffers until
> > we get one that matches the timestamp of the first OUTPUT buffer after
> > the seek.
> >
> >>
> >> Admittedly, you typically want to do an instantaneous seek, so this is probably
> >> not what you want to do normally.
> >>
> >> It might help to have this documented in a separate note.
> >>
> >
> > The instantaneous seek is documented below. I'm not sure if there is
> > any practical need to document the other case, but I could add a
> > sentence like below to the warning above. What do you think?
> >
> >    The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
> >    ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
> >    queued before the seek sequence may have matching ``CAPTURE`` buffers
> >    produced.  For example, given the sequence of operations on the
> >    ``OUTPUT`` queue:
> >
> >      QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
> >
> >    any of the following results on the ``CAPTURE`` queue is allowed:
>
> is allowed -> are allowed
>

Only one can happen at given time, so I think singular is correct
here? (i.e. Any [...] is allowed)

[snip]
> >>> diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> >>> index 3ead350e099f..0fc0b78a943e 100644
> >>> --- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> >>> +++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
> >>> @@ -53,6 +53,13 @@ devices that is either the struct
> >>>  member. When the requested buffer type is not supported drivers return
> >>>  an ``EINVAL`` error code.
> >>>
> >>> +A stateful mem2mem decoder will not allow operations on the
> >>> +``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
> >>> +buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
> >>> +``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
> >>> +operation is attempted, drivers return an ``EACCES`` error code. Refer to
> >>> +:ref:`decoder` for more details.
> >>
> >> This isn't right. EACCES is returned as long as the output format resolution is
> >> unknown. If it is set explicitly, then this will work without an error.
>
> Ah, sorry, I phrased that poorly. Let me try again:
>
> This isn't right. EACCES is returned for CAPTURE operations as long as the
> output format resolution is unknown or the CAPTURE format is explicitly set.
> If the CAPTURE format is set explicitly, then this will work without an error.

We don't allow directly setting CAPTURE format explicitly either,
because the driver wouldn't have anything to validate the format
against. We allow the client to do it indirectly, by setting the width
and height of the OUTPUT format, which unblocks the CAPTURE format
operations, because the driver can then validate against the OUTPUT
(coded) format.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-23  5:27         ` Tomasz Figa
@ 2019-01-23  8:10           ` Hans Verkuil
  2019-01-24  9:06           ` Tomasz Figa
  1 sibling, 0 replies; 41+ messages in thread
From: Hans Verkuil @ 2019-01-23  8:10 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot, kamil,
	a.hajda, Kyungmin Park, jtp.park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, todor.tomov, nicolas, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 01/23/2019 06:27 AM, Tomasz Figa wrote:
> On Tue, Jan 22, 2019 at 11:47 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>
>> On 01/22/19 11:02, Tomasz Figa wrote:
>>> On Mon, Nov 12, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>>>
>>>> Hi Tomasz,
>>>>
>>>> A general note for the stateful and stateless patches: they describe specific
>>>> use-cases of the more generic Codec Interface, and as such should be one
>>>> level deeper in the section hierarchy.
>>>
>>> I wonder what exactly this Codec Interface is. Is it a historical name
>>> for mem2mem? If so, perhaps it would make sense to rename it?
>>
>> Yeah, it should be renamed to "Video Memory-to-Memory Interface", and the
>> codecs are just specific instances of such an interface.
>>
> 
> Ack.
> 
>>>
>>>>
>>>> I.e. instead of being section 4.6/7/8:
>>>>
>>>> https://hverkuil.home.xs4all.nl/request-api/uapi/v4l/devices.html
>>>>
>>>> they should be 4.5.1/2/3.
>>>>
>>>
>>> FYI, the first RFC started like that, but it only made the spec
>>> difficult to navigate and the section numbers too long.
>>>
>>> Still, no strong opinion. I'm okay moving it there, if you think it's better.
>>
>> It should be moved and the interface name should be renamed. It makes a lot
>> more sense with those changes.
>>
>> I've posted a patch for this.
>>
> 
> Thanks. I've rebased on top of it.
> 
> [snip]
>>>>> +3.  Query the minimum number of buffers required for the ``CAPTURE`` queue via
>>>>> +    :c:func:`VIDIOC_G_CTRL`. This is useful if the client intends to use more
>>>>> +    buffers than the minimum required by hardware/format.
>>>>
>>>> Is this step optional or required? Can it change when a resolution change occurs?
>>>
>>> Probably not with a simple resolution change, but a case when a stream
>>> is changed on the fly would trigger what we call "resolution change"
>>> here, but what would effectively be a "source change" and that could
>>> include a change in the number of required CAPTURE buffers.
>>>
>>>> How does this relate to the checks for the minimum number of buffers that REQBUFS
>>>> does?
>>>
>>> The control returns the minimum that REQBUFS would allow, so the
>>> application can add few more buffers on top of that and improve the
>>> pipelining.
>>>
>>>>
>>>> The 'This is useful if' sentence suggests that it is optional, but I think that
>>>> sentence just confuses the issue.
>>>>
>>>
>>> It used to be optional and I didn't rephrase it after turning it into
>>> mandatory. How about:
>>>
>>>     This enables the client to request more buffers
>>>     than the minimum required by hardware/format and achieve better pipelining.
>>
>> Hmm, OK. It'll do, I guess. I never liked these MIN_BUFFERS controls, I wish they
>> would return something like the recommended number of buffers that will give you
>> decent performance.
>>
> 
> The problem here is that the kernel doesn't know what is decent for
> the application, since it doesn't know how the results of the decoding
> are used. Over-allocating would result to a waste of memory, which
> could then make it less than decent for memory-constrained
> applications.
> 
>>>
>>>>> +
>>>>> +    * **Required fields:**
>>>>> +
>>>>> +      ``id``
>>>>> +          set to ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE``
>>>>> +
>>>>> +    * **Return fields:**
>>>>> +
>>>>> +      ``value``
>>>>> +          minimum number of buffers required to decode the stream parsed in
>>>>> +          this initialization sequence.
>>>>> +
>>>>> +    .. note::
>>>>> +
>>>>> +       The minimum number of buffers must be at least the number required to
>>>>> +       successfully decode the current stream. This may for example be the
>>>>> +       required DPB size for an H.264 stream given the parsed stream
>>>>> +       configuration (resolution, level).
>>>>> +
>>>>> +    .. warning::
>>>>> +
>>>>> +       The value is guaranteed to be meaningful only after the decoder
>>>>> +       successfully parses the stream metadata. The client must not rely on the
>>>>> +       query before that happens.
>>>>> +
>>>>> +4.  **Optional.** Enumerate ``CAPTURE`` formats via :c:func:`VIDIOC_ENUM_FMT` on
>>>>> +    the ``CAPTURE`` queue. Once the stream information is parsed and known, the
>>>>> +    client may use this ioctl to discover which raw formats are supported for
>>>>> +    given stream and select one of them via :c:func:`VIDIOC_S_FMT`.
>>>>
>>>> Can the list returned here differ from the list returned in the 'Querying capabilities'
>>>> step? If so, then I assume it will always be a subset of what was returned in
>>>> the 'Querying' step?
>>>
>>> Depends on whether you're considering just VIDIOC_ENUM_FMT or also
>>> VIDIOC_G_FMT and VIDIOC_ENUM_FRAMESIZES.
>>>
>>> The initial VIDIOC_ENUM_FMT has no way to account for any resolution
>>> constraints of the formats, so the list would include all raw pixel
>>> formats that the decoder can handle with selected coded pixel format.
>>> However, the list can be further narrowed down by using
>>> VIDIOC_ENUM_FRAMESIZES, to restrict each raw format only to the
>>> resolutions it can handle.
>>>
>>> The VIDIOC_ENUM_FMT call in this sequence (after getting the stream
>>> information) would have the knowledge about the resolution, so the
>>> list returned here would only include the formats that can be actually
>>> handled. It should match the result of the initial query using both
>>> VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES.
>>
>> Right, so this will be a subset of the initial query taking the resolution
>> into account.
>>
> 
> Do you think it could be worth adding a note about this?

Yes, it helps clarify things.

> 
> [snip]
>>>>> +Decoding
>>>>> +========
>>>>> +
>>>>> +This state is reached after the `Capture setup` sequence finishes succesfully.
>>>>> +In this state, the client queues and dequeues buffers to both queues via
>>>>> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
>>>>> +semantics.
>>>>> +
>>>>> +The contents of the source ``OUTPUT`` buffers depend on the active coded pixel
>>>>> +format and may be affected by codec-specific extended controls, as stated in
>>>>> +the documentation of each format.
>>>>> +
>>>>> +Both queues operate independently, following the standard behavior of V4L2
>>>>> +buffer queues and memory-to-memory devices. In addition, the order of decoded
>>>>> +frames dequeued from the ``CAPTURE`` queue may differ from the order of queuing
>>>>> +coded frames to the ``OUTPUT`` queue, due to properties of the selected coded
>>>>> +format, e.g. frame reordering.
>>>>> +
>>>>> +The client must not assume any direct relationship between ``CAPTURE``
>>>>> +and ``OUTPUT`` buffers and any specific timing of buffers becoming
>>>>> +available to dequeue. Specifically,
>>>>> +
>>>>> +* a buffer queued to ``OUTPUT`` may result in no buffers being produced
>>>>> +  on ``CAPTURE`` (e.g. if it does not contain encoded data, or if only
>>>>> +  metadata syntax structures are present in it),
>>>>> +
>>>>> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced
>>>>> +  on ``CAPTURE`` (if the encoded data contained more than one frame, or if
>>>>> +  returning a decoded frame allowed the decoder to return a frame that
>>>>> +  preceded it in decode, but succeeded it in the display order),
>>>>> +
>>>>> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
>>>>> +  ``CAPTURE`` later into decode process, and/or after processing further
>>>>> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
>>>>> +  reordering is used,
>>>>> +
>>>>> +* buffers may become available on the ``CAPTURE`` queue without additional
>>>>> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
>>>>> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
>>>>> +  available at later time, due to specifics of the decoding process.
>>>>> +
>>>>> +.. note::
>>>>> +
>>>>> +   To allow matching decoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
>>>>> +   originated from, the client can set the ``timestamp`` field of the
>>>>> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
>>>>> +   ``CAPTURE`` buffer(s), which resulted from decoding that ``OUTPUT`` buffer
>>>>> +   will have their ``timestamp`` field set to the same value when dequeued.
>>>>> +
>>>>> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
>>
>> straighforward -> straightforward
>>
> 
> Ack.
> 
>>>>> +   one ``CAPTURE`` buffer, the following cases are defined:
>>>>> +
>>>>> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
>>>>> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
>>>>> +
>>>>> +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
>>>>> +     the ``OUTPUT`` buffer queued last will be copied,
>>>>> +
>>>>> +   * the decoding order differs from the display order (i.e. the
>>>>> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
>>>>> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
>>>>> +     and thus monotonicity of the timestamps cannot be guaranteed.
>>
>> I think this last point should be rewritten. The timestamp is just a value that
>> is copied, there are no monotonicity requirements for m2m devices in general.
>>
> 
> Actually I just realized the last point might not even be achievable
> for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> which frame originates from which bitstream buffer and the driver just
> picks the most recently consumed OUTPUT buffer to copy the timestamp
> from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> too...)
> 
> I need to think a bit more about this.
> 
> [snip]
>>>>> +Seek
>>>>> +====
>>>>> +
>>>>> +Seek is controlled by the ``OUTPUT`` queue, as it is the source of coded data.
>>>>> +The seek does not require any specific operation on the ``CAPTURE`` queue, but
>>>>> +it may be affected as per normal decoder operation.
>>>>> +
>>>>> +1. Stop the ``OUTPUT`` queue to begin the seek sequence via
>>>>> +   :c:func:`VIDIOC_STREAMOFF`.
>>>>> +
>>>>> +   * **Required fields:**
>>>>> +
>>>>> +     ``type``
>>>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>>>> +
>>>>> +   * The decoder will drop all the pending ``OUTPUT`` buffers and they must be
>>>>> +     treated as returned to the client (following standard semantics).
>>>>> +
>>>>> +2. Restart the ``OUTPUT`` queue via :c:func:`VIDIOC_STREAMON`
>>>>> +
>>>>> +   * **Required fields:**
>>>>> +
>>>>> +     ``type``
>>>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>>>> +
>>>>> +   * The decoder will start accepting new source bitstream buffers after the
>>>>> +     call returns.
>>>>> +
>>>>> +3. Start queuing buffers containing coded data after the seek to the ``OUTPUT``
>>>>> +   queue until a suitable resume point is found.
>>>>> +
>>>>> +   .. note::
>>>>> +
>>>>> +      There is no requirement to begin queuing coded data starting exactly
>>>>> +      from a resume point (e.g. SPS or a keyframe). Any queued ``OUTPUT``
>>>>> +      buffers will be processed and returned to the client until a suitable
>>>>> +      resume point is found.  While looking for a resume point, the decoder
>>>>> +      should not produce any decoded frames into ``CAPTURE`` buffers.
>>>>> +
>>>>> +      Some hardware is known to mishandle seeks to a non-resume point. Such an
>>>>> +      operation may result in an unspecified number of corrupted decoded frames
>>>>> +      being made available on the ``CAPTURE`` queue. Drivers must ensure that
>>>>> +      no fatal decoding errors or crashes occur, and implement any necessary
>>>>> +      handling and workarounds for hardware issues related to seek operations.
>>>>
>>>> Is there a requirement that those corrupted frames have V4L2_BUF_FLAG_ERROR set?
>>>> I.e., can userspace detect those currupted frames?
>>>>
>>>
>>> I think the question is whether the kernel driver can actually detect
>>> those corrupted frames. We can't guarantee reporting errors to the
>>> userspace, if the hardware doesn't actually report them.
>>>
>>> Could we perhaps keep this an open question and possibly address with
>>> some extension that could be an opt in for the decoders that can
>>> report errors?
>>
>> Hmm, how about: If the hardware can detect such corrupted decoded frames, then
>> it shall set V4L2_BUF_FLAG_ERROR.
>>
> 
> Sounds good to me.
> 
> Actually, let me add a paragraph about error handling in the decoding
> section, since it's a general problem, not limited to seeking. I can
> then refer to it from the Seek sequence.
> 
>>>
>>>>> +
>>>>> +   .. warning::
>>>>> +
>>>>> +      In case of the H.264 codec, the client must take care not to seek over a
>>>>> +      change of SPS/PPS. Even though the target frame could be a keyframe, the
>>>>> +      stale SPS/PPS inside decoder state would lead to undefined results when
>>>>> +      decoding. Although the decoder must handle such case without a crash or a
>>>>> +      fatal decode error, the client must not expect a sensible decode output.
>>>>> +
>>>>> +4. After a resume point is found, the decoder will start returning ``CAPTURE``
>>>>> +   buffers containing decoded frames.
>>>>> +
>>>>> +.. important::
>>>>> +
>>>>> +   A seek may result in the `Dynamic resolution change` sequence being
>>>>> +   iniitated, due to the seek target having decoding parameters different from
>>>>> +   the part of the stream decoded before the seek. The sequence must be handled
>>>>> +   as per normal decoder operation.
>>>>> +
>>>>> +.. warning::
>>>>> +
>>>>> +   It is not specified when the ``CAPTURE`` queue starts producing buffers
>>>>> +   containing decoded data from the ``OUTPUT`` buffers queued after the seek,
>>>>> +   as it operates independently from the ``OUTPUT`` queue.
>>>>> +
>>>>> +   The decoder may return a number of remaining ``CAPTURE`` buffers containing
>>>>> +   decoded frames originating from the ``OUTPUT`` buffers queued before the
>>>>> +   seek sequence is performed.
>>>>> +
>>>>> +   The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
>>>>> +   ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
>>>>> +   queued before the seek sequence may have matching ``CAPTURE`` buffers
>>>>> +   produced.  For example, given the sequence of operations on the
>>>>> +   ``OUTPUT`` queue:
>>>>> +
>>>>> +     QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
>>>>> +
>>>>> +   any of the following results on the ``CAPTURE`` queue is allowed:
>>>>> +
>>>>> +     {A’, B’, G’, H’}, {A’, G’, H’}, {G’, H’}.
>>>>
>>>> Isn't it the case that if you would want to avoid that, then you should call
>>>> DECODER_STOP, wait for the last buffer on the CAPTURE queue, then seek and
>>>> call DECODER_START. If you do that, then you should always get {A’, B’, G’, H’}.
>>>> (basically following the Drain sequence).
>>>
>>> Yes, it is, but I think it depends on the application needs. Here we
>>> just give a primitive to change the place in the stream that's being
>>> decoded (or change the stream on the fly).
>>>
>>> Actually, with the timestamp copy, I guess we wouldn't even need to do
>>> the DECODER_STOP, as we could just discard the CAPTURE buffers until
>>> we get one that matches the timestamp of the first OUTPUT buffer after
>>> the seek.
>>>
>>>>
>>>> Admittedly, you typically want to do an instantaneous seek, so this is probably
>>>> not what you want to do normally.
>>>>
>>>> It might help to have this documented in a separate note.
>>>>
>>>
>>> The instantaneous seek is documented below. I'm not sure if there is
>>> any practical need to document the other case, but I could add a
>>> sentence like below to the warning above. What do you think?
>>>
>>>    The ``VIDIOC_STREAMOFF`` operation discards any remaining queued
>>>    ``OUTPUT`` buffers, which means that not all of the ``OUTPUT`` buffers
>>>    queued before the seek sequence may have matching ``CAPTURE`` buffers
>>>    produced.  For example, given the sequence of operations on the
>>>    ``OUTPUT`` queue:
>>>
>>>      QBUF(A), QBUF(B), STREAMOFF(), STREAMON(), QBUF(G), QBUF(H),
>>>
>>>    any of the following results on the ``CAPTURE`` queue is allowed:
>>
>> is allowed -> are allowed
>>
> 
> Only one can happen at given time, so I think singular is correct
> here? (i.e. Any [...] is allowed)

I think you are right.

> 
> [snip]
>>>>> diff --git a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>>>> index 3ead350e099f..0fc0b78a943e 100644
>>>>> --- a/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>>>> +++ b/Documentation/media/uapi/v4l/vidioc-g-fmt.rst
>>>>> @@ -53,6 +53,13 @@ devices that is either the struct
>>>>>  member. When the requested buffer type is not supported drivers return
>>>>>  an ``EINVAL`` error code.
>>>>>
>>>>> +A stateful mem2mem decoder will not allow operations on the
>>>>> +``V4L2_BUF_TYPE_VIDEO_CAPTURE`` or ``V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE``
>>>>> +buffer type until the corresponding ``V4L2_BUF_TYPE_VIDEO_OUTPUT`` or
>>>>> +``V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE`` buffer type is configured. If such an
>>>>> +operation is attempted, drivers return an ``EACCES`` error code. Refer to
>>>>> +:ref:`decoder` for more details.
>>>>
>>>> This isn't right. EACCES is returned as long as the output format resolution is
>>>> unknown. If it is set explicitly, then this will work without an error.
>>
>> Ah, sorry, I phrased that poorly. Let me try again:
>>
>> This isn't right. EACCES is returned for CAPTURE operations as long as the
>> output format resolution is unknown or the CAPTURE format is explicitly set.
>> If the CAPTURE format is set explicitly, then this will work without an error.
> 
> We don't allow directly setting CAPTURE format explicitly either,
> because the driver wouldn't have anything to validate the format
> against. We allow the client to do it indirectly, by setting the width
> and height of the OUTPUT format, which unblocks the CAPTURE format
> operations, because the driver can then validate against the OUTPUT
> (coded) format.

You are completely right, just forget what I said.

Regards,

	Hans

> 
> Best regards,
> Tomasz
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-12 13:23   ` Hans Verkuil
  2018-11-17  4:18     ` Nicolas Dufresne
@ 2019-01-23  9:52     ` Tomasz Figa
  2019-01-23 13:04       ` Hans Verkuil
  1 sibling, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-23  9:52 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Paweł Ościak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov,
	Todor Tomov, Nicolas Dufresne, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Mon, Nov 12, 2018 at 10:23 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> > Due to complexity of the video encoding process, the V4L2 drivers of
> > stateful encoder hardware require specific sequences of V4L2 API calls
> > to be followed. These include capability enumeration, initialization,
> > encoding, encode parameters change, drain and reset.
> >
> > Specifics of the above have been discussed during Media Workshops at
> > LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
> > Conference Europe 2014 in Düsseldorf. The de facto Codec API that
> > originated at those events was later implemented by the drivers we already
> > have merged in mainline, such as s5p-mfc or coda.
> >
> > The only thing missing was the real specification included as a part of
> > Linux Media documentation. Fix it now and document the encoder part of
> > the Codec API.
> >
> > Signed-off-by: Tomasz Figa <tfiga@chromium.org>
> > ---
> >  Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
> >  Documentation/media/uapi/v4l/devices.rst      |   1 +
> >  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
> >  Documentation/media/uapi/v4l/v4l2.rst         |   2 +
> >  .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
> >  5 files changed, 610 insertions(+), 15 deletions(-)
> >  create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst
> >
> > diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
> > new file mode 100644
> > index 000000000000..41139e5e48eb
> > --- /dev/null
> > +++ b/Documentation/media/uapi/v4l/dev-encoder.rst
> > @@ -0,0 +1,579 @@
> > +.. -*- coding: utf-8; mode: rst -*-
> > +
> > +.. _encoder:
> > +
> > +*************************************************
> > +Memory-to-memory Stateful Video Encoder Interface
> > +*************************************************
> > +
> > +A stateful video encoder takes raw video frames in display order and encodes
> > +them into a bitstream. It generates complete chunks of the bitstream, including
> > +all metadata, headers, etc. The resulting bitstream does not require any
> > +further post-processing by the client.
> > +
> > +Performing software stream processing, header generation etc. in the driver
> > +in order to support this interface is strongly discouraged. In case such
> > +operations are needed, use of the Stateless Video Encoder Interface (in
> > +development) is strongly advised.
> > +
> > +Conventions and notation used in this document
> > +==============================================
> > +
> > +1. The general V4L2 API rules apply if not specified in this document
> > +   otherwise.
> > +
> > +2. The meaning of words "must", "may", "should", etc. is as per RFC
> > +   2119.
> > +
> > +3. All steps not marked "optional" are required.
> > +
> > +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
> > +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
> > +   unless specified otherwise.
> > +
> > +5. Single-plane API (see spec) and applicable structures may be used
> > +   interchangeably with Multi-plane API, unless specified otherwise,
> > +   depending on encoder capabilities and following the general V4L2
> > +   guidelines.
> > +
> > +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
> > +   [0..2]: i = 0, 1, 2.
> > +
> > +7. Given an ``OUTPUT`` buffer A, A' represents a buffer on the ``CAPTURE``
> > +   queue containing data (encoded frame/stream) that resulted from processing
> > +   buffer A.
>
> The same comments as I mentioned for the previous patch apply to this section.
>

I suppose you mean the decoder patch, right? (Will address those.)

> > +
> > +Glossary
> > +========
> > +
> > +Refer to :ref:`decoder-glossary`.
>
> Ah, you refer to the same glossary. Then my comment about the source resolution
> terms is obviously wrong.
>
> I wonder if it wouldn't be better to split off the sections above into a separate
> HW codec intro section where you explain the differences between stateful/stateless
> encoders and decoders, and add the conventions and glossary.
>
> After that you have the three documents for each variant (later four when we get
> stateless encoders).
>
> Up to you, and it can be done later in a follow-up patch.
>

I'd indeed prefer to do it in a follow up patch, to avoid distracting
this series too much. Agreed that such an intro section would make
sense, though.

> > +
> > +State machine
> > +=============
> > +
> > +.. kernel-render:: DOT
> > +   :alt: DOT digraph of encoder state machine
> > +   :caption: Encoder state machine
> > +
> > +   digraph encoder_state_machine {
> > +       node [shape = doublecircle, label="Encoding"] Encoding;
> > +
> > +       node [shape = circle, label="Initialization"] Initialization;
> > +       node [shape = circle, label="Stopped"] Stopped;
> > +       node [shape = circle, label="Drain"] Drain;
> > +       node [shape = circle, label="Reset"] Reset;
> > +
> > +       node [shape = point]; qi
> > +       qi -> Initialization [ label = "open()" ];
> > +
> > +       Initialization -> Encoding [ label = "Both queues streaming" ];
> > +
> > +       Encoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
> > +       Encoding -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Encoding -> Stopped [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
> > +       Encoding -> Encoding;
> > +
> > +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
> > +       Drain -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +
> > +       Reset -> Encoding [ label = "VIDIOC_STREAMON(CAPTURE)" ];
> > +       Reset -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
> > +
> > +       Stopped -> Encoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(OUTPUT)" ];
> > +       Stopped -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
> > +   }
> > +
> > +Querying capabilities
> > +=====================
> > +
> > +1. To enumerate the set of coded formats supported by the encoder, the
> > +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
> > +
> > +   * The full set of supported formats will be returned, regardless of the
> > +     format set on ``OUTPUT``.
> > +
> > +2. To enumerate the set of supported raw formats, the client may call
> > +   :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
> > +
> > +   * Only the formats supported for the format currently active on ``CAPTURE``
> > +     will be returned.
> > +
> > +   * In order to enumerate raw formats supported by a given coded format,
> > +     the client must first set that coded format on ``CAPTURE`` and then
> > +     enumerate the formats on ``OUTPUT``.
> > +
> > +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
> > +   resolutions for a given format, passing desired pixel format in
> > +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
> > +     format will include all possible coded resolutions supported by the
> > +     encoder for given coded pixel format.
> > +
> > +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
> > +     will include all possible frame buffer resolutions supported by the
> > +     encoder for given raw pixel format and coded format currently set on
> > +     ``CAPTURE``.
> > +
> > +4. Supported profiles and levels for given format, if applicable, may be
>
> format -> the coded format currently set on ``CAPTURE``
>

Ack.

> > +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
> > +
> > +5. Any additional encoder capabilities may be discovered by querying
> > +   their respective controls.
> > +
> > +Initialization
> > +==============
> > +
> > +1. **Optional.** Enumerate supported formats and resolutions. See
> > +   `Querying capabilities` above.
>
> Can be dropped IMHO.
>

Ack.

> > +
> > +2. Set a coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
> > +
> > +     ``pixelformat``
> > +         the coded format to be produced
> > +
> > +     ``sizeimage``
> > +         desired size of ``CAPTURE`` buffers; the encoder may adjust it to
> > +         match hardware requirements
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``sizeimage``
> > +         adjusted size of ``CAPTURE`` buffers
> > +
> > +   .. warning::
> > +
> > +      Changing the ``CAPTURE`` format may change the currently set ``OUTPUT``
> > +      format. The encoder will derive a new ``OUTPUT`` format from the
> > +      ``CAPTURE`` format being set, including resolution, colorimetry
> > +      parameters, etc. If the client needs a specific ``OUTPUT`` format, it
> > +      must adjust it afterwards.
> > +
> > +3. **Optional.** Enumerate supported ``OUTPUT`` formats (raw formats for
> > +   source) for the selected coded format via :c:func:`VIDIOC_ENUM_FMT`.
>
> Does this return the same set of formats as in the 'Querying Capabilities' phase?
>

It's actually an interesting question. At this point we wouldn't have
the OUTPUT resolution set yet, so that would be the same set as in the
initial query. If we set the resolution (with some arbitrary
pixelformat), it may become a subset...

> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``pixelformat``
> > +         raw format supported for the coded format currently selected on
> > +         the ``OUTPUT`` queue.
>
> OUTPUT -> CAPTURE
>

Ack.

> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +4. Set the raw source format on the ``OUTPUT`` queue via
> > +   :c:func:`VIDIOC_S_FMT`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     ``pixelformat``
> > +         raw format of the source
> > +
> > +     ``width``, ``height``
> > +         source resolution
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``width``, ``height``
> > +         may be adjusted by encoder to match alignment requirements, as
> > +         required by the currently selected formats
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * Setting the source resolution will reset the selection rectangles to their
> > +     default values, based on the new resolution, as described in the step 5
> > +     below.
> > +
> > +5. **Optional.** Set the visible resolution for the stream metadata via
> > +   :c:func:`VIDIOC_S_SELECTION` on the ``OUTPUT`` queue.
> > +
> > +   * **Required fields:**
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     ``target``
> > +         set to ``V4L2_SEL_TGT_CROP``
> > +
> > +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +         visible rectangle; this must fit within the `V4L2_SEL_TGT_CROP_BOUNDS`
> > +         rectangle and may be subject to adjustment to match codec and
> > +         hardware constraints
> > +
> > +   * **Return fields:**
> > +
> > +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
> > +         visible rectangle adjusted by the encoder
> > +
> > +   * The following selection targets are supported on ``OUTPUT``:
> > +
> > +     ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +         equal to the full source frame, matching the active ``OUTPUT``
> > +         format
> > +
> > +     ``V4L2_SEL_TGT_CROP_DEFAULT``
> > +         equal to ``V4L2_SEL_TGT_CROP_BOUNDS``
> > +
> > +     ``V4L2_SEL_TGT_CROP``
> > +         rectangle within the source buffer to be encoded into the
> > +         ``CAPTURE`` stream; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``
>
> Since this defaults to the CROP_DEFAULT rectangle this means that if you have
> a 16x16 macroblock size and you want to encode 1080p, you will always have to
> explicitly set the CROP rectangle to 1920x1080, right? Since the default will
> be 1088 instead of 1080.

Not necessarily. It depends on whether the encoder needs the source
buffers to be aligned to macroblocks or not.

>
> It is probably wise to explicitly mention this.
>

Sounds reasonable to add a note.

> > +
> > +     ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
> > +         maximum rectangle within the coded resolution, which the cropped
> > +         source frame can be output into; if the hardware does not support
>
> output -> composed
>

Ack.

> > +         composition or scaling, then this is always equal to the rectangle of
> > +         width and height matching ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> > +
> > +     ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
> > +         equal to a rectangle of width and height matching
> > +         ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
> > +
> > +     ``V4L2_SEL_TGT_COMPOSE``
> > +         rectangle within the coded frame, which the cropped source frame
> > +         is to be output into; defaults to
>
> output -> composed
>

Ack.

> > +         ``V4L2_SEL_TGT_COMPOSE_DEFAULT``; read-only on hardware without
> > +         additional compose/scaling capabilities; resulting stream will
> > +         have this rectangle encoded as the visible rectangle in its
> > +         metadata
> > +
> > +   .. warning::
> > +
> > +      The encoder may adjust the crop/compose rectangles to the nearest
> > +      supported ones to meet codec and hardware requirements. The client needs
> > +      to check the adjusted rectangle returned by :c:func:`VIDIOC_S_SELECTION`.
> > +
> > +6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
> > +   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
> > +
> > +   * **Required fields:**
> > +
> > +     ``count``
> > +         requested number of buffers to allocate; greater than zero
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
> > +         ``CAPTURE``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``count``
> > +          actual number of buffers allocated
> > +
> > +   .. warning::
> > +
> > +      The actual number of allocated buffers may differ from the ``count``
> > +      given. The client must check the updated value of ``count`` after the
> > +      call returns.
> > +
> > +   .. note::
> > +
> > +      To allocate more than the minimum number of buffers (for pipeline depth),
> > +      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
> > +      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
> > +      minimum number of buffers required by the encoder/format, and pass the
> > +      obtained value plus the number of additional buffers needed in the
> > +      ``count`` field to :c:func:`VIDIOC_REQBUFS`.
>
> Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
>

Indeed, I don't think it makes much sense. Let me drop it.

> V4L2_CID_MIN_BUFFERS_FOR_OUTPUT can make sense depending on GOP size etc.
>

Yep.


> > +
> > +   Alternatively, :c:func:`VIDIOC_CREATE_BUFS` can be used to have more
> > +   control over buffer allocation.
> > +
> > +   * **Required fields:**
> > +
> > +     ``count``
> > +         requested number of buffers to allocate; greater than zero
> > +
> > +     ``type``
> > +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
> > +
> > +     other fields
> > +         follow standard semantics
> > +
> > +   * **Return fields:**
> > +
> > +     ``count``
> > +         adjusted to the number of allocated buffers
> > +
> > +7. Begin streaming on both ``OUTPUT`` and ``CAPTURE`` queues via
> > +   :c:func:`VIDIOC_STREAMON`. This may be performed in any order. The actual
> > +   encoding process starts when both queues start streaming.
> > +
> > +.. note::
> > +
> > +   If the client stops the ``CAPTURE`` queue during the encode process and then
> > +   restarts it again, the encoder will begin generating a stream independent
> > +   from the stream generated before the stop. The exact constraints depend
> > +   on the coded format, but may include the following implications:
> > +
> > +   * encoded frames produced after the restart must not reference any
> > +     frames produced before the stop, e.g. no long term references for
> > +     H.264,
> > +
> > +   * any headers that must be included in a standalone stream must be
> > +     produced again, e.g. SPS and PPS for H.264.
> > +
> > +Encoding
> > +========
> > +
> > +This state is reached after the `Initialization` sequence finishes succesfully.
>
> successfully
>

Ack.

> > +In this state, client queues and dequeues buffers to both queues via
>
> client -> the client
>

Ack.

> > +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
> > +semantics.
> > +
> > +The contents of encoded ``CAPTURE`` buffers depend on the active coded pixel
>
> contents ... depend -> content ... depends
>

The plural sounds more natural to me to be honest.

> > +format and may be affected by codec-specific extended controls, as stated
> > +in the documentation of each format.
> > +
> > +Both queues operate independently, following standard behavior of V4L2 buffer
> > +queues and memory-to-memory devices. In addition, the order of encoded frames
> > +dequeued from the ``CAPTURE`` queue may differ from the order of queuing raw
> > +frames to the ``OUTPUT`` queue, due to properties of the selected coded format,
> > +e.g. frame reordering.
> > +
> > +The client must not assume any direct relationship between ``CAPTURE`` and
> > +``OUTPUT`` buffers and any specific timing of buffers becoming
> > +available to dequeue. Specifically,
> > +
> > +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced on
> > +  ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
> > +  frame that preceded it in display, but succeeded it in the decode order),
> > +
> > +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
> > +  ``CAPTURE`` later into encode process, and/or after processing further
> > +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
> > +  reordering is used,
> > +
> > +* buffers may become available on the ``CAPTURE`` queue without additional
> > +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
> > +  ``OUTPUT`` buffers queued in the past whose decoding results are only
> > +  available at later time, due to specifics of the decoding process,
> > +
> > +* buffers queued to ``OUTPUT`` may not become available to dequeue instantly
> > +  after being encoded into a corresponding ``CATPURE`` buffer, e.g. if the
> > +  encoder needs to use the frame as a reference for encoding further frames.
> > +
> > +.. note::
> > +
> > +   To allow matching encoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
> > +   originated from, the client can set the ``timestamp`` field of the
> > +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
> > +   ``CAPTURE`` buffer(s), which resulted from encoding that ``OUTPUT`` buffer
> > +   will have their ``timestamp`` field set to the same value when dequeued.
> > +
> > +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
> > +   one ``CAPTURE`` buffer, the following cases are defined:
> > +
> > +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> > +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> > +
> > +   * the encoding order differs from the presentation order (i.e. the
> > +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> > +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> > +     and thus monotonicity of the timestamps cannot be guaranteed.
> > +
> > +.. note::
> > +
> > +   To let the client distinguish between frame types (keyframes, intermediate
> > +   frames; the exact list of types depends on the coded format), the
> > +   ``CAPTURE`` buffers will have corresponding flag bits set in their
> > +   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
> > +   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
> > +   and their meanings.
>
> Is this required? (I think it should, but it isn't the case today).
>

At least V4L2_BUF_FLAG_KEYFRAME has been always required for Chromium,
as it's indispensable for any kind of real time streaming, e.g.
Hangouts. (Although technically the streaming layer could parse the
stream and pick it up itself...) I can see s5p-mfc, coda, venus and
mtk-vcodec supporting at least this one. mtk-vcodec doesn't seem to
support the other ones.

Should we make only the V4L2_BUF_FLAG_KEYFRAME mandatory, at least for now?

> Is the current set of buffer flags (Key/B/P frame) sufficient for the current
> set of codecs?
>

No, there is no way to distinguish between I and IDR frames in H.264.
You can see more details on our issue tracker: crbug.com/868792.

> > +
> > +Encoding parameter changes
> > +==========================
> > +
> > +The client is allowed to use :c:func:`VIDIOC_S_CTRL` to change encoder
> > +parameters at any time. The availability of parameters is encoder-specific
> > +and the client must query the encoder to find the set of available controls.
> > +
> > +The ability to change each parameter during encoding is encoder-specific, as per
> > +the standard semantics of the V4L2 control interface. The client may attempt
> > +setting a control of its interest during encoding and if the operation fails
>
> I'd simplify this:
>
> The client may attempt to set a control during encoding...

Ack.

>
> > +with the -EBUSY error code, the ``CAPTURE`` queue needs to be stopped for the
> > +configuration change to be allowed (following the `Drain` sequence will be
> > +needed to avoid losing the already queued/encoded frames).
>
> Rephrase:
>
> ...to be allowed. To do this follow the `Drain` sequence to avoid losing the
> already queued/encoded frames.
>

How about "To do this, it may follow...", to keep client as the third
person convention?

> > +
> > +The timing of parameter updates is encoder-specific, as per the standard
> > +semantics of the V4L2 control interface. If the client needs to apply the
> > +parameters exactly at specific frame, using the Request API should be
>
> Change this to a reference to the Request API section.
>

Do you mean just adding a reference or some other changes too?

> > +considered, if supported by the encoder.
> > +
> > +Drain
> > +=====
> > +
> > +To ensure that all the queued ``OUTPUT`` buffers have been processed and the
> > +related ``CAPTURE`` buffers output to the client, the client must follow the
>
> output -> are output
>
> or perhaps better (up to you): are given
>

Ack (went with the latter and updated the decoder too).

> > +drain sequence described below. After the drain sequence ends, the client has
> > +received all encoded frames for all ``OUTPUT`` buffers queued before the
> > +sequence was started.
> > +
> > +1. Begin the drain sequence by issuing :c:func:`VIDIOC_ENCODER_CMD`.
> > +
> > +   * **Required fields:**
> > +
> > +     ``cmd``
> > +         set to ``V4L2_ENC_CMD_STOP``
> > +
> > +     ``flags``
> > +         set to 0
> > +
> > +     ``pts``
> > +         set to 0
> > +
> > +   .. warning::
> > +
> > +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
> > +   are streaming. For compatibility reasons, the call to
> > +   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
> > +   streaming, but at the same time it will not initiate the `Drain` sequence
> > +   and so the steps described below would not be applicable.
> > +
> > +2. Any ``OUTPUT`` buffers queued by the client before the
> > +   :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as
> > +   normal. The client must continue to handle both queues independently,
> > +   similarly to normal encode operation. This includes,
> > +
> > +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
> > +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
> > +
> > +     .. warning::
> > +
> > +        The last buffer may be empty (with :c:type:`v4l2_buffer`
> > +        ``bytesused`` = 0) and in such case it must be ignored by the client,
>
> such -> that
>
> Check the previous patch as well if you used the phrase 'such case' and replace
> it with 'that case'.
>

Ack.

> > +        as it does not contain an encoded frame.
> > +
> > +     .. note::
> > +
> > +        Any attempt to dequeue more buffers beyond the buffer marked with
> > +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
> > +        :c:func:`VIDIOC_DQBUF`.
> > +
> > +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
> > +     before the ``V4L2_ENC_CMD_STOP`` command are dequeued,
> > +
> > +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribes to it.
> > +
> > +   .. note::
> > +
> > +      For backwards compatibility, the encoder will signal a ``V4L2_EVENT_EOS``
> > +      event when the last the last frame has been decoded and all frames are
>
> the last the last -> the last
>

Ack.

> > +      ready to be dequeued. It is a deprecated behavior and the client must not
>
> is a -> is

Ack.

>
> > +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> > +      instead.
>
> Question: should new codec drivers still implement the EOS event?
>

Good question. It's a userspace compatibility issue, so if we intend a
new codec driver to work with old userspace, it must do so. Perhaps up
to the driver author/maintainer?

> > +
> > +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> > +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> > +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> > +   issues any of the following operations:
> > +
> > +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
>
> Perhaps mention that this does not reset the encoder? It's not immediately clear
> when reading this.
>

Ack.

> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``CAPTURE`` queue - the encoder will be reset (see the `Reset` sequence)
> > +     and then resume encoding,
> > +
> > +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
> > +     ``OUTPUT`` queue - the encoder will resume operation normally, however any
> > +     source frames queued to the ``OUTPUT`` queue between ``V4L2_ENC_CMD_STOP``
> > +     and :c:func:`VIDIOC_STREAMOFF` will be discarded.
> > +
> > +.. note::
> > +
> > +   Once the drain sequence is initiated, the client needs to drive it to
> > +   completion, as described by the steps above, unless it aborts the process by
> > +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
> > +   queues.  The client is not allowed to issue ``V4L2_ENC_CMD_START`` or
> > +   ``V4L2_ENC_CMD_STOP`` again while the drain sequence is in progress and they
> > +   will fail with -EBUSY error code if attempted.
> > +
> > +   Although mandatory, the availability of encoder commands may be queried
> > +   using :c:func:`VIDIOC_TRY_ENCODER_CMD`.
> > +
> > +Reset
> > +=====
> > +
> > +The client may want to request the encoder to reinitialize the encoding, so
> > +that the following stream data becomes independent from the stream data
> > +generated before. Depending on the coded format, that may imply that,
>
> that, -> that:
>

Ack. (And also few other places.)

> > +
> > +* encoded frames produced after the restart must not reference any frames
> > +  produced before the stop, e.g. no long term references for H.264,
> > +
> > +* any headers that must be included in a standalone stream must be produced
> > +  again, e.g. SPS and PPS for H.264.
> > +
> > +This can be achieved by performing the reset sequence.
> > +
> > +1. Perform the `Drain` sequence to ensure all the in-flight encoding finishes
> > +   and respective buffers are dequeued.
> > +
> > +2. Stop streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMOFF`. This
> > +   will return all currently queued ``CAPTURE`` buffers to the client, without
> > +   valid frame data.
> > +
> > +3. Start streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMON` and
> > +   continue with regular encoding sequence. The encoded frames produced into
> > +   ``CAPTURE`` buffers from now on will contain a standalone stream that can be
> > +   decoded without the need for frames encoded before the reset sequence,
> > +   starting at the first ``OUTPUT`` buffer queued after issuing the
> > +   `V4L2_ENC_CMD_STOP` of the `Drain` sequence.
> > +
> > +This sequence may be also used to change encoding parameters for encoders
> > +without the ability to change the parameters on the fly.
> > +
> > +Commit points
> > +=============
> > +
> > +Setting formats and allocating buffers triggers changes in the behavior of the
> > +encoder.
> > +
> > +1. Setting the format on the ``CAPTURE`` queue may change the set of formats
> > +   supported/advertised on the ``OUTPUT`` queue. In particular, it also means
> > +   that the ``OUTPUT`` format may be reset and the client must not rely on the
> > +   previously set format being preserved.
> > +
> > +2. Enumerating formats on the ``OUTPUT`` queue always returns only formats
> > +   supported for the current ``CAPTURE`` format.
> > +
> > +3. Setting the format on the ``OUTPUT`` queue does not change the list of
> > +   formats available on the ``CAPTURE`` queue. An attempt to set the ``OUTPUT``
> > +   format that is not supported for the currently selected ``CAPTURE`` format
> > +   will result in the encoder adjusting the requested ``OUTPUT`` format to a
> > +   supported one.
> > +
> > +4. Enumerating formats on the ``CAPTURE`` queue always returns the full set of
> > +   supported coded formats, irrespectively of the current ``OUTPUT`` format.
> > +
> > +5. While buffers are allocated on the ``CAPTURE`` queue, the client must not
> > +   change the format on the queue. Drivers will return the -EBUSY error code
> > +   for any such format change attempt.
> > +
> > +To summarize, setting formats and allocation must always start with the
> > +``CAPTURE`` queue and the ``CAPTURE`` queue is the master that governs the
> > +set of supported formats for the ``OUTPUT`` queue.
> > diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
> > index 12d43fe711cf..1822c66c2154 100644
> > --- a/Documentation/media/uapi/v4l/devices.rst
> > +++ b/Documentation/media/uapi/v4l/devices.rst
> > @@ -16,6 +16,7 @@ Interfaces
> >      dev-osd
> >      dev-codec
> >      dev-decoder
> > +    dev-encoder
> >      dev-effect
> >      dev-raw-vbi
> >      dev-sliced-vbi
> > diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > index ca5f2270a829..085089cd9577 100644
> > --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
> > @@ -37,6 +37,11 @@ Single-planar format structure
> >       inside the stream, when fed to a stateful mem2mem decoder, the fields
> >       may be zero to rely on the decoder to detect the right values. For more
> >       details see :ref:`decoder` and format descriptions.
> > +
> > +     For compressed formats on the CAPTURE side of a stateful mem2mem
> > +     encoder, the fields must be zero, since the coded size is expected to
> > +     be calculated internally by the encoder itself, based on the OUTPUT
> > +     side. For more details see :ref:`encoder` and format descriptions.
>
> The encoder document doesn't actually mention this. I think it should, though.

Indeed. I'll make it say that the fields are "ignored (always zero)".

To be honest, I wanted to define them in a way that they would be
hardwired to the internal coded size selected by the driver, but it
just complicated things, since one would need to set the CAPTURE
format first, then the OUTPUT format, selection rectangles and only
then could read back the coded resolution from the CAPTURE format. It
would have also violated the assumption that CAPTURE format was not
expected to be altered by changing OUTPUT format.

>
> I'm a bit uncertain about this: the expected resolution might impact the
> sizeimage value: i.e. encoding 640x480 requires much less memory then
> encoding 4k video. If this is required to be 0x0, then the driver has to
> fill in a worst-case sizeimage value. It might make more sense to say that
> if a non-zero resolution is given, then the driver will attempt to
> calculate a sensible sizeimage value.

The driver would still be able to determine the sizeimage by the
internally known coded size, which it calculated based on OUTPUT
format, selection, codec constraints, etc. It's not something for the
userspace to provide (nor it would be able to provide).

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-17 11:37       ` Hans Verkuil
  2018-11-18  1:34         ` Nicolas Dufresne
@ 2019-01-23 10:00         ` Tomasz Figa
  2019-01-23 11:28           ` Hans Verkuil
  1 sibling, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-23 10:00 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Nicolas Dufresne, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Sat, Nov 17, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>
> On 11/17/2018 05:18 AM, Nicolas Dufresne wrote:
> > Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
> >> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
[snip]
> >>> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> >>> +      instead.
> >>
> >> Question: should new codec drivers still implement the EOS event?
> >
> > I'm been asking around, but I think here is a good place. Do we really
> > need the FLAG_LAST in userspace ? Userspace can also wait for the first
> > EPIPE return from DQBUF.
>
> I'm interested in hearing Tomasz' opinion. This flag is used already, so there
> definitely is a backwards compatibility issue here.
>

FWIW, it would add the overhead of 1 more system call, although I
don't think it's of our concern.

My personal feeling is that using error codes for signaling normal
conditions isn't very elegant, though.

> >
> >>
> >>> +
> >>> +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> >>> +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> >>> +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> >>> +   issues any of the following operations:
> >>> +
> >>> +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
> >>
> >> Perhaps mention that this does not reset the encoder? It's not immediately clear
> >> when reading this.
> >
> > Which drivers supports this ? I believe I tried with Exynos in the
> > past, and that didn't work. How do we know if a driver supports this or
> > not. Do we make it mandatory ? When it's not supported, it basically
> > mean userspace need to cache and resend the header in userspace, and
> > also need to skip to some sync point.
>
> Once we agree on the spec, then the next step will be to add good compliance
> checks and update drivers that fail the tests.
>
> To check if the driver support this ioctl you can call VIDIOC_TRY_ENCODER_CMD
> to see if the functionality is supported.

There is nothing here for the hardware to support. It's an entirely
driver thing, since it just needs to wait for the encoder to complete
all the pending frames and stop enqueuing more frames to the decoder
until V4L2_ENC_CMD_START is called. Any driver that can't do it must
be fixed, since otherwise you have no way to ensure that you got all
the encoded output.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2018-11-18  1:34         ` Nicolas Dufresne
@ 2019-01-23 10:02           ` Tomasz Figa
  2019-01-24 20:02             ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-23 10:02 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Sun, Nov 18, 2018 at 10:34 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le samedi 17 novembre 2018 à 12:37 +0100, Hans Verkuil a écrit :
> > > > Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
> > >
> > > We do account for it in GStreamer (the capture/output handling is
> > > generic), but I don't know if it's being used anywhere.
> >
> > Do you use this value directly for REQBUFS, or do you use it as the minimum
> > value but in practice use more buffers?
>
> We add more buffers to that value. We assume this value is what will be
> held by the driver, hence without adding some buffers, the driver would
> go idle as soon as one is dequeued. We also need to allocate for the
> importing driver.
>
> In general, if we have a pipeline with Driver A sending to Driver B,
> both driver will require a certain amount of buffers to operate. E.g.
> with DRM display, the driver will hold on 1 buffer (the scannout
> buffer).
>
> In GStreamer, it's implemented generically, so we do:
>
>   MIN_BUFFERS_FOR + remote_min + 1
>
> If only MIN_BUFFERS_FOR was allocated, ignoring remote driver
> requirement, the streaming will likely get stuck.

What happens if the driver doesn't report it?

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-23 10:00         ` Tomasz Figa
@ 2019-01-23 11:28           ` Hans Verkuil
  2019-01-24 20:04             ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2019-01-23 11:28 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Nicolas Dufresne, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On 01/23/19 11:00, Tomasz Figa wrote:
> On Sat, Nov 17, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>
>> On 11/17/2018 05:18 AM, Nicolas Dufresne wrote:
>>> Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
>>>> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> [snip]
>>>>> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
>>>>> +      instead.
>>>>
>>>> Question: should new codec drivers still implement the EOS event?
>>>
>>> I'm been asking around, but I think here is a good place. Do we really
>>> need the FLAG_LAST in userspace ? Userspace can also wait for the first
>>> EPIPE return from DQBUF.
>>
>> I'm interested in hearing Tomasz' opinion. This flag is used already, so there
>> definitely is a backwards compatibility issue here.
>>
> 
> FWIW, it would add the overhead of 1 more system call, although I
> don't think it's of our concern.
> 
> My personal feeling is that using error codes for signaling normal
> conditions isn't very elegant, though.

I agree. Let's keep this flag.

Regards,

	Hans

> 
>>>
>>>>
>>>>> +
>>>>> +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
>>>>> +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
>>>>> +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
>>>>> +   issues any of the following operations:
>>>>> +
>>>>> +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
>>>>
>>>> Perhaps mention that this does not reset the encoder? It's not immediately clear
>>>> when reading this.
>>>
>>> Which drivers supports this ? I believe I tried with Exynos in the
>>> past, and that didn't work. How do we know if a driver supports this or
>>> not. Do we make it mandatory ? When it's not supported, it basically
>>> mean userspace need to cache and resend the header in userspace, and
>>> also need to skip to some sync point.
>>
>> Once we agree on the spec, then the next step will be to add good compliance
>> checks and update drivers that fail the tests.
>>
>> To check if the driver support this ioctl you can call VIDIOC_TRY_ENCODER_CMD
>> to see if the functionality is supported.
> 
> There is nothing here for the hardware to support. It's an entirely
> driver thing, since it just needs to wait for the encoder to complete
> all the pending frames and stop enqueuing more frames to the decoder
> until V4L2_ENC_CMD_START is called. Any driver that can't do it must
> be fixed, since otherwise you have no way to ensure that you got all
> the encoded output.
> 
> Best regards,
> Tomasz
> 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-23  9:52     ` Tomasz Figa
@ 2019-01-23 13:04       ` Hans Verkuil
  2019-01-24 20:14         ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Hans Verkuil @ 2019-01-23 13:04 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Paweł Ościak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov,
	Todor Tomov, Nicolas Dufresne, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 01/23/19 10:52, Tomasz Figa wrote:
> On Mon, Nov 12, 2018 at 10:23 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
>>
>> On 10/22/2018 04:49 PM, Tomasz Figa wrote:
>>> Due to complexity of the video encoding process, the V4L2 drivers of
>>> stateful encoder hardware require specific sequences of V4L2 API calls
>>> to be followed. These include capability enumeration, initialization,
>>> encoding, encode parameters change, drain and reset.
>>>
>>> Specifics of the above have been discussed during Media Workshops at
>>> LinuxCon Europe 2012 in Barcelona and then later Embedded Linux
>>> Conference Europe 2014 in Düsseldorf. The de facto Codec API that
>>> originated at those events was later implemented by the drivers we already
>>> have merged in mainline, such as s5p-mfc or coda.
>>>
>>> The only thing missing was the real specification included as a part of
>>> Linux Media documentation. Fix it now and document the encoder part of
>>> the Codec API.
>>>
>>> Signed-off-by: Tomasz Figa <tfiga@chromium.org>
>>> ---
>>>  Documentation/media/uapi/v4l/dev-encoder.rst  | 579 ++++++++++++++++++
>>>  Documentation/media/uapi/v4l/devices.rst      |   1 +
>>>  Documentation/media/uapi/v4l/pixfmt-v4l2.rst  |   5 +
>>>  Documentation/media/uapi/v4l/v4l2.rst         |   2 +
>>>  .../media/uapi/v4l/vidioc-encoder-cmd.rst     |  38 +-
>>>  5 files changed, 610 insertions(+), 15 deletions(-)
>>>  create mode 100644 Documentation/media/uapi/v4l/dev-encoder.rst
>>>
>>> diff --git a/Documentation/media/uapi/v4l/dev-encoder.rst b/Documentation/media/uapi/v4l/dev-encoder.rst
>>> new file mode 100644
>>> index 000000000000..41139e5e48eb
>>> --- /dev/null
>>> +++ b/Documentation/media/uapi/v4l/dev-encoder.rst
>>> @@ -0,0 +1,579 @@
>>> +.. -*- coding: utf-8; mode: rst -*-
>>> +
>>> +.. _encoder:
>>> +
>>> +*************************************************
>>> +Memory-to-memory Stateful Video Encoder Interface
>>> +*************************************************
>>> +
>>> +A stateful video encoder takes raw video frames in display order and encodes
>>> +them into a bitstream. It generates complete chunks of the bitstream, including
>>> +all metadata, headers, etc. The resulting bitstream does not require any
>>> +further post-processing by the client.
>>> +
>>> +Performing software stream processing, header generation etc. in the driver
>>> +in order to support this interface is strongly discouraged. In case such
>>> +operations are needed, use of the Stateless Video Encoder Interface (in
>>> +development) is strongly advised.
>>> +
>>> +Conventions and notation used in this document
>>> +==============================================
>>> +
>>> +1. The general V4L2 API rules apply if not specified in this document
>>> +   otherwise.
>>> +
>>> +2. The meaning of words "must", "may", "should", etc. is as per RFC
>>> +   2119.
>>> +
>>> +3. All steps not marked "optional" are required.
>>> +
>>> +4. :c:func:`VIDIOC_G_EXT_CTRLS`, :c:func:`VIDIOC_S_EXT_CTRLS` may be used
>>> +   interchangeably with :c:func:`VIDIOC_G_CTRL`, :c:func:`VIDIOC_S_CTRL`,
>>> +   unless specified otherwise.
>>> +
>>> +5. Single-plane API (see spec) and applicable structures may be used
>>> +   interchangeably with Multi-plane API, unless specified otherwise,
>>> +   depending on encoder capabilities and following the general V4L2
>>> +   guidelines.
>>> +
>>> +6. i = [a..b]: sequence of integers from a to b, inclusive, i.e. i =
>>> +   [0..2]: i = 0, 1, 2.
>>> +
>>> +7. Given an ``OUTPUT`` buffer A, A' represents a buffer on the ``CAPTURE``
>>> +   queue containing data (encoded frame/stream) that resulted from processing
>>> +   buffer A.
>>
>> The same comments as I mentioned for the previous patch apply to this section.
>>
> 
> I suppose you mean the decoder patch, right? (Will address those.)

Yes.

> 
>>> +
>>> +Glossary
>>> +========
>>> +
>>> +Refer to :ref:`decoder-glossary`.
>>
>> Ah, you refer to the same glossary. Then my comment about the source resolution
>> terms is obviously wrong.
>>
>> I wonder if it wouldn't be better to split off the sections above into a separate
>> HW codec intro section where you explain the differences between stateful/stateless
>> encoders and decoders, and add the conventions and glossary.
>>
>> After that you have the three documents for each variant (later four when we get
>> stateless encoders).
>>
>> Up to you, and it can be done later in a follow-up patch.
>>
> 
> I'd indeed prefer to do it in a follow up patch, to avoid distracting
> this series too much. Agreed that such an intro section would make
> sense, though.

A follow-up patch is fine.

> 
>>> +
>>> +State machine
>>> +=============
>>> +
>>> +.. kernel-render:: DOT
>>> +   :alt: DOT digraph of encoder state machine
>>> +   :caption: Encoder state machine
>>> +
>>> +   digraph encoder_state_machine {
>>> +       node [shape = doublecircle, label="Encoding"] Encoding;
>>> +
>>> +       node [shape = circle, label="Initialization"] Initialization;
>>> +       node [shape = circle, label="Stopped"] Stopped;
>>> +       node [shape = circle, label="Drain"] Drain;
>>> +       node [shape = circle, label="Reset"] Reset;
>>> +
>>> +       node [shape = point]; qi
>>> +       qi -> Initialization [ label = "open()" ];
>>> +
>>> +       Initialization -> Encoding [ label = "Both queues streaming" ];
>>> +
>>> +       Encoding -> Drain [ label = "V4L2_DEC_CMD_STOP" ];
>>> +       Encoding -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
>>> +       Encoding -> Stopped [ label = "VIDIOC_STREAMOFF(OUTPUT)" ];
>>> +       Encoding -> Encoding;
>>> +
>>> +       Drain -> Stopped [ label = "All CAPTURE\nbuffers dequeued\nor\nVIDIOC_STREAMOFF(CAPTURE)" ];
>>> +       Drain -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
>>> +
>>> +       Reset -> Encoding [ label = "VIDIOC_STREAMON(CAPTURE)" ];
>>> +       Reset -> Initialization [ label = "VIDIOC_REQBUFS(OUTPUT, 0)" ];
>>> +
>>> +       Stopped -> Encoding [ label = "V4L2_DEC_CMD_START\nor\nVIDIOC_STREAMON(OUTPUT)" ];
>>> +       Stopped -> Reset [ label = "VIDIOC_STREAMOFF(CAPTURE)" ];
>>> +   }
>>> +
>>> +Querying capabilities
>>> +=====================
>>> +
>>> +1. To enumerate the set of coded formats supported by the encoder, the
>>> +   client may call :c:func:`VIDIOC_ENUM_FMT` on ``CAPTURE``.
>>> +
>>> +   * The full set of supported formats will be returned, regardless of the
>>> +     format set on ``OUTPUT``.
>>> +
>>> +2. To enumerate the set of supported raw formats, the client may call
>>> +   :c:func:`VIDIOC_ENUM_FMT` on ``OUTPUT``.
>>> +
>>> +   * Only the formats supported for the format currently active on ``CAPTURE``
>>> +     will be returned.
>>> +
>>> +   * In order to enumerate raw formats supported by a given coded format,
>>> +     the client must first set that coded format on ``CAPTURE`` and then
>>> +     enumerate the formats on ``OUTPUT``.
>>> +
>>> +3. The client may use :c:func:`VIDIOC_ENUM_FRAMESIZES` to detect supported
>>> +   resolutions for a given format, passing desired pixel format in
>>> +   :c:type:`v4l2_frmsizeenum` ``pixel_format``.
>>> +
>>> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a coded pixel
>>> +     format will include all possible coded resolutions supported by the
>>> +     encoder for given coded pixel format.
>>> +
>>> +   * Values returned by :c:func:`VIDIOC_ENUM_FRAMESIZES` for a raw pixel format
>>> +     will include all possible frame buffer resolutions supported by the
>>> +     encoder for given raw pixel format and coded format currently set on
>>> +     ``CAPTURE``.
>>> +
>>> +4. Supported profiles and levels for given format, if applicable, may be
>>
>> format -> the coded format currently set on ``CAPTURE``
>>
> 
> Ack.
> 
>>> +   queried using their respective controls via :c:func:`VIDIOC_QUERYCTRL`.
>>> +
>>> +5. Any additional encoder capabilities may be discovered by querying
>>> +   their respective controls.
>>> +
>>> +Initialization
>>> +==============
>>> +
>>> +1. **Optional.** Enumerate supported formats and resolutions. See
>>> +   `Querying capabilities` above.
>>
>> Can be dropped IMHO.
>>
> 
> Ack.
> 
>>> +
>>> +2. Set a coded format on the ``CAPTURE`` queue via :c:func:`VIDIOC_S_FMT`
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``CAPTURE``
>>> +
>>> +     ``pixelformat``
>>> +         the coded format to be produced
>>> +
>>> +     ``sizeimage``
>>> +         desired size of ``CAPTURE`` buffers; the encoder may adjust it to
>>> +         match hardware requirements
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``sizeimage``
>>> +         adjusted size of ``CAPTURE`` buffers
>>> +
>>> +   .. warning::
>>> +
>>> +      Changing the ``CAPTURE`` format may change the currently set ``OUTPUT``
>>> +      format. The encoder will derive a new ``OUTPUT`` format from the
>>> +      ``CAPTURE`` format being set, including resolution, colorimetry
>>> +      parameters, etc. If the client needs a specific ``OUTPUT`` format, it
>>> +      must adjust it afterwards.
>>> +
>>> +3. **Optional.** Enumerate supported ``OUTPUT`` formats (raw formats for
>>> +   source) for the selected coded format via :c:func:`VIDIOC_ENUM_FMT`.
>>
>> Does this return the same set of formats as in the 'Querying Capabilities' phase?
>>
> 
> It's actually an interesting question. At this point we wouldn't have
> the OUTPUT resolution set yet, so that would be the same set as in the
> initial query. If we set the resolution (with some arbitrary
> pixelformat), it may become a subset...

But doesn't setting the capture format also set the resolution?

To quote from the text above:

"The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE`` format
 being set, including resolution, colorimetry parameters, etc."

So you set the capture format with a resolution (you know that), then
ENUM_FMT will return the subset for that codec and resolution.

But see also the comment at the end of this email.

> 
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``pixelformat``
>>> +         raw format supported for the coded format currently selected on
>>> +         the ``OUTPUT`` queue.
>>
>> OUTPUT -> CAPTURE
>>
> 
> Ack.
> 
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +4. Set the raw source format on the ``OUTPUT`` queue via
>>> +   :c:func:`VIDIOC_S_FMT`.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +     ``pixelformat``
>>> +         raw format of the source
>>> +
>>> +     ``width``, ``height``
>>> +         source resolution
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``width``, ``height``
>>> +         may be adjusted by encoder to match alignment requirements, as
>>> +         required by the currently selected formats
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * Setting the source resolution will reset the selection rectangles to their
>>> +     default values, based on the new resolution, as described in the step 5
>>> +     below.
>>> +
>>> +5. **Optional.** Set the visible resolution for the stream metadata via
>>> +   :c:func:`VIDIOC_S_SELECTION` on the ``OUTPUT`` queue.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +     ``target``
>>> +         set to ``V4L2_SEL_TGT_CROP``
>>> +
>>> +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
>>> +         visible rectangle; this must fit within the `V4L2_SEL_TGT_CROP_BOUNDS`
>>> +         rectangle and may be subject to adjustment to match codec and
>>> +         hardware constraints
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``r.left``, ``r.top``, ``r.width``, ``r.height``
>>> +         visible rectangle adjusted by the encoder
>>> +
>>> +   * The following selection targets are supported on ``OUTPUT``:
>>> +
>>> +     ``V4L2_SEL_TGT_CROP_BOUNDS``
>>> +         equal to the full source frame, matching the active ``OUTPUT``
>>> +         format
>>> +
>>> +     ``V4L2_SEL_TGT_CROP_DEFAULT``
>>> +         equal to ``V4L2_SEL_TGT_CROP_BOUNDS``
>>> +
>>> +     ``V4L2_SEL_TGT_CROP``
>>> +         rectangle within the source buffer to be encoded into the
>>> +         ``CAPTURE`` stream; defaults to ``V4L2_SEL_TGT_CROP_DEFAULT``
>>
>> Since this defaults to the CROP_DEFAULT rectangle this means that if you have
>> a 16x16 macroblock size and you want to encode 1080p, you will always have to
>> explicitly set the CROP rectangle to 1920x1080, right? Since the default will
>> be 1088 instead of 1080.
> 
> Not necessarily. It depends on whether the encoder needs the source
> buffers to be aligned to macroblocks or not.
> 
>>
>> It is probably wise to explicitly mention this.
>>
> 
> Sounds reasonable to add a note.
> 
>>> +
>>> +     ``V4L2_SEL_TGT_COMPOSE_BOUNDS``
>>> +         maximum rectangle within the coded resolution, which the cropped
>>> +         source frame can be output into; if the hardware does not support
>>
>> output -> composed
>>
> 
> Ack.
> 
>>> +         composition or scaling, then this is always equal to the rectangle of
>>> +         width and height matching ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
>>> +
>>> +     ``V4L2_SEL_TGT_COMPOSE_DEFAULT``
>>> +         equal to a rectangle of width and height matching
>>> +         ``V4L2_SEL_TGT_CROP`` and located at (0, 0)
>>> +
>>> +     ``V4L2_SEL_TGT_COMPOSE``
>>> +         rectangle within the coded frame, which the cropped source frame
>>> +         is to be output into; defaults to
>>
>> output -> composed
>>
> 
> Ack.
> 
>>> +         ``V4L2_SEL_TGT_COMPOSE_DEFAULT``; read-only on hardware without
>>> +         additional compose/scaling capabilities; resulting stream will
>>> +         have this rectangle encoded as the visible rectangle in its
>>> +         metadata
>>> +
>>> +   .. warning::
>>> +
>>> +      The encoder may adjust the crop/compose rectangles to the nearest
>>> +      supported ones to meet codec and hardware requirements. The client needs
>>> +      to check the adjusted rectangle returned by :c:func:`VIDIOC_S_SELECTION`.
>>> +
>>> +6. Allocate buffers for both ``OUTPUT`` and ``CAPTURE`` via
>>> +   :c:func:`VIDIOC_REQBUFS`. This may be performed in any order.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``count``
>>> +         requested number of buffers to allocate; greater than zero
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT`` or
>>> +         ``CAPTURE``
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``count``
>>> +          actual number of buffers allocated
>>> +
>>> +   .. warning::
>>> +
>>> +      The actual number of allocated buffers may differ from the ``count``
>>> +      given. The client must check the updated value of ``count`` after the
>>> +      call returns.
>>> +
>>> +   .. note::
>>> +
>>> +      To allocate more than the minimum number of buffers (for pipeline depth),
>>> +      the client may query the ``V4L2_CID_MIN_BUFFERS_FOR_OUTPUT`` or
>>> +      ``V4L2_CID_MIN_BUFFERS_FOR_CAPTURE`` control respectively, to get the
>>> +      minimum number of buffers required by the encoder/format, and pass the
>>> +      obtained value plus the number of additional buffers needed in the
>>> +      ``count`` field to :c:func:`VIDIOC_REQBUFS`.
>>
>> Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
>>
> 
> Indeed, I don't think it makes much sense. Let me drop it.
> 
>> V4L2_CID_MIN_BUFFERS_FOR_OUTPUT can make sense depending on GOP size etc.
>>
> 
> Yep.
> 
> 
>>> +
>>> +   Alternatively, :c:func:`VIDIOC_CREATE_BUFS` can be used to have more
>>> +   control over buffer allocation.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``count``
>>> +         requested number of buffers to allocate; greater than zero
>>> +
>>> +     ``type``
>>> +         a ``V4L2_BUF_TYPE_*`` enum appropriate for ``OUTPUT``
>>> +
>>> +     other fields
>>> +         follow standard semantics
>>> +
>>> +   * **Return fields:**
>>> +
>>> +     ``count``
>>> +         adjusted to the number of allocated buffers
>>> +
>>> +7. Begin streaming on both ``OUTPUT`` and ``CAPTURE`` queues via
>>> +   :c:func:`VIDIOC_STREAMON`. This may be performed in any order. The actual
>>> +   encoding process starts when both queues start streaming.
>>> +
>>> +.. note::
>>> +
>>> +   If the client stops the ``CAPTURE`` queue during the encode process and then
>>> +   restarts it again, the encoder will begin generating a stream independent
>>> +   from the stream generated before the stop. The exact constraints depend
>>> +   on the coded format, but may include the following implications:
>>> +
>>> +   * encoded frames produced after the restart must not reference any
>>> +     frames produced before the stop, e.g. no long term references for
>>> +     H.264,
>>> +
>>> +   * any headers that must be included in a standalone stream must be
>>> +     produced again, e.g. SPS and PPS for H.264.
>>> +
>>> +Encoding
>>> +========
>>> +
>>> +This state is reached after the `Initialization` sequence finishes succesfully.
>>
>> successfully
>>
> 
> Ack.
> 
>>> +In this state, client queues and dequeues buffers to both queues via
>>
>> client -> the client
>>
> 
> Ack.
> 
>>> +:c:func:`VIDIOC_QBUF` and :c:func:`VIDIOC_DQBUF`, following the standard
>>> +semantics.
>>> +
>>> +The contents of encoded ``CAPTURE`` buffers depend on the active coded pixel
>>
>> contents ... depend -> content ... depends
>>
> 
> The plural sounds more natural to me to be honest.
> 
>>> +format and may be affected by codec-specific extended controls, as stated
>>> +in the documentation of each format.
>>> +
>>> +Both queues operate independently, following standard behavior of V4L2 buffer
>>> +queues and memory-to-memory devices. In addition, the order of encoded frames
>>> +dequeued from the ``CAPTURE`` queue may differ from the order of queuing raw
>>> +frames to the ``OUTPUT`` queue, due to properties of the selected coded format,
>>> +e.g. frame reordering.
>>> +
>>> +The client must not assume any direct relationship between ``CAPTURE`` and
>>> +``OUTPUT`` buffers and any specific timing of buffers becoming
>>> +available to dequeue. Specifically,
>>> +
>>> +* a buffer queued to ``OUTPUT`` may result in more than 1 buffer produced on
>>> +  ``CAPTURE`` (if returning an encoded frame allowed the encoder to return a
>>> +  frame that preceded it in display, but succeeded it in the decode order),
>>> +
>>> +* a buffer queued to ``OUTPUT`` may result in a buffer being produced on
>>> +  ``CAPTURE`` later into encode process, and/or after processing further
>>> +  ``OUTPUT`` buffers, or be returned out of order, e.g. if display
>>> +  reordering is used,
>>> +
>>> +* buffers may become available on the ``CAPTURE`` queue without additional
>>> +  buffers queued to ``OUTPUT`` (e.g. during drain or ``EOS``), because of the
>>> +  ``OUTPUT`` buffers queued in the past whose decoding results are only
>>> +  available at later time, due to specifics of the decoding process,
>>> +
>>> +* buffers queued to ``OUTPUT`` may not become available to dequeue instantly
>>> +  after being encoded into a corresponding ``CATPURE`` buffer, e.g. if the
>>> +  encoder needs to use the frame as a reference for encoding further frames.
>>> +
>>> +.. note::
>>> +
>>> +   To allow matching encoded ``CAPTURE`` buffers with ``OUTPUT`` buffers they
>>> +   originated from, the client can set the ``timestamp`` field of the
>>> +   :c:type:`v4l2_buffer` struct when queuing an ``OUTPUT`` buffer. The
>>> +   ``CAPTURE`` buffer(s), which resulted from encoding that ``OUTPUT`` buffer
>>> +   will have their ``timestamp`` field set to the same value when dequeued.
>>> +
>>> +   In addition to the straighforward case of one ``OUTPUT`` buffer producing
>>> +   one ``CAPTURE`` buffer, the following cases are defined:
>>> +
>>> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
>>> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
>>> +
>>> +   * the encoding order differs from the presentation order (i.e. the
>>> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
>>> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
>>> +     and thus monotonicity of the timestamps cannot be guaranteed.
>>> +
>>> +.. note::
>>> +
>>> +   To let the client distinguish between frame types (keyframes, intermediate
>>> +   frames; the exact list of types depends on the coded format), the
>>> +   ``CAPTURE`` buffers will have corresponding flag bits set in their
>>> +   :c:type:`v4l2_buffer` struct when dequeued. See the documentation of
>>> +   :c:type:`v4l2_buffer` and each coded pixel format for exact list of flags
>>> +   and their meanings.
>>
>> Is this required? (I think it should, but it isn't the case today).
>>
> 
> At least V4L2_BUF_FLAG_KEYFRAME has been always required for Chromium,
> as it's indispensable for any kind of real time streaming, e.g.
> Hangouts. (Although technically the streaming layer could parse the
> stream and pick it up itself...) I can see s5p-mfc, coda, venus and
> mtk-vcodec supporting at least this one. mtk-vcodec doesn't seem to
> support the other ones.
> 
> Should we make only the V4L2_BUF_FLAG_KEYFRAME mandatory, at least for now?
> 
>> Is the current set of buffer flags (Key/B/P frame) sufficient for the current
>> set of codecs?
>>
> 
> No, there is no way to distinguish between I and IDR frames in H.264.
> You can see more details on our issue tracker: crbug.com/868792.

OK. How about adding an IDRFRAME flag? If hardware can distinguish between
I and IDR frames, then it will set IDRFRAME in addition to KEYFRAME.

I think that is consistent with our spec. And it makes sense to make KEYFRAME
mandatory.

> 
>>> +
>>> +Encoding parameter changes
>>> +==========================
>>> +
>>> +The client is allowed to use :c:func:`VIDIOC_S_CTRL` to change encoder
>>> +parameters at any time. The availability of parameters is encoder-specific
>>> +and the client must query the encoder to find the set of available controls.
>>> +
>>> +The ability to change each parameter during encoding is encoder-specific, as per
>>> +the standard semantics of the V4L2 control interface. The client may attempt
>>> +setting a control of its interest during encoding and if the operation fails
>>
>> I'd simplify this:
>>
>> The client may attempt to set a control during encoding...
> 
> Ack.
> 
>>
>>> +with the -EBUSY error code, the ``CAPTURE`` queue needs to be stopped for the
>>> +configuration change to be allowed (following the `Drain` sequence will be
>>> +needed to avoid losing the already queued/encoded frames).
>>
>> Rephrase:
>>
>> ...to be allowed. To do this follow the `Drain` sequence to avoid losing the
>> already queued/encoded frames.
>>
> 
> How about "To do this, it may follow...", to keep client as the third
> person convention?

Ack.

> 
>>> +
>>> +The timing of parameter updates is encoder-specific, as per the standard
>>> +semantics of the V4L2 control interface. If the client needs to apply the
>>> +parameters exactly at specific frame, using the Request API should be
>>
>> Change this to a reference to the Request API section.
>>
> 
> Do you mean just adding a reference or some other changes too?

I meant: replace "Request API" by a reference (hyperlink) to the Request API
section. That way you can click on the link to get to the doc for that API.

> 
>>> +considered, if supported by the encoder.
>>> +
>>> +Drain
>>> +=====
>>> +
>>> +To ensure that all the queued ``OUTPUT`` buffers have been processed and the
>>> +related ``CAPTURE`` buffers output to the client, the client must follow the
>>
>> output -> are output
>>
>> or perhaps better (up to you): are given
>>
> 
> Ack (went with the latter and updated the decoder too).
> 
>>> +drain sequence described below. After the drain sequence ends, the client has
>>> +received all encoded frames for all ``OUTPUT`` buffers queued before the
>>> +sequence was started.
>>> +
>>> +1. Begin the drain sequence by issuing :c:func:`VIDIOC_ENCODER_CMD`.
>>> +
>>> +   * **Required fields:**
>>> +
>>> +     ``cmd``
>>> +         set to ``V4L2_ENC_CMD_STOP``
>>> +
>>> +     ``flags``
>>> +         set to 0
>>> +
>>> +     ``pts``
>>> +         set to 0
>>> +
>>> +   .. warning::
>>> +
>>> +   The sentence can be only initiated if both ``OUTPUT`` and ``CAPTURE`` queues
>>> +   are streaming. For compatibility reasons, the call to
>>> +   :c:func:`VIDIOC_ENCODER_CMD` will not fail even if any of the queues is not
>>> +   streaming, but at the same time it will not initiate the `Drain` sequence
>>> +   and so the steps described below would not be applicable.
>>> +
>>> +2. Any ``OUTPUT`` buffers queued by the client before the
>>> +   :c:func:`VIDIOC_ENCODER_CMD` was issued will be processed and encoded as
>>> +   normal. The client must continue to handle both queues independently,
>>> +   similarly to normal encode operation. This includes,
>>> +
>>> +   * queuing and dequeuing ``CAPTURE`` buffers, until a buffer marked with the
>>> +     ``V4L2_BUF_FLAG_LAST`` flag is dequeued,
>>> +
>>> +     .. warning::
>>> +
>>> +        The last buffer may be empty (with :c:type:`v4l2_buffer`
>>> +        ``bytesused`` = 0) and in such case it must be ignored by the client,
>>
>> such -> that
>>
>> Check the previous patch as well if you used the phrase 'such case' and replace
>> it with 'that case'.
>>
> 
> Ack.
> 
>>> +        as it does not contain an encoded frame.
>>> +
>>> +     .. note::
>>> +
>>> +        Any attempt to dequeue more buffers beyond the buffer marked with
>>> +        ``V4L2_BUF_FLAG_LAST`` will result in a -EPIPE error from
>>> +        :c:func:`VIDIOC_DQBUF`.
>>> +
>>> +   * dequeuing processed ``OUTPUT`` buffers, until all the buffers queued
>>> +     before the ``V4L2_ENC_CMD_STOP`` command are dequeued,
>>> +
>>> +   * dequeuing the ``V4L2_EVENT_EOS`` event, if the client subscribes to it.
>>> +
>>> +   .. note::
>>> +
>>> +      For backwards compatibility, the encoder will signal a ``V4L2_EVENT_EOS``
>>> +      event when the last the last frame has been decoded and all frames are
>>
>> the last the last -> the last
>>
> 
> Ack.
> 
>>> +      ready to be dequeued. It is a deprecated behavior and the client must not
>>
>> is a -> is
> 
> Ack.
> 
>>
>>> +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
>>> +      instead.
>>
>> Question: should new codec drivers still implement the EOS event?
>>
> 
> Good question. It's a userspace compatibility issue, so if we intend a
> new codec driver to work with old userspace, it must do so. Perhaps up
> to the driver author/maintainer?

I think we need to keep it for a bit, but make sure that we document that
userspace should not rely on it and use the LAST flag instead.

> 
>>> +
>>> +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
>>> +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
>>> +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
>>> +   issues any of the following operations:
>>> +
>>> +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
>>
>> Perhaps mention that this does not reset the encoder? It's not immediately clear
>> when reading this.
>>
> 
> Ack.
> 
>>> +
>>> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
>>> +     ``CAPTURE`` queue - the encoder will be reset (see the `Reset` sequence)
>>> +     and then resume encoding,
>>> +
>>> +   * a pair of :c:func:`VIDIOC_STREAMOFF` and :c:func:`VIDIOC_STREAMON` on the
>>> +     ``OUTPUT`` queue - the encoder will resume operation normally, however any
>>> +     source frames queued to the ``OUTPUT`` queue between ``V4L2_ENC_CMD_STOP``
>>> +     and :c:func:`VIDIOC_STREAMOFF` will be discarded.
>>> +
>>> +.. note::
>>> +
>>> +   Once the drain sequence is initiated, the client needs to drive it to
>>> +   completion, as described by the steps above, unless it aborts the process by
>>> +   issuing :c:func:`VIDIOC_STREAMOFF` on any of the ``OUTPUT`` or ``CAPTURE``
>>> +   queues.  The client is not allowed to issue ``V4L2_ENC_CMD_START`` or
>>> +   ``V4L2_ENC_CMD_STOP`` again while the drain sequence is in progress and they
>>> +   will fail with -EBUSY error code if attempted.
>>> +
>>> +   Although mandatory, the availability of encoder commands may be queried
>>> +   using :c:func:`VIDIOC_TRY_ENCODER_CMD`.
>>> +
>>> +Reset
>>> +=====
>>> +
>>> +The client may want to request the encoder to reinitialize the encoding, so
>>> +that the following stream data becomes independent from the stream data
>>> +generated before. Depending on the coded format, that may imply that,
>>
>> that, -> that:
>>
> 
> Ack. (And also few other places.)
> 
>>> +
>>> +* encoded frames produced after the restart must not reference any frames
>>> +  produced before the stop, e.g. no long term references for H.264,
>>> +
>>> +* any headers that must be included in a standalone stream must be produced
>>> +  again, e.g. SPS and PPS for H.264.
>>> +
>>> +This can be achieved by performing the reset sequence.
>>> +
>>> +1. Perform the `Drain` sequence to ensure all the in-flight encoding finishes
>>> +   and respective buffers are dequeued.
>>> +
>>> +2. Stop streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMOFF`. This
>>> +   will return all currently queued ``CAPTURE`` buffers to the client, without
>>> +   valid frame data.
>>> +
>>> +3. Start streaming on the ``CAPTURE`` queue via :c:func:`VIDIOC_STREAMON` and
>>> +   continue with regular encoding sequence. The encoded frames produced into
>>> +   ``CAPTURE`` buffers from now on will contain a standalone stream that can be
>>> +   decoded without the need for frames encoded before the reset sequence,
>>> +   starting at the first ``OUTPUT`` buffer queued after issuing the
>>> +   `V4L2_ENC_CMD_STOP` of the `Drain` sequence.
>>> +
>>> +This sequence may be also used to change encoding parameters for encoders
>>> +without the ability to change the parameters on the fly.
>>> +
>>> +Commit points
>>> +=============
>>> +
>>> +Setting formats and allocating buffers triggers changes in the behavior of the
>>> +encoder.
>>> +
>>> +1. Setting the format on the ``CAPTURE`` queue may change the set of formats
>>> +   supported/advertised on the ``OUTPUT`` queue. In particular, it also means
>>> +   that the ``OUTPUT`` format may be reset and the client must not rely on the
>>> +   previously set format being preserved.
>>> +
>>> +2. Enumerating formats on the ``OUTPUT`` queue always returns only formats
>>> +   supported for the current ``CAPTURE`` format.
>>> +
>>> +3. Setting the format on the ``OUTPUT`` queue does not change the list of
>>> +   formats available on the ``CAPTURE`` queue. An attempt to set the ``OUTPUT``
>>> +   format that is not supported for the currently selected ``CAPTURE`` format
>>> +   will result in the encoder adjusting the requested ``OUTPUT`` format to a
>>> +   supported one.
>>> +
>>> +4. Enumerating formats on the ``CAPTURE`` queue always returns the full set of
>>> +   supported coded formats, irrespectively of the current ``OUTPUT`` format.
>>> +
>>> +5. While buffers are allocated on the ``CAPTURE`` queue, the client must not
>>> +   change the format on the queue. Drivers will return the -EBUSY error code
>>> +   for any such format change attempt.
>>> +
>>> +To summarize, setting formats and allocation must always start with the
>>> +``CAPTURE`` queue and the ``CAPTURE`` queue is the master that governs the
>>> +set of supported formats for the ``OUTPUT`` queue.
>>> diff --git a/Documentation/media/uapi/v4l/devices.rst b/Documentation/media/uapi/v4l/devices.rst
>>> index 12d43fe711cf..1822c66c2154 100644
>>> --- a/Documentation/media/uapi/v4l/devices.rst
>>> +++ b/Documentation/media/uapi/v4l/devices.rst
>>> @@ -16,6 +16,7 @@ Interfaces
>>>      dev-osd
>>>      dev-codec
>>>      dev-decoder
>>> +    dev-encoder
>>>      dev-effect
>>>      dev-raw-vbi
>>>      dev-sliced-vbi
>>> diff --git a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> index ca5f2270a829..085089cd9577 100644
>>> --- a/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> +++ b/Documentation/media/uapi/v4l/pixfmt-v4l2.rst
>>> @@ -37,6 +37,11 @@ Single-planar format structure
>>>       inside the stream, when fed to a stateful mem2mem decoder, the fields
>>>       may be zero to rely on the decoder to detect the right values. For more
>>>       details see :ref:`decoder` and format descriptions.
>>> +
>>> +     For compressed formats on the CAPTURE side of a stateful mem2mem
>>> +     encoder, the fields must be zero, since the coded size is expected to
>>> +     be calculated internally by the encoder itself, based on the OUTPUT
>>> +     side. For more details see :ref:`encoder` and format descriptions.
>>
>> The encoder document doesn't actually mention this. I think it should, though.
> 
> Indeed. I'll make it say that the fields are "ignored (always zero)".
> 
> To be honest, I wanted to define them in a way that they would be
> hardwired to the internal coded size selected by the driver, but it
> just complicated things, since one would need to set the CAPTURE
> format first, then the OUTPUT format, selection rectangles and only
> then could read back the coded resolution from the CAPTURE format. It
> would have also violated the assumption that CAPTURE format was not
> expected to be altered by changing OUTPUT format.

Right. So this now contradicts what the spec said at the beginning
(about "The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE``
format being set, including resolution, colorimetry parameters, etc.").

An alternative to this is to indeed ignored width/height for the capture
format and just have ENUM_FMT report the full list of formats for the
given capture pixelformat, and require ENUM_FRAMESIZES as well to list
the resolutions that are supported by each output pixelformat.

If the OUTPUT resolution set by userspace is too large, then the CROP
rectangle should be set to a valid size or (if cropping is not supported)
the resolution should be reduced.

> 
>>
>> I'm a bit uncertain about this: the expected resolution might impact the
>> sizeimage value: i.e. encoding 640x480 requires much less memory then
>> encoding 4k video. If this is required to be 0x0, then the driver has to
>> fill in a worst-case sizeimage value. It might make more sense to say that
>> if a non-zero resolution is given, then the driver will attempt to
>> calculate a sensible sizeimage value.
> 
> The driver would still be able to determine the sizeimage by the
> internally known coded size, which it calculated based on OUTPUT
> format, selection, codec constraints, etc. It's not something for the
> userspace to provide (nor it would be able to provide).

How can it determine this if the output resolution isn't known?

It would have to set sizeimage to the worst-case size in the absence of
that information.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-23  5:27         ` Tomasz Figa
  2019-01-23  8:10           ` Hans Verkuil
@ 2019-01-24  9:06           ` Tomasz Figa
  2019-01-24 19:55             ` Nicolas Dufresne
  1 sibling, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-24  9:06 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Nicolas Dufresne,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Wed, Jan 23, 2019 at 2:27 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Tue, Jan 22, 2019 at 11:47 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> >
> > On 01/22/19 11:02, Tomasz Figa wrote:
[snip]
> > >>> +   one ``CAPTURE`` buffer, the following cases are defined:
> > >>> +
> > >>> +   * one ``OUTPUT`` buffer generates multiple ``CAPTURE`` buffers: the same
> > >>> +     ``OUTPUT`` timestamp will be copied to multiple ``CAPTURE`` buffers,
> > >>> +
> > >>> +   * multiple ``OUTPUT`` buffers generate one ``CAPTURE`` buffer: timestamp of
> > >>> +     the ``OUTPUT`` buffer queued last will be copied,
> > >>> +
> > >>> +   * the decoding order differs from the display order (i.e. the
> > >>> +     ``CAPTURE`` buffers are out-of-order compared to the ``OUTPUT`` buffers):
> > >>> +     ``CAPTURE`` timestamps will not retain the order of ``OUTPUT`` timestamps
> > >>> +     and thus monotonicity of the timestamps cannot be guaranteed.
> >
> > I think this last point should be rewritten. The timestamp is just a value that
> > is copied, there are no monotonicity requirements for m2m devices in general.
> >
>
> Actually I just realized the last point might not even be achievable
> for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> which frame originates from which bitstream buffer and the driver just
> picks the most recently consumed OUTPUT buffer to copy the timestamp
> from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> too...)
>
> I need to think a bit more about this.

Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
correctly match the buffers.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-24  9:06           ` Tomasz Figa
@ 2019-01-24 19:55             ` Nicolas Dufresne
  2019-01-25  3:27               ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-24 19:55 UTC (permalink / raw)
  To: Tomasz Figa, Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
> > Actually I just realized the last point might not even be achievable
> > for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> > which frame originates from which bitstream buffer and the driver just
> > picks the most recently consumed OUTPUT buffer to copy the timestamp
> > from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> > too...)
> > 
> > I need to think a bit more about this.
> 
> Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
> correctly match the buffers.

Ok good, since otherwise it would have been a regression in MFC driver.
This timestamp passing thing could in theory be made optional though,
it lives under some COPY_TIMESTAMP kind of flag. What that means though
is that a driver without such a capability would need to signal dropped
frames using some other mean.

In userspace, the main use is to match the produced frame against a
userspace specific list of frames. At least this seems to be the case
in Gst and Chromium, since the userspace list contains a superset of
the metadata found in the v4l2_buffer.

Now, using the produced timestamp, userspace can deduce frame that the
driver should have produced but didn't (could be a deadline case codec,
or simply the frames where corrupted). It's quite normal for a codec to
just keep parsing until it finally find something it can decode.

That's at least one way to do it, but there is other possible
mechanism. The sequence number could be used, or even producing buffers
with the ERROR flag set. What matters is just to give userspace a way
to clear these frames, which would simply grow userspace memory usage
over time.

Nicolas


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-23 10:02           ` Tomasz Figa
@ 2019-01-24 20:02             ` Nicolas Dufresne
  0 siblings, 0 replies; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-24 20:02 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Le mercredi 23 janvier 2019 à 19:02 +0900, Tomasz Figa a écrit :
> On Sun, Nov 18, 2018 at 10:34 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > Le samedi 17 novembre 2018 à 12:37 +0100, Hans Verkuil a écrit :
> > > > > Does V4L2_CID_MIN_BUFFERS_FOR_CAPTURE make any sense for encoders?
> > > > 
> > > > We do account for it in GStreamer (the capture/output handling is
> > > > generic), but I don't know if it's being used anywhere.
> > > 
> > > Do you use this value directly for REQBUFS, or do you use it as the minimum
> > > value but in practice use more buffers?
> > 
> > We add more buffers to that value. We assume this value is what will be
> > held by the driver, hence without adding some buffers, the driver would
> > go idle as soon as one is dequeued. We also need to allocate for the
> > importing driver.
> > 
> > In general, if we have a pipeline with Driver A sending to Driver B,
> > both driver will require a certain amount of buffers to operate. E.g.
> > with DRM display, the driver will hold on 1 buffer (the scannout
> > buffer).
> > 
> > In GStreamer, it's implemented generically, so we do:
> > 
> >   MIN_BUFFERS_FOR + remote_min + 1
> > 
> > If only MIN_BUFFERS_FOR was allocated, ignoring remote driver
> > requirement, the streaming will likely get stuck.
> 
> What happens if the driver doesn't report it?

If the driver does not report it because it does not use it (I think
CODA decoder is like that), there is no issue. If the driver do not
report but needs extra, the driver will end up growing count in
REQBUFS, so the end result will be under-allocation since the remote
requirement won't be accounted. Streaming will hang.

A good example is transcoding, you encoder will never have enough
frames to reproduce an output, because the decoder is waiting for his
frame to come back. Only solution to that would be a memcpy(), or
double allocation (redoing REQBUFS later on). The MIN_BUFFERS_FOR
announcement is the optimal way, avoiding copies and allocating twice.

> 
> Best regards,
> Tomasz


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-23 11:28           ` Hans Verkuil
@ 2019-01-24 20:04             ` Nicolas Dufresne
  2019-01-25  3:29               ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-24 20:04 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Paweł Ościak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov,
	Todor Tomov, Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Le mercredi 23 janvier 2019 à 12:28 +0100, Hans Verkuil a écrit :
> On 01/23/19 11:00, Tomasz Figa wrote:
> > On Sat, Nov 17, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> > > On 11/17/2018 05:18 AM, Nicolas Dufresne wrote:
> > > > Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
> > > > > On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> > [snip]
> > > > > > +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> > > > > > +      instead.
> > > > > 
> > > > > Question: should new codec drivers still implement the EOS event?
> > > > 
> > > > I'm been asking around, but I think here is a good place. Do we really
> > > > need the FLAG_LAST in userspace ? Userspace can also wait for the first
> > > > EPIPE return from DQBUF.
> > > 
> > > I'm interested in hearing Tomasz' opinion. This flag is used already, so there
> > > definitely is a backwards compatibility issue here.
> > > 
> > 
> > FWIW, it would add the overhead of 1 more system call, although I
> > don't think it's of our concern.
> > 
> > My personal feeling is that using error codes for signaling normal
> > conditions isn't very elegant, though.
> 
> I agree. Let's keep this flag.

Agreed, though a reminder of the initial question, "do we keep the EOS
event ?", and I think the event can be dropped.

> 
> Regards,
> 
> 	Hans
> 
> > > > > > +
> > > > > > +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> > > > > > +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> > > > > > +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> > > > > > +   issues any of the following operations:
> > > > > > +
> > > > > > +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
> > > > > 
> > > > > Perhaps mention that this does not reset the encoder? It's not immediately clear
> > > > > when reading this.
> > > > 
> > > > Which drivers supports this ? I believe I tried with Exynos in the
> > > > past, and that didn't work. How do we know if a driver supports this or
> > > > not. Do we make it mandatory ? When it's not supported, it basically
> > > > mean userspace need to cache and resend the header in userspace, and
> > > > also need to skip to some sync point.
> > > 
> > > Once we agree on the spec, then the next step will be to add good compliance
> > > checks and update drivers that fail the tests.
> > > 
> > > To check if the driver support this ioctl you can call VIDIOC_TRY_ENCODER_CMD
> > > to see if the functionality is supported.
> > 
> > There is nothing here for the hardware to support. It's an entirely
> > driver thing, since it just needs to wait for the encoder to complete
> > all the pending frames and stop enqueuing more frames to the decoder
> > until V4L2_ENC_CMD_START is called. Any driver that can't do it must
> > be fixed, since otherwise you have no way to ensure that you got all
> > the encoded output.
> > 
> > Best regards,
> > Tomasz
> > 


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-23 13:04       ` Hans Verkuil
@ 2019-01-24 20:14         ` Nicolas Dufresne
  2019-01-25  3:59           ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-24 20:14 UTC (permalink / raw)
  To: Hans Verkuil, Tomasz Figa
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Paweł Ościak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov,
	Todor Tomov, Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Le mercredi 23 janvier 2019 à 14:04 +0100, Hans Verkuil a écrit :
> > > Does this return the same set of formats as in the 'Querying Capabilities' phase?
> > > 
> > 
> > It's actually an interesting question. At this point we wouldn't have
> > the OUTPUT resolution set yet, so that would be the same set as in the
> > initial query. If we set the resolution (with some arbitrary
> > pixelformat), it may become a subset...
> 
> But doesn't setting the capture format also set the resolution?
> 
> To quote from the text above:
> 
> "The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE`` format
>  being set, including resolution, colorimetry parameters, etc."
> 
> So you set the capture format with a resolution (you know that), then
> ENUM_FMT will return the subset for that codec and resolution.
> 
> But see also the comment at the end of this email.

I'm thinking that the fact that there is no "unset" value for pixel
format creates a certain ambiguity. Maybe we could create a new pixel
format, and all CODEC driver could have that set by default ? Then we
can just fail STREAMON if that format is set.

That being said, in GStreamer, I have split each elements per CODEC,
and now only enumerate the information "per-codec". That makes me think
this "global" enumeration was just a miss-use of the API / me learning
to use it. Not having to implement this rather complex thing in the
driver would be nice. Notably, the new Amlogic driver does not have
this "Querying Capabilities" phase, and with latest GStreamer works
just fine.

Nicolas


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-24 19:55             ` Nicolas Dufresne
@ 2019-01-25  3:27               ` Tomasz Figa
  2019-01-30  4:02                 ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-25  3:27 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab, Pawel Osciak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Fri, Jan 25, 2019 at 4:55 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
> > > Actually I just realized the last point might not even be achievable
> > > for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> > > which frame originates from which bitstream buffer and the driver just
> > > picks the most recently consumed OUTPUT buffer to copy the timestamp
> > > from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> > > too...)
> > >
> > > I need to think a bit more about this.
> >
> > Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
> > correctly match the buffers.
>
> Ok good, since otherwise it would have been a regression in MFC driver.
> This timestamp passing thing could in theory be made optional though,
> it lives under some COPY_TIMESTAMP kind of flag. What that means though
> is that a driver without such a capability would need to signal dropped
> frames using some other mean.
>
> In userspace, the main use is to match the produced frame against a
> userspace specific list of frames. At least this seems to be the case
> in Gst and Chromium, since the userspace list contains a superset of
> the metadata found in the v4l2_buffer.
>
> Now, using the produced timestamp, userspace can deduce frame that the
> driver should have produced but didn't (could be a deadline case codec,
> or simply the frames where corrupted). It's quite normal for a codec to
> just keep parsing until it finally find something it can decode.
>
> That's at least one way to do it, but there is other possible
> mechanism. The sequence number could be used, or even producing buffers
> with the ERROR flag set. What matters is just to give userspace a way
> to clear these frames, which would simply grow userspace memory usage
> over time.

Is it just me or we were missing some consistent error handling then?

I feel like the drivers should definitely return the bitstream buffers
with the ERROR flag, if there is a decode failure of data in the
buffer. Still, that could become more complicated if there is more
than 1 frame in that piece of bitstream, but only 1 frame is corrupted
(or whatever).

Another case is when the bitstream, even if corrupted, is still enough
to produce some output. My intuition tells me that such CAPTURE buffer
should be then returned with the ERROR flag. That wouldn't still be
enough for any more sophisticated userspace error concealment, but
could still let the userspace know to perhaps drop the frame.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-24 20:04             ` Nicolas Dufresne
@ 2019-01-25  3:29               ` Tomasz Figa
  0 siblings, 0 replies; 41+ messages in thread
From: Tomasz Figa @ 2019-01-25  3:29 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Fri, Jan 25, 2019 at 5:04 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mercredi 23 janvier 2019 à 12:28 +0100, Hans Verkuil a écrit :
> > On 01/23/19 11:00, Tomasz Figa wrote:
> > > On Sat, Nov 17, 2018 at 8:37 PM Hans Verkuil <hverkuil@xs4all.nl> wrote:
> > > > On 11/17/2018 05:18 AM, Nicolas Dufresne wrote:
> > > > > Le lundi 12 novembre 2018 à 14:23 +0100, Hans Verkuil a écrit :
> > > > > > On 10/22/2018 04:49 PM, Tomasz Figa wrote:
> > > [snip]
> > > > > > > +      rely on it. The ``V4L2_BUF_FLAG_LAST`` buffer flag should be used
> > > > > > > +      instead.
> > > > > >
> > > > > > Question: should new codec drivers still implement the EOS event?
> > > > >
> > > > > I'm been asking around, but I think here is a good place. Do we really
> > > > > need the FLAG_LAST in userspace ? Userspace can also wait for the first
> > > > > EPIPE return from DQBUF.
> > > >
> > > > I'm interested in hearing Tomasz' opinion. This flag is used already, so there
> > > > definitely is a backwards compatibility issue here.
> > > >
> > >
> > > FWIW, it would add the overhead of 1 more system call, although I
> > > don't think it's of our concern.
> > >
> > > My personal feeling is that using error codes for signaling normal
> > > conditions isn't very elegant, though.
> >
> > I agree. Let's keep this flag.
>
> Agreed, though a reminder of the initial question, "do we keep the EOS
> event ?", and I think the event can be dropped.
>

I would happily drop it, if we know that it wouldn't break any
userspace. Chromium doesn't use it either.

Best regards,
Tomasz

> >
> > Regards,
> >
> >       Hans
> >
> > > > > > > +
> > > > > > > +3. Once all ``OUTPUT`` buffers queued before the ``V4L2_ENC_CMD_STOP`` call and
> > > > > > > +   the last ``CAPTURE`` buffer are dequeued, the encoder is stopped and it will
> > > > > > > +   accept, but not process any newly queued ``OUTPUT`` buffers until the client
> > > > > > > +   issues any of the following operations:
> > > > > > > +
> > > > > > > +   * ``V4L2_ENC_CMD_START`` - the encoder will resume operation normally,
> > > > > >
> > > > > > Perhaps mention that this does not reset the encoder? It's not immediately clear
> > > > > > when reading this.
> > > > >
> > > > > Which drivers supports this ? I believe I tried with Exynos in the
> > > > > past, and that didn't work. How do we know if a driver supports this or
> > > > > not. Do we make it mandatory ? When it's not supported, it basically
> > > > > mean userspace need to cache and resend the header in userspace, and
> > > > > also need to skip to some sync point.
> > > >
> > > > Once we agree on the spec, then the next step will be to add good compliance
> > > > checks and update drivers that fail the tests.
> > > >
> > > > To check if the driver support this ioctl you can call VIDIOC_TRY_ENCODER_CMD
> > > > to see if the functionality is supported.
> > >
> > > There is nothing here for the hardware to support. It's an entirely
> > > driver thing, since it just needs to wait for the encoder to complete
> > > all the pending frames and stop enqueuing more frames to the decoder
> > > until V4L2_ENC_CMD_START is called. Any driver that can't do it must
> > > be fixed, since otherwise you have no way to ensure that you got all
> > > the encoded output.
> > >
> > > Best regards,
> > > Tomasz
> > >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-24 20:14         ` Nicolas Dufresne
@ 2019-01-25  3:59           ` Tomasz Figa
  2019-01-30 15:06             ` Nicolas Dufresne
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-01-25  3:59 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Fri, Jan 25, 2019 at 5:14 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le mercredi 23 janvier 2019 à 14:04 +0100, Hans Verkuil a écrit :
> > > > Does this return the same set of formats as in the 'Querying Capabilities' phase?
> > > >
> > >
> > > It's actually an interesting question. At this point we wouldn't have
> > > the OUTPUT resolution set yet, so that would be the same set as in the
> > > initial query. If we set the resolution (with some arbitrary
> > > pixelformat), it may become a subset...
> >
> > But doesn't setting the capture format also set the resolution?
> >
> > To quote from the text above:
> >
> > "The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE`` format
> >  being set, including resolution, colorimetry parameters, etc."
> >
> > So you set the capture format with a resolution (you know that), then
> > ENUM_FMT will return the subset for that codec and resolution.
> >
> > But see also the comment at the end of this email.
>
> I'm thinking that the fact that there is no "unset" value for pixel
> format creates a certain ambiguity. Maybe we could create a new pixel
> format, and all CODEC driver could have that set by default ? Then we
> can just fail STREAMON if that format is set.

The state on the CAPTURE queue is actually not "unset". The queue is
simply not ready (yet) and any operations on it will error out.

Once the application sets the coded resolution on the OUTPUT queue or
the decoder parses the stream information, the CAPTURE queue becomes
ready and one can do the ioctls on it.

>
> That being said, in GStreamer, I have split each elements per CODEC,
> and now only enumerate the information "per-codec". That makes me think
> this "global" enumeration was just a miss-use of the API / me learning
> to use it. Not having to implement this rather complex thing in the
> driver would be nice. Notably, the new Amlogic driver does not have
> this "Querying Capabilities" phase, and with latest GStreamer works
> just fine.

What do you mean by "doesn't have"? Does it lack an implementation of
VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES?

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-25  3:27               ` Tomasz Figa
@ 2019-01-30  4:02                 ` Nicolas Dufresne
  2019-02-06  5:35                   ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-30  4:02 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab, Pawel Osciak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

Le vendredi 25 janvier 2019 à 12:27 +0900, Tomasz Figa a écrit :
> On Fri, Jan 25, 2019 at 4:55 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
> > > > Actually I just realized the last point might not even be achievable
> > > > for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> > > > which frame originates from which bitstream buffer and the driver just
> > > > picks the most recently consumed OUTPUT buffer to copy the timestamp
> > > > from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> > > > too...)
> > > > 
> > > > I need to think a bit more about this.
> > > 
> > > Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
> > > correctly match the buffers.
> > 
> > Ok good, since otherwise it would have been a regression in MFC driver.
> > This timestamp passing thing could in theory be made optional though,
> > it lives under some COPY_TIMESTAMP kind of flag. What that means though
> > is that a driver without such a capability would need to signal dropped
> > frames using some other mean.
> > 
> > In userspace, the main use is to match the produced frame against a
> > userspace specific list of frames. At least this seems to be the case
> > in Gst and Chromium, since the userspace list contains a superset of
> > the metadata found in the v4l2_buffer.
> > 
> > Now, using the produced timestamp, userspace can deduce frame that the
> > driver should have produced but didn't (could be a deadline case codec,
> > or simply the frames where corrupted). It's quite normal for a codec to
> > just keep parsing until it finally find something it can decode.
> > 
> > That's at least one way to do it, but there is other possible
> > mechanism. The sequence number could be used, or even producing buffers
> > with the ERROR flag set. What matters is just to give userspace a way
> > to clear these frames, which would simply grow userspace memory usage
> > over time.
> 
> Is it just me or we were missing some consistent error handling then?
> 
> I feel like the drivers should definitely return the bitstream buffers
> with the ERROR flag, if there is a decode failure of data in the
> buffer. Still, that could become more complicated if there is more
> than 1 frame in that piece of bitstream, but only 1 frame is corrupted
> (or whatever).

I agree, but it might be more difficult then it looks (even FFMPEG does
not do that). I believe the code that is processing the bitstream in
stateful codecs is mostly unrelated from the code actually doing the
decoding. So what might happen is that the decoding part will never
actually allocate a buffer for the skipped / corrupted part of the
bitstream. Also, the notion of a skipped frame is not always evident in
when parsing H264 or HEVC NALs. There is still a full page of text just
to explain how to detect that start of a new frame.

Yet, it would be interesting to study the firmwares we have and see
what they provide that would help making decode errors more explicit.

> 
> Another case is when the bitstream, even if corrupted, is still enough
> to produce some output. My intuition tells me that such CAPTURE buffer
> should be then returned with the ERROR flag. That wouldn't still be
> enough for any more sophisticated userspace error concealment, but
> could still let the userspace know to perhaps drop the frame.

You mean if a frame was concealed (typically the frame was decoded from
a closed by reference instead of the expected reference). That is
something signalled by FFPEG. We should document this possibility. I
actually have something implemented in GStreamer. Basically if we have
the ERROR flag with a payload size smaller then expected, I drop the
frame and produce a drop event message, while if I have a frame with
ERROR flag but of the right payload size, I assume it is corrupted, and
simply flag it as corrupted, leaving to the application the decision to
display it or not. This is a case that used to happen with some UVC
cameras (though some have been fixed, and the UVC camera should drop
smaller payload size buffers now).

> 
> Best regards,
> Tomasz


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-25  3:59           ` Tomasz Figa
@ 2019-01-30 15:06             ` Nicolas Dufresne
  2019-02-06  5:49               ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Nicolas Dufresne @ 2019-01-30 15:06 UTC (permalink / raw)
  To: Tomasz Figa
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

Le vendredi 25 janvier 2019 à 12:59 +0900, Tomasz Figa a écrit :
> On Fri, Jan 25, 2019 at 5:14 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > Le mercredi 23 janvier 2019 à 14:04 +0100, Hans Verkuil a écrit :
> > > > > Does this return the same set of formats as in the 'Querying Capabilities' phase?
> > > > > 
> > > > 
> > > > It's actually an interesting question. At this point we wouldn't have
> > > > the OUTPUT resolution set yet, so that would be the same set as in the
> > > > initial query. If we set the resolution (with some arbitrary
> > > > pixelformat), it may become a subset...
> > > 
> > > But doesn't setting the capture format also set the resolution?
> > > 
> > > To quote from the text above:
> > > 
> > > "The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE`` format
> > >  being set, including resolution, colorimetry parameters, etc."
> > > 
> > > So you set the capture format with a resolution (you know that), then
> > > ENUM_FMT will return the subset for that codec and resolution.
> > > 
> > > But see also the comment at the end of this email.
> > 
> > I'm thinking that the fact that there is no "unset" value for pixel
> > format creates a certain ambiguity. Maybe we could create a new pixel
> > format, and all CODEC driver could have that set by default ? Then we
> > can just fail STREAMON if that format is set.
> 
> The state on the CAPTURE queue is actually not "unset". The queue is
> simply not ready (yet) and any operations on it will error out.

My point was that it's just awkward to have this "not ready" state, in
which you cannot go back. And in which the enum-format will ignore the
format configured on the other side.

What I wanted to say is that this special case is not really needed.

> 
> Once the application sets the coded resolution on the OUTPUT queue or
> the decoder parses the stream information, the CAPTURE queue becomes
> ready and one can do the ioctls on it.
> 
> > That being said, in GStreamer, I have split each elements per CODEC,
> > and now only enumerate the information "per-codec". That makes me think
> > this "global" enumeration was just a miss-use of the API / me learning
> > to use it. Not having to implement this rather complex thing in the
> > driver would be nice. Notably, the new Amlogic driver does not have
> > this "Querying Capabilities" phase, and with latest GStreamer works
> > just fine.
> 
> What do you mean by "doesn't have"? Does it lack an implementation of
> VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES?

What it does is that it sets a default value for the codec format, so
if you just open the device and do enum_fmt/framesizes, you get that is
possible for the default codec that was selected. And I thin it's
entirely correct, doing ENUM_FMT(capture) without doing an
S_FMT(output) can easily be documented as undefined behaviour.

For proper enumeration would be:

for formats on OUTPUT device:
  S_FMT(OUTPUT):
  for formats on CAPTURE device:
    ...

(the pseudo for look represent an enum operation)

> 
> Best regards,
> Tomasz


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-01-30  4:02                 ` Nicolas Dufresne
@ 2019-02-06  5:35                   ` Tomasz Figa
  2019-04-09  9:47                     ` Tomasz Figa
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-02-06  5:35 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab, Pawel Osciak,
	Alexandre Courbot, Kamil Debski, Andrzej Hajda, Kyungmin Park,
	Jeongtae Park, Philipp Zabel,
	Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Wed, Jan 30, 2019 at 1:02 PM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le vendredi 25 janvier 2019 à 12:27 +0900, Tomasz Figa a écrit :
> > On Fri, Jan 25, 2019 at 4:55 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
> > > > > Actually I just realized the last point might not even be achievable
> > > > > for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> > > > > which frame originates from which bitstream buffer and the driver just
> > > > > picks the most recently consumed OUTPUT buffer to copy the timestamp
> > > > > from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> > > > > too...)
> > > > >
> > > > > I need to think a bit more about this.
> > > >
> > > > Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
> > > > correctly match the buffers.
> > >
> > > Ok good, since otherwise it would have been a regression in MFC driver.
> > > This timestamp passing thing could in theory be made optional though,
> > > it lives under some COPY_TIMESTAMP kind of flag. What that means though
> > > is that a driver without such a capability would need to signal dropped
> > > frames using some other mean.
> > >
> > > In userspace, the main use is to match the produced frame against a
> > > userspace specific list of frames. At least this seems to be the case
> > > in Gst and Chromium, since the userspace list contains a superset of
> > > the metadata found in the v4l2_buffer.
> > >
> > > Now, using the produced timestamp, userspace can deduce frame that the
> > > driver should have produced but didn't (could be a deadline case codec,
> > > or simply the frames where corrupted). It's quite normal for a codec to
> > > just keep parsing until it finally find something it can decode.
> > >
> > > That's at least one way to do it, but there is other possible
> > > mechanism. The sequence number could be used, or even producing buffers
> > > with the ERROR flag set. What matters is just to give userspace a way
> > > to clear these frames, which would simply grow userspace memory usage
> > > over time.
> >
> > Is it just me or we were missing some consistent error handling then?
> >
> > I feel like the drivers should definitely return the bitstream buffers
> > with the ERROR flag, if there is a decode failure of data in the
> > buffer. Still, that could become more complicated if there is more
> > than 1 frame in that piece of bitstream, but only 1 frame is corrupted
> > (or whatever).
>
> I agree, but it might be more difficult then it looks (even FFMPEG does
> not do that). I believe the code that is processing the bitstream in
> stateful codecs is mostly unrelated from the code actually doing the
> decoding. So what might happen is that the decoding part will never
> actually allocate a buffer for the skipped / corrupted part of the
> bitstream. Also, the notion of a skipped frame is not always evident in
> when parsing H264 or HEVC NALs. There is still a full page of text just
> to explain how to detect that start of a new frame.

Right. I don't think we can guarantee that we can always correlate the
errors with exact buffers and so I phrased the paragraph about errors
in v3 in a bit more conservative way:

See the snapshot hosted by Hans (thanks!):
https://hverkuil.home.xs4all.nl/codec-api/uapi/v4l/dev-decoder.html#decoding

>
> Yet, it would be interesting to study the firmwares we have and see
> what they provide that would help making decode errors more explicit.
>

Agreed.

> >
> > Another case is when the bitstream, even if corrupted, is still enough
> > to produce some output. My intuition tells me that such CAPTURE buffer
> > should be then returned with the ERROR flag. That wouldn't still be
> > enough for any more sophisticated userspace error concealment, but
> > could still let the userspace know to perhaps drop the frame.
>
> You mean if a frame was concealed (typically the frame was decoded from
> a closed by reference instead of the expected reference). That is
> something signalled by FFPEG. We should document this possibility. I
> actually have something implemented in GStreamer. Basically if we have
> the ERROR flag with a payload size smaller then expected, I drop the
> frame and produce a drop event message, while if I have a frame with
> ERROR flag but of the right payload size, I assume it is corrupted, and
> simply flag it as corrupted, leaving to the application the decision to
> display it or not. This is a case that used to happen with some UVC
> cameras (though some have been fixed, and the UVC camera should drop
> smaller payload size buffers now).

I think it's a behavior that makes the most sense indeed.

Technically one could also consider the case of 0 < bytesused <
sizeimage, which could mean that only a part of the frame is in the
buffer. An application could try to blend it with previous frame using
some concealing algorithms. I haven't seen an app that could do such
thing, though.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface
  2019-01-30 15:06             ` Nicolas Dufresne
@ 2019-02-06  5:49               ` Tomasz Figa
  0 siblings, 0 replies; 41+ messages in thread
From: Tomasz Figa @ 2019-02-06  5:49 UTC (permalink / raw)
  To: Nicolas Dufresne
  Cc: Hans Verkuil, Linux Media Mailing List,
	Linux Kernel Mailing List, Mauro Carvalho Chehab,
	Paweł Ościak, Alexandre Courbot, Kamil Debski,
	Andrzej Hajda, Kyungmin Park, Jeongtae Park, Philipp Zabel,
	Tiffany Lin, Andrew-CT Chen, Stanimir Varbanov, Todor Tomov,
	Paul Kocialkowski, Laurent Pinchart, dave.stevenson,
	Ezequiel Garcia, Maxime Jourdan

On Thu, Jan 31, 2019 at 12:06 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>
> Le vendredi 25 janvier 2019 à 12:59 +0900, Tomasz Figa a écrit :
> > On Fri, Jan 25, 2019 at 5:14 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > Le mercredi 23 janvier 2019 à 14:04 +0100, Hans Verkuil a écrit :
> > > > > > Does this return the same set of formats as in the 'Querying Capabilities' phase?
> > > > > >
> > > > >
> > > > > It's actually an interesting question. At this point we wouldn't have
> > > > > the OUTPUT resolution set yet, so that would be the same set as in the
> > > > > initial query. If we set the resolution (with some arbitrary
> > > > > pixelformat), it may become a subset...
> > > >
> > > > But doesn't setting the capture format also set the resolution?
> > > >
> > > > To quote from the text above:
> > > >
> > > > "The encoder will derive a new ``OUTPUT`` format from the ``CAPTURE`` format
> > > >  being set, including resolution, colorimetry parameters, etc."
> > > >
> > > > So you set the capture format with a resolution (you know that), then
> > > > ENUM_FMT will return the subset for that codec and resolution.
> > > >
> > > > But see also the comment at the end of this email.
> > >
> > > I'm thinking that the fact that there is no "unset" value for pixel
> > > format creates a certain ambiguity. Maybe we could create a new pixel
> > > format, and all CODEC driver could have that set by default ? Then we
> > > can just fail STREAMON if that format is set.
> >
> > The state on the CAPTURE queue is actually not "unset". The queue is
> > simply not ready (yet) and any operations on it will error out.
>
> My point was that it's just awkward to have this "not ready" state, in
> which you cannot go back. And in which the enum-format will ignore the
> format configured on the other side.
>
> What I wanted to say is that this special case is not really needed.
>

Yeah, I think we may actually end up going in that direction, as you
probably noticed in the discussion over the "venus: dec: make decoder
compliant with stateful codec API" patch [1].

[1] https://patchwork.kernel.org/patch/10768539/#22462703

> >
> > Once the application sets the coded resolution on the OUTPUT queue or
> > the decoder parses the stream information, the CAPTURE queue becomes
> > ready and one can do the ioctls on it.
> >
> > > That being said, in GStreamer, I have split each elements per CODEC,
> > > and now only enumerate the information "per-codec". That makes me think
> > > this "global" enumeration was just a miss-use of the API / me learning
> > > to use it. Not having to implement this rather complex thing in the
> > > driver would be nice. Notably, the new Amlogic driver does not have
> > > this "Querying Capabilities" phase, and with latest GStreamer works
> > > just fine.
> >
> > What do you mean by "doesn't have"? Does it lack an implementation of
> > VIDIOC_ENUM_FMT and VIDIOC_ENUM_FRAMESIZES?
>
> What it does is that it sets a default value for the codec format, so
> if you just open the device and do enum_fmt/framesizes, you get that is
> possible for the default codec that was selected. And I thin it's
> entirely correct, doing ENUM_FMT(capture) without doing an
> S_FMT(output) can easily be documented as undefined behaviour.

Okay.

>
> For proper enumeration would be:
>
> for formats on OUTPUT device:
>   S_FMT(OUTPUT):
>   for formats on CAPTURE device:
>     ...
>
> (the pseudo for look represent an enum operation)

And that's how it's defined in v3. There is no default state without
any codec selected.

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-02-06  5:35                   ` Tomasz Figa
@ 2019-04-09  9:47                     ` Tomasz Figa
  2019-04-10  9:26                       ` Hans Verkuil
  0 siblings, 1 reply; 41+ messages in thread
From: Tomasz Figa @ 2019-04-09  9:47 UTC (permalink / raw)
  To: Nicolas Dufresne, Hans Verkuil
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On Wed, Feb 6, 2019 at 2:35 PM Tomasz Figa <tfiga@chromium.org> wrote:
>
> On Wed, Jan 30, 2019 at 1:02 PM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> >
> > Le vendredi 25 janvier 2019 à 12:27 +0900, Tomasz Figa a écrit :
> > > On Fri, Jan 25, 2019 at 4:55 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
> > > > Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
> > > > > > Actually I just realized the last point might not even be achievable
> > > > > > for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
> > > > > > which frame originates from which bitstream buffer and the driver just
> > > > > > picks the most recently consumed OUTPUT buffer to copy the timestamp
> > > > > > from. (s5p-mfc actually "forgets" to set the timestamp in some cases
> > > > > > too...)
> > > > > >
> > > > > > I need to think a bit more about this.
> > > > >
> > > > > Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
> > > > > correctly match the buffers.
> > > >
> > > > Ok good, since otherwise it would have been a regression in MFC driver.
> > > > This timestamp passing thing could in theory be made optional though,
> > > > it lives under some COPY_TIMESTAMP kind of flag. What that means though
> > > > is that a driver without such a capability would need to signal dropped
> > > > frames using some other mean.
> > > >
> > > > In userspace, the main use is to match the produced frame against a
> > > > userspace specific list of frames. At least this seems to be the case
> > > > in Gst and Chromium, since the userspace list contains a superset of
> > > > the metadata found in the v4l2_buffer.
> > > >
> > > > Now, using the produced timestamp, userspace can deduce frame that the
> > > > driver should have produced but didn't (could be a deadline case codec,
> > > > or simply the frames where corrupted). It's quite normal for a codec to
> > > > just keep parsing until it finally find something it can decode.
> > > >
> > > > That's at least one way to do it, but there is other possible
> > > > mechanism. The sequence number could be used, or even producing buffers
> > > > with the ERROR flag set. What matters is just to give userspace a way
> > > > to clear these frames, which would simply grow userspace memory usage
> > > > over time.
> > >
> > > Is it just me or we were missing some consistent error handling then?
> > >
> > > I feel like the drivers should definitely return the bitstream buffers
> > > with the ERROR flag, if there is a decode failure of data in the
> > > buffer. Still, that could become more complicated if there is more
> > > than 1 frame in that piece of bitstream, but only 1 frame is corrupted
> > > (or whatever).
> >
> > I agree, but it might be more difficult then it looks (even FFMPEG does
> > not do that). I believe the code that is processing the bitstream in
> > stateful codecs is mostly unrelated from the code actually doing the
> > decoding. So what might happen is that the decoding part will never
> > actually allocate a buffer for the skipped / corrupted part of the
> > bitstream. Also, the notion of a skipped frame is not always evident in
> > when parsing H264 or HEVC NALs. There is still a full page of text just
> > to explain how to detect that start of a new frame.
>
> Right. I don't think we can guarantee that we can always correlate the
> errors with exact buffers and so I phrased the paragraph about errors
> in v3 in a bit more conservative way:
>
> See the snapshot hosted by Hans (thanks!):
> https://hverkuil.home.xs4all.nl/codec-api/uapi/v4l/dev-decoder.html#decoding
>
> >
> > Yet, it would be interesting to study the firmwares we have and see
> > what they provide that would help making decode errors more explicit.
> >
>
> Agreed.
>
> > >
> > > Another case is when the bitstream, even if corrupted, is still enough
> > > to produce some output. My intuition tells me that such CAPTURE buffer
> > > should be then returned with the ERROR flag. That wouldn't still be
> > > enough for any more sophisticated userspace error concealment, but
> > > could still let the userspace know to perhaps drop the frame.
> >
> > You mean if a frame was concealed (typically the frame was decoded from
> > a closed by reference instead of the expected reference). That is
> > something signalled by FFPEG. We should document this possibility. I
> > actually have something implemented in GStreamer. Basically if we have
> > the ERROR flag with a payload size smaller then expected, I drop the
> > frame and produce a drop event message, while if I have a frame with
> > ERROR flag but of the right payload size, I assume it is corrupted, and
> > simply flag it as corrupted, leaving to the application the decision to
> > display it or not. This is a case that used to happen with some UVC
> > cameras (though some have been fixed, and the UVC camera should drop
> > smaller payload size buffers now).
>
> I think it's a behavior that makes the most sense indeed.
>
> Technically one could also consider the case of 0 < bytesused <
> sizeimage, which could mean that only a part of the frame is in the
> buffer. An application could try to blend it with previous frame using
> some concealing algorithms. I haven't seen an app that could do such
> thing, though.

Actually some interesting thought on this. I don't think the existing
drivers would return any CAPTURE buffers on errors right now, but just
return the OUTPUT buffer that failed to be decoded. Should we change
this so that always one CAPTURE buffer with the ERROR flag is
returned, to signal the application that a frame was potentially
dropped?

Best regards,
Tomasz

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface
  2019-04-09  9:47                     ` Tomasz Figa
@ 2019-04-10  9:26                       ` Hans Verkuil
  0 siblings, 0 replies; 41+ messages in thread
From: Hans Verkuil @ 2019-04-10  9:26 UTC (permalink / raw)
  To: Tomasz Figa, Nicolas Dufresne
  Cc: Linux Media Mailing List, Linux Kernel Mailing List,
	Mauro Carvalho Chehab, Pawel Osciak, Alexandre Courbot,
	Kamil Debski, Andrzej Hajda, Kyungmin Park, Jeongtae Park,
	Philipp Zabel, Tiffany Lin (林慧珊),
	Andrew-CT Chen (陳智迪),
	Stanimir Varbanov, Todor Tomov, Paul Kocialkowski,
	Laurent Pinchart, dave.stevenson, Ezequiel Garcia,
	Maxime Jourdan

On 4/9/19 11:47 AM, Tomasz Figa wrote:
> On Wed, Feb 6, 2019 at 2:35 PM Tomasz Figa <tfiga@chromium.org> wrote:
>>
>> On Wed, Jan 30, 2019 at 1:02 PM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>>>
>>> Le vendredi 25 janvier 2019 à 12:27 +0900, Tomasz Figa a écrit :
>>>> On Fri, Jan 25, 2019 at 4:55 AM Nicolas Dufresne <nicolas@ndufresne.ca> wrote:
>>>>> Le jeudi 24 janvier 2019 à 18:06 +0900, Tomasz Figa a écrit :
>>>>>>> Actually I just realized the last point might not even be achievable
>>>>>>> for some of the decoders (s5p-mfc, mtk-vcodec), as they don't report
>>>>>>> which frame originates from which bitstream buffer and the driver just
>>>>>>> picks the most recently consumed OUTPUT buffer to copy the timestamp
>>>>>>> from. (s5p-mfc actually "forgets" to set the timestamp in some cases
>>>>>>> too...)
>>>>>>>
>>>>>>> I need to think a bit more about this.
>>>>>>
>>>>>> Actually I misread the code. Both s5p-mfc and mtk-vcodec seem to
>>>>>> correctly match the buffers.
>>>>>
>>>>> Ok good, since otherwise it would have been a regression in MFC driver.
>>>>> This timestamp passing thing could in theory be made optional though,
>>>>> it lives under some COPY_TIMESTAMP kind of flag. What that means though
>>>>> is that a driver without such a capability would need to signal dropped
>>>>> frames using some other mean.
>>>>>
>>>>> In userspace, the main use is to match the produced frame against a
>>>>> userspace specific list of frames. At least this seems to be the case
>>>>> in Gst and Chromium, since the userspace list contains a superset of
>>>>> the metadata found in the v4l2_buffer.
>>>>>
>>>>> Now, using the produced timestamp, userspace can deduce frame that the
>>>>> driver should have produced but didn't (could be a deadline case codec,
>>>>> or simply the frames where corrupted). It's quite normal for a codec to
>>>>> just keep parsing until it finally find something it can decode.
>>>>>
>>>>> That's at least one way to do it, but there is other possible
>>>>> mechanism. The sequence number could be used, or even producing buffers
>>>>> with the ERROR flag set. What matters is just to give userspace a way
>>>>> to clear these frames, which would simply grow userspace memory usage
>>>>> over time.
>>>>
>>>> Is it just me or we were missing some consistent error handling then?
>>>>
>>>> I feel like the drivers should definitely return the bitstream buffers
>>>> with the ERROR flag, if there is a decode failure of data in the
>>>> buffer. Still, that could become more complicated if there is more
>>>> than 1 frame in that piece of bitstream, but only 1 frame is corrupted
>>>> (or whatever).
>>>
>>> I agree, but it might be more difficult then it looks (even FFMPEG does
>>> not do that). I believe the code that is processing the bitstream in
>>> stateful codecs is mostly unrelated from the code actually doing the
>>> decoding. So what might happen is that the decoding part will never
>>> actually allocate a buffer for the skipped / corrupted part of the
>>> bitstream. Also, the notion of a skipped frame is not always evident in
>>> when parsing H264 or HEVC NALs. There is still a full page of text just
>>> to explain how to detect that start of a new frame.
>>
>> Right. I don't think we can guarantee that we can always correlate the
>> errors with exact buffers and so I phrased the paragraph about errors
>> in v3 in a bit more conservative way:
>>
>> See the snapshot hosted by Hans (thanks!):
>> https://hverkuil.home.xs4all.nl/codec-api/uapi/v4l/dev-decoder.html#decoding
>>
>>>
>>> Yet, it would be interesting to study the firmwares we have and see
>>> what they provide that would help making decode errors more explicit.
>>>
>>
>> Agreed.
>>
>>>>
>>>> Another case is when the bitstream, even if corrupted, is still enough
>>>> to produce some output. My intuition tells me that such CAPTURE buffer
>>>> should be then returned with the ERROR flag. That wouldn't still be
>>>> enough for any more sophisticated userspace error concealment, but
>>>> could still let the userspace know to perhaps drop the frame.
>>>
>>> You mean if a frame was concealed (typically the frame was decoded from
>>> a closed by reference instead of the expected reference). That is
>>> something signalled by FFPEG. We should document this possibility. I
>>> actually have something implemented in GStreamer. Basically if we have
>>> the ERROR flag with a payload size smaller then expected, I drop the
>>> frame and produce a drop event message, while if I have a frame with
>>> ERROR flag but of the right payload size, I assume it is corrupted, and
>>> simply flag it as corrupted, leaving to the application the decision to
>>> display it or not. This is a case that used to happen with some UVC
>>> cameras (though some have been fixed, and the UVC camera should drop
>>> smaller payload size buffers now).
>>
>> I think it's a behavior that makes the most sense indeed.
>>
>> Technically one could also consider the case of 0 < bytesused <
>> sizeimage, which could mean that only a part of the frame is in the
>> buffer. An application could try to blend it with previous frame using
>> some concealing algorithms. I haven't seen an app that could do such
>> thing, though.
> 
> Actually some interesting thought on this. I don't think the existing
> drivers would return any CAPTURE buffers on errors right now, but just
> return the OUTPUT buffer that failed to be decoded. Should we change
> this so that always one CAPTURE buffer with the ERROR flag is
> returned, to signal the application that a frame was potentially
> dropped?

It's what vicodec does. Since device_run is called with both an OUTPUT
and a CAPTURE buffer it would make sense to return both buffers with the
ERROR flag if decoding (or encoding for that matter) fails.

But I would also be fine if this is seen as driver specific. Since an
OUTPUT buffer can decode to multiple CAPTURE buffers it may not be
all that useful in practice.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2019-04-10  9:26 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-10-22 14:48 [PATCH v2 0/2] Document memory-to-memory video codec interfaces Tomasz Figa
2018-10-22 14:48 ` [PATCH v2 1/2] media: docs-rst: Document memory-to-memory video decoder interface Tomasz Figa
2018-10-29  9:45   ` Stanimir Varbanov
2018-10-29 10:06     ` Tomasz Figa
2018-10-29 10:07       ` Tomasz Figa
2018-11-12 11:37   ` Hans Verkuil
2019-01-22 10:02     ` Tomasz Figa
2019-01-22 14:47       ` Hans Verkuil
2019-01-23  5:27         ` Tomasz Figa
2019-01-23  8:10           ` Hans Verkuil
2019-01-24  9:06           ` Tomasz Figa
2019-01-24 19:55             ` Nicolas Dufresne
2019-01-25  3:27               ` Tomasz Figa
2019-01-30  4:02                 ` Nicolas Dufresne
2019-02-06  5:35                   ` Tomasz Figa
2019-04-09  9:47                     ` Tomasz Figa
2019-04-10  9:26                       ` Hans Verkuil
2018-11-12 15:04   ` Stanimir Varbanov
2018-11-15 14:34   ` Hans Verkuil
2018-11-17  4:31     ` Nicolas Dufresne
2018-11-17 11:43       ` Hans Verkuil
2018-11-18  1:25         ` Nicolas Dufresne
2018-10-22 14:49 ` [PATCH v2 2/2] media: docs-rst: Document memory-to-memory video encoder interface Tomasz Figa
2018-11-12 13:23   ` Hans Verkuil
2018-11-17  4:18     ` Nicolas Dufresne
2018-11-17 11:37       ` Hans Verkuil
2018-11-18  1:34         ` Nicolas Dufresne
2019-01-23 10:02           ` Tomasz Figa
2019-01-24 20:02             ` Nicolas Dufresne
2019-01-23 10:00         ` Tomasz Figa
2019-01-23 11:28           ` Hans Verkuil
2019-01-24 20:04             ` Nicolas Dufresne
2019-01-25  3:29               ` Tomasz Figa
2019-01-23  9:52     ` Tomasz Figa
2019-01-23 13:04       ` Hans Verkuil
2019-01-24 20:14         ` Nicolas Dufresne
2019-01-25  3:59           ` Tomasz Figa
2019-01-30 15:06             ` Nicolas Dufresne
2019-02-06  5:49               ` Tomasz Figa
2018-10-22 15:41 ` [PATCH v2 0/2] Document memory-to-memory video codec interfaces Hans Verkuil
2018-10-23  0:54   ` Tomasz Figa

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).