From: Hans Verkuil <hverkuil@xs4all.nl>
To: Alexandre Courbot <acourbot@chromium.org>,
	Nicolas Dufresne <nicolas@ndufresne.ca>
Cc: Paul Kocialkowski <paul.kocialkowski@bootlin.com>,
	Tomasz Figa <tfiga@chromium.org>,
	Maxime Ripard <maxime.ripard@bootlin.com>,
	Dafna Hirschfeld <dafna3@gmail.com>,
	Mauro Carvalho Chehab <mchehab@kernel.org>,
	Linux Media Mailing List <linux-media@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v4] media: docs-rst: Document m2m stateless video decoder interface
Date: Fri, 26 Apr 2019 16:18:38 +0200
Message-ID: <793af82c-6b37-6f69-648e-2cd2a2e87645@xs4all.nl>
In-Reply-To: <CAPBb6MVG+3jQcw3AuhYDYCZ0YJ0aX=TmEuM5izh12GLw9V6B8Q@mail.gmail.com>

On 4/16/19 9:22 AM, Alexandre Courbot wrote:

<snip>

> Thanks for this great discussion. Let me try to summarize the status
> of this thread + the IRC discussion and add my own thoughts:
> 
> Proper support for multiple decoding units (e.g. H.264 slices) per
> frame should not be an afterthought; compliance with encoded formats
> depends on it, and the benefit of lower latency is a significant
> consideration for vendors.
> 
> m2m, which we use for all stateless codecs, has a strong assumption
> that one OUTPUT buffer consumed results in one CAPTURE buffer being
> produced. This assumption can however be overridden: at least the
> venus driver does so to implement the stateful specification.
> 
> So we need a way to specify frame boundaries when submitting encoded
> content to the driver. One request should contain a single OUTPUT
> buffer, containing a single decoding unit, but we need a way to
> specify whether the driver should directly produce a CAPTURE buffer
> from this request, or keep using the same CAPTURE buffer with
> subsequent requests.
> 
> I can think of two ways this can be expressed:
> 1) We keep the current m2m behavior as the default (a CAPTURE buffer
> is produced), and add a flag asking the driver to change that
> behavior, i.e. to hold on to the CAPTURE buffer and reuse it with the
> next request(s);
> 2) We specify that no CAPTURE buffer is produced by default, unless a
> flag asking for one is specified.
> 
> The flag could be specified in one of two ways:
> a) as a new v4l2_buffer.flag for the OUTPUT buffer;
> b) as a dedicated control, either format-specific or common to all codecs.
> 
> I tend to favor 2) and b) for this, for the reason that with H.264 at
> least, user-space does not know whether a slice is the last slice of a
> frame until it starts parsing the next one, and we don't know when we
> will receive it. If we use a control to ask that a CAPTURE buffer be
> produced, we can always submit another request with only that control
> set once it is clear that the frame is complete (and not delay
> decoding meanwhile). In practice I am not that familiar with
> latency-sensitive streaming; maybe a smart streamer would just append
> an AUD NAL unit at the end of every frame, and we could thus submit
> the flag with the last slice without further delay?
> 
> An extra constraint to enforce would be that each decoding unit
> belonging to the same frame must be submitted with the same timestamp;
> otherwise the request submission would fail. We really need a
> framework to enforce all this at a higher level than individual
> drivers; once we reach an agreement I will start working on this.
> 
> Formats that do not support multiple decoding units per frame would
> reject any request that does not carry the end-of-frame information.
> 
> Anything missing / any further comment?
> 

After reading through this thread and a further IRC discussion I now
understand the problem. There are several ways this can be solved,
but I think this is the easiest:

Introduce a new V4L2_BUF_FLAG_HOLD_CAPTURE_BUFFER flag.

If this flag is set on the OUTPUT buffer, then the CAPTURE buffer is
not marked as done after the OUTPUT buffer has been processed.

If an OUTPUT buffer is queued with a different timestamp than the one
used for the currently held CAPTURE buffer, then that CAPTURE buffer
is marked as done before processing of the new OUTPUT buffer starts.

In other words, for slicing you can just always set this flag and
group the slices by the OUTPUT timestamp. If you know that you have
reached the last slice of a frame, then you can optionally clear the
flag to ensure the CAPTURE buffer is marked done without having to
wait for the first slice of the next frame to arrive (see the sketch
below).
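
To make the intended usage concrete, here is a minimal userspace
sketch. It uses the flag name proposed above (not yet in the UAPI
headers, so the #define and its value are illustrative), a
single-planar MMAP OUTPUT queue, and omits the request API plumbing
that a real stateless decoder client would also need; the fd, index
and slice bookkeeping are hypothetical.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Proposed flag; value is a placeholder for illustration only. */
#define V4L2_BUF_FLAG_HOLD_CAPTURE_BUFFER	0x00000200

/* Queue one slice on the OUTPUT queue. All slices of one frame carry
 * the same timestamp; the proposed flag tells the driver to hold the
 * CAPTURE buffer until the timestamp changes (or the flag is absent).
 */
static int queue_slice(int fd, __u32 index, __u32 slice_size,
		       const struct timeval *frame_ts, int last_slice)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	buf.bytesused = slice_size;
	buf.timestamp = *frame_ts;

	/* Hold the CAPTURE buffer unless this is known to be the last
	 * slice of the frame (e.g. an AUD NAL unit follows it).
	 */
	if (!last_slice)
		buf.flags |= V4L2_BUF_FLAG_HOLD_CAPTURE_BUFFER;

	return ioctl(fd, VIDIOC_QBUF, &buf);
}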

A potential disadvantage of this approach is that it relies on the
OUTPUT timestamp being the same for all slices of the same frame,
which sounds like a reasonable requirement to me.

In addition, add a V4L2_BUF_CAP_SUPPORTS_HOLD_CAPTURE_BUFFER
capability to signal support for this flag, which userspace could
probe as sketched below.
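
Since VIDIOC_REQBUFS already reports per-queue capabilities via
struct v4l2_requestbuffers, a client could probe for the proposed
capability like this. Only the capabilities field is existing UAPI;
the capability name follows this proposal and its value is an
illustrative placeholder.

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Proposed capability bit; value is a placeholder for illustration. */
#define V4L2_BUF_CAP_SUPPORTS_HOLD_CAPTURE_BUFFER	0x00000020

/* Returns 1 if the driver reports the proposed capability. A count
 * of 0 frees any allocated buffers and still fills in capabilities,
 * so this works as a pure query on an idle queue.
 */
static int supports_hold_capture_buffer(int fd)
{
	struct v4l2_requestbuffers reqbufs;

	memset(&reqbufs, 0, sizeof(reqbufs));
	reqbufs.count = 0;
	reqbufs.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	reqbufs.memory = V4L2_MEMORY_MMAP;

	if (ioctl(fd, VIDIOC_REQBUFS, &reqbufs))
		return 0;

	return !!(reqbufs.capabilities &
		  V4L2_BUF_CAP_SUPPORTS_HOLD_CAPTURE_BUFFER);
}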

I think this can be fairly easily implemented in v4l2-mem2mem.c; a
rough sketch of the core logic follows.
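
This is only illustrative: v4l2_m2m_buf_done() and the vb2 buffer
fields are the existing in-kernel API, but the function names, the
flag name and the held-buffer bookkeeping (which would really live in
per-context state, e.g. struct v4l2_m2m_ctx) are hypothetical.

#include <media/v4l2-mem2mem.h>

/* Hypothetical helpers standing in for the real dequeue and
 * hardware-specific decode paths.
 */
static struct vb2_v4l2_buffer *get_next_cap_buf(void);
static void decode_slice(struct vb2_v4l2_buffer *out_buf,
			 struct vb2_v4l2_buffer *cap_buf);

static struct vb2_v4l2_buffer *held_cap_buf;	/* per-context in reality */

/* Called when an OUTPUT buffer is about to be processed. */
static void m2m_process_out_buf(struct vb2_v4l2_buffer *out_buf)
{
	struct vb2_v4l2_buffer *cap_buf;

	/* A held CAPTURE buffer with a different timestamp belongs to
	 * the previous frame: return it to userspace first.
	 */
	if (held_cap_buf &&
	    held_cap_buf->vb2_buf.timestamp != out_buf->vb2_buf.timestamp) {
		v4l2_m2m_buf_done(held_cap_buf, VB2_BUF_STATE_DONE);
		held_cap_buf = NULL;
	}

	/* Decode into the held buffer if there is one, else a fresh one. */
	cap_buf = held_cap_buf ? held_cap_buf : get_next_cap_buf();
	decode_slice(out_buf, cap_buf);

	v4l2_m2m_buf_done(out_buf, VB2_BUF_STATE_DONE);

	if (out_buf->flags & V4L2_BUF_FLAG_HOLD_CAPTURE_BUFFER) {
		held_cap_buf = cap_buf;		/* keep for the next slice */
	} else {
		v4l2_m2m_buf_done(cap_buf, VB2_BUF_STATE_DONE);
		held_cap_buf = NULL;
	}
}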

In addition, this approach is not specific to codecs; it can be
used elsewhere as well (composing multiple output buffers into one
capture buffer is one use-case that comes to mind).

Comments? Other ideas?

Regards,

	Hans

