* [PATCHv2 00/12] vivid: Virtual Video Test Driver
@ 2014-08-25 11:30 Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 01/12] vb2: fix multiplanar read() with non-zero data_offset Hans Verkuil
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media

In July I posted a 'vivi, the next generation' patch series:

https://www.mail-archive.com/linux-media@vger.kernel.org/msg76758.html

However, since that time I realized that rather than building on top of the
old vivi, it would be much better to create a new, much more generic driver.
This vivid test driver no longer emulates just video capture, but also
video output, vbi capture/output, radio receivers/transmitters and SDR capture.
There is even support for testing capture and output overlays.

Up to 64 vivid instances can be created, each with up to 16 inputs and 16 outputs.

Each input can be a webcam, TV capture device, S-Video capture device or an HDMI
capture device. Each output can be an S-Video output device or an HDMI output
device.

These inputs and outputs behave exactly as real hardware devices would. This
allows you to use this driver as a test input for application development, since
you can test the various features without requiring special hardware.

Some of the features supported by this driver are:

- Support for read()/write(), MMAP, USERPTR and DMABUF streaming I/O.
- A large list of test patterns and variations thereof
- Working brightness, contrast, saturation and hue controls
- Support for the alpha color component
- Full colorspace support, including limited/full RGB range
- All possible control types are present
- Support for various pixel aspect ratios and video aspect ratios
- Error injection to test what happens if errors occur
- Supports crop/compose/scale in any combination for both input and output
- Can emulate up to 4K resolutions
- All Field settings are supported for testing interlaced capturing
- Supports all standard YUV and RGB formats, including two multiplanar YUV formats
- Raw and Sliced VBI capture and output support
- Radio receiver and transmitter support, including RDS support
- Software defined radio (SDR) support
- Capture and output overlay support

This driver is big, but I believe that for the most part I managed to keep
the code clean (I'm biased, though). I've split it up into several parts to
make reviewing easier. The first patch is a vb2 fix I posted earlier, but
patchwork failed to pick it up (probably because it was missing a Signed-off-by
line), so I'm posting it again. The second patch is an extensive document
that describes the features currently implemented. After that the driver code
is posted, and in the last patch the driver is hooked into Kconfig/Makefile.

The goal is for this to go in for 3.18, so I expect I'll likely do at least
one more revision since I am still improving the driver and it will be a while
before we can merge code for v3.18.

As far as I am concerned the vivi driver can be removed once this driver is
merged.

Two questions which I am sure will be raised by reviewers:

1) Why add support for capture and output overlays? Isn't that obsolete?

First of all, we have drivers that support it and it is really nice to be
able to test whether it still works. I found several issues with overlay
support, some in the core, so at the very least this will help to prevent
regressions until the time comes that we actually remove this API.

Secondly, this driver was created not just to help developers test their
application code, but also to help in understanding and verifying the API. In
order to do that you need to be able to test it, which is difficult since
hardware that supports this is rare.

I have mentioned in the documentation that the overlay support is there
primarily for API testing and that its use in new drivers is questionable.

2) Why add video loop support, doesn't that make abuse possible?

I think video loop support is a great feature: it allows you to test video
output, since without it you have no idea what the video you give to the
driver actually looks like. So just from the perspective of testing your
application I believe this is an essential feature.

There are a few reasons why I think that this is unlikely to lead to abuse:

- the video loop functionality has to be enabled explicitly via a control of
  the video output device.
- the video capture and output resolutions and formats have to match exactly.
- by default the OSD text will be placed over the looped video. This can be
  turned off via a control of the video capture device.
- the number of resolutions is currently fixed to SDTV and the CEA-861 and
  VESA DMT timings. So 'random' resolutions are not supported. Although to
  be fair, this is something I intend to add. However, if I do that then I
  will require that the configured DV timings of the input and output are
  identical before the video loop is possible.

Taken altogether I do not think this is something that lends itself easily
to abuse, since this won't work out-of-the-box.

Regards,

	Hans

Changes since v1:

- Fixed 'sinus/cosinus' typo to sine/cosine.
- Various fixes all over the place.
- Moved all controls relating to test patterns, error injection, etc. to
  the 'Vivid Controls' control class.
- Rebased to v3.17-rc1.



* [PATCHv2 01/12] vb2: fix multiplanar read() with non-zero data_offset
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 02/12] vivid.txt: add documentation for the vivid driver Hans Verkuil
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

If this is a multiplanar buf_type and the plane we want to read has a
non-zero data_offset, then that data_offset was not taken into account.

Note that read() or write() for formats with more than one plane is currently
not allowed, hence the use of 'planes[0]' since this is only relevant for a
single-plane format.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/v4l2-core/videobuf2-core.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
index 5b808e2..7e6aff6 100644
--- a/drivers/media/v4l2-core/videobuf2-core.c
+++ b/drivers/media/v4l2-core/videobuf2-core.c
@@ -2955,6 +2955,12 @@ static size_t __vb2_perform_fileio(struct vb2_queue *q, char __user *data, size_
 		buf->queued = 0;
 		buf->size = read ? vb2_get_plane_payload(q->bufs[index], 0)
 				 : vb2_plane_size(q->bufs[index], 0);
+		/* Compensate for data_offset on read in the multiplanar case. */
+		if (is_multiplanar && read &&
+		    fileio->b.m.planes[0].data_offset < buf->size) {
+			buf->pos = fileio->b.m.planes[0].data_offset;
+			buf->size -= buf->pos;
+		}
 	} else {
 		buf = &fileio->bufs[index];
 	}
-- 
2.0.1



* [PATCHv2 02/12] vivid.txt: add documentation for the vivid driver
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 01/12] vb2: fix multiplanar read() with non-zero data_offset Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 03/12] vivid: add core driver code Hans Verkuil
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

The vivid Virtual Video Test Driver helps with testing V4L2 applications
and can emulate V4L2 hardware. Add the documentation for this driver.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 Documentation/video4linux/vivid.txt | 1109 +++++++++++++++++++++++++++++++++++
 1 file changed, 1109 insertions(+)
 create mode 100644 Documentation/video4linux/vivid.txt

diff --git a/Documentation/video4linux/vivid.txt b/Documentation/video4linux/vivid.txt
new file mode 100644
index 0000000..4f1d442
--- /dev/null
+++ b/Documentation/video4linux/vivid.txt
@@ -0,0 +1,1109 @@
+vivid: Virtual Video Test Driver
+================================
+
+This driver emulates video4linux hardware of various types: video capture, video
+output, vbi capture and output, radio receivers and transmitters and a software
+defined radio receiver. In addition a simple framebuffer device is available for
+testing capture and output overlays.
+
+Up to 64 vivid instances can be created, each with up to 16 inputs and 16 outputs.
+
+Each input can be a webcam, TV capture device, S-Video capture device or an HDMI
+capture device. Each output can be an S-Video output device or an HDMI output
+device.
+
+These inputs and outputs behave exactly as real hardware devices would. This
+allows you to use this driver as a test input for application development, since
+you can test the various features without requiring special hardware.
+
+This document describes the features implemented by this driver:
+
+- Support for read()/write(), MMAP, USERPTR and DMABUF streaming I/O.
+- A large list of test patterns and variations thereof
+- Working brightness, contrast, saturation and hue controls
+- Support for the alpha color component
+- Full colorspace support, including limited/full RGB range
+- All possible control types are present
+- Support for various pixel aspect ratios and video aspect ratios
+- Error injection to test what happens if errors occur
+- Supports crop/compose/scale in any combination for both input and output
+- Can emulate up to 4K resolutions
+- All Field settings are supported for testing interlaced capturing
+- Supports all standard YUV and RGB formats, including two multiplanar YUV formats
+- Raw and Sliced VBI capture and output support
+- Radio receiver and transmitter support, including RDS support
+- Software defined radio (SDR) support
+- Capture and output overlay support
+
+These features will be described in more detail below.
+
+
+Table of Contents
+-----------------
+
+Section 1: Configuring the driver
+Section 2: Video Capture
+Section 2.1: Webcam Input
+Section 2.2: TV and S-Video Inputs
+Section 2.3: HDMI Input
+Section 3: Video Output
+Section 3.1: S-Video Output
+Section 3.2: HDMI Output
+Section 4: VBI Capture
+Section 5: VBI Output
+Section 6: Radio Receiver
+Section 7: Radio Transmitter
+Section 8: Software Defined Radio Receiver
+Section 9: Controls
+Section 9.1: User Controls - Test Controls
+Section 9.2: User Controls - Video Capture
+Section 9.3: User Controls - Audio
+Section 9.4: Vivid Controls
+Section 9.4.1: Test Pattern Controls
+Section 9.4.2: Capture Feature Selection Controls
+Section 9.4.3: Output Feature Selection Controls
+Section 9.4.4: Error Injection Controls
+Section 9.4.5: VBI Raw Capture Controls
+Section 9.5: Digital Video Controls
+Section 9.6: FM Radio Receiver Controls
+Section 9.7: FM Radio Modulator Controls
+Section 10: Video, VBI and RDS Looping
+Section 10.1: Video and Sliced VBI looping
+Section 10.2: Radio & RDS Looping
+Section 11: Cropping, Composing, Scaling
+Section 12: Formats
+Section 13: Capture Overlay
+Section 14: Output Overlay
+Section 15: Some Future Improvements
+
+
+Section 1: Configuring the driver
+---------------------------------
+
+By default the driver will create a single instance that has a video capture
+device with webcam, TV, S-Video and HDMI inputs, a video output device with
+S-Video and HDMI outputs, one vbi capture device, one vbi output device, one
+radio receiver device, one radio transmitter device and one SDR device.
+
+The number of instances, devices, video inputs and outputs and their types are
+all configurable using the following module options:
+
+n_devs: number of driver instances to create. By default set to 1. Up to 64
+	instances can be created.
+
+node_types: which devices should each driver instance create. An array of
+	hexadecimal values, one for each instance. The default is 0x1d3d.
+	Each value is a bitmask with the following meaning:
+		bit 0: Video Capture node
+		bit 2-3: VBI Capture node: 0 = none, 1 = raw vbi, 2 = sliced vbi, 3 = both
+		bit 4: Radio Receiver node
+		bit 5: Software Defined Radio Receiver node
+		bit 8: Video Output node
+		bit 10-11: VBI Output node: 0 = none, 1 = raw vbi, 2 = sliced vbi, 3 = both
+		bit 12: Radio Transmitter node
+		bit 16: Framebuffer for testing overlays
+
+	So to create four instances, the first two with just one video capture
+	device and the last two with just one video output device, you would pass
+	these module options to vivid:
+
+		n_devs=4 node_types=0x1,0x1,0x100,0x100
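+
+	As another (hypothetical) example: to keep the default device set for a
+	single instance (0x1d3d) and also enable the test framebuffer for
+	overlay testing (bit 16), the two masks can simply be OR-ed together:
+
+		n_devs=1 node_types=0x11d3d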
+
+num_inputs: the number of inputs, one for each instance. By default 4 inputs
+	are created for each video capture device. At most 16 inputs can be created,
+	and there must be at least one.
+
+input_types: the input types for each instance, the default is 0xe4. This defines
+	what the type of each input is when the inputs are created for each driver
+	instance. This is a hexadecimal value with up to 16 pairs of bits: each
+	pair gives the type, with bits 0-1 mapping to input 0, bits 2-3 to input 1,
+	and so on up to bits 30-31 which map to input 15. Each pair of bits has the
+	following meaning:
+
+		00: this is a webcam input
+		01: this is a TV tuner input
+		10: this is an S-Video input
+		11: this is an HDMI input
+
+	So to create a video capture device with 8 inputs where input 0 is a TV
+	tuner, inputs 1-3 are S-Video inputs and inputs 4-7 are HDMI inputs you
+	would use the following module options:
+
+		num_inputs=8 input_types=0xffa9
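+
+	One way to double-check such a mask (a shell arithmetic sketch, assuming
+	a bash-like shell) is to shift each 2-bit input type into its place:
+
+		$ printf '0x%x\n' $((1 | 2<<2 | 2<<4 | 2<<6 | 3<<8 | 3<<10 | 3<<12 | 3<<14))
+		0xffa9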
+
+num_outputs: the number of outputs, one for each instance. By default 2 outputs
+	are created for each video output device. At most 16 outputs can be
+	created, and there must be at least one.
+
+output_types: the output types for each instance, the default is 0x02. This defines
+	what the type of each output is when the outputs are created for each
+	driver instance. This is a hexadecimal value with up to 16 bits: each bit
+	gives the type, with bit 0 mapping to output 0, bit 1 to output 1, and so
+	on up to bit 15 which maps to output 15. The meaning of each bit is as follows:
+
+		0: this is an S-Video output
+		1: this is an HDMI output
+
+	So to create a video output device with 8 outputs where outputs 0-3 are
+	S-Video outputs and outputs 4-7 are HDMI outputs you would use the
+	following module options:
+
+		num_outputs=8 output_types=0xf0
+
+vid_cap_nr: give the desired videoX start number for each video capture device.
+	The default is -1 which will just take the first free number. This allows
+	you to map capture video nodes to specific videoX device nodes. Example:
+
+		n_devs=4 vid_cap_nr=2,4,6,8
+
+	This will attempt to assign /dev/video2 for the video capture device of
+	the first vivid instance, video4 for the second, and so on up to video8
+	for the last instance. If that is not possible, then it will just take
+	the next free number.
+
+vid_out_nr: give the desired videoX start number for each video output device.
+        The default is -1 which will just take the first free number.
+
+vbi_cap_nr: give the desired vbiX start number for each vbi capture device.
+        The default is -1 which will just take the first free number.
+
+vbi_out_nr: give the desired vbiX start number for each vbi output device.
+        The default is -1 which will just take the first free number.
+
+radio_rx_nr: give the desired radioX start number for each radio receiver device.
+        The default is -1 which will just take the first free number.
+
+radio_tx_nr: give the desired radioX start number for each radio transmitter
+	device. The default is -1 which will just take the first free number.
+
+sdr_cap_nr: give the desired swradioX start number for each SDR capture device.
+        The default is -1 which will just take the first free number.
+
+ccs_cap_mode: specify the allowed video capture crop/compose/scaling combination
+	for each driver instance. Video capture devices can have any combination
+	of cropping, composing and scaling capabilities and this will tell the
+	vivid driver which of those it should emulate. By default the user can
+	select this through controls.
+
+	The value is either -1 (controlled by the user) or a set of three bits,
+	each enabling (1) or disabling (0) one of the features:
+
+		bit 0: Enable crop support. Cropping will take only part of the
+		       incoming picture.
+		bit 1: Enable compose support. Composing will copy the incoming
+		       picture into a larger buffer.
+		bit 2: Enable scaling support. Scaling can scale the incoming
+		       picture. The scaler of the vivid driver can scale up
+		       or down by at most a factor of four. The scaler is
+		       very simple and low-quality. Simplicity and speed were
+		       key, not quality.
+
+	Note that this value is ignored by webcam inputs: those enumerate
+	discrete framesizes and that is incompatible with cropping, composing
+	or scaling.
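+
+	For example (a hypothetical invocation), to hardcode the capture side of
+	the first instance to support cropping and scaling but not composing
+	(bits 0 and 2), you would load the module with:
+
+		ccs_cap_mode=0x5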
+
+ccs_out_mode: specify the allowed video output crop/compose/scaling combination
+	for each driver instance. Video output devices can have any combination
+	of cropping, composing and scaling capabilities and this will tell the
+	vivid driver which of those it should emulate. By default the user can
+	select this through controls.
+
+	The value is either -1 (controlled by the user) or a set of three bits,
+	each enabling (1) or disabling (0) one of the features:
+
+		bit 0: Enable crop support. Cropping will take only part of the
+		       outgoing buffer.
+		bit 1: Enable compose support. Composing will copy the incoming
+		       buffer into a larger picture frame.
+		bit 2: Enable scaling support. Scaling can scale the incoming
+		       buffer. The scaler of the vivid driver can scale up
+		       or down by at most a factor of four. The scaler is
+		       very simple and low-quality. Simplicity and speed were
+		       key, not quality.
+
+multiplanar: select whether each device instance supports multi-planar formats,
+	and thus the V4L2 multi-planar API. By default the first device instance
+	is single-planar, the second multi-planar, and it keeps alternating.
+
+	This module option can override that for each instance. Values are:
+
+		0: use alternating single and multi-planar devices.
+		1: this is a single-planar instance.
+		2: this is a multi-planar instance.
+
+vivid_debug: enable driver debugging info
+
+no_error_inj: if set, disable the error injecting controls. This option is
+	needed in order to run a tool like v4l2-compliance: such tools exercise
+	all controls, including a control like 'Disconnect' which emulates a USB
+	disconnect, making the device inaccessible so that all subsequent
+	v4l2-compliance tests will fail.
+
+	There may be other situations as well where you want to disable the
+	error injection support of vivid. When this option is set, then the
+	controls that select crop, compose and scale behavior are also
+	removed. Unless overridden by ccs_cap_mode and/or ccs_out_mode the
+	driver will then default to enabling cropping, composing and scaling.
+
+Taken together, all these module options allow you to precisely customize
+the driver behavior and test your application with all sorts of permutations.
+It is also well suited to emulating hardware that is not yet available, e.g.
+when developing software for a new upcoming device.
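+
+As a combined (and purely hypothetical) example, the following creates a single
+multiplanar capture-only instance with two HDMI inputs on /dev/video10 and with
+error injection disabled:
+
+	$ sudo modprobe vivid n_devs=1 node_types=0x1 num_inputs=2 \
+		input_types=0xf vid_cap_nr=10 multiplanar=2 no_error_inj=1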
+
+
+Section 2: Video Capture
+------------------------
+
+This is probably the most frequently used feature. The video capture device
+can be configured by using the module options num_inputs, input_types and
+ccs_cap_mode (see section 1 for more detailed information), but by default
+four inputs are configured: a webcam, a TV tuner, an S-Video and an HDMI
+input, one input for each input type. Those are described in more detail
+below.
+
+Special attention has been given to the rate at which new frames become
+available. The jitter will be around 1 jiffie (that depends on the HZ
+configuration of your kernel, so usually 1/100, 1/250 or 1/1000 of a second),
+but the long-term behavior exactly follows the framerate. So a framerate
+of 59.94 Hz is really different from 60 Hz. If the framerate exceeds your
+kernel's HZ value, then you will get dropped frames, but the frame/field
+sequence counting keeps track of that, so the sequence count will skip
+whenever frames are dropped.
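+
+A quick way to observe this (hypothetical commands, assuming v4l2-ctl from
+v4l-utils is installed and /dev/video0 is a vivid capture device) is to stream
+for a while and watch the frame rate that v4l2-ctl reports:
+
+	$ v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=300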
+
+
+Section 2.1: Webcam Input
+-------------------------
+
+The webcam input supports three framesizes: 320x180, 640x360 and 1280x720. It
+supports frames per second settings of 10, 15, 25, 30, 50 and 60 fps. Which ones
+are available depends on the chosen framesize: the larger the framesize, the
+lower the maximum frames per second.
+
+The initially selected colorspace when you switch to the webcam input will be
+sRGB.
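+
+With the webcam input selected, the supported framesizes and frame intervals
+can be listed with v4l2-ctl (a hypothetical example, assuming /dev/video0 is
+the vivid capture device):
+
+	$ v4l2-ctl -d /dev/video0 --list-formats-ext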
+
+
+Section 2.2: TV and S-Video Inputs
+----------------------------------
+
+The only difference between the TV and S-Video input is that the TV has a
+tuner. Otherwise they behave identically.
+
+These inputs support audio inputs as well: one TV and one Line-In. They
+both support all TV standards. If the standard is queried, then the Vivid
+controls 'Standard Signal Mode' and 'Standard' determine what
+the result will be.
+
+These inputs support all combinations of the field setting. Special care has
+been taken to faithfully reproduce how fields are handled for the different
+TV standards. This is particularly noticeable when generating a horizontally
+moving image, so that the temporal effect of using interlaced formats becomes clearly
+visible. For 50 Hz standards the top field is the oldest and the bottom field
+is the newest in time. For 60 Hz standards that is reversed: the bottom field
+is the oldest and the top field is the newest in time.
+
+When you start capturing in V4L2_FIELD_ALTERNATE mode the first buffer will
+contain the top field for 50 Hz standards and the bottom field for 60 Hz
+standards. This is what capture hardware does as well.
+
+Finally, for PAL/SECAM standards the first half of the top line contains noise.
+This simulates the Wide Screen Signal that is commonly placed there.
+
+The initially selected colorspace when you switch to the TV or S-Video input
+will be SMPTE-170M.
+
+The pixel aspect ratio will depend on the TV standard. The video aspect ratio
+can be selected through the 'Standard Aspect Ratio' Vivid control.
+Choices are '4x3', '16x9' which will give letterboxed widescreen video and
+'16x9 Anamorphic' which will give full screen squashed anamorphic widescreen
+video that will need to be scaled accordingly.
+
+The TV 'tuner' supports a frequency range of 44-958 MHz. Channels are available
+every 6 MHz, starting from 49.25 MHz. For each channel the generated image
+will be in color for the +/- 0.25 MHz around it, and in grayscale for
++/- 1 MHz around the channel. Beyond that it is just noise. The VIDIOC_G_TUNER
+ioctl will return 100% signal strength for +/- 0.25 MHz and 50% for +/- 1 MHz.
+It will also return correct afc values to show whether the frequency is too
+low or too high.
+
+The audio subchannels that are returned are MONO for the +/- 1 MHz range around
+a valid channel frequency. When the frequency is within +/- 0.25 MHz of the
+channel it will return one of MONO, STEREO, MONO | SAP (for NTSC) or
+LANG1 | LANG2 (for others), or STEREO | SAP.
+
+Which one is returned depends on the chosen channel: each next valid channel
+will cycle through the possible audio subchannel combinations. This allows
+you to test the various combinations by just switching channels.
+
+Finally, for these inputs the v4l2_timecode struct is filled in in the
+dequeued v4l2_buffer struct.
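+
+As an example (hypothetical commands, assuming the default input layout where
+input 1 is the TV tuner), you can select the TV input, tune to the first color
+channel at 49.25 MHz and inspect the reported signal strength and audio
+subchannels with:
+
+	$ v4l2-ctl -d /dev/video0 -i1
+	$ v4l2-ctl -d /dev/video0 --set-freq=49.25 --get-tuner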
+
+
+Section 2.3: HDMI Input
+-----------------------
+
+The HDMI input supports all CEA-861 and DMT timings, both progressive and
+interlaced, for pixelclock frequencies between 25 and 600 MHz. The field
+mode for interlaced formats is always V4L2_FIELD_ALTERNATE. For HDMI the
+field order is always top field first, and when you start capturing an
+interlaced format you will receive the top field first.
+
+The initially selected colorspace when you switch to the HDMI input or
+select an HDMI timing is based on the format resolution: for resolutions
+less than or equal to 720x576 the colorspace is set to SMPTE-170M, for
+others it is set to REC-709 (CEA-861 timings) or sRGB (VESA DMT timings).
+
+The pixel aspect ratio will depend on the HDMI timing: for 720x480 it is
+set as for the NTSC TV standard, for 720x576 it is set as for the PAL TV
+standard, and for all others a 1:1 pixel aspect ratio is returned.
+
+The video aspect ratio can be selected through the 'DV Timings Aspect Ratio'
+Vivid control. Choices are 'Source Width x Height' (just use the
+same ratio as the chosen format), '4x3' or '16x9', either of which can
+result in pillarboxed or letterboxed video.
+
+It is possible to set the EDID, but only for HDMI inputs. By default a simple
+EDID is provided. Internally, however, the EDID is shared between all HDMI
+inputs.
+
+No interpretation is done of the EDID data.
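+
+For example (hypothetical commands, assuming the default input layout where
+input 3 is the HDMI input), you can select the HDMI input and pick one of the
+enumerated timings:
+
+	$ v4l2-ctl -d /dev/video0 -i3
+	$ v4l2-ctl -d /dev/video0 --list-dv-timings
+	$ v4l2-ctl -d /dev/video0 --set-dv-bt-timings=index=1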
+
+
+Section 3: Video Output
+-----------------------
+
+The video output device can be configured by using the module options
+num_outputs, output_types and ccs_out_mode (see section 1 for more detailed
+information), but by default two outputs are configured: an S-Video and an
+HDMI output, one output for each output type. Those are described in more detail
+below.
+
+As with video capture, the framerate is exact in the long term.
+
+
+Section 3.1: S-Video Output
+---------------------------
+
+This output supports audio outputs as well: "Line-Out 1" and "Line-Out 2".
+The S-Video output supports all TV standards.
+
+This output supports all combinations of the field setting.
+
+The initially selected colorspace when you switch to the S-Video output
+will be SMPTE-170M.
+
+
+Section 3.2: HDMI Output
+------------------------
+
+The HDMI output supports all CEA-861 and DMT timings, both progressive and
+interlaced, for pixelclock frequencies between 25 and 600 MHz. The field
+mode for interlaced formats is always V4L2_FIELD_ALTERNATE.
+
+The initially selected colorspace when you switch to the HDMI output or
+select an HDMI timing is based on the format resolution: for resolutions
+less than or equal to 720x576 the colorspace is set to SMPTE-170M, for
+others it is set to REC-709 (CEA-861 timings) or sRGB (VESA DMT timings).
+
+The pixel aspect ratio will depend on the HDMI timing: for 720x480 it is
+set as for the NTSC TV standard, for 720x576 it is set as for the PAL TV
+standard, and for all others a 1:1 pixel aspect ratio is returned.
+
+An HDMI output has a valid EDID which can be obtained through VIDIOC_G_EDID.
+
+
+Section 4: VBI Capture
+----------------------
+
+There are three types of VBI capture devices: those that only support raw
+(undecoded) VBI, those that only support sliced (decoded) VBI and those that
+support both. This is determined by the node_types module option. In all
+cases the driver will generate valid VBI data: for 60 Hz standards it will
+generate Closed Caption and XDS data. The closed caption stream will
+alternate between "Hello world!" and "Closed captions test" every second.
+The XDS stream will give the current time once a minute. For 50 Hz standards
+it will generate the Wide Screen Signal which is based on the actual Video
+Aspect Ratio control setting.
+
+The VBI device will only work for the S-Video and TV inputs; it will
+return an error if the current input is a webcam or HDMI.
+
+
+Section 5: VBI Output
+---------------------
+
+There are three types of VBI output devices: those that only support raw
+(undecoded) VBI, those that only support sliced (decoded) VBI and those that
+support both. This is determined by the node_types module option.
+
+The sliced VBI output supports the Wide Screen Signal for 50 Hz standards
+and Closed Captioning + XDS for 60 Hz standards.
+
+The VBI device will only work for the S-Video output; it will
+return an error if the current output is HDMI.
+
+
+Section 6: Radio Receiver
+-------------------------
+
+The radio receiver emulates an FM/AM/SW receiver. The FM band also supports RDS.
+The frequency ranges are:
+
+	FM: 64 MHz - 108 MHz
+	AM: 520 kHz - 1710 kHz
+	SW: 2300 kHz - 26.1 MHz
+
+Valid channels are emulated every 1 MHz for FM and every 100 kHz for AM and SW.
+The signal strength decreases the further the frequency is from the valid
+frequency until it becomes 0% at +/- 50 kHz (FM) or 5 kHz (AM/SW) from the
+ideal frequency. The initial frequency when the driver is loaded is set to
+95 MHz.
+
+The FM receiver supports RDS as well, both using 'Block I/O' and 'Controls'
+modes. In the 'Controls' mode the RDS information is stored in read-only
+controls. These controls are updated every time the frequency is changed,
+or when the tuner status is requested. The Block I/O method uses the read()
+interface to pass the RDS blocks on to the application for decoding.
+
+The RDS signal is 'detected' for +/- 12.5 kHz around the channel frequency,
+and the further the frequency is away from the valid frequency the more RDS
+errors are randomly introduced into the block I/O stream, up to 50% of all
+blocks if you are +/- 12.5 kHz from the channel frequency. All four errors
+can occur in equal proportions: blocks marked 'CORRECTED', blocks marked
+'ERROR', blocks marked 'INVALID' and dropped blocks.
+
+The generated RDS stream contains all the standard fields contained in a
+0B group, and also radio text and the current time.
+
+The receiver supports HW frequency seek, either in Bounded mode, Wrap Around
+mode or both, which is configurable with the "Radio HW Seek Mode" control.
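+
+For example (hypothetical commands, assuming /dev/radio0 is the vivid radio
+receiver), tuning to the initial 95 MHz frequency and querying the tuner shows
+the emulated signal strength:
+
+	$ v4l2-ctl -d /dev/radio0 --set-freq=95
+	$ v4l2-ctl -d /dev/radio0 --get-tuner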
+
+
+Section 7: Radio Transmitter
+----------------------------
+
+The radio transmitter emulates an FM/AM/SW transmitter. The FM band also supports RDS.
+The frequency ranges are:
+
+	FM: 64 MHz - 108 MHz
+	AM: 520 kHz - 1710 kHz
+	SW: 2300 kHz - 26.1 MHz
+
+The initial frequency when the driver is loaded is 95.5 MHz.
+
+The FM transmitter supports RDS as well, both using 'Block I/O' and 'Controls'
+modes. In the 'Controls' mode the transmitted RDS information is configured
+using controls, and in 'Block I/O' mode the blocks are passed to the driver
+using write().
+
+
+Section 8: Software Defined Radio Receiver
+------------------------------------------
+
+The SDR receiver has three frequency bands for the ADC tuner:
+
+	- 300 kHz
+	- 900 kHz - 2800 kHz
+	- 3200 kHz
+
+The RF tuner supports 50 MHz - 2000 MHz.
+
+The generated data contains the In-phase and Quadrature components of a
+1 kHz tone that has an amplitude of sqrt(2).
+
+
+Section 9: Controls
+-------------------
+
+Different devices support different controls. The sections below will describe
+each control and which devices support them.
+
+
+Section 9.1: User Controls - Test Controls
+------------------------------------------
+
+The Button, Boolean, Integer 32 Bits, Integer 64 Bits, Menu, String, Bitmask and
+Integer Menu are controls that represent all possible control types. The Menu
+control and the Integer Menu control both have 'holes' in their menu list,
+meaning that one or more menu items return EINVAL when VIDIOC_QUERYMENU is called.
+Both menu controls also have a non-zero minimum control value.  These features
+allow you to check if your application can handle such things correctly.
+These controls are supported for every device type.
+
+
+Section 9.2: User Controls - Video Capture
+------------------------------------------
+
+The following controls are specific to video capture.
+
+The Brightness, Contrast, Saturation and Hue controls actually work and are
+standard. There is one special feature with the Brightness control: each
+video input has its own brightness value, so changing input will restore
+the brightness for that input. In addition, each video input uses a different
+brightness range (minimum and maximum control values). Switching inputs will
+cause a control event to be sent with the V4L2_EVENT_CTRL_CH_RANGE flag set.
+This allows you to test controls that can change their range.
+
+The 'Gain, Automatic' and Gain controls can be used to test volatile controls:
+if 'Gain, Automatic' is set, then the Gain control is volatile and changes
+constantly. If 'Gain, Automatic' is cleared, then the Gain control is a normal
+control.
+
+The 'Horizontal Flip' and 'Vertical Flip' controls can be used to flip the
+image. These combine with the 'Sensor Flipped Horizontally/Vertically' Vivid
+controls.
+
+The 'Alpha Component' control can be used to set the alpha component for
+formats containing an alpha channel.
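+
+For example (hypothetical commands), the current control values and their
+per-input ranges can be listed, and a few of these controls changed, with:
+
+	$ v4l2-ctl -d /dev/video0 --list-ctrls
+	$ v4l2-ctl -d /dev/video0 -c horizontal_flip=1,alpha_component=128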
+
+
+Section 9.3: User Controls - Audio
+----------------------------------
+
+The following controls are specific to video capture and output and radio
+receivers and transmitters.
+
+The 'Volume' and 'Mute' audio controls are typical for such devices to
+control the volume and mute the audio. They don't actually do anything in
+the vivid driver.
+
+
+Section 9.4: Vivid Controls
+---------------------------
+
+These vivid custom controls control the image generation, error injection, etc.
+
+
+Section 9.4.1: Test Pattern Controls
+------------------------------------
+
+The Test Pattern Controls are all specific to video capture.
+
+Test Pattern: selects which test pattern to use. Use the CSC Colorbar for
+	testing colorspace conversions: the colors used in that test pattern
+	map to valid colors in all colorspaces. The colorspace conversion
+	is disabled for the other test patterns.
+
+OSD Text Mode: selects whether the text superimposed on the
+	test pattern should be shown, and if so, whether only counters should
+	be displayed or the full text.
+
+Horizontal Movement: selects whether the test pattern should
+	move to the left or right and at what speed.
+
+Vertical Movement: does the same for the vertical direction.
+
+Show Border: show a two-pixel wide border at the edge of the actual image,
+	excluding letter or pillarboxing.
+
+Show Square: show a square in the middle of the image. If the image is
+	displayed with the correct pixel and image aspect ratio corrections,
+	then the width and height of the square on the monitor should be
+	the same.
+
+Insert SAV Code in Image: adds a SAV (Start of Active Video) code to the image.
+	This can be used to check if such codes in the image are inadvertently
+	interpreted instead of being ignored.
+
+Insert EAV Code in Image: does the same for the EAV (End of Active Video) code.
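+
+The available test patterns and the other menu choices can be listed with
+v4l2-ctl (hypothetical commands; which pattern a given index selects depends
+on the driver):
+
+	$ v4l2-ctl -d /dev/video0 --list-ctrls-menus
+	$ v4l2-ctl -d /dev/video0 -c test_pattern=1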
+
+
+Section 9.4.2: Capture Feature Selection Controls
+-------------------------------------------------
+
+These controls are all specific to video capture.
+
+Sensor Flipped Horizontally: the image is flipped horizontally and the
+	V4L2_IN_ST_HFLIP input status flag is set. This emulates the case where
+	a sensor is for example mounted upside down.
+
+Sensor Flipped Vertically: the image is flipped vertically and the
+	V4L2_IN_ST_VFLIP input status flag is set. This emulates the case where
+        a sensor is for example mounted upside down.
+
+Standard Aspect Ratio: selects if the image aspect ratio as used for the TV or
+	S-Video input should be 4x3, 16x9 or anamorphic widescreen. This may
+	introduce letterboxing.
+
+DV Timings Aspect Ratio: selects if the image aspect ratio as used for the HDMI
+	input should be the same as the source width and height ratio, or if
+	it should be 4x3 or 16x9. This may introduce letter or pillarboxing.
+
+Timestamp Source: selects when the timestamp for each buffer is taken.
+
+Colorspace: selects which colorspace should be used when generating the image.
+	This only applies if the CSC Colorbar test pattern is selected,
+	otherwise the test pattern will go through unconverted (except for
+	the so-called 'Transfer Function' corrections and the R'G'B' to Y'CbCr
+	conversion). This behavior is also what you want, since a 75% Colorbar
+	should really have 75% signal intensity and should not be affected
+	by colorspace conversions.
+
+	Changing the colorspace will cause a V4L2_EVENT_SOURCE_CHANGE event
+	to be sent since it emulates a detected colorspace change.
+
+Limited RGB Range (16-235): selects if the RGB range of the HDMI source should
+	be limited or full range. This combines with the Digital Video 'Rx RGB
+	Quantization Range' control and can be used to test what happens if
+	a source provides you with the wrong quantization range information.
+	See the description of that control for more details.
+
+Apply Alpha To Red Only: apply the alpha channel as set by the 'Alpha Component'
+	user control to the red color of the test pattern only.
+
+Enable Capture Cropping: enables crop support. This control is only present if
+	the ccs_cap_mode module option is set to the default value of -1 and if
+	the no_error_inj module option is set to 0 (the default).
+
+Enable Capture Composing: enables composing support. This control is only
+	present if the ccs_cap_mode module option is set to the default value of
+	-1 and if the no_error_inj module option is set to 0 (the default).
+
+Enable Capture Scaler: enables support for a scaler (maximum 4 times upscaling
+	and downscaling). This control is only present if the ccs_cap_mode
+	module option is set to the default value of -1 and if the no_error_inj
+	module option is set to 0 (the default).
+
+Maximum EDID Blocks: determines how many EDID blocks the driver supports.
+	Note that the vivid driver does not actually interpret new EDID
+	data, it just stores it. It allows for up to 256 EDID blocks
+	which is the maximum supported by the standard.
+
+Fill Percentage of Frame: can be used to draw only the top X percent
+	of the image. Since each frame has to be drawn by the driver, this
+	demands a lot of the CPU. For large resolutions this becomes
+	problematic. By drawing only part of the image this CPU load can
+	be reduced.
+
+
+Section 9.4.3: Output Feature Selection Controls
+------------------------------------------------
+
+These controls are all specific to video output.
+
+Enable Output Cropping: enables crop support. This control is only present if
+	the ccs_out_mode module option is set to the default value of -1 and if
+	the no_error_inj module option is set to 0 (the default).
+
+Enable Output Composing: enables composing support. This control is only
+	present if the ccs_out_mode module option is set to the default value of
+	-1 and if the no_error_inj module option is set to 0 (the default).
+
+Enable Output Scaler: enables support for a scaler (maximum 4 times upscaling
+	and downscaling). This control is only present if the ccs_out_mode
+	module option is set to the default value of -1 and if the no_error_inj
+	module option is set to 0 (the default).
+
+
+Section 9.4.4: Error Injection Controls
+---------------------------------------
+
+The following two controls are only valid for video and vbi capture.
+
+Standard Signal Mode: selects the behavior of VIDIOC_QUERYSTD: what should
+	it return?
+
+	Changing this control will cause a V4L2_EVENT_SOURCE_CHANGE event
+	to be sent since it emulates a changed input condition (e.g. a cable
+	was plugged in or out).
+
+Standard: selects the standard that VIDIOC_QUERYSTD should return if the
+	previous control is set to "Selected Standard".
+
+	Changing this control will cause a V4L2_EVENT_SOURCE_CHANGE event
+	to be sent since it emulates a changed input standard.
+
+
+The following two controls are only valid for video capture.
+
+DV Timings Signal Mode: selects the behavior of VIDIOC_QUERY_DV_TIMINGS: what
+	should it return?
+
+	Changing this control will cause a V4L2_EVENT_SOURCE_CHANGE event
+	to be sent since it emulates a changed input condition (e.g. a cable
+	was plugged in or out).
+
+DV Timings: selects the timings the VIDIOC_QUERY_DV_TIMINGS should return
+	if the previous control is set to "Selected DV Timings".
+
+	Changing this control will cause a V4L2_EVENT_SOURCE_CHANGE event
+	to be sent since it emulates changed input timings.
+
+
+The following controls are only present if the no_error_inj module option
+is set to 0 (the default). These controls are valid for video and vbi
+capture and output streams and for the SDR capture device; the Disconnect
+control is valid for all devices.
+
+Wrap Sequence Number: test what happens when you wrap the sequence number in
+	struct v4l2_buffer around.
+
+Wrap Timestamp: test what happens when you wrap the timestamp in struct
+	v4l2_buffer around.
+
+Percentage of Dropped Buffers: sets the percentage of buffers that
+	are never returned by the driver (i.e., they are dropped).
+
+Disconnect: emulates a USB disconnect. The device will act as if it has
+	been disconnected. Only after all open filehandles to the device
+	node have been closed will the device become 'connected' again.
+
+Inject V4L2_BUF_FLAG_ERROR: when pressed, the next frame returned by
+	the driver will have the error flag set (i.e. the frame is marked
+	corrupt).
+
+Inject VIDIOC_REQBUFS Error: when pressed, the next REQBUFS or CREATE_BUFS
+	ioctl call will fail with an error. To be precise: the videobuf2
+	queue_setup() op will return -EINVAL.
+
+Inject VIDIOC_QBUF Error: when pressed, the next VIDIOC_QBUF or
+	VIDIOC_PREPARE_BUFFER ioctl call will fail with an error. To be
+	precise: the videobuf2 buf_prepare() op will return -EINVAL.
+
+Inject VIDIOC_STREAMON Error: when pressed, the next VIDIOC_STREAMON ioctl
+	call will fail with an error. To be precise: the videobuf2
+	start_streaming() op will return -EINVAL.
+
+Inject Fatal Streaming Error: when pressed, the streaming core will be
+	marked as having suffered a fatal error; the only way to recover
+	from that is to stop streaming. To be precise: the videobuf2
+	vb2_queue_error() function is called.
+
+
+Section 9.4.5: VBI Raw Capture Controls
+---------------------------------------
+
+Interlaced VBI Format: if set, then the raw VBI data will be interlaced instead
+	of providing it grouped by field.
+
+
+Section 9.5: Digital Video Controls
+-----------------------------------
+
+Rx RGB Quantization Range: sets the RGB quantization detection of the HDMI
+	input. This combines with the Vivid 'Limited RGB Range (16-235)'
+	control and can be used to test what happens if a source provides
+	you with the wrong quantization range information. This can be tested
+	by selecting an HDMI input, setting this control to Full or Limited
+	range and selecting the opposite in the 'Limited RGB Range (16-235)'
+	control. The effect is easy to see if the 'Gray Ramp' test pattern
+	is selected.
+
+Tx RGB Quantization Range: sets the RGB quantization detection of the HDMI
+	output. It is currently not used for anything in vivid, but most HDMI
+	transmitters would typically have this control.
+
+Transmit Mode: sets the transmit mode of the HDMI output to HDMI or DVI-D. This
+	affects the reported colorspace since DVI-D outputs will always use
+	sRGB.
+
+
+Section 9.6: FM Radio Receiver Controls
+---------------------------------------
+
+RDS Reception: set if the RDS receiver should be enabled.
+
+RDS Program Type:
+RDS PS Name:
+RDS Radio Text:
+RDS Traffic Announcement:
+RDS Traffic Program:
+RDS Music: these are all read-only controls. If RDS Rx I/O Mode is set to
+	"Block I/O", then they are inactive as well. If RDS Rx I/O Mode is set
+	to "Controls", then these controls report the received RDS data. Note
+	that the vivid implementation of this is pretty basic: they are only
+	updated when you set a new frequency or when you get the tuner status
+	(VIDIOC_G_TUNER).
+
+Radio HW Seek Mode: can be one of "Bounded", "Wrap Around" or "Both". This
+	determines whether VIDIOC_S_HW_FREQ_SEEK is bounded by the frequency
+	range, wraps around, or whether that behavior is selectable by the user.
+
+Radio Programmable HW Seek: if set, then the user can provide the lower and
+	upper bound of the HW Seek. Otherwise the frequency range boundaries
+	will be used.
+
+Generate RBDS Instead of RDS: if set, then generate RBDS (the US variant of
+	RDS) data instead of RDS (European-style RDS). This affects only the
+	PI and PTY codes.
+
+RDS Rx I/O Mode: this can be "Block I/O" where the RDS blocks have to be read()
+	by the application, or "Controls" where the RDS data is provided by
+	the RDS controls mentioned above.
+
+
+Section 9.7: FM Radio Modulator Controls
+----------------------------------------
+
+RDS Program ID:
+RDS Program Type:
+RDS PS Name:
+RDS Radio Text:
+RDS Stereo:
+RDS Artificial Head:
+RDS Compressed:
+RDS Dynamic PTY:
+RDS Traffic Announcement:
+RDS Traffic Program:
+RDS Music: these are all controls that set the RDS data that is transmitted by
+	the FM modulator.
+
+RDS Tx I/O Mode: this can be "Block I/O" where the application has to use write()
+	to pass the RDS blocks to the driver, or "Controls" where the RDS data is
+	provided by the RDS controls mentioned above.
+
+
+Section 10: Video, VBI and RDS Looping
+--------------------------------------
+
+The vivid driver supports looping of video output to video input, VBI output
+to VBI input and RDS output to RDS input. For video/VBI looping this emulates
+a cable hooked up between the output and input connectors. So video and VBI
+looping is only supported between S-Video and HDMI inputs and outputs. VBI is
+only valid for S-Video as it makes no sense for HDMI.
+
+Since radio is wireless this looping always happens if the radio receiver
+frequency is close to the radio transmitter frequency. In that case the radio
+transmitter will 'override' the emulated radio stations.
+
+Looping is currently supported only between devices created by the same
+vivid driver instance.
+
+
+Section 10.1: Video and Sliced VBI looping
+------------------------------------------
+
+The way to enable video/VBI looping is currently fairly crude. A 'Loop Video'
+control is available in the "Vivid" control class of the video
+output and VBI output devices. When checked the video looping will be enabled.
+Once enabled, any S-Video or HDMI video input will show a static test pattern
+until the video output has started. At that time the video output will be
+looped to the video input provided that:
+
+- the input type matches the output type. So the HDMI input cannot receive
+  video from the S-Video output.
+
+- the video resolution of the video input must match that of the video output.
+  So it is not possible to loop a 50 Hz (720x576) S-Video output to a 60 Hz
+  (720x480) S-Video input, or a 720p60 HDMI output to a 1080p30 input.
+
+- the pixel formats must be identical on both sides. Otherwise the driver would
+  have to do pixel format conversion as well, and that's taking things too far.
+
+- the field settings must be identical on both sides. Same reason as above:
+  requiring the driver to convert from one field format to another would
+  complicate matters too much. This also prohibits capturing with 'Field Top'
+  or 'Field Bottom' when the output video is set to 'Field Alternate'. This
+  combination, while legal, was too complicated to support. Both sides have to
+  be 'Field Alternate' for this to work. Also note that for this specific case
+  the sequence and field counting in struct v4l2_buffer on the capture side may
+  not be 100% accurate.
+
+- on the input side the "Standard Signal Mode" for the S-Video input or the
+  "DV Timings Signal Mode" for the HDMI input should be configured so that a
+  valid signal is passed to the video input.
+
+The framerates do not have to match, although this might change in the future.
+
+By default you will see the OSD text superimposed on top of the looped video.
+This can be turned off by changing the "OSD Text Mode" control of the video
+capture device.
+
+For VBI looping to work all of the above must be valid and in addition the vbi
+output must be configured for sliced VBI. The VBI capture side can be configured
+for either raw or sliced VBI.
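+
+In its simplest form (a hypothetical example, assuming /dev/video1 is the
+vivid video output device) enabling the loop comes down to:
+
+	$ v4l2-ctl -d /dev/video1 -c loop_video=1
+
+after which the capture side will show the output video, provided the
+conditions listed above are met.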
+
+
+Section 10.2: Radio & RDS Looping
+---------------------------------
+
+As mentioned in section 6 the radio receiver emulates stations at regular
+frequency intervals. Depending on the frequency of the radio receiver a
+signal strength value is calculated (this is returned by VIDIOC_G_TUNER).
+However, it will also look at the frequency set by the radio transmitter and
+if that results in a higher signal strength, then the settings of the radio
+transmitter will be used as if it were a valid station. This also includes
+the RDS data (if any) that the transmitter 'transmits'. This is received
+faithfully on the receiver side. Note that when the driver is loaded the
+frequencies of the radio receiver and transmitter are not identical, so
+initially no looping takes place.
+
+
+Section 11: Cropping, Composing, Scaling
+----------------------------------------
+
+This driver supports cropping, composing and scaling in any combination.
+Normally which features are supported can be selected through the Vivid
+controls, but it is also possible to hardcode this when the module is loaded
+through the ccs_cap_mode and ccs_out_mode module options. See section 1 for
+the details of these module options.
+
+This allows you to test your application for all these variations.
+
+Note that the webcam input never supports cropping, composing or scaling. That
+only applies to the TV/S-Video/HDMI inputs and outputs. The reason is that
+webcams, including this virtual implementation, normally use
+VIDIOC_ENUM_FRAMESIZES to list the set of discrete framesizes that they support.
+And that does not combine with cropping, composing or scaling. This is
+primarily a limitation of the V4L2 API which is carefully reproduced here.
+
+The minimum and maximum resolutions that the scaler can achieve are 16x16 and
+(4096 * 4) x (2160 * 4), but it can only scale up or down by a factor of 4 or
+less. So for a source resolution of 1280x720 the minimum the scaler can do is
+320x180 and the maximum is 5120x2880. You can play around with this using the
+qv4l2 test tool and you will see these dependencies.
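+
+As an illustration (hypothetical commands, assuming a v4l2-ctl version that
+supports --set-selection), cropping the top-left quarter of a 1280x720 capture
+and composing it back to the full 1280x720 buffer stays within the 4x scaling
+limit:
+
+	$ v4l2-ctl -d /dev/video0 --set-selection=target=crop,top=0,left=0,width=640,height=360
+	$ v4l2-ctl -d /dev/video0 --set-selection=target=compose,top=0,left=0,width=1280,height=720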
+
+This driver also supports larger 'bytesperline' settings, something that
+VIDIOC_S_FMT allows but that few drivers implement.
+
+The scaler is a simple scaler that uses the Coarse Bresenham algorithm. It's
+designed for speed and simplicity, not quality.
+
+If the combination of crop, compose and scaling allows it, then it is possible
+to change crop and compose rectangles on the fly.
+
+
+Section 12: Formats
+-------------------
+
+The driver supports all the regular packed YUYV formats, 16, 24 and 32 bit
+packed RGB formats and two multiplanar formats (one luma and one chroma plane).
+
+The alpha component can be set through the 'Alpha Component' User control
+for those formats that support it. If the 'Apply Alpha To Red Only' control
+is set, then the alpha component is only used for the color red and set to
+0 otherwise.
+
+The driver has to be configured to support the multiplanar formats. By default
+the first driver instance is single-planar, the second is multi-planar, and it
+keeps alternating. This can be changed by setting the multiplanar module option,
+see section 1 for more details on that option.
+
+If the driver instance is using the multiplanar formats/API, then the first
+single planar format (YUYV) and the multiplanar NV16M and NV61M formats
+will have a plane that has a non-zero data_offset of 128 bytes. It is rare for
+data_offset to be non-zero, so this is a useful feature for testing applications.
+
+Video output will also honor any data_offset that the application set.
+
+
+Section 13: Capture Overlay
+---------------------------
+
+Note: capture overlay support is implemented primarily to test the existing
+V4L2 capture overlay API. In practice few if any GPUs still support such
+overlays, nor are they generally needed anymore since modern hardware is so
+much more capable. By setting flag 0x10000 in the node_types module option
+the vivid driver will create a simple framebuffer device that can be used for
+testing this API. Whether this API should be used for new drivers is
+questionable.
+
+This driver has support for a destructive capture overlay with bitmap clipping
+and list clipping (up to 16 rectangles) capabilities. Overlays are not
+supported for multiplanar formats. It also honors the struct v4l2_window field
+setting: if it is set to FIELD_TOP or FIELD_BOTTOM and the capture setting is
+FIELD_ALTERNATE, then only the top or bottom fields will be copied to the overlay.
+
+The overlay only works if you are also capturing at the same time. This is a
+vivid limitation since it copies from a buffer to the overlay instead of
+filling the overlay directly. And if you are not capturing, then no buffers
+are available to fill.
+
+In addition, the pixelformat of the capture format and that of the framebuffer
+must be the same for the overlay to work. Otherwise VIDIOC_OVERLAY will return
+an error.
+
+In order to really see what is going on you will need to create two vivid
+instances: the first with a framebuffer enabled. You configure the capture
+overlay of the second instance to use the framebuffer of the first, then
+you start capturing in the second instance. For the first instance you setup
+the output overlay for the video output, turn on video looping and capture
+to see the blended framebuffer overlay that's being written to by the second
+instance. This setup would require the following commands:
+
+	$ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1 multiplanar=1,1
+	$ v4l2-ctl -d1 --find-fb
+	/dev/fb1 is the framebuffer associated with base address 0x12800000
+	$ sudo v4l2-ctl -d2 --set-fbuf fb=1
+	$ v4l2-ctl -d1 --set-fbuf fb=1
+	$ v4l2-ctl -d0 --set-fmt-video=pixelformat='AR15'
+	$ v4l2-ctl -d1 --set-fmt-video-out=pixelformat='AR15'
+	$ v4l2-ctl -d2 --set-fmt-video=pixelformat='AR15'
+	$ v4l2-ctl -d0 -i2
+	$ v4l2-ctl -d2 -i2
+	$ v4l2-ctl -d2 -c horizontal_movement=4
+	$ v4l2-ctl -d1 --overlay=1
+	$ v4l2-ctl -d1 -c loop_video=1
+	$ v4l2-ctl -d2 --stream-mmap --overlay=1
+
+And from another console:
+
+	$ v4l2-ctl -d1 --stream-out-mmap
+
+And yet another console:
+
+	$ qv4l2
+
+and start streaming.
+
+As you can see, this is not for the faint of heart...
+
+
+Section 14: Output Overlay
+--------------------------
+
+Note: output overlays are primarily implemented in order to test the existing
+V4L2 output overlay API. Whether this API should be used for new drivers is
+questionable.
+
+This driver has support for an output overlay and is capable of:
+
+	- bitmap clipping,
+	- list clipping (up to 16 rectangles)
+	- chromakey
+	- source chromakey
+	- global alpha
+	- local alpha
+	- local inverse alpha
+
+Output overlays are not supported for multiplanar formats. In addition, the
+pixelformat of the capture format and that of the framebuffer must be the
+same for the overlay to work. Otherwise VIDIOC_OVERLAY will return an error.
+
+Output overlays only work if the driver has been configured to create a
+framebuffer by setting flag 0x10000 in the node_types module option. The
+created framebuffer has a size of 720x576 and supports ARGB 1:5:5:5 and
+RGB 5:6:5.
+
+In order to see the effects of the various clipping, chromakeying or alpha
+processing capabilities you need to turn on video looping and see the results
+on the capture side. The use of the clipping, chromakeying or alpha processing
+capabilities will slow down the video loop considerably as a lot of checks have
+to be done per pixel.
+
+
+Section 15: Some Future Improvements
+------------------------------------
+
+Just as a reminder and in no particular order:
+
+- Add a virtual alsa driver to test audio
+- Add virtual sub-devices and media controller support
+- Some support for testing compressed video
+- Add support to loop raw VBI output to raw VBI input
+- Fix sequence/field numbering when looping video with alternate fields
+- Add support for V4L2_CID_BG_COLOR for video outputs
+- Add ARGB888 overlay support: better testing of the alpha channel
+- Add custom DV timings support
+- Add support for V4L2_DV_FL_REDUCED_FPS
+- Improve pixel aspect support in the tpg code by passing a real v4l2_fract
+- Use per-queue locks and/or per-device locks to improve throughput
+- Add support to loop from a specific output to a specific input across
+  vivid instances
+- Add support for VIDIOC_EXPBUF once support for that has been added to vb2
+- The SDR radio should use the same 'frequencies' for stations as the normal
+  radio receiver, and give back noise if the frequency doesn't match up with
+  a station frequency
+- Improve the sine generation of the SDR radio.
+- Make a thread for the RDS generation; that would help in particular for the
+  "Controls" RDS Rx I/O Mode, as the read-only RDS controls could then be
+  updated in real-time.
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 03/12] vivid: add core driver code
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 01/12] vb2: fix multiplanar read() with non-zero data_offset Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 02/12] vivid.txt: add documentation for the vivid driver Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 04/12] vivid: add the control handling code Hans Verkuil
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

This is the core driver code that creates all the driver instances
and all the configured devices.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-core.c | 1390 +++++++++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-core.h |  520 +++++++++++
 2 files changed, 1910 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-core.c
 create mode 100644 drivers/media/platform/vivid/vivid-core.h

diff --git a/drivers/media/platform/vivid/vivid-core.c b/drivers/media/platform/vivid/vivid-core.c
new file mode 100644
index 0000000..708b053
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-core.c
@@ -0,0 +1,1390 @@
+/*
+ * vivid-core.c - A Virtual Video Test Driver, core initialization
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/font.h>
+#include <linux/mutex.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/videobuf2-vmalloc.h>
+#include <media/v4l2-dv-timings.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-event.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+#include "vivid-vid-cap.h"
+#include "vivid-vid-out.h"
+#include "vivid-radio-common.h"
+#include "vivid-radio-rx.h"
+#include "vivid-radio-tx.h"
+#include "vivid-sdr-cap.h"
+#include "vivid-vbi-cap.h"
+#include "vivid-vbi-out.h"
+#include "vivid-osd.h"
+#include "vivid-ctrls.h"
+
+#define VIVID_MODULE_NAME "vivid"
+
+/* The maximum number of vivid devices */
+#define VIVID_MAX_DEVS 64
+
+MODULE_DESCRIPTION("Virtual Video Test Driver");
+MODULE_AUTHOR("Hans Verkuil");
+MODULE_LICENSE("GPL");
+
+static unsigned n_devs = 1;
+module_param(n_devs, uint, 0444);
+MODULE_PARM_DESC(n_devs, " number of driver instances to create");
+
+static int vid_cap_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(vid_cap_nr, int, NULL, 0444);
+MODULE_PARM_DESC(vid_cap_nr, " videoX start number, -1 is autodetect");
+
+static int vid_out_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(vid_out_nr, int, NULL, 0444);
+MODULE_PARM_DESC(vid_out_nr, " videoX start number, -1 is autodetect");
+
+static int vbi_cap_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(vbi_cap_nr, int, NULL, 0444);
+MODULE_PARM_DESC(vbi_cap_nr, " vbiX start number, -1 is autodetect");
+
+static int vbi_out_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(vbi_out_nr, int, NULL, 0444);
+MODULE_PARM_DESC(vbi_out_nr, " vbiX start number, -1 is autodetect");
+
+static int sdr_cap_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(sdr_cap_nr, int, NULL, 0444);
+MODULE_PARM_DESC(sdr_cap_nr, " swradioX start number, -1 is autodetect");
+
+static int radio_rx_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(radio_rx_nr, int, NULL, 0444);
+MODULE_PARM_DESC(radio_rx_nr, " radioX start number, -1 is autodetect");
+
+static int radio_tx_nr[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(radio_tx_nr, int, NULL, 0444);
+MODULE_PARM_DESC(radio_tx_nr, " radioX start number, -1 is autodetect");
+
+static int ccs_cap_mode[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(ccs_cap_mode, int, NULL, 0444);
+MODULE_PARM_DESC(ccs_cap_mode, " capture crop/compose/scale mode:\n"
+			   "\t\t    bit 0=crop, 1=compose, 2=scale,\n"
+			   "\t\t    -1=user-controlled (default)");
+
+static int ccs_out_mode[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = -1 };
+module_param_array(ccs_out_mode, int, NULL, 0444);
+MODULE_PARM_DESC(ccs_out_mode, " output crop/compose/scale mode:\n"
+			   "\t\t    bit 0=crop, 1=compose, 2=scale,\n"
+			   "\t\t    -1=user-controlled (default)");
+
+static unsigned multiplanar[VIVID_MAX_DEVS];
+module_param_array(multiplanar, uint, NULL, 0444);
+MODULE_PARM_DESC(multiplanar, " 0 (default) is alternating single and multiplanar devices,\n"
+			      "\t\t    1 is single planar devices,\n"
+			      "\t\t    2 is multiplanar devices");
+
+/* Default: video + vbi-cap (raw and sliced) + radio rx + radio tx + sdr + vbi-out + vid-out */
+static unsigned node_types[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 0x1d3d };
+module_param_array(node_types, uint, NULL, 0444);
+MODULE_PARM_DESC(node_types, " node types, default is 0x1d3d. Bitmask with the following meaning:\n"
+			     "\t\t    bit 0: Video Capture node\n"
+			     "\t\t    bit 2-3: VBI Capture node: 0 = none, 1 = raw vbi, 2 = sliced vbi, 3 = both\n"
+			     "\t\t    bit 4: Radio Receiver node\n"
+			     "\t\t    bit 5: Software Defined Radio Receiver node\n"
+			     "\t\t    bit 8: Video Output node\n"
+			     "\t\t    bit 10-11: VBI Output node: 0 = none, 1 = raw vbi, 2 = sliced vbi, 3 = both\n"
+			     "\t\t    bit 12: Radio Transmitter node\n"
+			     "\t\t    bit 16: Framebuffer for testing overlays");
+
+/* Default: 4 inputs */
+static unsigned num_inputs[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 4 };
+module_param_array(num_inputs, uint, NULL, 0444);
+MODULE_PARM_DESC(num_inputs, " number of inputs, default is 4");
+
+/* Default: input 0 = WEBCAM, 1 = TV, 2 = SVID, 3 = HDMI */
+static unsigned input_types[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 0xe4 };
+module_param_array(input_types, uint, NULL, 0444);
+MODULE_PARM_DESC(input_types, " input types, default is 0xe4. Two bits per input,\n"
+			      "\t\t    bits 0-1 == input 0, bits 31-30 == input 15.\n"
+			      "\t\t    Type 0 == webcam, 1 == TV, 2 == S-Video, 3 == HDMI");
+
+/* Default: 2 outputs */
+static unsigned num_outputs[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 2 };
+module_param_array(num_outputs, uint, NULL, 0444);
+MODULE_PARM_DESC(num_outputs, " number of outputs, default is 2");
+
+/* Default: output 0 = SVID, 1 = HDMI */
+static unsigned output_types[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 2 };
+module_param_array(output_types, uint, NULL, 0444);
+MODULE_PARM_DESC(output_types, " output types, default is 0x02. One bit per output,\n"
+			      "\t\t    bit 0 == output 0, bit 15 == output 15.\n"
+			      "\t\t    Type 0 == S-Video, 1 == HDMI");
+
+unsigned vivid_debug;
+module_param(vivid_debug, uint, 0644);
+MODULE_PARM_DESC(vivid_debug, " activates debug info");
+
+static bool no_error_inj;
+module_param(no_error_inj, bool, 0444);
+MODULE_PARM_DESC(no_error_inj, " if set disable the error injecting controls");
+
+static struct vivid_dev *vivid_devs[VIVID_MAX_DEVS];
+
+const struct v4l2_rect vivid_min_rect = {
+	0, 0, MIN_WIDTH, MIN_HEIGHT
+};
+
+const struct v4l2_rect vivid_max_rect = {
+	0, 0, MAX_WIDTH * MAX_ZOOM, MAX_HEIGHT * MAX_ZOOM
+};
+
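+/*
+ * A fixed 256 byte (2 block) EDID describing a "v4l2-hdmi" monitor; it is
+ * copied into each instance's EDID buffer as the initial EDID.
+ */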
+static const u8 vivid_hdmi_edid[256] = {
+	0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00,
+	0x63, 0x3a, 0xaa, 0x55, 0x00, 0x00, 0x00, 0x00,
+	0x0a, 0x18, 0x01, 0x03, 0x80, 0x10, 0x09, 0x78,
+	0x0e, 0x00, 0xb2, 0xa0, 0x57, 0x49, 0x9b, 0x26,
+	0x10, 0x48, 0x4f, 0x2f, 0xcf, 0x00, 0x31, 0x59,
+	0x45, 0x59, 0x81, 0x80, 0x81, 0x40, 0x90, 0x40,
+	0x95, 0x00, 0xa9, 0x40, 0xb3, 0x00, 0x02, 0x3a,
+	0x80, 0x18, 0x71, 0x38, 0x2d, 0x40, 0x58, 0x2c,
+	0x46, 0x00, 0x10, 0x09, 0x00, 0x00, 0x00, 0x1e,
+	0x00, 0x00, 0x00, 0xfd, 0x00, 0x18, 0x55, 0x18,
+	0x5e, 0x11, 0x00, 0x0a, 0x20, 0x20, 0x20, 0x20,
+	0x20, 0x20, 0x00, 0x00, 0x00, 0xfc, 0x00,  'v',
+	'4',   'l',  '2',  '-',  'h',  'd',  'm',  'i',
+	0x0a, 0x0a, 0x0a, 0x0a, 0x00, 0x00, 0x00, 0x10,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0xf0,
+
+	0x02, 0x03, 0x1a, 0xc0, 0x48, 0xa2, 0x10, 0x04,
+	0x02, 0x01, 0x21, 0x14, 0x13, 0x23, 0x09, 0x07,
+	0x07, 0x65, 0x03, 0x0c, 0x00, 0x10, 0x00, 0xe2,
+	0x00, 0x2a, 0x01, 0x1d, 0x00, 0x80, 0x51, 0xd0,
+	0x1c, 0x20, 0x40, 0x80, 0x35, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x1e, 0x8c, 0x0a, 0xd0, 0x8a,
+	0x20, 0xe0, 0x2d, 0x10, 0x10, 0x3e, 0x96, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd7
+};
+
+void vivid_lock(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	mutex_lock(&dev->mutex);
+}
+
+void vivid_unlock(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	mutex_unlock(&dev->mutex);
+}
+
+static int vidioc_querycap(struct file *file, void  *priv,
+					struct v4l2_capability *cap)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	strcpy(cap->driver, "vivid");
+	strcpy(cap->card, "vivid");
+	snprintf(cap->bus_info, sizeof(cap->bus_info),
+			"platform:%s", dev->v4l2_dev.name);
+
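+	/*
+	 * device_caps depends on the device node that was opened, while
+	 * capabilities reports the union of the capabilities of all nodes
+	 * of this instance.
+	 */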
+	if (vdev->vfl_type == VFL_TYPE_GRABBER && vdev->vfl_dir == VFL_DIR_RX)
+		cap->device_caps = dev->vid_cap_caps;
+	else if (vdev->vfl_type == VFL_TYPE_GRABBER && vdev->vfl_dir == VFL_DIR_TX)
+		cap->device_caps = dev->vid_out_caps;
+	else if (vdev->vfl_type == VFL_TYPE_VBI && vdev->vfl_dir == VFL_DIR_RX)
+		cap->device_caps = dev->vbi_cap_caps;
+	else if (vdev->vfl_type == VFL_TYPE_VBI && vdev->vfl_dir == VFL_DIR_TX)
+		cap->device_caps = dev->vbi_out_caps;
+	else if (vdev->vfl_type == VFL_TYPE_SDR)
+		cap->device_caps = dev->sdr_cap_caps;
+	else if (vdev->vfl_type == VFL_TYPE_RADIO && vdev->vfl_dir == VFL_DIR_RX)
+		cap->device_caps = dev->radio_rx_caps;
+	else if (vdev->vfl_type == VFL_TYPE_RADIO && vdev->vfl_dir == VFL_DIR_TX)
+		cap->device_caps = dev->radio_tx_caps;
+	cap->capabilities = dev->vid_cap_caps | dev->vid_out_caps |
+		dev->vbi_cap_caps | dev->vbi_out_caps |
+		dev->radio_rx_caps | dev->radio_tx_caps |
+		dev->sdr_cap_caps | V4L2_CAP_DEVICE_CAPS;
+	return 0;
+}
+
+static int vidioc_s_hw_freq_seek(struct file *file, void *fh, const struct v4l2_hw_freq_seek *a)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_rx_s_hw_freq_seek(file, fh, a);
+	return -ENOTTY;
+}
+
+static int vidioc_enum_freq_bands(struct file *file, void *fh, struct v4l2_frequency_band *band)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_rx_enum_freq_bands(file, fh, band);
+	if (vdev->vfl_type == VFL_TYPE_SDR)
+		return vivid_sdr_enum_freq_bands(file, fh, band);
+	return -ENOTTY;
+}
+
+static int vidioc_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_rx_g_tuner(file, fh, vt);
+	if (vdev->vfl_type == VFL_TYPE_SDR)
+		return vivid_sdr_g_tuner(file, fh, vt);
+	return vivid_video_g_tuner(file, fh, vt);
+}
+
+static int vidioc_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_rx_s_tuner(file, fh, vt);
+	if (vdev->vfl_type == VFL_TYPE_SDR)
+		return vivid_sdr_s_tuner(file, fh, vt);
+	return vivid_video_s_tuner(file, fh, vt);
+}
+
+static int vidioc_g_frequency(struct file *file, void *fh, struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_g_frequency(file,
+			vdev->vfl_dir == VFL_DIR_RX ?
+			&dev->radio_rx_freq : &dev->radio_tx_freq, vf);
+	if (vdev->vfl_type == VFL_TYPE_SDR)
+		return vivid_sdr_g_frequency(file, fh, vf);
+	return vivid_video_g_frequency(file, fh, vf);
+}
+
+static int vidioc_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_type == VFL_TYPE_RADIO)
+		return vivid_radio_s_frequency(file,
+			vdev->vfl_dir == VFL_DIR_RX ?
+			&dev->radio_rx_freq : &dev->radio_tx_freq, vf);
+	if (vdev->vfl_type == VFL_TYPE_SDR)
+		return vivid_sdr_s_frequency(file, fh, vf);
+	return vivid_video_s_frequency(file, fh, vf);
+}
+
+static int vidioc_overlay(struct file *file, void *fh, unsigned i)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_overlay(file, fh, i);
+	return vivid_vid_out_overlay(file, fh, i);
+}
+
+static int vidioc_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *a)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_g_fbuf(file, fh, a);
+	return vivid_vid_out_g_fbuf(file, fh, a);
+}
+
+static int vidioc_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_s_fbuf(file, fh, a);
+	return vivid_vid_out_s_fbuf(file, fh, a);
+}
+
+static int vidioc_s_std(struct file *file, void *fh, v4l2_std_id id)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_s_std(file, fh, id);
+	return vivid_vid_out_s_std(file, fh, id);
+}
+
+static int vidioc_s_dv_timings(struct file *file, void *fh, struct v4l2_dv_timings *timings)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_s_dv_timings(file, fh, timings);
+	return vivid_vid_out_s_dv_timings(file, fh, timings);
+}
+
+static int vidioc_cropcap(struct file *file, void *fh, struct v4l2_cropcap *cc)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_cropcap(file, fh, cc);
+	return vivid_vid_out_cropcap(file, fh, cc);
+}
+
+static int vidioc_g_selection(struct file *file, void *fh,
+			      struct v4l2_selection *sel)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_g_selection(file, fh, sel);
+	return vivid_vid_out_g_selection(file, fh, sel);
+}
+
+static int vidioc_s_selection(struct file *file, void *fh,
+			      struct v4l2_selection *sel)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_s_selection(file, fh, sel);
+	return vivid_vid_out_s_selection(file, fh, sel);
+}
+
+static int vidioc_g_parm(struct file *file, void *fh,
+			  struct v4l2_streamparm *parm)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_g_parm(file, fh, parm);
+	return vivid_vid_out_g_parm(file, fh, parm);
+}
+
+static int vidioc_s_parm(struct file *file, void *fh,
+			  struct v4l2_streamparm *parm)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_vid_cap_s_parm(file, fh, parm);
+	return vivid_vid_out_g_parm(file, fh, parm);
+}
+
+static ssize_t vivid_radio_read(struct file *file, char __user *buf,
+			 size_t size, loff_t *offset)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_TX)
+		return -EINVAL;
+	return vivid_radio_rx_read(file, buf, size, offset);
+}
+
+static ssize_t vivid_radio_write(struct file *file, const char __user *buf,
+			  size_t size, loff_t *offset)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return -EINVAL;
+	return vivid_radio_tx_write(file, buf, size, offset);
+}
+
+static unsigned int vivid_radio_poll(struct file *file, struct poll_table_struct *wait)
+{
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX)
+		return vivid_radio_rx_poll(file, wait);
+	return vivid_radio_tx_poll(file, wait);
+}
+
+static bool vivid_is_in_use(struct video_device *vdev)
+{
+	unsigned long flags;
+	bool res;
+
+	spin_lock_irqsave(&vdev->fh_lock, flags);
+	res = !list_empty(&vdev->fh_list);
+	spin_unlock_irqrestore(&vdev->fh_lock, flags);
+	return res;
+}
+
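+/*
+ * Returns true if only one of the device nodes of this instance is still in
+ * use, i.e. the file handle that is currently being released is the last user.
+ */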
+static bool vivid_is_last_user(struct vivid_dev *dev)
+{
+	unsigned uses = vivid_is_in_use(&dev->vid_cap_dev) +
+			vivid_is_in_use(&dev->vid_out_dev) +
+			vivid_is_in_use(&dev->vbi_cap_dev) +
+			vivid_is_in_use(&dev->vbi_out_dev) +
+			vivid_is_in_use(&dev->sdr_cap_dev) +
+			vivid_is_in_use(&dev->radio_rx_dev) +
+			vivid_is_in_use(&dev->radio_tx_dev);
+
+	return uses == 1;
+}
+
+static int vivid_fop_release(struct file *file)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	mutex_lock(&dev->mutex);
+	if (!no_error_inj && v4l2_fh_is_singular_file(file) &&
+	    !video_is_registered(vdev) && vivid_is_last_user(dev)) {
+		/*
+		 * I am the last user of this driver, and a disconnect
+		 * was forced (since this video_device is unregistered),
+		 * so re-register all video_device's again.
+		 */
+		v4l2_info(&dev->v4l2_dev, "reconnect\n");
+		set_bit(V4L2_FL_REGISTERED, &dev->vid_cap_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->vid_out_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->vbi_cap_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->vbi_out_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->sdr_cap_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->radio_rx_dev.flags);
+		set_bit(V4L2_FL_REGISTERED, &dev->radio_tx_dev.flags);
+	}
+	mutex_unlock(&dev->mutex);
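+	/* drop capture overlay and RDS I/O ownership if this file handle held it */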
+	if (file->private_data == dev->overlay_cap_owner)
+		dev->overlay_cap_owner = NULL;
+	if (file->private_data == dev->radio_rx_rds_owner) {
+		dev->radio_rx_rds_last_block = 0;
+		dev->radio_rx_rds_owner = NULL;
+	}
+	if (file->private_data == dev->radio_tx_rds_owner) {
+		dev->radio_tx_rds_last_block = 0;
+		dev->radio_tx_rds_owner = NULL;
+	}
+	if (vdev->queue)
+		return vb2_fop_release(file);
+	return v4l2_fh_release(file);
+}
+
+static const struct v4l2_file_operations vivid_fops = {
+	.owner		= THIS_MODULE,
+	.open           = v4l2_fh_open,
+	.release        = vivid_fop_release,
+	.read           = vb2_fop_read,
+	.write          = vb2_fop_write,
+	.poll		= vb2_fop_poll,
+	.unlocked_ioctl = video_ioctl2,
+	.mmap           = vb2_fop_mmap,
+};
+
+static const struct v4l2_file_operations vivid_radio_fops = {
+	.owner		= THIS_MODULE,
+	.open           = v4l2_fh_open,
+	.release        = vivid_fop_release,
+	.read           = vivid_radio_read,
+	.write          = vivid_radio_write,
+	.poll		= vivid_radio_poll,
+	.unlocked_ioctl = video_ioctl2,
+};
+
+static const struct v4l2_ioctl_ops vivid_ioctl_ops = {
+	.vidioc_querycap		= vidioc_querycap,
+
+	.vidioc_enum_fmt_vid_cap	= vidioc_enum_fmt_vid,
+	.vidioc_g_fmt_vid_cap		= vidioc_g_fmt_vid_cap,
+	.vidioc_try_fmt_vid_cap		= vidioc_try_fmt_vid_cap,
+	.vidioc_s_fmt_vid_cap		= vidioc_s_fmt_vid_cap,
+	.vidioc_enum_fmt_vid_cap_mplane = vidioc_enum_fmt_vid_mplane,
+	.vidioc_g_fmt_vid_cap_mplane	= vidioc_g_fmt_vid_cap_mplane,
+	.vidioc_try_fmt_vid_cap_mplane	= vidioc_try_fmt_vid_cap_mplane,
+	.vidioc_s_fmt_vid_cap_mplane	= vidioc_s_fmt_vid_cap_mplane,
+
+	.vidioc_enum_fmt_vid_out	= vidioc_enum_fmt_vid,
+	.vidioc_g_fmt_vid_out		= vidioc_g_fmt_vid_out,
+	.vidioc_try_fmt_vid_out		= vidioc_try_fmt_vid_out,
+	.vidioc_s_fmt_vid_out		= vidioc_s_fmt_vid_out,
+	.vidioc_enum_fmt_vid_out_mplane = vidioc_enum_fmt_vid_mplane,
+	.vidioc_g_fmt_vid_out_mplane	= vidioc_g_fmt_vid_out_mplane,
+	.vidioc_try_fmt_vid_out_mplane	= vidioc_try_fmt_vid_out_mplane,
+	.vidioc_s_fmt_vid_out_mplane	= vidioc_s_fmt_vid_out_mplane,
+
+	.vidioc_g_selection		= vidioc_g_selection,
+	.vidioc_s_selection		= vidioc_s_selection,
+	.vidioc_cropcap			= vidioc_cropcap,
+
+	.vidioc_g_fmt_vbi_cap		= vidioc_g_fmt_vbi_cap,
+	.vidioc_try_fmt_vbi_cap		= vidioc_g_fmt_vbi_cap,
+	.vidioc_s_fmt_vbi_cap		= vidioc_s_fmt_vbi_cap,
+
+	.vidioc_g_fmt_sliced_vbi_cap    = vidioc_g_fmt_sliced_vbi_cap,
+	.vidioc_try_fmt_sliced_vbi_cap  = vidioc_try_fmt_sliced_vbi_cap,
+	.vidioc_s_fmt_sliced_vbi_cap    = vidioc_s_fmt_sliced_vbi_cap,
+	.vidioc_g_sliced_vbi_cap	= vidioc_g_sliced_vbi_cap,
+
+	.vidioc_g_fmt_vbi_out		= vidioc_g_fmt_vbi_out,
+	.vidioc_try_fmt_vbi_out		= vidioc_g_fmt_vbi_out,
+	.vidioc_s_fmt_vbi_out		= vidioc_s_fmt_vbi_out,
+
+	.vidioc_g_fmt_sliced_vbi_out    = vidioc_g_fmt_sliced_vbi_out,
+	.vidioc_try_fmt_sliced_vbi_out  = vidioc_try_fmt_sliced_vbi_out,
+	.vidioc_s_fmt_sliced_vbi_out    = vidioc_s_fmt_sliced_vbi_out,
+
+	.vidioc_enum_fmt_sdr_cap	= vidioc_enum_fmt_sdr_cap,
+	.vidioc_g_fmt_sdr_cap		= vidioc_g_fmt_sdr_cap,
+	.vidioc_try_fmt_sdr_cap		= vidioc_g_fmt_sdr_cap,
+	.vidioc_s_fmt_sdr_cap		= vidioc_g_fmt_sdr_cap,
+
+	.vidioc_enum_framesizes		= vidioc_enum_framesizes,
+	.vidioc_enum_frameintervals	= vidioc_enum_frameintervals,
+	.vidioc_g_parm			= vidioc_g_parm,
+	.vidioc_s_parm			= vidioc_s_parm,
+
+	.vidioc_enum_fmt_vid_overlay	= vidioc_enum_fmt_vid_overlay,
+	.vidioc_g_fmt_vid_overlay	= vidioc_g_fmt_vid_overlay,
+	.vidioc_try_fmt_vid_overlay	= vidioc_try_fmt_vid_overlay,
+	.vidioc_s_fmt_vid_overlay	= vidioc_s_fmt_vid_overlay,
+	.vidioc_g_fmt_vid_out_overlay	= vidioc_g_fmt_vid_out_overlay,
+	.vidioc_try_fmt_vid_out_overlay	= vidioc_try_fmt_vid_out_overlay,
+	.vidioc_s_fmt_vid_out_overlay	= vidioc_s_fmt_vid_out_overlay,
+	.vidioc_overlay			= vidioc_overlay,
+	.vidioc_g_fbuf			= vidioc_g_fbuf,
+	.vidioc_s_fbuf			= vidioc_s_fbuf,
+
+	.vidioc_reqbufs			= vb2_ioctl_reqbufs,
+	.vidioc_create_bufs		= vb2_ioctl_create_bufs,
+	.vidioc_prepare_buf		= vb2_ioctl_prepare_buf,
+	.vidioc_querybuf		= vb2_ioctl_querybuf,
+	.vidioc_qbuf			= vb2_ioctl_qbuf,
+	.vidioc_dqbuf			= vb2_ioctl_dqbuf,
+/* Not yet	.vidioc_expbuf		= vb2_ioctl_expbuf,*/
+	.vidioc_streamon		= vb2_ioctl_streamon,
+	.vidioc_streamoff		= vb2_ioctl_streamoff,
+
+	.vidioc_enum_input		= vidioc_enum_input,
+	.vidioc_g_input			= vidioc_g_input,
+	.vidioc_s_input			= vidioc_s_input,
+	.vidioc_s_audio			= vidioc_s_audio,
+	.vidioc_g_audio			= vidioc_g_audio,
+	.vidioc_enumaudio		= vidioc_enumaudio,
+	.vidioc_s_frequency		= vidioc_s_frequency,
+	.vidioc_g_frequency		= vidioc_g_frequency,
+	.vidioc_s_tuner			= vidioc_s_tuner,
+	.vidioc_g_tuner			= vidioc_g_tuner,
+	.vidioc_s_modulator		= vidioc_s_modulator,
+	.vidioc_g_modulator		= vidioc_g_modulator,
+	.vidioc_s_hw_freq_seek		= vidioc_s_hw_freq_seek,
+	.vidioc_enum_freq_bands		= vidioc_enum_freq_bands,
+
+	.vidioc_enum_output		= vidioc_enum_output,
+	.vidioc_g_output		= vidioc_g_output,
+	.vidioc_s_output		= vidioc_s_output,
+	.vidioc_s_audout		= vidioc_s_audout,
+	.vidioc_g_audout		= vidioc_g_audout,
+	.vidioc_enumaudout		= vidioc_enumaudout,
+
+	.vidioc_querystd		= vidioc_querystd,
+	.vidioc_g_std			= vidioc_g_std,
+	.vidioc_s_std			= vidioc_s_std,
+	.vidioc_s_dv_timings		= vidioc_s_dv_timings,
+	.vidioc_g_dv_timings		= vidioc_g_dv_timings,
+	.vidioc_query_dv_timings	= vidioc_query_dv_timings,
+	.vidioc_enum_dv_timings		= vidioc_enum_dv_timings,
+	.vidioc_dv_timings_cap		= vidioc_dv_timings_cap,
+	.vidioc_g_edid			= vidioc_g_edid,
+	.vidioc_s_edid			= vidioc_s_edid,
+
+	.vidioc_log_status		= v4l2_ctrl_log_status,
+	.vidioc_subscribe_event		= vidioc_subscribe_event,
+	.vidioc_unsubscribe_event	= v4l2_event_unsubscribe,
+};
+
+/* -----------------------------------------------------------------
+	Initialization and module stuff
+   ------------------------------------------------------------------*/
+
+static int __init vivid_create_instance(int inst)
+{
+	static const struct v4l2_dv_timings def_dv_timings =
+					V4L2_DV_BT_CEA_1280X720P60;
+	unsigned in_type_counter[4] = { 0, 0, 0, 0 };
+	unsigned out_type_counter[4] = { 0, 0, 0, 0 };
+	int ccs_cap = ccs_cap_mode[inst];
+	int ccs_out = ccs_out_mode[inst];
+	bool has_tuner;
+	bool has_modulator;
+	struct vivid_dev *dev;
+	struct video_device *vfd;
+	struct vb2_queue *q;
+	unsigned node_type = node_types[inst];
+	v4l2_std_id tvnorms_cap = 0, tvnorms_out = 0;
+	int ret;
+	int i;
+
+	/* allocate main vivid state structure */
+	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
+
+	dev->inst = inst;
+
+	/* register v4l2_device */
+	snprintf(dev->v4l2_dev.name, sizeof(dev->v4l2_dev.name),
+			"%s-%03d", VIVID_MODULE_NAME, inst);
+	ret = v4l2_device_register(NULL, &dev->v4l2_dev);
+	if (ret)
+		goto free_dev;
+
+	/* start detecting feature set */
+
+	/* do we use single- or multi-planar? */
+	if (multiplanar[inst] == 0)
+		dev->multiplanar = inst & 1;
+	else
+		dev->multiplanar = multiplanar[inst] > 1;
+	v4l2_info(&dev->v4l2_dev, "using %splanar format API\n",
+			dev->multiplanar ? "multi" : "single ");
+
+	/* how many inputs do we have and of what type? */
+	dev->num_inputs = num_inputs[inst];
+	if (dev->num_inputs < 1)
+		dev->num_inputs = 1;
+	if (dev->num_inputs >= MAX_INPUTS)
+		dev->num_inputs = MAX_INPUTS;
+	for (i = 0; i < dev->num_inputs; i++) {
+		dev->input_type[i] = (input_types[inst] >> (i * 2)) & 0x3;
+		dev->input_name_counter[i] = in_type_counter[dev->input_type[i]]++;
+	}
+	dev->has_audio_inputs = in_type_counter[TV] && in_type_counter[SVID];
+
+	/* how many outputs do we have and of what type? */
+	dev->num_outputs = num_outputs[inst];
+	if (dev->num_outputs < 1)
+		dev->num_outputs = 1;
+	if (dev->num_outputs >= MAX_OUTPUTS)
+		dev->num_outputs = MAX_OUTPUTS;
+	for (i = 0; i < dev->num_outputs; i++) {
+		dev->output_type[i] = ((output_types[inst] >> i) & 1) ? HDMI : SVID;
+		dev->output_name_counter[i] = out_type_counter[dev->output_type[i]]++;
+	}
+	dev->has_audio_outputs = out_type_counter[SVID];
+
+	/* do we create a video capture device? */
+	dev->has_vid_cap = node_type & 0x0001;
+
+	/* do we create a vbi capture device? */
+	if (in_type_counter[TV] || in_type_counter[SVID]) {
+		dev->has_raw_vbi_cap = node_type & 0x0004;
+		dev->has_sliced_vbi_cap = node_type & 0x0008;
+		dev->has_vbi_cap = dev->has_raw_vbi_cap | dev->has_sliced_vbi_cap;
+	}
+
+	/* do we create a video output device? */
+	dev->has_vid_out = node_type & 0x0100;
+
+	/* do we create a vbi output device? */
+	if (out_type_counter[SVID]) {
+		dev->has_raw_vbi_out = node_type & 0x0400;
+		dev->has_sliced_vbi_out = node_type & 0x0800;
+		dev->has_vbi_out = dev->has_raw_vbi_out | dev->has_sliced_vbi_out;
+	}
+
+	/* do we create a radio receiver device? */
+	dev->has_radio_rx = node_type & 0x0010;
+
+	/* do we create a radio transmitter device? */
+	dev->has_radio_tx = node_type & 0x1000;
+
+	/* do we create a software defined radio capture device? */
+	dev->has_sdr_cap = node_type & 0x0020;
+
+	/* do we have a tuner? */
+	has_tuner = ((dev->has_vid_cap || dev->has_vbi_cap) && in_type_counter[TV]) ||
+		    dev->has_radio_rx || dev->has_sdr_cap;
+
+	/* do we have a modulator? */
+	has_modulator = dev->has_radio_tx;
+
+	if (dev->has_vid_cap)
+		/* do we have a framebuffer for overlay testing? */
+		dev->has_fb = node_type & 0x10000;
+
+	/* can we do crop/compose/scaling while capturing? */
+	if (no_error_inj && ccs_cap == -1)
+		ccs_cap = 7;
+
+	/* if ccs_cap == -1, then the user can select it using controls */
+	if (ccs_cap != -1) {
+		dev->has_crop_cap = ccs_cap & 1;
+		dev->has_compose_cap = ccs_cap & 2;
+		dev->has_scaler_cap = ccs_cap & 4;
+		v4l2_info(&dev->v4l2_dev, "Capture Crop: %c Compose: %c Scaler: %c\n",
+			dev->has_crop_cap ? 'Y' : 'N',
+			dev->has_compose_cap ? 'Y' : 'N',
+			dev->has_scaler_cap ? 'Y' : 'N');
+	}
+
+	/* can we do crop/compose/scaling with video output? */
+	if (no_error_inj && ccs_out == -1)
+		ccs_out = 7;
+
+	/* if ccs_out == -1, then the user can select it using controls */
+	if (ccs_out != -1) {
+		dev->has_crop_out = ccs_out & 1;
+		dev->has_compose_out = ccs_out & 2;
+		dev->has_scaler_out = ccs_out & 4;
+		v4l2_info(&dev->v4l2_dev, "Output Crop: %c Compose: %c Scaler: %c\n",
+			dev->has_crop_out ? 'Y' : 'N',
+			dev->has_compose_out ? 'Y' : 'N',
+			dev->has_scaler_out ? 'Y' : 'N');
+	}
+
+	/* end detecting feature set */
+
+	if (dev->has_vid_cap) {
+		/* set up the capabilities of the video capture device */
+		dev->vid_cap_caps = dev->multiplanar ?
+			V4L2_CAP_VIDEO_CAPTURE_MPLANE :
+			V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OVERLAY;
+		dev->vid_cap_caps |= V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
+		if (dev->has_audio_inputs)
+			dev->vid_cap_caps |= V4L2_CAP_AUDIO;
+		if (in_type_counter[TV])
+			dev->vid_cap_caps |= V4L2_CAP_TUNER;
+	}
+	if (dev->has_vid_out) {
+		/* set up the capabilities of the video output device */
+		dev->vid_out_caps = dev->multiplanar ?
+			V4L2_CAP_VIDEO_OUTPUT_MPLANE :
+			V4L2_CAP_VIDEO_OUTPUT;
+		if (dev->has_fb)
+			dev->vid_out_caps |= V4L2_CAP_VIDEO_OUTPUT_OVERLAY;
+		dev->vid_out_caps |= V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
+		if (dev->has_audio_outputs)
+			dev->vid_out_caps |= V4L2_CAP_AUDIO;
+	}
+	if (dev->has_vbi_cap) {
+		/* set up the capabilities of the vbi capture device */
+		dev->vbi_cap_caps = (dev->has_raw_vbi_cap ? V4L2_CAP_VBI_CAPTURE : 0) |
+				    (dev->has_sliced_vbi_cap ? V4L2_CAP_SLICED_VBI_CAPTURE : 0);
+		dev->vbi_cap_caps |= V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
+		if (dev->has_audio_inputs)
+			dev->vbi_cap_caps |= V4L2_CAP_AUDIO;
+		if (in_type_counter[TV])
+			dev->vbi_cap_caps |= V4L2_CAP_TUNER;
+	}
+	if (dev->has_vbi_out) {
+		/* set up the capabilities of the vbi output device */
+		dev->vbi_out_caps = (dev->has_raw_vbi_out ? V4L2_CAP_VBI_OUTPUT : 0) |
+				    (dev->has_sliced_vbi_out ? V4L2_CAP_SLICED_VBI_OUTPUT : 0);
+		dev->vbi_out_caps |= V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
+		if (dev->has_audio_outputs)
+			dev->vbi_out_caps |= V4L2_CAP_AUDIO;
+	}
+	if (dev->has_sdr_cap) {
+		/* set up the capabilities of the sdr capture device */
+		dev->sdr_cap_caps = V4L2_CAP_SDR_CAPTURE | V4L2_CAP_TUNER;
+		dev->sdr_cap_caps |= V4L2_CAP_STREAMING | V4L2_CAP_READWRITE;
+	}
+	/* set up the capabilities of the radio receiver device */
+	if (dev->has_radio_rx)
+		dev->radio_rx_caps = V4L2_CAP_RADIO | V4L2_CAP_RDS_CAPTURE |
+				     V4L2_CAP_HW_FREQ_SEEK | V4L2_CAP_TUNER |
+				     V4L2_CAP_READWRITE;
+	/* set up the capabilities of the radio transmitter device */
+	if (dev->has_radio_tx)
+		dev->radio_tx_caps = V4L2_CAP_RDS_OUTPUT | V4L2_CAP_MODULATOR |
+				     V4L2_CAP_READWRITE;
+
+	/* initialize the test pattern generator */
+	tpg_init(&dev->tpg, 640, 360);
+	ret = -ENOMEM;	/* for any of the allocation failures below */
+	if (tpg_alloc(&dev->tpg, MAX_ZOOM * MAX_WIDTH))
+		goto free_dev;
+	dev->scaled_line = vzalloc(MAX_ZOOM * MAX_WIDTH);
+	if (!dev->scaled_line)
+		goto free_dev;
+	dev->blended_line = vzalloc(MAX_ZOOM * MAX_WIDTH);
+	if (!dev->blended_line)
+		goto free_dev;
+
+	/* load the edid */
+	dev->edid = vmalloc(256 * 128);
+	if (!dev->edid)
+		goto free_dev;
+
+	/* create a string array containing the names of all the preset timings */
+	while (v4l2_dv_timings_presets[dev->query_dv_timings_size].bt.width)
+		dev->query_dv_timings_size++;
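+	/*
+	 * Single allocation: an array of query_dv_timings_size pointers
+	 * followed by one 32 byte string per preset; the loop below fills in
+	 * the pointers and formats the strings.
+	 */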
+	dev->query_dv_timings_qmenu = kmalloc(dev->query_dv_timings_size *
+					   (sizeof(void *) + 32), GFP_KERNEL);
+	if (dev->query_dv_timings_qmenu == NULL)
+		goto free_dev;
+	for (i = 0; i < dev->query_dv_timings_size; i++) {
+		const struct v4l2_bt_timings *bt = &v4l2_dv_timings_presets[i].bt;
+		char *p = (char *)&dev->query_dv_timings_qmenu[dev->query_dv_timings_size];
+		u32 htot, vtot;
+
+		p += i * 32;
+		dev->query_dv_timings_qmenu[i] = p;
+
+		htot = V4L2_DV_BT_FRAME_WIDTH(bt);
+		vtot = V4L2_DV_BT_FRAME_HEIGHT(bt);
+		snprintf(p, 32, "%ux%u%s%u",
+			bt->width, bt->height, bt->interlaced ? "i" : "p",
+			(u32)bt->pixelclock / (htot * vtot));
+	}
+
+	/* disable invalid ioctls based on the feature set */
+	if (!dev->has_audio_inputs) {
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_AUDIO);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_AUDIO);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_ENUMAUDIO);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_S_AUDIO);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_G_AUDIO);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_ENUMAUDIO);
+	}
+	if (!dev->has_audio_outputs) {
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_S_AUDOUT);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_AUDOUT);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_ENUMAUDOUT);
+		v4l2_disable_ioctl(&dev->vbi_out_dev, VIDIOC_S_AUDOUT);
+		v4l2_disable_ioctl(&dev->vbi_out_dev, VIDIOC_G_AUDOUT);
+		v4l2_disable_ioctl(&dev->vbi_out_dev, VIDIOC_ENUMAUDOUT);
+	}
+	if (!in_type_counter[TV] && !in_type_counter[SVID]) {
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_STD);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_STD);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_ENUMSTD);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_QUERYSTD);
+	}
+	if (!out_type_counter[SVID]) {
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_S_STD);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_STD);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_ENUMSTD);
+	}
+	if (!has_tuner && !has_modulator) {
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_FREQUENCY);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_FREQUENCY);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_S_FREQUENCY);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_G_FREQUENCY);
+	}
+	if (!has_tuner) {
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_TUNER);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_TUNER);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_S_TUNER);
+		v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_G_TUNER);
+	}
+	if (in_type_counter[HDMI] == 0) {
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_EDID);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_EDID);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_DV_TIMINGS_CAP);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_G_DV_TIMINGS);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_DV_TIMINGS);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_ENUM_DV_TIMINGS);
+		v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_QUERY_DV_TIMINGS);
+	}
+	if (out_type_counter[HDMI] == 0) {
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_EDID);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_DV_TIMINGS_CAP);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_DV_TIMINGS);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_S_DV_TIMINGS);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_ENUM_DV_TIMINGS);
+	}
+	if (!dev->has_fb) {
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_FBUF);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_S_FBUF);
+		v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_OVERLAY);
+	}
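+	/*
+	 * The ioctls below never apply to these nodes: hardware frequency
+	 * seek only exists for the radio receiver, and the video/vbi output
+	 * nodes have no tuner and no frame size/interval enumeration.
+	 */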
+	v4l2_disable_ioctl(&dev->vid_cap_dev, VIDIOC_S_HW_FREQ_SEEK);
+	v4l2_disable_ioctl(&dev->vbi_cap_dev, VIDIOC_S_HW_FREQ_SEEK);
+	v4l2_disable_ioctl(&dev->sdr_cap_dev, VIDIOC_S_HW_FREQ_SEEK);
+	v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_S_FREQUENCY);
+	v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_G_FREQUENCY);
+	v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_ENUM_FRAMESIZES);
+	v4l2_disable_ioctl(&dev->vid_out_dev, VIDIOC_ENUM_FRAMEINTERVALS);
+	v4l2_disable_ioctl(&dev->vbi_out_dev, VIDIOC_S_FREQUENCY);
+	v4l2_disable_ioctl(&dev->vbi_out_dev, VIDIOC_G_FREQUENCY);
+
+	/* configure internal data */
+	dev->fmt_cap = &vivid_formats[0];
+	dev->fmt_out = &vivid_formats[0];
+	if (!dev->multiplanar)
+		vivid_formats[0].data_offset[0] = 0;
+	dev->webcam_size_idx = 1;
+	dev->webcam_ival_idx = 3;
+	tpg_s_fourcc(&dev->tpg, dev->fmt_cap->fourcc);
+	dev->std_cap = V4L2_STD_PAL;
+	dev->std_out = V4L2_STD_PAL;
+	if (dev->input_type[0] == TV || dev->input_type[0] == SVID)
+		tvnorms_cap = V4L2_STD_ALL;
+	if (dev->output_type[0] == SVID)
+		tvnorms_out = V4L2_STD_ALL;
+	dev->dv_timings_cap = def_dv_timings;
+	dev->dv_timings_out = def_dv_timings;
+	dev->tv_freq = 2804 /* 175.25 * 16 */;
+	dev->tv_audmode = V4L2_TUNER_MODE_STEREO;
+	dev->tv_field_cap = V4L2_FIELD_INTERLACED;
+	dev->tv_field_out = V4L2_FIELD_INTERLACED;
+	dev->radio_rx_freq = 95000 * 16;
+	dev->radio_rx_audmode = V4L2_TUNER_MODE_STEREO;
+	if (dev->has_radio_tx) {
+		dev->radio_tx_freq = 95500 * 16;
+		dev->radio_rds_loop = false;
+	}
+	dev->radio_tx_subchans = V4L2_TUNER_SUB_STEREO | V4L2_TUNER_SUB_RDS;
+	dev->sdr_adc_freq = 300000;
+	dev->sdr_fm_freq = 50000000;
+	dev->edid_max_blocks = dev->edid_blocks = 2;
+	memcpy(dev->edid, vivid_hdmi_edid, sizeof(vivid_hdmi_edid));
+	ktime_get_ts(&dev->radio_rds_init_ts);
+
+	/* create all controls */
+	ret = vivid_create_controls(dev, ccs_cap == -1, ccs_out == -1, no_error_inj,
+			in_type_counter[TV] || in_type_counter[SVID] ||
+			out_type_counter[SVID],
+			in_type_counter[HDMI] || out_type_counter[HDMI]);
+	if (ret)
+		goto unreg_dev;
+
+	/*
+	 * update the capture and output formats to do a proper initial
+	 * configuration.
+	 */
+	vivid_update_format_cap(dev, false);
+	vivid_update_format_out(dev);
+
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_vid_cap);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_vid_out);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_vbi_cap);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_vbi_out);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_radio_rx);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_radio_tx);
+	v4l2_ctrl_handler_setup(&dev->ctrl_hdl_sdr_cap);
+
+	/* initialize overlay */
+	dev->fb_cap.fmt.width = dev->src_rect.width;
+	dev->fb_cap.fmt.height = dev->src_rect.height;
+	dev->fb_cap.fmt.pixelformat = dev->fmt_cap->fourcc;
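+	/* tpg_g_twopixelsize() returns the size of two horizontal pixels, hence the divide by 2 */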
+	dev->fb_cap.fmt.bytesperline = dev->src_rect.width * tpg_g_twopixelsize(&dev->tpg, 0) / 2;
+	dev->fb_cap.fmt.sizeimage = dev->src_rect.height * dev->fb_cap.fmt.bytesperline;
+
+	/* initialize locks */
+	spin_lock_init(&dev->slock);
+	mutex_init(&dev->mutex);
+
+	/* init dma queues */
+	INIT_LIST_HEAD(&dev->vid_cap_active);
+	INIT_LIST_HEAD(&dev->vid_out_active);
+	INIT_LIST_HEAD(&dev->vbi_cap_active);
+	INIT_LIST_HEAD(&dev->vbi_out_active);
+	INIT_LIST_HEAD(&dev->sdr_cap_active);
+
+	/* start creating the vb2 queues */
+	if (dev->has_vid_cap) {
+		/* initialize vid_cap queue */
+		q = &dev->vb_vid_cap_q;
+		q->type = dev->multiplanar ? V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE :
+			V4L2_BUF_TYPE_VIDEO_CAPTURE;
+		q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_READ;
+		q->drv_priv = dev;
+		q->buf_struct_size = sizeof(struct vivid_buffer);
+		q->ops = &vivid_vid_cap_qops;
+		q->mem_ops = &vb2_vmalloc_memops;
+		q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+		q->min_buffers_needed = 2;
+
+		ret = vb2_queue_init(q);
+		if (ret)
+			goto unreg_dev;
+	}
+
+	if (dev->has_vid_out) {
+		/* initialize vid_out queue */
+		q = &dev->vb_vid_out_q;
+		q->type = dev->multiplanar ? V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE :
+			V4L2_BUF_TYPE_VIDEO_OUTPUT;
+		q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_WRITE;
+		q->drv_priv = dev;
+		q->buf_struct_size = sizeof(struct vivid_buffer);
+		q->ops = &vivid_vid_out_qops;
+		q->mem_ops = &vb2_vmalloc_memops;
+		q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+		q->min_buffers_needed = 2;
+
+		ret = vb2_queue_init(q);
+		if (ret)
+			goto unreg_dev;
+	}
+
+	if (dev->has_vbi_cap) {
+		/* initialize vbi_cap queue */
+		q = &dev->vb_vbi_cap_q;
+		q->type = dev->has_raw_vbi_cap ? V4L2_BUF_TYPE_VBI_CAPTURE :
+					      V4L2_BUF_TYPE_SLICED_VBI_CAPTURE;
+		q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_READ;
+		q->drv_priv = dev;
+		q->buf_struct_size = sizeof(struct vivid_buffer);
+		q->ops = &vivid_vbi_cap_qops;
+		q->mem_ops = &vb2_vmalloc_memops;
+		q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+		q->min_buffers_needed = 2;
+
+		ret = vb2_queue_init(q);
+		if (ret)
+			goto unreg_dev;
+	}
+
+	if (dev->has_vbi_out) {
+		/* initialize vbi_out queue */
+		q = &dev->vb_vbi_out_q;
+		q->type = dev->has_raw_vbi_out ? V4L2_BUF_TYPE_VBI_OUTPUT :
+					      V4L2_BUF_TYPE_SLICED_VBI_OUTPUT;
+		q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_WRITE;
+		q->drv_priv = dev;
+		q->buf_struct_size = sizeof(struct vivid_buffer);
+		q->ops = &vivid_vbi_out_qops;
+		q->mem_ops = &vb2_vmalloc_memops;
+		q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+		q->min_buffers_needed = 2;
+
+		ret = vb2_queue_init(q);
+		if (ret)
+			goto unreg_dev;
+	}
+
+	if (dev->has_sdr_cap) {
+		/* initialize sdr_cap queue */
+		q = &dev->vb_sdr_cap_q;
+		q->type = V4L2_BUF_TYPE_SDR_CAPTURE;
+		q->io_modes = VB2_MMAP | VB2_USERPTR | VB2_DMABUF | VB2_READ;
+		q->drv_priv = dev;
+		q->buf_struct_size = sizeof(struct vivid_buffer);
+		q->ops = &vivid_sdr_cap_qops;
+		q->mem_ops = &vb2_vmalloc_memops;
+		q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+		q->min_buffers_needed = 8;
+
+		ret = vb2_queue_init(q);
+		if (ret)
+			goto unreg_dev;
+	}
+
+	if (dev->has_fb) {
+		/* Create framebuffer for testing capture/output overlay */
+		ret = vivid_fb_init(dev);
+		if (ret)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "Framebuffer device registered as fb%d\n",
+				dev->fb_info.node);
+	}
+
+	/* finally start creating the device nodes */
+	if (dev->has_vid_cap) {
+		vfd = &dev->vid_cap_dev;
+		strlcpy(vfd->name, "vivid-vid-cap", sizeof(vfd->name));
+		vfd->fops = &vivid_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->queue = &dev->vb_vid_cap_q;
+		vfd->tvnorms = tvnorms_cap;
+
+		/*
+		 * Provide a mutex to v4l2 core. It will be used to protect
+		 * all fops and v4l2 ioctls.
+		 */
+		vfd->lock = &dev->mutex;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_GRABBER, vid_cap_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 capture device registered as %s\n",
+					  video_device_node_name(vfd));
+	}
+
+	if (dev->has_vid_out) {
+		vfd = &dev->vid_out_dev;
+		strlcpy(vfd->name, "vivid-vid-out", sizeof(vfd->name));
+		vfd->vfl_dir = VFL_DIR_TX;
+		vfd->fops = &vivid_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->queue = &dev->vb_vid_out_q;
+		vfd->tvnorms = tvnorms_out;
+
+		/*
+		 * Provide a mutex to v4l2 core. It will be used to protect
+		 * all fops and v4l2 ioctls.
+		 */
+		vfd->lock = &dev->mutex;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_GRABBER, vid_out_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 output device registered as %s\n",
+					  video_device_node_name(vfd));
+	}
+
+	if (dev->has_vbi_cap) {
+		vfd = &dev->vbi_cap_dev;
+		strlcpy(vfd->name, "vivid-vbi-cap", sizeof(vfd->name));
+		vfd->fops = &vivid_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->queue = &dev->vb_vbi_cap_q;
+		vfd->lock = &dev->mutex;
+		vfd->tvnorms = tvnorms_cap;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_VBI, vbi_cap_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 capture device registered as %s, supports %s VBI\n",
+					  video_device_node_name(vfd),
+					  (dev->has_raw_vbi_cap && dev->has_sliced_vbi_cap) ?
+					  "raw and sliced" :
+					  (dev->has_raw_vbi_cap ? "raw" : "sliced"));
+	}
+
+	if (dev->has_vbi_out) {
+		vfd = &dev->vbi_out_dev;
+		strlcpy(vfd->name, "vivid-vbi-out", sizeof(vfd->name));
+		vfd->vfl_dir = VFL_DIR_TX;
+		vfd->fops = &vivid_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->queue = &dev->vb_vbi_out_q;
+		vfd->lock = &dev->mutex;
+		vfd->tvnorms = tvnorms_out;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_VBI, vbi_out_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 output device registered as %s, supports %s VBI\n",
+					  video_device_node_name(vfd),
+					  (dev->has_raw_vbi_out && dev->has_sliced_vbi_out) ?
+					  "raw and sliced" :
+					  (dev->has_raw_vbi_out ? "raw" : "sliced"));
+	}
+
+	if (dev->has_sdr_cap) {
+		vfd = &dev->sdr_cap_dev;
+		strlcpy(vfd->name, "vivid-sdr-cap", sizeof(vfd->name));
+		vfd->fops = &vivid_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->queue = &dev->vb_sdr_cap_q;
+		vfd->lock = &dev->mutex;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_SDR, sdr_cap_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 capture device registered as %s\n",
+					  video_device_node_name(vfd));
+	}
+
+	if (dev->has_radio_rx) {
+		vfd = &dev->radio_rx_dev;
+		strlcpy(vfd->name, "vivid-rad-rx", sizeof(vfd->name));
+		vfd->fops = &vivid_radio_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->lock = &dev->mutex;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_RADIO, radio_rx_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 receiver device registered as %s\n",
+					  video_device_node_name(vfd));
+	}
+
+	if (dev->has_radio_tx) {
+		vfd = &dev->radio_tx_dev;
+		strlcpy(vfd->name, "vivid-rad-tx", sizeof(vfd->name));
+		vfd->vfl_dir = VFL_DIR_TX;
+		vfd->fops = &vivid_radio_fops;
+		vfd->ioctl_ops = &vivid_ioctl_ops;
+		vfd->release = video_device_release_empty;
+		vfd->v4l2_dev = &dev->v4l2_dev;
+		vfd->lock = &dev->mutex;
+		video_set_drvdata(vfd, dev);
+
+		ret = video_register_device(vfd, VFL_TYPE_RADIO, radio_tx_nr[inst]);
+		if (ret < 0)
+			goto unreg_dev;
+		v4l2_info(&dev->v4l2_dev, "V4L2 transmitter device registered as %s\n",
+					  video_device_node_name(vfd));
+	}
+
+	/* Now that everything is fine, let's add it to device list */
+	vivid_devs[inst] = dev;
+
+	return 0;
+
+unreg_dev:
+	video_unregister_device(&dev->radio_tx_dev);
+	video_unregister_device(&dev->radio_rx_dev);
+	video_unregister_device(&dev->sdr_cap_dev);
+	video_unregister_device(&dev->vbi_out_dev);
+	video_unregister_device(&dev->vbi_cap_dev);
+	video_unregister_device(&dev->vid_out_dev);
+	video_unregister_device(&dev->vid_cap_dev);
+	vivid_free_controls(dev);
+	v4l2_device_unregister(&dev->v4l2_dev);
+free_dev:
+	vfree(dev->scaled_line);
+	vfree(dev->blended_line);
+	vfree(dev->edid);
+	tpg_free(&dev->tpg);
+	kfree(dev->query_dv_timings_qmenu);
+	kfree(dev);
+	return ret;
+}
+
+/* This routine allocates from 1 to n_devs virtual drivers.
+
+   The real maximum number of virtual drivers depends on how many instances
+   are created successfully. This is limited to the maximum number of devices
+   that videodev supports, which is equal to VIDEO_NUM_DEVICES.
+ */
+static int __init vivid_init(void)
+{
+	const struct font_desc *font = find_font("VGA8x16");
+	int ret = 0, i;
+
+	if (font == NULL) {
+		pr_err("vivid: could not find font\n");
+		return -ENODEV;
+	}
+
+	tpg_set_font(font->data);
+
+	n_devs = clamp_t(unsigned, n_devs, 1, VIVID_MAX_DEVS);
+
+	for (i = 0; i < n_devs; i++) {
+		ret = vivid_create_instance(i);
+		if (ret) {
+			/* If some instantiations succeeded, keep driver */
+			if (i)
+				ret = 0;
+			break;
+		}
+	}
+
+	if (ret < 0) {
+		pr_err("vivid: error %d while loading driver\n", ret);
+		return ret;
+	}
+
+	/* n_devs will reflect the actual number of allocated devices */
+	n_devs = i;
+
+	return ret;
+}
+
+static void __exit vivid_exit(void)
+{
+	struct vivid_dev *dev;
+	unsigned i;
+
+	for (i = 0; vivid_devs[i]; i++) {
+		dev = vivid_devs[i];
+
+		if (dev->has_vid_cap) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->vid_cap_dev));
+			video_unregister_device(&dev->vid_cap_dev);
+		}
+		if (dev->has_vid_out) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->vid_out_dev));
+			video_unregister_device(&dev->vid_out_dev);
+		}
+		if (dev->has_vbi_cap) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->vbi_cap_dev));
+			video_unregister_device(&dev->vbi_cap_dev);
+		}
+		if (dev->has_vbi_out) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->vbi_out_dev));
+			video_unregister_device(&dev->vbi_out_dev);
+		}
+		if (dev->has_sdr_cap) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->sdr_cap_dev));
+			video_unregister_device(&dev->sdr_cap_dev);
+		}
+		if (dev->has_radio_rx) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->radio_rx_dev));
+			video_unregister_device(&dev->radio_rx_dev);
+		}
+		if (dev->has_radio_tx) {
+			v4l2_info(&dev->v4l2_dev, "unregistering %s\n",
+				video_device_node_name(&dev->radio_tx_dev));
+			video_unregister_device(&dev->radio_tx_dev);
+		}
+		if (dev->has_fb) {
+			v4l2_info(&dev->v4l2_dev, "unregistering fb%d\n",
+				dev->fb_info.node);
+			unregister_framebuffer(&dev->fb_info);
+			vivid_fb_release_buffers(dev);
+		}
+		v4l2_device_unregister(&dev->v4l2_dev);
+		vivid_free_controls(dev);
+		vfree(dev->scaled_line);
+		vfree(dev->blended_line);
+		vfree(dev->edid);
+		vfree(dev->bitmap_cap);
+		vfree(dev->bitmap_out);
+		tpg_free(&dev->tpg);
+		kfree(dev->query_dv_timings_qmenu);
+		kfree(dev);
+		vivid_devs[i] = NULL;
+	}
+}
+
+module_init(vivid_init);
+module_exit(vivid_exit);
diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
new file mode 100644
index 0000000..811c286
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-core.h
@@ -0,0 +1,520 @@
+/*
+ * vivid-core.h - core datastructures
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_CORE_H_
+#define _VIVID_CORE_H_
+
+#include <linux/fb.h>
+#include <media/videobuf2-core.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-dev.h>
+#include <media/v4l2-ctrls.h>
+#include "vivid-tpg.h"
+#include "vivid-rds-gen.h"
+#include "vivid-vbi-gen.h"
+
+#define dprintk(dev, level, fmt, arg...) \
+	v4l2_dbg(level, vivid_debug, &dev->v4l2_dev, fmt, ## arg)
+
+/* Maximum allowed frame rate
+ *
+ * vivid allows setting timeperframe in the range [1/FPS_MAX .. FPS_MAX/1] seconds.
+ *
+ * Ideally FPS_MAX would be infinity, i.e. practically UINT_MAX, but that
+ * might trigger errors in applications that manipulate these values.
+ *
+ * Besides, for tpf < 10ms the image-generation logic would have to change
+ * to avoid producing frames with identical content.
+ */
+#define FPS_MAX 100
+
+/* The maximum number of clip rectangles */
+#define MAX_CLIPS  16
+/* The maximum number of inputs */
+#define MAX_INPUTS 16
+/* The maximum number of outputs */
+#define MAX_OUTPUTS 16
+/* The maximum up or down scaling factor is 4 */
+#define MAX_ZOOM  4
+/* The maximum image width/height are set to 4K DMT */
+#define MAX_WIDTH  4096
+#define MAX_HEIGHT 2160
+/* The minimum image width/height */
+#define MIN_WIDTH  16
+#define MIN_HEIGHT 16
+/* The data_offset of plane 0 for the multiplanar formats */
+#define PLANE0_DATA_OFFSET 128
+
+/* The supported TV frequency range: 44-958 MHz, in V4L2 tuner units of 62.5 kHz */
+#define MIN_TV_FREQ (44U * 16U)
+#define MAX_TV_FREQ (958U * 16U)
+
+/* The number of samples returned in every SDR buffer */
+#define SDR_CAP_SAMPLES_PER_BUF 0x4000
+
+/* used by the threads to know when to resync internal counters */
+#define JIFFIES_PER_DAY (3600U * 24U * HZ)
+#define JIFFIES_RESYNC (JIFFIES_PER_DAY * (0xf0000000U / JIFFIES_PER_DAY))
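+/* JIFFIES_RESYNC is the largest multiple of JIFFIES_PER_DAY that does not exceed 0xf0000000 jiffies */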
+
+extern const struct v4l2_rect vivid_min_rect;
+extern const struct v4l2_rect vivid_max_rect;
+extern unsigned vivid_debug;
+
+struct vivid_fmt {
+	const char *name;
+	u32	fourcc;          /* v4l2 format id */
+	u8	depth;
+	bool	is_yuv;
+	bool	can_do_overlay;
+	u32	alpha_mask;
+	u8	planes;
+	u32	data_offset[2];
+};
+
+extern struct vivid_fmt vivid_formats[];
+
+/* buffer for one video frame */
+struct vivid_buffer {
+	/* common v4l buffer stuff -- must be first */
+	struct vb2_buffer	vb;
+	struct list_head	list;
+};
+
+enum vivid_input {
+	WEBCAM,
+	TV,
+	SVID,
+	HDMI,
+};
+
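+/*
+ * The *_STD aliases share the values of their *_DV_TIMINGS counterparts so
+ * that the same enum can be used for both the SDTV and the DV timings signal
+ * mode controls.
+ */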
+enum vivid_signal_mode {
+	CURRENT_DV_TIMINGS,
+	CURRENT_STD = CURRENT_DV_TIMINGS,
+	NO_SIGNAL,
+	NO_LOCK,
+	OUT_OF_RANGE,
+	SELECTED_DV_TIMINGS,
+	SELECTED_STD = SELECTED_DV_TIMINGS,
+	CYCLE_DV_TIMINGS,
+	CYCLE_STD = CYCLE_DV_TIMINGS,
+	CUSTOM_DV_TIMINGS,
+};
+
+#define VIVID_INVALID_SIGNAL(mode) \
+	((mode) == NO_SIGNAL || (mode) == NO_LOCK || (mode) == OUT_OF_RANGE)
+
+struct vivid_dev {
+	unsigned			inst;
+	struct v4l2_device		v4l2_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_user_gen;
+	struct v4l2_ctrl_handler	ctrl_hdl_user_vid;
+	struct v4l2_ctrl_handler	ctrl_hdl_user_aud;
+	struct v4l2_ctrl_handler	ctrl_hdl_streaming;
+	struct v4l2_ctrl_handler	ctrl_hdl_sdtv_cap;
+	struct v4l2_ctrl_handler	ctrl_hdl_loop_out;
+	struct video_device		vid_cap_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_vid_cap;
+	struct video_device		vid_out_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_vid_out;
+	struct video_device		vbi_cap_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_vbi_cap;
+	struct video_device		vbi_out_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_vbi_out;
+	struct video_device		radio_rx_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_radio_rx;
+	struct video_device		radio_tx_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_radio_tx;
+	struct video_device		sdr_cap_dev;
+	struct v4l2_ctrl_handler	ctrl_hdl_sdr_cap;
+	spinlock_t			slock;
+	struct mutex			mutex;
+
+	/* capabilities */
+	u32				vid_cap_caps;
+	u32				vid_out_caps;
+	u32				vbi_cap_caps;
+	u32				vbi_out_caps;
+	u32				sdr_cap_caps;
+	u32				radio_rx_caps;
+	u32				radio_tx_caps;
+
+	/* supported features */
+	bool				multiplanar;
+	unsigned			num_inputs;
+	u8				input_type[MAX_INPUTS];
+	u8				input_name_counter[MAX_INPUTS];
+	unsigned			num_outputs;
+	u8				output_type[MAX_OUTPUTS];
+	u8				output_name_counter[MAX_OUTPUTS];
+	bool				has_audio_inputs;
+	bool				has_audio_outputs;
+	bool				has_vid_cap;
+	bool				has_vid_out;
+	bool				has_vbi_cap;
+	bool				has_raw_vbi_cap;
+	bool				has_sliced_vbi_cap;
+	bool				has_vbi_out;
+	bool				has_raw_vbi_out;
+	bool				has_sliced_vbi_out;
+	bool				has_radio_rx;
+	bool				has_radio_tx;
+	bool				has_sdr_cap;
+	bool				has_fb;
+
+	bool				can_loop_video;
+
+	/* controls */
+	struct v4l2_ctrl		*brightness;
+	struct v4l2_ctrl		*contrast;
+	struct v4l2_ctrl		*saturation;
+	struct v4l2_ctrl		*hue;
+	struct {
+		/* autogain/gain cluster */
+		struct v4l2_ctrl	*autogain;
+		struct v4l2_ctrl	*gain;
+	};
+	struct v4l2_ctrl		*volume;
+	struct v4l2_ctrl		*mute;
+	struct v4l2_ctrl		*alpha;
+	struct v4l2_ctrl		*button;
+	struct v4l2_ctrl		*boolean;
+	struct v4l2_ctrl		*int32;
+	struct v4l2_ctrl		*int64;
+	struct v4l2_ctrl		*menu;
+	struct v4l2_ctrl		*string;
+	struct v4l2_ctrl		*bitmask;
+	struct v4l2_ctrl		*int_menu;
+	struct v4l2_ctrl		*test_pattern;
+	struct v4l2_ctrl		*colorspace;
+	struct v4l2_ctrl		*rgb_range_cap;
+	struct v4l2_ctrl		*real_rgb_range_cap;
+	struct {
+		/* std_signal_mode/standard cluster */
+		struct v4l2_ctrl	*ctrl_std_signal_mode;
+		struct v4l2_ctrl	*ctrl_standard;
+	};
+	struct {
+		/* dv_timings_signal_mode/timings cluster */
+		struct v4l2_ctrl	*ctrl_dv_timings_signal_mode;
+		struct v4l2_ctrl	*ctrl_dv_timings;
+	};
+	struct v4l2_ctrl		*ctrl_has_crop_cap;
+	struct v4l2_ctrl		*ctrl_has_compose_cap;
+	struct v4l2_ctrl		*ctrl_has_scaler_cap;
+	struct v4l2_ctrl		*ctrl_has_crop_out;
+	struct v4l2_ctrl		*ctrl_has_compose_out;
+	struct v4l2_ctrl		*ctrl_has_scaler_out;
+	struct v4l2_ctrl		*ctrl_tx_mode;
+	struct v4l2_ctrl		*ctrl_tx_rgb_range;
+
+	struct v4l2_ctrl		*radio_tx_rds_pi;
+	struct v4l2_ctrl		*radio_tx_rds_pty;
+	struct v4l2_ctrl		*radio_tx_rds_mono_stereo;
+	struct v4l2_ctrl		*radio_tx_rds_art_head;
+	struct v4l2_ctrl		*radio_tx_rds_compressed;
+	struct v4l2_ctrl		*radio_tx_rds_dyn_pty;
+	struct v4l2_ctrl		*radio_tx_rds_ta;
+	struct v4l2_ctrl		*radio_tx_rds_tp;
+	struct v4l2_ctrl		*radio_tx_rds_ms;
+	struct v4l2_ctrl		*radio_tx_rds_psname;
+	struct v4l2_ctrl		*radio_tx_rds_radiotext;
+
+	struct v4l2_ctrl		*radio_rx_rds_pty;
+	struct v4l2_ctrl		*radio_rx_rds_ta;
+	struct v4l2_ctrl		*radio_rx_rds_tp;
+	struct v4l2_ctrl		*radio_rx_rds_ms;
+	struct v4l2_ctrl		*radio_rx_rds_psname;
+	struct v4l2_ctrl		*radio_rx_rds_radiotext;
+
+	unsigned			input_brightness[MAX_INPUTS];
+	unsigned			osd_mode;
+	unsigned			button_pressed;
+	bool				sensor_hflip;
+	bool				sensor_vflip;
+	bool				hflip;
+	bool				vflip;
+	bool				vbi_cap_interlaced;
+	bool				loop_video;
+
+	/* Framebuffer */
+	unsigned long			video_pbase;
+	void				*video_vbase;
+	u32				video_buffer_size;
+	int				display_width;
+	int				display_height;
+	int				display_byte_stride;
+	int				bits_per_pixel;
+	int				bytes_per_pixel;
+	struct fb_info			fb_info;
+	struct fb_var_screeninfo	fb_defined;
+	struct fb_fix_screeninfo	fb_fix;
+
+	/* Error injection */
+	bool				queue_setup_error;
+	bool				buf_prepare_error;
+	bool				start_streaming_error;
+	bool				dqbuf_error;
+	bool				seq_wrap;
+	bool				time_wrap;
+	__kernel_time_t			time_wrap_offset;
+	unsigned			perc_dropped_buffers;
+	enum vivid_signal_mode		std_signal_mode;
+	unsigned			query_std_last;
+	v4l2_std_id			query_std;
+	enum tpg_video_aspect		std_aspect_ratio;
+
+	enum vivid_signal_mode		dv_timings_signal_mode;
+	char				**query_dv_timings_qmenu;
+	unsigned			query_dv_timings_size;
+	unsigned			query_dv_timings_last;
+	unsigned			query_dv_timings;
+	enum tpg_video_aspect		dv_timings_aspect_ratio;
+
+	/* Input */
+	unsigned			input;
+	v4l2_std_id			std_cap;
+	struct v4l2_dv_timings		dv_timings_cap;
+	u32				service_set_cap;
+	struct vivid_vbi_gen_data	vbi_gen;
+	u8				*edid;
+	unsigned			edid_blocks;
+	unsigned			edid_max_blocks;
+	unsigned			webcam_size_idx;
+	unsigned			webcam_ival_idx;
+	unsigned			tv_freq;
+	unsigned			tv_audmode;
+	unsigned			tv_field_cap;
+	unsigned			tv_audio_input;
+
+	/* Capture Overlay */
+	struct v4l2_framebuffer		fb_cap;
+	struct v4l2_fh			*overlay_cap_owner;
+	void				*fb_vbase_cap;
+	int				overlay_cap_top, overlay_cap_left;
+	enum v4l2_field			overlay_cap_field;
+	void				*bitmap_cap;
+	struct v4l2_clip		clips_cap[MAX_CLIPS];
+	struct v4l2_clip		try_clips_cap[MAX_CLIPS];
+	unsigned			clipcount_cap;
+
+	/* Output */
+	unsigned			output;
+	v4l2_std_id			std_out;
+	struct v4l2_dv_timings		dv_timings_out;
+	u32				colorspace_out;
+	u32				service_set_out;
+	u32				bytesperline_out[2];
+	unsigned			tv_field_out;
+	unsigned			tv_audio_output;
+	bool				vbi_out_have_wss;
+	u8				vbi_out_wss[2];
+	bool				vbi_out_have_cc[2];
+	u8				vbi_out_cc[2][2];
+	bool				dvi_d_out;
+	u8				*scaled_line;
+	u8				*blended_line;
+	unsigned			cur_scaled_line;
+
+	/* Output Overlay */
+	void				*fb_vbase_out;
+	bool				overlay_out_enabled;
+	int				overlay_out_top, overlay_out_left;
+	void				*bitmap_out;
+	struct v4l2_clip		clips_out[MAX_CLIPS];
+	struct v4l2_clip		try_clips_out[MAX_CLIPS];
+	unsigned			clipcount_out;
+	unsigned			fbuf_out_flags;
+	u32				chromakey_out;
+	u8				global_alpha_out;
+
+	/* video capture */
+	struct tpg_data			tpg;
+	unsigned			ms_vid_cap;
+	bool				must_blank[VIDEO_MAX_FRAME];
+
+	const struct vivid_fmt		*fmt_cap;
+	struct v4l2_fract		timeperframe_vid_cap;
+	enum v4l2_field			field_cap;
+	struct v4l2_rect		src_rect;
+	struct v4l2_rect		fmt_cap_rect;
+	struct v4l2_rect		crop_cap;
+	struct v4l2_rect		compose_cap;
+	struct v4l2_rect		crop_bounds_cap;
+	struct vb2_queue		vb_vid_cap_q;
+	struct list_head		vid_cap_active;
+	struct vb2_queue		vb_vbi_cap_q;
+	struct list_head		vbi_cap_active;
+
+	/* thread for generating video capture stream */
+	struct task_struct		*kthread_vid_cap;
+	unsigned long			jiffies_vid_cap;
+	u32				cap_seq_offset;
+	u32				cap_seq_count;
+	bool				cap_seq_resync;
+	u32				vid_cap_seq_start;
+	u32				vid_cap_seq_count;
+	bool				vid_cap_streaming;
+	u32				vbi_cap_seq_start;
+	u32				vbi_cap_seq_count;
+	bool				vbi_cap_streaming;
+	bool				stream_sliced_vbi_cap;
+
+	/* video output */
+	const struct vivid_fmt		*fmt_out;
+	struct v4l2_fract		timeperframe_vid_out;
+	enum v4l2_field			field_out;
+	struct v4l2_rect		sink_rect;
+	struct v4l2_rect		fmt_out_rect;
+	struct v4l2_rect		crop_out;
+	struct v4l2_rect		compose_out;
+	struct v4l2_rect		compose_bounds_out;
+	struct vb2_queue		vb_vid_out_q;
+	struct list_head		vid_out_active;
+	struct vb2_queue		vb_vbi_out_q;
+	struct list_head		vbi_out_active;
+
+	/* video loop precalculated rectangles (a worked intersection example follows this header) */
+
+	/*
+	 * Intersection between what the output side composes and the capture side
+	 * crops. I.e., what actually needs to be copied from the output buffer to
+	 * the capture buffer.
+	 */
+	struct v4l2_rect		loop_vid_copy;
+	/* The part of the output buffer that (after scaling) corresponds to loop_vid_copy. */
+	struct v4l2_rect		loop_vid_out;
+	/* The part of the capture buffer that (after scaling) corresponds to loop_vid_copy. */
+	struct v4l2_rect		loop_vid_cap;
+	/*
+	 * The intersection of the framebuffer, the overlay output window and
+	 * loop_vid_copy. I.e., the part of the framebuffer that actually should be
+	 * blended with the compose_out rectangle. This uses the framebuffer origin.
+	 */
+	struct v4l2_rect		loop_fb_copy;
+	/* The same as loop_fb_copy but with compose_out origin. */
+	struct v4l2_rect		loop_vid_overlay;
+	/*
+	 * The part of the capture buffer that (after scaling) corresponds
+	 * to loop_vid_overlay.
+	 */
+	struct v4l2_rect		loop_vid_overlay_cap;
+
+	/* thread for generating video output stream */
+	struct task_struct		*kthread_vid_out;
+	unsigned long			jiffies_vid_out;
+	u32				out_seq_offset;
+	u32				out_seq_count;
+	bool				out_seq_resync;
+	u32				vid_out_seq_start;
+	u32				vid_out_seq_count;
+	bool				vid_out_streaming;
+	u32				vbi_out_seq_start;
+	u32				vbi_out_seq_count;
+	bool				vbi_out_streaming;
+	bool				stream_sliced_vbi_out;
+
+	/* SDR capture */
+	struct vb2_queue		vb_sdr_cap_q;
+	struct list_head		sdr_cap_active;
+	unsigned			sdr_adc_freq;
+	unsigned			sdr_fm_freq;
+	int				sdr_fixp_src_phase;
+	int				sdr_fixp_mod_phase;
+
+	bool				tstamp_src_is_soe;
+	bool				has_crop_cap;
+	bool				has_compose_cap;
+	bool				has_scaler_cap;
+	bool				has_crop_out;
+	bool				has_compose_out;
+	bool				has_scaler_out;
+
+	/* thread for generating SDR stream */
+	struct task_struct		*kthread_sdr_cap;
+	unsigned long			jiffies_sdr_cap;
+	u32				sdr_cap_seq_offset;
+	u32				sdr_cap_seq_count;
+	bool				sdr_cap_seq_resync;
+
+	/* RDS generator */
+	struct vivid_rds_gen		rds_gen;
+
+	/* Radio receiver */
+	unsigned			radio_rx_freq;
+	unsigned			radio_rx_audmode;
+	int				radio_rx_sig_qual;
+	unsigned			radio_rx_hw_seek_mode;
+	bool				radio_rx_hw_seek_prog_lim;
+	bool				radio_rx_rds_controls;
+	bool				radio_rx_rds_enabled;
+	unsigned			radio_rx_rds_use_alternates;
+	unsigned			radio_rx_rds_last_block;
+	struct v4l2_fh			*radio_rx_rds_owner;
+
+	/* Radio transmitter */
+	unsigned			radio_tx_freq;
+	unsigned			radio_tx_subchans;
+	bool				radio_tx_rds_controls;
+	unsigned			radio_tx_rds_last_block;
+	struct v4l2_fh			*radio_tx_rds_owner;
+
+	/* Shared between radio receiver and transmitter */
+	bool				radio_rds_loop;
+	struct timespec			radio_rds_init_ts;
+};
+
+static inline bool vivid_is_webcam(const struct vivid_dev *dev)
+{
+	return dev->input_type[dev->input] == WEBCAM;
+}
+
+static inline bool vivid_is_tv_cap(const struct vivid_dev *dev)
+{
+	return dev->input_type[dev->input] == TV;
+}
+
+static inline bool vivid_is_svid_cap(const struct vivid_dev *dev)
+{
+	return dev->input_type[dev->input] == SVID;
+}
+
+static inline bool vivid_is_hdmi_cap(const struct vivid_dev *dev)
+{
+	return dev->input_type[dev->input] == HDMI;
+}
+
+static inline bool vivid_is_sdtv_cap(const struct vivid_dev *dev)
+{
+	return vivid_is_tv_cap(dev) || vivid_is_svid_cap(dev);
+}
+
+static inline bool vivid_is_svid_out(const struct vivid_dev *dev)
+{
+	return dev->output_type[dev->output] == SVID;
+}
+
+static inline bool vivid_is_hdmi_out(const struct vivid_dev *dev)
+{
+	return dev->output_type[dev->output] == HDMI;
+}
+
+void vivid_lock(struct vb2_queue *vq);
+void vivid_unlock(struct vb2_queue *vq);
+
+#endif
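
To make the loop_vid_* comments above concrete, here is a minimal sketch of the
rectangle intersection that loop_vid_copy represents. This is not driver code:
rect_intersect() is a made-up helper used purely for illustration of the
geometry (the overlap of compose_out and crop_cap).

#include <linux/videodev2.h>

/*
 * Hedged sketch only: rect_intersect() is an invented helper, not a vivid
 * or V4L2 core function. It returns the overlap of two rectangles, which
 * is what loop_vid_copy conceptually is for compose_out and crop_cap.
 */
static struct v4l2_rect rect_intersect(struct v4l2_rect a, struct v4l2_rect b)
{
	struct v4l2_rect r;
	int left   = a.left > b.left ? a.left : b.left;
	int top    = a.top  > b.top  ? a.top  : b.top;
	int aright = a.left + (int)a.width;
	int bright = b.left + (int)b.width;
	int abot   = a.top  + (int)a.height;
	int bbot   = b.top  + (int)b.height;
	int right  = aright < bright ? aright : bright;
	int bottom = abot   < bbot   ? abot   : bbot;

	/* An empty intersection is clamped to zero width/height. */
	r.left   = left;
	r.top    = top;
	r.width  = right  > left ? right  - left : 0;
	r.height = bottom > top  ? bottom - top  : 0;
	return r;
}

For example, with compose_out = 640x360 at (100,50) and crop_cap = 720x480 at
(0,0), this sketch yields a 620x360 rectangle at (100,50), matching the
intersection described in the loop_vid_copy comment.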
-- 
2.0.1



* [PATCHv2 04/12] vivid: add the control handling code
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (2 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 03/12] vivid: add core driver code Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 05/12] vivid: add the video capture and output parts Hans Verkuil
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

The vivid-ctrls code sets up and processes the various V4L2 controls
that are needed by this driver.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-ctrls.c | 1502 ++++++++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-ctrls.h |   34 +
 2 files changed, 1536 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-ctrls.c
 create mode 100644 drivers/media/platform/vivid/vivid-ctrls.h
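
As the commit message notes, vivid-ctrls.c sets up the V4L2 controls. Before
the diff itself, here is a minimal illustration of the pattern every control
group in this file follows: a v4l2_ctrl_config describing the control, a
v4l2_ctrl_ops with an s_ctrl callback, and registration through
v4l2_ctrl_new_custom() on a per-device handler. The CID, names and functions
below are invented for illustration and are not part of the patch.

#include <linux/videodev2.h>
#include <media/v4l2-ctrls.h>

/* Illustrative custom control ID in the user-defined range. */
#define EXAMPLE_CID_DEMO	(V4L2_CID_USER_BASE | 0x1000)

static int example_s_ctrl(struct v4l2_ctrl *ctrl)
{
	/*
	 * React to ctrl->val here. A real driver (like vivid below) uses
	 * container_of(ctrl->handler, ...) to get back to its device state.
	 */
	return 0;
}

static const struct v4l2_ctrl_ops example_ctrl_ops = {
	.s_ctrl = example_s_ctrl,
};

static const struct v4l2_ctrl_config example_ctrl_demo = {
	.ops  = &example_ctrl_ops,
	.id   = EXAMPLE_CID_DEMO,
	.name = "Demo Boolean",
	.type = V4L2_CTRL_TYPE_BOOLEAN,
	.max  = 1,
	.step = 1,
};

static int example_register(struct v4l2_ctrl_handler *hdl)
{
	v4l2_ctrl_handler_init(hdl, 1);
	v4l2_ctrl_new_custom(hdl, &example_ctrl_demo, NULL);
	return hdl->error;
}

vivid repeats this pattern per functional area (user, streaming, SDTV capture,
radio, ...) and then merges the shared handlers into each video_device's
handler in vivid_create_controls(), as the code below shows.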

diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
new file mode 100644
index 0000000..d5cbf00
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-ctrls.c
@@ -0,0 +1,1502 @@
+/*
+ * vivid-ctrls.c - control support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-common.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-cap.h"
+#include "vivid-vid-out.h"
+#include "vivid-vid-common.h"
+#include "vivid-radio-common.h"
+#include "vivid-osd.h"
+#include "vivid-ctrls.h"
+
+#define VIVID_CID_CUSTOM_BASE		(V4L2_CID_USER_BASE | 0xf000)
+#define VIVID_CID_BUTTON		(VIVID_CID_CUSTOM_BASE + 0)
+#define VIVID_CID_BOOLEAN		(VIVID_CID_CUSTOM_BASE + 1)
+#define VIVID_CID_INTEGER		(VIVID_CID_CUSTOM_BASE + 2)
+#define VIVID_CID_INTEGER64		(VIVID_CID_CUSTOM_BASE + 3)
+#define VIVID_CID_MENU			(VIVID_CID_CUSTOM_BASE + 4)
+#define VIVID_CID_STRING		(VIVID_CID_CUSTOM_BASE + 5)
+#define VIVID_CID_BITMASK		(VIVID_CID_CUSTOM_BASE + 6)
+#define VIVID_CID_INTMENU		(VIVID_CID_CUSTOM_BASE + 7)
+
+#define VIVID_CID_VIVID_BASE		(0x00f00000 | 0xf000)
+#define VIVID_CID_VIVID_CLASS		(0x00f00000 | 1)
+#define VIVID_CID_TEST_PATTERN		(VIVID_CID_VIVID_BASE + 0)
+#define VIVID_CID_OSD_TEXT_MODE		(VIVID_CID_VIVID_BASE + 1)
+#define VIVID_CID_HOR_MOVEMENT		(VIVID_CID_VIVID_BASE + 2)
+#define VIVID_CID_VERT_MOVEMENT		(VIVID_CID_VIVID_BASE + 3)
+#define VIVID_CID_SHOW_BORDER		(VIVID_CID_VIVID_BASE + 4)
+#define VIVID_CID_SHOW_SQUARE		(VIVID_CID_VIVID_BASE + 5)
+#define VIVID_CID_INSERT_SAV		(VIVID_CID_VIVID_BASE + 6)
+#define VIVID_CID_INSERT_EAV		(VIVID_CID_VIVID_BASE + 7)
+#define VIVID_CID_VBI_CAP_INTERLACED	(VIVID_CID_VIVID_BASE + 8)
+
+#define VIVID_CID_HFLIP			(VIVID_CID_VIVID_BASE + 20)
+#define VIVID_CID_VFLIP			(VIVID_CID_VIVID_BASE + 21)
+#define VIVID_CID_STD_ASPECT_RATIO	(VIVID_CID_VIVID_BASE + 22)
+#define VIVID_CID_DV_TIMINGS_ASPECT_RATIO	(VIVID_CID_VIVID_BASE + 23)
+#define VIVID_CID_TSTAMP_SRC		(VIVID_CID_VIVID_BASE + 24)
+#define VIVID_CID_COLORSPACE		(VIVID_CID_VIVID_BASE + 25)
+#define VIVID_CID_LIMITED_RGB_RANGE	(VIVID_CID_VIVID_BASE + 26)
+#define VIVID_CID_ALPHA_MODE		(VIVID_CID_VIVID_BASE + 27)
+#define VIVID_CID_HAS_CROP_CAP		(VIVID_CID_VIVID_BASE + 28)
+#define VIVID_CID_HAS_COMPOSE_CAP	(VIVID_CID_VIVID_BASE + 29)
+#define VIVID_CID_HAS_SCALER_CAP	(VIVID_CID_VIVID_BASE + 30)
+#define VIVID_CID_HAS_CROP_OUT		(VIVID_CID_VIVID_BASE + 31)
+#define VIVID_CID_HAS_COMPOSE_OUT	(VIVID_CID_VIVID_BASE + 32)
+#define VIVID_CID_HAS_SCALER_OUT	(VIVID_CID_VIVID_BASE + 33)
+#define VIVID_CID_LOOP_VIDEO		(VIVID_CID_VIVID_BASE + 34)
+#define VIVID_CID_SEQ_WRAP		(VIVID_CID_VIVID_BASE + 35)
+#define VIVID_CID_TIME_WRAP		(VIVID_CID_VIVID_BASE + 36)
+#define VIVID_CID_MAX_EDID_BLOCKS	(VIVID_CID_VIVID_BASE + 37)
+#define VIVID_CID_PERCENTAGE_FILL	(VIVID_CID_VIVID_BASE + 38)
+
+#define VIVID_CID_STD_SIGNAL_MODE	(VIVID_CID_VIVID_BASE + 60)
+#define VIVID_CID_STANDARD		(VIVID_CID_VIVID_BASE + 61)
+#define VIVID_CID_DV_TIMINGS_SIGNAL_MODE	(VIVID_CID_VIVID_BASE + 62)
+#define VIVID_CID_DV_TIMINGS		(VIVID_CID_VIVID_BASE + 63)
+#define VIVID_CID_PERC_DROPPED		(VIVID_CID_VIVID_BASE + 64)
+#define VIVID_CID_DISCONNECT		(VIVID_CID_VIVID_BASE + 65)
+#define VIVID_CID_DQBUF_ERROR		(VIVID_CID_VIVID_BASE + 66)
+#define VIVID_CID_QUEUE_SETUP_ERROR	(VIVID_CID_VIVID_BASE + 67)
+#define VIVID_CID_BUF_PREPARE_ERROR	(VIVID_CID_VIVID_BASE + 68)
+#define VIVID_CID_START_STR_ERROR	(VIVID_CID_VIVID_BASE + 69)
+#define VIVID_CID_QUEUE_ERROR		(VIVID_CID_VIVID_BASE + 70)
+#define VIVID_CID_CLEAR_FB		(VIVID_CID_VIVID_BASE + 71)
+
+#define VIVID_CID_RADIO_SEEK_MODE	(VIVID_CID_VIVID_BASE + 90)
+#define VIVID_CID_RADIO_SEEK_PROG_LIM	(VIVID_CID_VIVID_BASE + 91)
+#define VIVID_CID_RADIO_RX_RDS_RBDS	(VIVID_CID_VIVID_BASE + 92)
+#define VIVID_CID_RADIO_RX_RDS_BLOCKIO	(VIVID_CID_VIVID_BASE + 93)
+
+#define VIVID_CID_RADIO_TX_RDS_BLOCKIO	(VIVID_CID_VIVID_BASE + 94)
+
+
+/* General User Controls */
+
+static int vivid_user_gen_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_user_gen);
+
+	switch (ctrl->id) {
+	case VIVID_CID_DISCONNECT:
+		v4l2_info(&dev->v4l2_dev, "disconnect\n");
+		clear_bit(V4L2_FL_REGISTERED, &dev->vid_cap_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->vid_out_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->vbi_cap_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->vbi_out_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->sdr_cap_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->radio_rx_dev.flags);
+		clear_bit(V4L2_FL_REGISTERED, &dev->radio_tx_dev.flags);
+		break;
+	case VIVID_CID_CLEAR_FB:
+		vivid_clear_fb(dev);
+		break;
+	case VIVID_CID_BUTTON:
+		dev->button_pressed = 30;
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_user_gen_ctrl_ops = {
+	.s_ctrl = vivid_user_gen_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_button = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_BUTTON,
+	.name = "Button",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_boolean = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_BOOLEAN,
+	.name = "Boolean",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.min = 0,
+	.max = 1,
+	.step = 1,
+	.def = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_int32 = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_INTEGER,
+	.name = "Integer 32 Bits",
+	.type = V4L2_CTRL_TYPE_INTEGER,
+	.min = 0xffffffff80000000ULL,
+	.max = 0x7fffffff,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_int64 = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_INTEGER64,
+	.name = "Integer 64 Bits",
+	.type = V4L2_CTRL_TYPE_INTEGER64,
+	.min = 0x8000000000000000ULL,
+	.max = 0x7fffffffffffffffLL,
+	.step = 1,
+};
+
+static const char * const vivid_ctrl_menu_strings[] = {
+	"Menu Item 0 (Skipped)",
+	"Menu Item 1",
+	"Menu Item 2 (Skipped)",
+	"Menu Item 3",
+	"Menu Item 4",
+	"Menu Item 5 (Skipped)",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_menu = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_MENU,
+	.name = "Menu",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.min = 1,
+	.max = 4,
+	.def = 3,
+	.menu_skip_mask = 0x04,
+	.qmenu = vivid_ctrl_menu_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_string = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_STRING,
+	.name = "String",
+	.type = V4L2_CTRL_TYPE_STRING,
+	.min = 2,
+	.max = 4,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_bitmask = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_BITMASK,
+	.name = "Bitmask",
+	.type = V4L2_CTRL_TYPE_BITMASK,
+	.def = 0x80002000,
+	.min = 0,
+	.max = 0x80402010,
+	.step = 0,
+};
+
+static const s64 vivid_ctrl_int_menu_values[] = {
+	1, 1, 2, 3, 5, 8, 13, 21, 42,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_int_menu = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_INTMENU,
+	.name = "Integer Menu",
+	.type = V4L2_CTRL_TYPE_INTEGER_MENU,
+	.min = 1,
+	.max = 8,
+	.def = 4,
+	.menu_skip_mask = 0x02,
+	.qmenu_int = vivid_ctrl_int_menu_values,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_disconnect = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_DISCONNECT,
+	.name = "Disconnect",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_clear_fb = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.id = VIVID_CID_CLEAR_FB,
+	.name = "Clear Framebuffer",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+
+/* Video User Controls */
+
+static int vivid_user_vid_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_user_vid);
+
+	switch (ctrl->id) {
+	case V4L2_CID_AUTOGAIN:
+		dev->gain->val = dev->jiffies_vid_cap & 0xff;
+		break;
+	}
+	return 0;
+}
+
+static int vivid_user_vid_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_user_vid);
+
+	switch (ctrl->id) {
+	case V4L2_CID_BRIGHTNESS:
+		dev->input_brightness[dev->input] = ctrl->val - dev->input * 128;
+		tpg_s_brightness(&dev->tpg, dev->input_brightness[dev->input]);
+		break;
+	case V4L2_CID_CONTRAST:
+		tpg_s_contrast(&dev->tpg, ctrl->val);
+		break;
+	case V4L2_CID_SATURATION:
+		tpg_s_saturation(&dev->tpg, ctrl->val);
+		break;
+	case V4L2_CID_HUE:
+		tpg_s_hue(&dev->tpg, ctrl->val);
+		break;
+	case V4L2_CID_HFLIP:
+		dev->hflip = ctrl->val;
+		tpg_s_hflip(&dev->tpg, dev->sensor_hflip ^ dev->hflip);
+		break;
+	case V4L2_CID_VFLIP:
+		dev->vflip = ctrl->val;
+		tpg_s_vflip(&dev->tpg, dev->sensor_vflip ^ dev->vflip);
+		break;
+	case V4L2_CID_ALPHA_COMPONENT:
+		tpg_s_alpha_component(&dev->tpg, ctrl->val);
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_user_vid_ctrl_ops = {
+	.g_volatile_ctrl = vivid_user_vid_g_volatile_ctrl,
+	.s_ctrl = vivid_user_vid_s_ctrl,
+};
+
+
+/* Video Capture Controls */
+
+static int vivid_vid_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_vid_cap);
+	unsigned i;
+
+	switch (ctrl->id) {
+	case VIVID_CID_TEST_PATTERN:
+		vivid_update_quality(dev);
+		tpg_s_pattern(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_COLORSPACE:
+		tpg_s_colorspace(&dev->tpg, ctrl->val);
+		vivid_send_source_change(dev, TV);
+		vivid_send_source_change(dev, SVID);
+		vivid_send_source_change(dev, HDMI);
+		vivid_send_source_change(dev, WEBCAM);
+		break;
+	case V4L2_CID_DV_RX_RGB_RANGE:
+		if (!vivid_is_hdmi_cap(dev))
+			break;
+		tpg_s_rgb_range(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_LIMITED_RGB_RANGE:
+		tpg_s_real_rgb_range(&dev->tpg, ctrl->val ?
+				V4L2_DV_RGB_RANGE_LIMITED : V4L2_DV_RGB_RANGE_FULL);
+		break;
+	case VIVID_CID_ALPHA_MODE:
+		tpg_s_alpha_mode(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_HOR_MOVEMENT:
+		tpg_s_mv_hor_mode(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_VERT_MOVEMENT:
+		tpg_s_mv_vert_mode(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_OSD_TEXT_MODE:
+		dev->osd_mode = ctrl->val;
+		break;
+	case VIVID_CID_PERCENTAGE_FILL:
+		tpg_s_perc_fill(&dev->tpg, ctrl->val);
+		for (i = 0; i < VIDEO_MAX_FRAME; i++)
+			dev->must_blank[i] = ctrl->val < 100;
+		break;
+	case VIVID_CID_INSERT_SAV:
+		tpg_s_insert_sav(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_INSERT_EAV:
+		tpg_s_insert_eav(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_HFLIP:
+		dev->sensor_hflip = ctrl->val;
+		tpg_s_hflip(&dev->tpg, dev->sensor_hflip ^ dev->hflip);
+		break;
+	case VIVID_CID_VFLIP:
+		dev->sensor_vflip = ctrl->val;
+		tpg_s_vflip(&dev->tpg, dev->sensor_vflip ^ dev->vflip);
+		break;
+	case VIVID_CID_HAS_CROP_CAP:
+		dev->has_crop_cap = ctrl->val;
+		vivid_update_format_cap(dev, true);
+		break;
+	case VIVID_CID_HAS_COMPOSE_CAP:
+		dev->has_compose_cap = ctrl->val;
+		vivid_update_format_cap(dev, true);
+		break;
+	case VIVID_CID_HAS_SCALER_CAP:
+		dev->has_scaler_cap = ctrl->val;
+		vivid_update_format_cap(dev, true);
+		break;
+	case VIVID_CID_SHOW_BORDER:
+		tpg_s_show_border(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_SHOW_SQUARE:
+		tpg_s_show_square(&dev->tpg, ctrl->val);
+		break;
+	case VIVID_CID_STD_ASPECT_RATIO:
+		dev->std_aspect_ratio = ctrl->val;
+		tpg_s_video_aspect(&dev->tpg, vivid_get_video_aspect(dev));
+		break;
+	case VIVID_CID_DV_TIMINGS_SIGNAL_MODE:
+		dev->dv_timings_signal_mode = dev->ctrl_dv_timings_signal_mode->val;
+		if (dev->dv_timings_signal_mode == SELECTED_DV_TIMINGS)
+			dev->query_dv_timings = dev->ctrl_dv_timings->val;
+		v4l2_ctrl_activate(dev->ctrl_dv_timings,
+				dev->dv_timings_signal_mode == SELECTED_DV_TIMINGS);
+		vivid_update_quality(dev);
+		vivid_send_source_change(dev, HDMI);
+		break;
+	case VIVID_CID_DV_TIMINGS_ASPECT_RATIO:
+		dev->dv_timings_aspect_ratio = ctrl->val;
+		tpg_s_video_aspect(&dev->tpg, vivid_get_video_aspect(dev));
+		break;
+	case VIVID_CID_TSTAMP_SRC:
+		dev->tstamp_src_is_soe = ctrl->val;
+		dev->vb_vid_cap_q.timestamp_flags &= ~V4L2_BUF_FLAG_TSTAMP_SRC_MASK;
+		if (dev->tstamp_src_is_soe)
+			dev->vb_vid_cap_q.timestamp_flags |= V4L2_BUF_FLAG_TSTAMP_SRC_SOE;
+		break;
+	case VIVID_CID_MAX_EDID_BLOCKS:
+		dev->edid_max_blocks = ctrl->val;
+		if (dev->edid_blocks > dev->edid_max_blocks)
+			dev->edid_blocks = dev->edid_max_blocks;
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_vid_cap_ctrl_ops = {
+	.s_ctrl = vivid_vid_cap_s_ctrl,
+};
+
+static const char * const vivid_ctrl_hor_movement_strings[] = {
+	"Move Left Fast",
+	"Move Left",
+	"Move Left Slow",
+	"No Movement",
+	"Move Right Slow",
+	"Move Right",
+	"Move Right Fast",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_hor_movement = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_HOR_MOVEMENT,
+	.name = "Horizontal Movement",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = TPG_MOVE_POS_FAST,
+	.def = TPG_MOVE_NONE,
+	.qmenu = vivid_ctrl_hor_movement_strings,
+};
+
+static const char * const vivid_ctrl_vert_movement_strings[] = {
+	"Move Up Fast",
+	"Move Up",
+	"Move Up Slow",
+	"No Movement",
+	"Move Down Slow",
+	"Move Down",
+	"Move Down Fast",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_vert_movement = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_VERT_MOVEMENT,
+	.name = "Vertical Movement",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = TPG_MOVE_POS_FAST,
+	.def = TPG_MOVE_NONE,
+	.qmenu = vivid_ctrl_vert_movement_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_show_border = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_SHOW_BORDER,
+	.name = "Show Border",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_show_square = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_SHOW_SQUARE,
+	.name = "Show Square",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const char * const vivid_ctrl_osd_mode_strings[] = {
+	"All",
+	"Counters Only",
+	"None",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_osd_mode = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_OSD_TEXT_MODE,
+	.name = "OSD Text Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 2,
+	.qmenu = vivid_ctrl_osd_mode_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_perc_fill = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_PERCENTAGE_FILL,
+	.name = "Fill Percentage of Frame",
+	.type = V4L2_CTRL_TYPE_INTEGER,
+	.min = 0,
+	.max = 100,
+	.def = 100,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_insert_sav = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_INSERT_SAV,
+	.name = "Insert SAV Code in Image",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_insert_eav = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_INSERT_EAV,
+	.name = "Insert EAV Code in Image",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_hflip = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_HFLIP,
+	.name = "Sensor Flipped Horizontally",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_vflip = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_VFLIP,
+	.name = "Sensor Flipped Vertically",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_crop_cap = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_HAS_CROP_CAP,
+	.name = "Enable Capture Cropping",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_compose_cap = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_HAS_COMPOSE_CAP,
+	.name = "Enable Capture Composing",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_scaler_cap = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_HAS_SCALER_CAP,
+	.name = "Enable Capture Scaler",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+static const char * const vivid_ctrl_tstamp_src_strings[] = {
+	"End of Frame",
+	"Start of Exposure",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_tstamp_src = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_TSTAMP_SRC,
+	.name = "Timestamp Source",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 1,
+	.qmenu = vivid_ctrl_tstamp_src_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_std_aspect_ratio = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_STD_ASPECT_RATIO,
+	.name = "Standard Aspect Ratio",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.min = 1,
+	.max = 4,
+	.def = 1,
+	.qmenu = tpg_aspect_strings,
+};
+
+static const char * const vivid_ctrl_dv_timings_signal_mode_strings[] = {
+	"Current DV Timings",
+	"No Signal",
+	"No Lock",
+	"Out of Range",
+	"Selected DV Timings",
+	"Cycle Through All DV Timings",
+	"Custom DV Timings",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_dv_timings_signal_mode = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_DV_TIMINGS_SIGNAL_MODE,
+	.name = "DV Timings Signal Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 5,
+	.qmenu = vivid_ctrl_dv_timings_signal_mode_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_dv_timings_aspect_ratio = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_DV_TIMINGS_ASPECT_RATIO,
+	.name = "DV Timings Aspect Ratio",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 3,
+	.qmenu = tpg_aspect_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_max_edid_blocks = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_MAX_EDID_BLOCKS,
+	.name = "Maximum EDID Blocks",
+	.type = V4L2_CTRL_TYPE_INTEGER,
+	.min = 1,
+	.max = 256,
+	.def = 2,
+	.step = 1,
+};
+
+static const char * const vivid_ctrl_colorspace_strings[] = {
+	"",
+	"SMPTE 170M",
+	"SMPTE 240M",
+	"REC 709",
+	"", /* Skip Bt878 entry */
+	"470 System M",
+	"470 System BG",
+	"", /* Skip JPEG entry */
+	"sRGB",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_colorspace = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_COLORSPACE,
+	.name = "Colorspace",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.min = 1,
+	.max = 8,
+	.menu_skip_mask = (1 << 4) | (1 << 7),
+	.def = 8,
+	.qmenu = vivid_ctrl_colorspace_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_alpha_mode = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_ALPHA_MODE,
+	.name = "Apply Alpha To Red Only",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_limited_rgb_range = {
+	.ops = &vivid_vid_cap_ctrl_ops,
+	.id = VIVID_CID_LIMITED_RGB_RANGE,
+	.name = "Limited RGB Range (16-235)",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+
+/* VBI Capture Control */
+
+static int vivid_vbi_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_vbi_cap);
+
+	switch (ctrl->id) {
+	case VIVID_CID_VBI_CAP_INTERLACED:
+		dev->vbi_cap_interlaced = ctrl->val;
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_vbi_cap_ctrl_ops = {
+	.s_ctrl = vivid_vbi_cap_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_vbi_cap_interlaced = {
+	.ops = &vivid_vbi_cap_ctrl_ops,
+	.id = VIVID_CID_VBI_CAP_INTERLACED,
+	.name = "Interlaced VBI Format",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+
+/* Video Output Controls */
+
+static int vivid_vid_out_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_vid_out);
+	struct v4l2_bt_timings *bt = &dev->dv_timings_out.bt;
+
+	switch (ctrl->id) {
+	case VIVID_CID_HAS_CROP_OUT:
+		dev->has_crop_out = ctrl->val;
+		vivid_update_format_out(dev);
+		break;
+	case VIVID_CID_HAS_COMPOSE_OUT:
+		dev->has_compose_out = ctrl->val;
+		vivid_update_format_out(dev);
+		break;
+	case VIVID_CID_HAS_SCALER_OUT:
+		dev->has_scaler_out = ctrl->val;
+		vivid_update_format_out(dev);
+		break;
+	case V4L2_CID_DV_TX_MODE:
+		dev->dvi_d_out = ctrl->val == V4L2_DV_TX_MODE_DVI_D;
+		if (!vivid_is_hdmi_out(dev))
+			break;
+		if (!dev->dvi_d_out && (bt->standards & V4L2_DV_BT_STD_CEA861)) {
+			if (bt->width == 720 && bt->height <= 576)
+				dev->colorspace_out = V4L2_COLORSPACE_SMPTE170M;
+			else
+				dev->colorspace_out = V4L2_COLORSPACE_REC709;
+		} else {
+			dev->colorspace_out = V4L2_COLORSPACE_SRGB;
+		}
+		if (dev->loop_video)
+			vivid_send_source_change(dev, HDMI);
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_vid_out_ctrl_ops = {
+	.s_ctrl = vivid_vid_out_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_crop_out = {
+	.ops = &vivid_vid_out_ctrl_ops,
+	.id = VIVID_CID_HAS_CROP_OUT,
+	.name = "Enable Output Cropping",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_compose_out = {
+	.ops = &vivid_vid_out_ctrl_ops,
+	.id = VIVID_CID_HAS_COMPOSE_OUT,
+	.name = "Enable Output Composing",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_has_scaler_out = {
+	.ops = &vivid_vid_out_ctrl_ops,
+	.id = VIVID_CID_HAS_SCALER_OUT,
+	.name = "Enable Output Scaler",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.def = 1,
+	.step = 1,
+};
+
+
+/* Streaming Controls */
+
+static int vivid_streaming_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_streaming);
+	struct timeval tv;
+
+	switch (ctrl->id) {
+	case VIVID_CID_DQBUF_ERROR:
+		dev->dqbuf_error = true;
+		break;
+	case VIVID_CID_PERC_DROPPED:
+		dev->perc_dropped_buffers = ctrl->val;
+		break;
+	case VIVID_CID_QUEUE_SETUP_ERROR:
+		dev->queue_setup_error = true;
+		break;
+	case VIVID_CID_BUF_PREPARE_ERROR:
+		dev->buf_prepare_error = true;
+		break;
+	case VIVID_CID_START_STR_ERROR:
+		dev->start_streaming_error = true;
+		break;
+	case VIVID_CID_QUEUE_ERROR:
+		if (dev->vb_vid_cap_q.start_streaming_called)
+			vb2_queue_error(&dev->vb_vid_cap_q);
+		if (dev->vb_vbi_cap_q.start_streaming_called)
+			vb2_queue_error(&dev->vb_vbi_cap_q);
+		if (dev->vb_vid_out_q.start_streaming_called)
+			vb2_queue_error(&dev->vb_vid_out_q);
+		if (dev->vb_vbi_out_q.start_streaming_called)
+			vb2_queue_error(&dev->vb_vbi_out_q);
+		if (dev->vb_sdr_cap_q.start_streaming_called)
+			vb2_queue_error(&dev->vb_sdr_cap_q);
+		break;
+	case VIVID_CID_SEQ_WRAP:
+		dev->seq_wrap = ctrl->val;
+		break;
+	case VIVID_CID_TIME_WRAP:
+		dev->time_wrap = ctrl->val;
+		if (ctrl->val == 0) {
+			dev->time_wrap_offset = 0;
+			break;
+		}
+		v4l2_get_timestamp(&tv);
+		dev->time_wrap_offset = -tv.tv_sec - 16;
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_streaming_ctrl_ops = {
+	.s_ctrl = vivid_streaming_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_dqbuf_error = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_DQBUF_ERROR,
+	.name = "Inject V4L2_BUF_FLAG_ERROR",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_perc_dropped = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_PERC_DROPPED,
+	.name = "Percentage of Dropped Buffers",
+	.type = V4L2_CTRL_TYPE_INTEGER,
+	.min = 0,
+	.max = 100,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_queue_setup_error = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_QUEUE_SETUP_ERROR,
+	.name = "Inject VIDIOC_REQBUFS Error",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_buf_prepare_error = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_BUF_PREPARE_ERROR,
+	.name = "Inject VIDIOC_QBUF Error",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_start_streaming_error = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_START_STR_ERROR,
+	.name = "Inject VIDIOC_STREAMON Error",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_queue_error = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_QUEUE_ERROR,
+	.name = "Inject Fatal Streaming Error",
+	.type = V4L2_CTRL_TYPE_BUTTON,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_seq_wrap = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_SEQ_WRAP,
+	.name = "Wrap Sequence Number",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_time_wrap = {
+	.ops = &vivid_streaming_ctrl_ops,
+	.id = VIVID_CID_TIME_WRAP,
+	.name = "Wrap Timestamp",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+
+/* SDTV Capture Controls */
+
+static int vivid_sdtv_cap_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_sdtv_cap);
+
+	switch (ctrl->id) {
+	case VIVID_CID_STD_SIGNAL_MODE:
+		dev->std_signal_mode = dev->ctrl_std_signal_mode->val;
+		if (dev->std_signal_mode == SELECTED_STD)
+			dev->query_std = vivid_standard[dev->ctrl_standard->val];
+		v4l2_ctrl_activate(dev->ctrl_standard, dev->std_signal_mode == SELECTED_STD);
+		vivid_update_quality(dev);
+		vivid_send_source_change(dev, TV);
+		vivid_send_source_change(dev, SVID);
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_sdtv_cap_ctrl_ops = {
+	.s_ctrl = vivid_sdtv_cap_s_ctrl,
+};
+
+static const char * const vivid_ctrl_std_signal_mode_strings[] = {
+	"Current Standard",
+	"No Signal",
+	"No Lock",
+	"",
+	"Selected Standard",
+	"Cycle Through All Standards",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_std_signal_mode = {
+	.ops = &vivid_sdtv_cap_ctrl_ops,
+	.id = VIVID_CID_STD_SIGNAL_MODE,
+	.name = "Standard Signal Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 5,
+	.menu_skip_mask = 1 << 3,
+	.qmenu = vivid_ctrl_std_signal_mode_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_standard = {
+	.ops = &vivid_sdtv_cap_ctrl_ops,
+	.id = VIVID_CID_STANDARD,
+	.name = "Standard",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 14,
+	.qmenu = vivid_ctrl_standard_strings,
+};
+
+
+
+/* Radio Receiver Controls */
+
+static int vivid_radio_rx_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_radio_rx);
+
+	switch (ctrl->id) {
+	case VIVID_CID_RADIO_SEEK_MODE:
+		dev->radio_rx_hw_seek_mode = ctrl->val;
+		break;
+	case VIVID_CID_RADIO_SEEK_PROG_LIM:
+		dev->radio_rx_hw_seek_prog_lim = ctrl->val;
+		break;
+	case VIVID_CID_RADIO_RX_RDS_RBDS:
+		dev->rds_gen.use_rbds = ctrl->val;
+		break;
+	case VIVID_CID_RADIO_RX_RDS_BLOCKIO:
+		dev->radio_rx_rds_controls = ctrl->val;
+		dev->radio_rx_caps &= ~V4L2_CAP_READWRITE;
+		dev->radio_rx_rds_use_alternates = false;
+		if (!dev->radio_rx_rds_controls) {
+			dev->radio_rx_caps |= V4L2_CAP_READWRITE;
+			__v4l2_ctrl_s_ctrl(dev->radio_rx_rds_pty, 0);
+			__v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ta, 0);
+			__v4l2_ctrl_s_ctrl(dev->radio_rx_rds_tp, 0);
+			__v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ms, 0);
+			__v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_psname, "");
+			__v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_radiotext, "");
+		}
+		v4l2_ctrl_activate(dev->radio_rx_rds_pty, dev->radio_rx_rds_controls);
+		v4l2_ctrl_activate(dev->radio_rx_rds_psname, dev->radio_rx_rds_controls);
+		v4l2_ctrl_activate(dev->radio_rx_rds_radiotext, dev->radio_rx_rds_controls);
+		v4l2_ctrl_activate(dev->radio_rx_rds_ta, dev->radio_rx_rds_controls);
+		v4l2_ctrl_activate(dev->radio_rx_rds_tp, dev->radio_rx_rds_controls);
+		v4l2_ctrl_activate(dev->radio_rx_rds_ms, dev->radio_rx_rds_controls);
+		break;
+	case V4L2_CID_RDS_RECEPTION:
+		dev->radio_rx_rds_enabled = ctrl->val;
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_radio_rx_ctrl_ops = {
+	.s_ctrl = vivid_radio_rx_s_ctrl,
+};
+
+static const char * const vivid_ctrl_radio_rds_mode_strings[] = {
+	"Block I/O",
+	"Controls",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_radio_rx_rds_blockio = {
+	.ops = &vivid_radio_rx_ctrl_ops,
+	.id = VIVID_CID_RADIO_RX_RDS_BLOCKIO,
+	.name = "RDS Rx I/O Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.qmenu = vivid_ctrl_radio_rds_mode_strings,
+	.max = 1,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_radio_rx_rds_rbds = {
+	.ops = &vivid_radio_rx_ctrl_ops,
+	.id = VIVID_CID_RADIO_RX_RDS_RBDS,
+	.name = "Generate RBDS Instead of RDS",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+static const char * const vivid_ctrl_radio_hw_seek_mode_strings[] = {
+	"Bounded",
+	"Wrap Around",
+	"Both",
+	NULL,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_radio_hw_seek_mode = {
+	.ops = &vivid_radio_rx_ctrl_ops,
+	.id = VIVID_CID_RADIO_SEEK_MODE,
+	.name = "Radio HW Seek Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.max = 2,
+	.qmenu = vivid_ctrl_radio_hw_seek_mode_strings,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_radio_hw_seek_prog_lim = {
+	.ops = &vivid_radio_rx_ctrl_ops,
+	.id = VIVID_CID_RADIO_SEEK_PROG_LIM,
+	.name = "Radio Programmable HW Seek",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+
+/* Radio Transmitter Controls */
+
+static int vivid_radio_tx_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_radio_tx);
+
+	switch (ctrl->id) {
+	case VIVID_CID_RADIO_TX_RDS_BLOCKIO:
+		dev->radio_tx_rds_controls = ctrl->val;
+		dev->radio_tx_caps &= ~V4L2_CAP_READWRITE;
+		if (!dev->radio_tx_rds_controls)
+			dev->radio_tx_caps |= V4L2_CAP_READWRITE;
+		break;
+	case V4L2_CID_RDS_TX_PTY:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl(dev->radio_rx_rds_pty, ctrl->val);
+		break;
+	case V4L2_CID_RDS_TX_PS_NAME:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_psname, ctrl->p_new.p_char);
+		break;
+	case V4L2_CID_RDS_TX_RADIO_TEXT:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_radiotext, ctrl->p_new.p_char);
+		break;
+	case V4L2_CID_RDS_TX_TRAFFIC_ANNOUNCEMENT:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ta, ctrl->val);
+		break;
+	case V4L2_CID_RDS_TX_TRAFFIC_PROGRAM:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl(dev->radio_rx_rds_tp, ctrl->val);
+		break;
+	case V4L2_CID_RDS_TX_MUSIC_SPEECH:
+		if (dev->radio_rx_rds_controls)
+			v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ms, ctrl->val);
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_radio_tx_ctrl_ops = {
+	.s_ctrl = vivid_radio_tx_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_radio_tx_rds_blockio = {
+	.ops = &vivid_radio_tx_ctrl_ops,
+	.id = VIVID_CID_RADIO_TX_RDS_BLOCKIO,
+	.name = "RDS Tx I/O Mode",
+	.type = V4L2_CTRL_TYPE_MENU,
+	.qmenu = vivid_ctrl_radio_rds_mode_strings,
+	.max = 1,
+	.def = 1,
+};
+
+
+
+/* Video Loop Control */
+
+static int vivid_loop_out_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+	struct vivid_dev *dev = container_of(ctrl->handler, struct vivid_dev, ctrl_hdl_loop_out);
+
+	switch (ctrl->id) {
+	case VIVID_CID_LOOP_VIDEO:
+		dev->loop_video = ctrl->val;
+		vivid_update_quality(dev);
+		vivid_send_source_change(dev, SVID);
+		vivid_send_source_change(dev, HDMI);
+		break;
+	}
+	return 0;
+}
+
+static const struct v4l2_ctrl_ops vivid_loop_out_ctrl_ops = {
+	.s_ctrl = vivid_loop_out_s_ctrl,
+};
+
+static const struct v4l2_ctrl_config vivid_ctrl_loop_video = {
+	.ops = &vivid_loop_out_ctrl_ops,
+	.id = VIVID_CID_LOOP_VIDEO,
+	.name = "Loop Video",
+	.type = V4L2_CTRL_TYPE_BOOLEAN,
+	.max = 1,
+	.step = 1,
+};
+
+
+static const struct v4l2_ctrl_config vivid_ctrl_class = {
+	.ops = &vivid_user_gen_ctrl_ops,
+	.flags = V4L2_CTRL_FLAG_READ_ONLY | V4L2_CTRL_FLAG_WRITE_ONLY,
+	.id = VIVID_CID_VIVID_CLASS,
+	.name = "Vivid Controls",
+	.type = V4L2_CTRL_TYPE_CTRL_CLASS,
+};
+
+int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+		bool show_ccs_out, bool no_error_inj,
+		bool has_sdtv, bool has_hdmi)
+{
+	struct v4l2_ctrl_handler *hdl_user_gen = &dev->ctrl_hdl_user_gen;
+	struct v4l2_ctrl_handler *hdl_user_vid = &dev->ctrl_hdl_user_vid;
+	struct v4l2_ctrl_handler *hdl_user_aud = &dev->ctrl_hdl_user_aud;
+	struct v4l2_ctrl_handler *hdl_streaming = &dev->ctrl_hdl_streaming;
+	struct v4l2_ctrl_handler *hdl_sdtv_cap = &dev->ctrl_hdl_sdtv_cap;
+	struct v4l2_ctrl_handler *hdl_loop_out = &dev->ctrl_hdl_loop_out;
+	struct v4l2_ctrl_handler *hdl_vid_cap = &dev->ctrl_hdl_vid_cap;
+	struct v4l2_ctrl_handler *hdl_vid_out = &dev->ctrl_hdl_vid_out;
+	struct v4l2_ctrl_handler *hdl_vbi_cap = &dev->ctrl_hdl_vbi_cap;
+	struct v4l2_ctrl_handler *hdl_vbi_out = &dev->ctrl_hdl_vbi_out;
+	struct v4l2_ctrl_handler *hdl_radio_rx = &dev->ctrl_hdl_radio_rx;
+	struct v4l2_ctrl_handler *hdl_radio_tx = &dev->ctrl_hdl_radio_tx;
+	struct v4l2_ctrl_handler *hdl_sdr_cap = &dev->ctrl_hdl_sdr_cap;
+	struct v4l2_ctrl_config vivid_ctrl_dv_timings = {
+		.ops = &vivid_vid_cap_ctrl_ops,
+		.id = VIVID_CID_DV_TIMINGS,
+		.name = "DV Timings",
+		.type = V4L2_CTRL_TYPE_MENU,
+	};
+	int i;
+
+	v4l2_ctrl_handler_init(hdl_user_gen, 10);
+	v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_user_vid, 9);
+	v4l2_ctrl_new_custom(hdl_user_vid, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_user_aud, 2);
+	v4l2_ctrl_new_custom(hdl_user_aud, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_streaming, 8);
+	v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_sdtv_cap, 2);
+	v4l2_ctrl_new_custom(hdl_sdtv_cap, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_loop_out, 1);
+	v4l2_ctrl_new_custom(hdl_loop_out, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_vid_cap, 55);
+	v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_vid_out, 26);
+	v4l2_ctrl_new_custom(hdl_vid_out, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_vbi_cap, 21);
+	v4l2_ctrl_new_custom(hdl_vbi_cap, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_vbi_out, 19);
+	v4l2_ctrl_new_custom(hdl_vbi_out, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_radio_rx, 17);
+	v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_radio_tx, 17);
+	v4l2_ctrl_new_custom(hdl_radio_tx, &vivid_ctrl_class, NULL);
+	v4l2_ctrl_handler_init(hdl_sdr_cap, 18);
+	v4l2_ctrl_new_custom(hdl_sdr_cap, &vivid_ctrl_class, NULL);
+
+	/* User Controls */
+	dev->volume = v4l2_ctrl_new_std(hdl_user_aud, NULL,
+		V4L2_CID_AUDIO_VOLUME, 0, 255, 1, 200);
+	dev->mute = v4l2_ctrl_new_std(hdl_user_aud, NULL,
+		V4L2_CID_AUDIO_MUTE, 0, 1, 1, 0);
+	if (dev->has_vid_cap) {
+		dev->brightness = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_BRIGHTNESS, 0, 255, 1, 128);
+		for (i = 0; i < MAX_INPUTS; i++)
+			dev->input_brightness[i] = 128;
+		dev->contrast = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_CONTRAST, 0, 255, 1, 128);
+		dev->saturation = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_SATURATION, 0, 255, 1, 128);
+		dev->hue = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_HUE, -128, 128, 1, 0);
+		v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_HFLIP, 0, 1, 1, 0);
+		v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_VFLIP, 0, 1, 1, 0);
+		dev->autogain = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_AUTOGAIN, 0, 1, 1, 1);
+		dev->gain = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_GAIN, 0, 255, 1, 100);
+		dev->alpha = v4l2_ctrl_new_std(hdl_user_vid, &vivid_user_vid_ctrl_ops,
+			V4L2_CID_ALPHA_COMPONENT, 0, 255, 1, 0);
+	}
+	dev->button = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_button, NULL);
+	dev->int32 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int32, NULL);
+	dev->int64 = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int64, NULL);
+	dev->boolean = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_boolean, NULL);
+	dev->menu = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_menu, NULL);
+	dev->string = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_string, NULL);
+	dev->bitmask = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_bitmask, NULL);
+	dev->int_menu = v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_int_menu, NULL);
+
+	if (dev->has_vid_cap) {
+		/* Image Processing Controls */
+		struct v4l2_ctrl_config vivid_ctrl_test_pattern = {
+			.ops = &vivid_vid_cap_ctrl_ops,
+			.id = VIVID_CID_TEST_PATTERN,
+			.name = "Test Pattern",
+			.type = V4L2_CTRL_TYPE_MENU,
+			.max = TPG_PAT_NOISE,
+			.qmenu = tpg_pattern_strings,
+		};
+
+		dev->test_pattern = v4l2_ctrl_new_custom(hdl_vid_cap,
+				&vivid_ctrl_test_pattern, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_perc_fill, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_hor_movement, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_vert_movement, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_osd_mode, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_show_border, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_show_square, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_hflip, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_vflip, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_insert_sav, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_insert_eav, NULL);
+		if (show_ccs_cap) {
+			dev->ctrl_has_crop_cap = v4l2_ctrl_new_custom(hdl_vid_cap,
+				&vivid_ctrl_has_crop_cap, NULL);
+			dev->ctrl_has_compose_cap = v4l2_ctrl_new_custom(hdl_vid_cap,
+				&vivid_ctrl_has_compose_cap, NULL);
+			dev->ctrl_has_scaler_cap = v4l2_ctrl_new_custom(hdl_vid_cap,
+				&vivid_ctrl_has_scaler_cap, NULL);
+		}
+
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_tstamp_src, NULL);
+		dev->colorspace = v4l2_ctrl_new_custom(hdl_vid_cap,
+			&vivid_ctrl_colorspace, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_alpha_mode, NULL);
+	}
+
+	if (dev->has_vid_out && show_ccs_out) {
+		dev->ctrl_has_crop_out = v4l2_ctrl_new_custom(hdl_vid_out,
+			&vivid_ctrl_has_crop_out, NULL);
+		dev->ctrl_has_compose_out = v4l2_ctrl_new_custom(hdl_vid_out,
+			&vivid_ctrl_has_compose_out, NULL);
+		dev->ctrl_has_scaler_out = v4l2_ctrl_new_custom(hdl_vid_out,
+			&vivid_ctrl_has_scaler_out, NULL);
+	}
+
+	/*
+	 * Testing this driver with v4l2-compliance will trigger the error
+	 * injection controls, and after that nothing will work as expected.
+	 * So we have a module option to drop these error-injecting controls,
+	 * allowing us to run v4l2-compliance again.
+	 */
+	if (!no_error_inj) {
+		v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_disconnect, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_dqbuf_error, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_perc_dropped, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_queue_setup_error, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_buf_prepare_error, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_start_streaming_error, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_queue_error, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_seq_wrap, NULL);
+		v4l2_ctrl_new_custom(hdl_streaming, &vivid_ctrl_time_wrap, NULL);
+	}
+
+	if (has_sdtv && (dev->has_vid_cap || dev->has_vbi_cap)) {
+		if (dev->has_vid_cap)
+			v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_std_aspect_ratio, NULL);
+		dev->ctrl_std_signal_mode = v4l2_ctrl_new_custom(hdl_sdtv_cap,
+			&vivid_ctrl_std_signal_mode, NULL);
+		dev->ctrl_standard = v4l2_ctrl_new_custom(hdl_sdtv_cap,
+			&vivid_ctrl_standard, NULL);
+		if (dev->ctrl_std_signal_mode)
+			v4l2_ctrl_cluster(2, &dev->ctrl_std_signal_mode);
+		if (dev->has_raw_vbi_cap)
+			v4l2_ctrl_new_custom(hdl_vbi_cap, &vivid_ctrl_vbi_cap_interlaced, NULL);
+	}
+
+	if (has_hdmi && dev->has_vid_cap) {
+		dev->ctrl_dv_timings_signal_mode = v4l2_ctrl_new_custom(hdl_vid_cap,
+					&vivid_ctrl_dv_timings_signal_mode, NULL);
+
+		vivid_ctrl_dv_timings.max = dev->query_dv_timings_size - 1;
+		vivid_ctrl_dv_timings.qmenu =
+			(const char * const *)dev->query_dv_timings_qmenu;
+		dev->ctrl_dv_timings = v4l2_ctrl_new_custom(hdl_vid_cap,
+			&vivid_ctrl_dv_timings, NULL);
+		if (dev->ctrl_dv_timings_signal_mode)
+			v4l2_ctrl_cluster(2, &dev->ctrl_dv_timings_signal_mode);
+
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_dv_timings_aspect_ratio, NULL);
+		v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_max_edid_blocks, NULL);
+		dev->real_rgb_range_cap = v4l2_ctrl_new_custom(hdl_vid_cap,
+			&vivid_ctrl_limited_rgb_range, NULL);
+		dev->rgb_range_cap = v4l2_ctrl_new_std_menu(hdl_vid_cap,
+			&vivid_vid_cap_ctrl_ops,
+			V4L2_CID_DV_RX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL,
+			0, V4L2_DV_RGB_RANGE_AUTO);
+	}
+	if (has_hdmi && dev->has_vid_out) {
+		/*
+		 * We aren't doing anything with this at the moment, but
+		 * HDMI outputs typically have these controls.
+		 */
+		dev->ctrl_tx_rgb_range = v4l2_ctrl_new_std_menu(hdl_vid_out, NULL,
+			V4L2_CID_DV_TX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL,
+			0, V4L2_DV_RGB_RANGE_AUTO);
+		dev->ctrl_tx_mode = v4l2_ctrl_new_std_menu(hdl_vid_out, NULL,
+			V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI,
+			0, V4L2_DV_TX_MODE_HDMI);
+	}
+	if ((dev->has_vid_cap && dev->has_vid_out) ||
+	    (dev->has_vbi_cap && dev->has_vbi_out))
+		v4l2_ctrl_new_custom(hdl_loop_out, &vivid_ctrl_loop_video, NULL);
+
+	if (dev->has_fb)
+		v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_clear_fb, NULL);
+
+	if (dev->has_radio_rx) {
+		v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_hw_seek_mode, NULL);
+		v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_hw_seek_prog_lim, NULL);
+		v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_rx_rds_blockio, NULL);
+		v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_rx_rds_rbds, NULL);
+		v4l2_ctrl_new_std(hdl_radio_rx, &vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RECEPTION, 0, 1, 1, 1);
+		dev->radio_rx_rds_pty = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_PTY, 0, 31, 1, 0);
+		dev->radio_rx_rds_psname = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_PS_NAME, 0, 8, 8, 0);
+		dev->radio_rx_rds_radiotext = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_RADIO_TEXT, 0, 64, 64, 0);
+		dev->radio_rx_rds_ta = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_TRAFFIC_ANNOUNCEMENT, 0, 1, 1, 0);
+		dev->radio_rx_rds_tp = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_TRAFFIC_PROGRAM, 0, 1, 1, 0);
+		dev->radio_rx_rds_ms = v4l2_ctrl_new_std(hdl_radio_rx,
+			&vivid_radio_rx_ctrl_ops,
+			V4L2_CID_RDS_RX_MUSIC_SPEECH, 0, 1, 1, 1);
+	}
+	if (dev->has_radio_tx) {
+		v4l2_ctrl_new_custom(hdl_radio_tx,
+			&vivid_ctrl_radio_tx_rds_blockio, NULL);
+		dev->radio_tx_rds_pi = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_PI, 0, 0xffff, 1, 0x8088);
+		dev->radio_tx_rds_pty = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_PTY, 0, 31, 1, 3);
+		dev->radio_tx_rds_psname = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_PS_NAME, 0, 8, 8, 0);
+		if (dev->radio_tx_rds_psname)
+			v4l2_ctrl_s_ctrl_string(dev->radio_tx_rds_psname, "VIVID-TX");
+		dev->radio_tx_rds_radiotext = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_RADIO_TEXT, 0, 64 * 2, 64, 0);
+		if (dev->radio_tx_rds_radiotext)
+			v4l2_ctrl_s_ctrl_string(dev->radio_tx_rds_radiotext,
+			       "This is a VIVID default Radio Text template text, change at will");
+		dev->radio_tx_rds_mono_stereo = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_MONO_STEREO, 0, 1, 1, 1);
+		dev->radio_tx_rds_art_head = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_ARTIFICIAL_HEAD, 0, 1, 1, 0);
+		dev->radio_tx_rds_compressed = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_COMPRESSED, 0, 1, 1, 0);
+		dev->radio_tx_rds_dyn_pty = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_DYNAMIC_PTY, 0, 1, 1, 0);
+		dev->radio_tx_rds_ta = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_TRAFFIC_ANNOUNCEMENT, 0, 1, 1, 0);
+		dev->radio_tx_rds_tp = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_TRAFFIC_PROGRAM, 0, 1, 1, 1);
+		dev->radio_tx_rds_ms = v4l2_ctrl_new_std(hdl_radio_tx,
+			&vivid_radio_tx_ctrl_ops,
+			V4L2_CID_RDS_TX_MUSIC_SPEECH, 0, 1, 1, 1);
+	}
+	if (hdl_user_gen->error)
+		return hdl_user_gen->error;
+	if (hdl_user_vid->error)
+		return hdl_user_vid->error;
+	if (hdl_user_aud->error)
+		return hdl_user_aud->error;
+	if (hdl_streaming->error)
+		return hdl_streaming->error;
+	if (hdl_sdr_cap->error)
+		return hdl_sdr_cap->error;
+	if (hdl_loop_out->error)
+		return hdl_loop_out->error;
+
+	if (dev->autogain)
+		v4l2_ctrl_auto_cluster(2, &dev->autogain, 0, true);
+
+	if (dev->has_vid_cap) {
+		v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_vid, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_cap, hdl_user_aud, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_cap, hdl_streaming, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_cap, hdl_sdtv_cap, NULL);
+		if (hdl_vid_cap->error)
+			return hdl_vid_cap->error;
+		dev->vid_cap_dev.ctrl_handler = hdl_vid_cap;
+	}
+	if (dev->has_vid_out) {
+		v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_aud, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_out, hdl_streaming, NULL);
+		v4l2_ctrl_add_handler(hdl_vid_out, hdl_loop_out, NULL);
+		if (hdl_vid_out->error)
+			return hdl_vid_out->error;
+		dev->vid_out_dev.ctrl_handler = hdl_vid_out;
+	}
+	if (dev->has_vbi_cap) {
+		v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_streaming, NULL);
+		v4l2_ctrl_add_handler(hdl_vbi_cap, hdl_sdtv_cap, NULL);
+		if (hdl_vbi_cap->error)
+			return hdl_vbi_cap->error;
+		dev->vbi_cap_dev.ctrl_handler = hdl_vbi_cap;
+	}
+	if (dev->has_vbi_out) {
+		v4l2_ctrl_add_handler(hdl_vbi_out, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_vbi_out, hdl_streaming, NULL);
+		v4l2_ctrl_add_handler(hdl_vbi_out, hdl_loop_out, NULL);
+		if (hdl_vbi_out->error)
+			return hdl_vbi_out->error;
+		dev->vbi_out_dev.ctrl_handler = hdl_vbi_out;
+	}
+	if (dev->has_radio_rx) {
+		v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_radio_rx, hdl_user_aud, NULL);
+		if (hdl_radio_rx->error)
+			return hdl_radio_rx->error;
+		dev->radio_rx_dev.ctrl_handler = hdl_radio_rx;
+	}
+	if (dev->has_radio_tx) {
+		v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_radio_tx, hdl_user_aud, NULL);
+		if (hdl_radio_tx->error)
+			return hdl_radio_tx->error;
+		dev->radio_tx_dev.ctrl_handler = hdl_radio_tx;
+	}
+	if (dev->has_sdr_cap) {
+		v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_user_gen, NULL);
+		v4l2_ctrl_add_handler(hdl_sdr_cap, hdl_streaming, NULL);
+		if (hdl_sdr_cap->error)
+			return hdl_sdr_cap->error;
+		dev->sdr_cap_dev.ctrl_handler = hdl_sdr_cap;
+	}
+	return 0;
+}
+
+void vivid_free_controls(struct vivid_dev *dev)
+{
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_vid_cap);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_vid_out);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_vbi_cap);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_vbi_out);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_radio_rx);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_radio_tx);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_sdr_cap);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_user_gen);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_user_vid);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_user_aud);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_streaming);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_sdtv_cap);
+	v4l2_ctrl_handler_free(&dev->ctrl_hdl_loop_out);
+}
diff --git a/drivers/media/platform/vivid/vivid-ctrls.h b/drivers/media/platform/vivid/vivid-ctrls.h
new file mode 100644
index 0000000..9bcca9d
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-ctrls.h
@@ -0,0 +1,34 @@
+/*
+ * vivid-ctrls.h - control support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_CTRLS_H_
+#define _VIVID_CTRLS_H_
+
+enum vivid_hw_seek_modes {
+	VIVID_HW_SEEK_BOUNDED,
+	VIVID_HW_SEEK_WRAP,
+	VIVID_HW_SEEK_BOTH,
+};
+
+int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+		bool show_ccs_out, bool no_error_inj,
+		bool has_sdtv, bool has_hdmi);
+void vivid_free_controls(struct vivid_dev *dev);
+
+#endif
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 05/12] vivid: add the video capture and output parts
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (3 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 04/12] vivid: add the control handling code Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 06/12] vivid: add VBI capture and output code Hans Verkuil
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

This adds the ioctl and vb2 queue support for video capture and output.
Part of this is common to both, so that part is placed in the vivid-vid-common source file.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-vid-cap.c    | 1729 +++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-vid-cap.h    |   71 +
 drivers/media/platform/vivid/vivid-vid-common.c |  571 ++++++++
 drivers/media/platform/vivid/vivid-vid-common.h |   61 +
 drivers/media/platform/vivid/vivid-vid-out.c    | 1205 ++++++++++++++++
 drivers/media/platform/vivid/vivid-vid-out.h    |   57 +
 6 files changed, 3694 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-vid-cap.c
 create mode 100644 drivers/media/platform/vivid/vivid-vid-cap.h
 create mode 100644 drivers/media/platform/vivid/vivid-vid-common.c
 create mode 100644 drivers/media/platform/vivid/vivid-vid-common.h
 create mode 100644 drivers/media/platform/vivid/vivid-vid-out.c
 create mode 100644 drivers/media/platform/vivid/vivid-vid-out.h
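
For reference, here is a minimal userspace sketch (not part of the patch itself)
that exercises the new capture ioctls. It assumes the vivid capture node shows
up as /dev/video0; the actual device number depends on your setup. The same can
be done with e.g. "v4l2-ctl -d /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=YUYV".

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		struct v4l2_format fmt;
		int fd = open("/dev/video0", O_RDWR);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		memset(&fmt, 0, sizeof(fmt));
		fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		if (ioctl(fd, VIDIOC_G_FMT, &fmt)) {
			perror("VIDIOC_G_FMT");
			return 1;
		}
		/* Ask for 1280x720 YUYV; the driver adjusts what it cannot do. */
		fmt.fmt.pix.width = 1280;
		fmt.fmt.pix.height = 720;
		fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
		if (ioctl(fd, VIDIOC_S_FMT, &fmt)) {
			perror("VIDIOC_S_FMT");
			return 1;
		}
		printf("got %ux%u, bytesperline %u, sizeimage %u\n",
		       fmt.fmt.pix.width, fmt.fmt.pix.height,
		       fmt.fmt.pix.bytesperline, fmt.fmt.pix.sizeimage);
		return 0;
	}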

diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
new file mode 100644
index 0000000..52f24ea
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-cap.c
@@ -0,0 +1,1729 @@
+/*
+ * vivid-vid-cap.c - video capture support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+#include "vivid-kthread-cap.h"
+#include "vivid-vid-cap.h"
+
+/* timeperframe: min/max and default */
+static const struct v4l2_fract
+	tpf_min     = {.numerator = 1,		.denominator = FPS_MAX},
+	tpf_max     = {.numerator = FPS_MAX,	.denominator = 1},
+	tpf_default = {.numerator = 1,		.denominator = 30};
+
+static const struct vivid_fmt formats_ovl[] = {
+	{
+		.name     = "RGB565 (LE)",
+		.fourcc   = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
+		.depth    = 16,
+		.planes   = 1,
+	},
+	{
+		.name     = "XRGB555 (LE)",
+		.fourcc   = V4L2_PIX_FMT_XRGB555, /* gggbbbbb arrrrrgg */
+		.depth    = 16,
+		.planes   = 1,
+	},
+	{
+		.name     = "ARGB555 (LE)",
+		.fourcc   = V4L2_PIX_FMT_ARGB555, /* gggbbbbb arrrrrgg */
+		.depth    = 16,
+		.planes   = 1,
+	},
+};
+
+/* The number of discrete webcam framesizes */
+#define VIVID_WEBCAM_SIZES 3
+/* The number of discrete webcam frameintervals */
+#define VIVID_WEBCAM_IVALS (VIVID_WEBCAM_SIZES * 2)
+
+/* Sizes must be in increasing order */
+static const struct v4l2_frmsize_discrete webcam_sizes[VIVID_WEBCAM_SIZES] = {
+	{  320, 180 },
+	{  640, 360 },
+	{ 1280, 720 },
+};
+
+/*
+ * Intervals must be in increasing order and there must be twice as many
+ * elements in this array as there are in webcam_sizes.
+ */
+static const struct v4l2_fract webcam_intervals[VIVID_WEBCAM_IVALS] = {
+	{  1, 10 },
+	{  1, 15 },
+	{  1, 25 },
+	{  1, 30 },
+	{  1, 50 },
+	{  1, 60 },
+};
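+
+/*
+ * For webcam size index i only the first 2 * (VIVID_WEBCAM_SIZES - i)
+ * intervals above are used, so higher resolutions offer lower maximum
+ * frame rates.
+ */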
+
+static const struct v4l2_discrete_probe webcam_probe = {
+	webcam_sizes,
+	VIVID_WEBCAM_SIZES
+};
+
+static int vid_cap_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+		       unsigned *nbuffers, unsigned *nplanes,
+		       unsigned sizes[], void *alloc_ctxs[])
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	unsigned planes = tpg_g_planes(&dev->tpg);
+	unsigned h = dev->fmt_cap_rect.height;
+	unsigned p;
+
+	if (dev->field_cap == V4L2_FIELD_ALTERNATE) {
+		/*
+		 * You cannot use read() with FIELD_ALTERNATE since the field
+		 * information (TOP/BOTTOM) cannot be passed back to the user.
+		 */
+		if (vb2_fileio_is_active(vq))
+			return -EINVAL;
+	}
+
+	if (dev->queue_setup_error) {
+		/*
+		 * Error injection: test what happens if queue_setup() returns
+		 * an error.
+		 */
+		dev->queue_setup_error = false;
+		return -EINVAL;
+	}
+	if (fmt) {
+		const struct v4l2_pix_format_mplane *mp;
+		struct v4l2_format mp_fmt;
+		const struct vivid_fmt *vfmt;
+
+		if (!V4L2_TYPE_IS_MULTIPLANAR(fmt->type)) {
+			fmt_sp2mp(fmt, &mp_fmt);
+			fmt = &mp_fmt;
+		}
+		mp = &fmt->fmt.pix_mp;
+		/*
+		 * Check if the number of planes in the specified format matches
+		 * the number of planes in the current format. You can't mix them.
+		 */
+		if (mp->num_planes != planes)
+			return -EINVAL;
+		vfmt = get_format(dev, mp->pixelformat);
+		for (p = 0; p < planes; p++) {
+			sizes[p] = mp->plane_fmt[p].sizeimage;
+			if (sizes[p] < tpg_g_bytesperline(&dev->tpg, p) * h +
+							vfmt->data_offset[p])
+				return -EINVAL;
+		}
+	} else {
+		for (p = 0; p < planes; p++)
+			sizes[p] = tpg_g_bytesperline(&dev->tpg, p) * h +
+					dev->fmt_cap->data_offset[p];
+	}
+
+	if (vq->num_buffers + *nbuffers < 2)
+		*nbuffers = 2 - vq->num_buffers;
+
+	*nplanes = planes;
+
+	/*
+	 * videobuf2-vmalloc allocator is context-less so no need to set
+	 * alloc_ctxs array.
+	 */
+
+	if (planes == 2)
+		dprintk(dev, 1, "%s, count=%d, sizes=%u, %u\n", __func__,
+			*nbuffers, sizes[0], sizes[1]);
+	else
+		dprintk(dev, 1, "%s, count=%d, size=%u\n", __func__,
+			*nbuffers, sizes[0]);
+
+	return 0;
+}
+
+static int vid_cap_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	unsigned long size;
+	unsigned planes = tpg_g_planes(&dev->tpg);
+	unsigned p;
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (WARN_ON(NULL == dev->fmt_cap))
+		return -EINVAL;
+
+	if (dev->buf_prepare_error) {
+		/*
+		 * Error injection: test what happens if buf_prepare() returns
+		 * an error.
+		 */
+		dev->buf_prepare_error = false;
+		return -EINVAL;
+	}
+	for (p = 0; p < planes; p++) {
+		size = tpg_g_bytesperline(&dev->tpg, p) * dev->fmt_cap_rect.height +
+			dev->fmt_cap->data_offset[p];
+
+		if (vb2_plane_size(vb, p) < size) {
+			dprintk(dev, 1, "%s data will not fit into plane %u (%lu < %lu)\n",
+					__func__, p, vb2_plane_size(vb, p), size);
+			return -EINVAL;
+		}
+
+		vb2_set_plane_payload(vb, p, size);
+		vb->v4l2_planes[p].data_offset = dev->fmt_cap->data_offset[p];
+	}
+
+	return 0;
+}
+
+static void vid_cap_buf_finish(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct v4l2_timecode *tc = &vb->v4l2_buf.timecode;
+	unsigned fps = 25;
+	unsigned seq = vb->v4l2_buf.sequence;
+
+	if (!vivid_is_sdtv_cap(dev))
+		return;
+
+	/*
+	 * Set the timecode. Rarely used, so it is interesting to
+	 * test this.
+	 */
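+	/*
+	 * For example, at 30 fps sequence number 3723 maps to timecode
+	 * 00:02:04 frame 03.
+	 */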
+	vb->v4l2_buf.flags |= V4L2_BUF_FLAG_TIMECODE;
+	if (dev->std_cap & V4L2_STD_525_60)
+		fps = 30;
+	tc->type = (fps == 30) ? V4L2_TC_TYPE_30FPS : V4L2_TC_TYPE_25FPS;
+	tc->flags = 0;
+	tc->frames = seq % fps;
+	tc->seconds = (seq / fps) % 60;
+	tc->minutes = (seq / (60 * fps)) % 60;
+	tc->hours = (seq / (60 * 60 * fps)) % 24;
+}
+
+static void vid_cap_buf_queue(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct vivid_buffer *buf = container_of(vb, struct vivid_buffer, vb);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	spin_lock(&dev->slock);
+	list_add_tail(&buf->list, &dev->vid_cap_active);
+	spin_unlock(&dev->slock);
+}
+
+static int vid_cap_start_streaming(struct vb2_queue *vq, unsigned count)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	unsigned i;
+	int err;
+
+	if (vb2_is_streaming(&dev->vb_vid_out_q))
+		dev->can_loop_video = vivid_vid_can_loop(dev);
+
+	if (dev->kthread_vid_cap)
+		return 0;
+
+	dev->vid_cap_seq_count = 0;
+	dprintk(dev, 1, "%s\n", __func__);
+	for (i = 0; i < VIDEO_MAX_FRAME; i++)
+		dev->must_blank[i] = tpg_g_perc_fill(&dev->tpg) < 100;
+	if (dev->start_streaming_error) {
+		dev->start_streaming_error = false;
+		err = -EINVAL;
+	} else {
+		err = vivid_start_generating_vid_cap(dev, &dev->vid_cap_streaming);
+	}
+	if (err) {
+		struct vivid_buffer *buf, *tmp;
+
+		list_for_each_entry_safe(buf, tmp, &dev->vid_cap_active, list) {
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+		}
+	}
+	return err;
+}
+
+/* abort streaming and wait for last buffer */
+static void vid_cap_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	dprintk(dev, 1, "%s\n", __func__);
+	vivid_stop_generating_vid_cap(dev, &dev->vid_cap_streaming);
+	dev->can_loop_video = false;
+}
+
+const struct vb2_ops vivid_vid_cap_qops = {
+	.queue_setup		= vid_cap_queue_setup,
+	.buf_prepare		= vid_cap_buf_prepare,
+	.buf_finish		= vid_cap_buf_finish,
+	.buf_queue		= vid_cap_buf_queue,
+	.start_streaming	= vid_cap_start_streaming,
+	.stop_streaming		= vid_cap_stop_streaming,
+	.wait_prepare		= vivid_unlock,
+	.wait_finish		= vivid_lock,
+};
+
+/*
+ * Determine the 'picture' quality based on the current TV frequency: either
+ * COLOR for a good 'signal', GRAY (grayscale picture) for a slightly off
+ * signal or NOISE for no signal.
+ */
+void vivid_update_quality(struct vivid_dev *dev)
+{
+	unsigned freq_modulus;
+
+	if (dev->loop_video && (vivid_is_svid_cap(dev) || vivid_is_hdmi_cap(dev))) {
+		/*
+		 * The 'noise' will only be replaced by the actual video
+		 * if the output video matches the input video settings.
+		 */
+		tpg_s_quality(&dev->tpg, TPG_QUAL_NOISE, 0);
+		return;
+	}
+	if (vivid_is_hdmi_cap(dev) && VIVID_INVALID_SIGNAL(dev->dv_timings_signal_mode)) {
+		tpg_s_quality(&dev->tpg, TPG_QUAL_NOISE, 0);
+		return;
+	}
+	if (vivid_is_sdtv_cap(dev) && VIVID_INVALID_SIGNAL(dev->std_signal_mode)) {
+		tpg_s_quality(&dev->tpg, TPG_QUAL_NOISE, 0);
+		return;
+	}
+	if (!vivid_is_tv_cap(dev)) {
+		tpg_s_quality(&dev->tpg, TPG_QUAL_COLOR, 0);
+		return;
+	}
+
+	/*
+	 * There is a fake channel every 6 MHz at 49.25, 55.25, etc.
+	 * Within +/- 0.25 MHz of the channel there is color, from there up
+	 * to +/- 1 MHz there is grayscale (chroma is lost), and everywhere
+	 * else there is just noise.
+	 */
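+	/*
+	 * Worked example (tv_freq is in 1/16 MHz units): at 55.25 MHz
+	 * (884) the modulus below is 16, i.e. spot on the channel, so
+	 * color; at 55.75 MHz (892) it is 24, so grayscale; at 57 MHz
+	 * (912) it is 44, so noise.
+	 */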
+	freq_modulus = (dev->tv_freq - 676 /* (43.25-1) * 16 */) % (6 * 16);
+	if (freq_modulus > 2 * 16) {
+		tpg_s_quality(&dev->tpg, TPG_QUAL_NOISE,
+			next_pseudo_random32(dev->tv_freq ^ 0x55) & 0x3f);
+		return;
+	}
+	if (freq_modulus < 12 /*0.75 * 16*/ || freq_modulus > 20 /*1.25 * 16*/)
+		tpg_s_quality(&dev->tpg, TPG_QUAL_GRAY, 0);
+	else
+		tpg_s_quality(&dev->tpg, TPG_QUAL_COLOR, 0);
+}
+
+/*
+ * Get the current picture quality and the associated afc value.
+ */
+static enum tpg_quality vivid_get_quality(struct vivid_dev *dev, s32 *afc)
+{
+	unsigned freq_modulus;
+
+	if (afc)
+		*afc = 0;
+	if (tpg_g_quality(&dev->tpg) == TPG_QUAL_COLOR ||
+	    tpg_g_quality(&dev->tpg) == TPG_QUAL_NOISE)
+		return tpg_g_quality(&dev->tpg);
+
+	/*
+	 * There is a fake channel every 6 MHz at 49.25, 55.25, etc.
+	 * Within +/- 0.25 MHz of the channel there is color, and from there
+	 * up to +/- 1 MHz there is grayscale (chroma is lost).
+	 * Everywhere else it is just gray.
+	 */
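+	/*
+	 * For example, a tuner sitting 0.5 MHz above the channel gives a
+	 * freq_modulus of 24 and thus afc = +8 (0.5 MHz in 1/16 MHz units).
+	 */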
+	freq_modulus = (dev->tv_freq - 676 /* (43.25-1) * 16 */) % (6 * 16);
+	if (afc)
+		*afc = freq_modulus - 1 * 16;
+	return TPG_QUAL_GRAY;
+}
+
+enum tpg_video_aspect vivid_get_video_aspect(const struct vivid_dev *dev)
+{
+	if (vivid_is_sdtv_cap(dev))
+		return dev->std_aspect_ratio;
+
+	if (vivid_is_hdmi_cap(dev))
+		return dev->dv_timings_aspect_ratio;
+
+	return TPG_VIDEO_ASPECT_IMAGE;
+}
+
+static enum tpg_pixel_aspect vivid_get_pixel_aspect(const struct vivid_dev *dev)
+{
+	if (vivid_is_sdtv_cap(dev))
+		return (dev->std_cap & V4L2_STD_525_60) ?
+			TPG_PIXEL_ASPECT_NTSC : TPG_PIXEL_ASPECT_PAL;
+
+	if (vivid_is_hdmi_cap(dev) &&
+	    dev->src_rect.width == 720 && dev->src_rect.height <= 576)
+		return dev->src_rect.height == 480 ?
+			TPG_PIXEL_ASPECT_NTSC : TPG_PIXEL_ASPECT_PAL;
+
+	return TPG_PIXEL_ASPECT_SQUARE;
+}
+
+/*
+ * Called whenever the format has to be reset, which can occur when
+ * changing inputs, standards, timings, etc.
+ */
+void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls)
+{
+	struct v4l2_bt_timings *bt = &dev->dv_timings_cap.bt;
+	unsigned size;
+
+	switch (dev->input_type[dev->input]) {
+	case WEBCAM:
+	default:
+		dev->src_rect.width = webcam_sizes[dev->webcam_size_idx].width;
+		dev->src_rect.height = webcam_sizes[dev->webcam_size_idx].height;
+		dev->timeperframe_vid_cap = webcam_intervals[dev->webcam_ival_idx];
+		dev->field_cap = V4L2_FIELD_NONE;
+		tpg_s_rgb_range(&dev->tpg, V4L2_DV_RGB_RANGE_AUTO);
+		break;
+	case TV:
+	case SVID:
+		dev->field_cap = dev->tv_field_cap;
+		dev->src_rect.width = 720;
+		if (dev->std_cap & V4L2_STD_525_60) {
+			dev->src_rect.height = 480;
+			dev->timeperframe_vid_cap = (struct v4l2_fract) { 1001, 30000 };
+			dev->service_set_cap = V4L2_SLICED_CAPTION_525;
+		} else {
+			dev->src_rect.height = 576;
+			dev->timeperframe_vid_cap = (struct v4l2_fract) { 1000, 25000 };
+			dev->service_set_cap = V4L2_SLICED_WSS_625;
+		}
+		tpg_s_rgb_range(&dev->tpg, V4L2_DV_RGB_RANGE_AUTO);
+		break;
+	case HDMI:
+		dev->src_rect.width = bt->width;
+		dev->src_rect.height = bt->height;
+		size = V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt);
+		dev->timeperframe_vid_cap = (struct v4l2_fract) {
+			size / 100, (u32)bt->pixelclock / 100
+		};
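+		/*
+		 * E.g. 1080p60 (2200x1125 total, 148.5 MHz pixelclock)
+		 * gives 24750/1485000, i.e. 1/60th of a second per frame.
+		 */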
+		if (bt->interlaced)
+			dev->field_cap = V4L2_FIELD_ALTERNATE;
+		else
+			dev->field_cap = V4L2_FIELD_NONE;
+
+		/*
+		 * We can be called from within s_ctrl; in that case we can't
+		 * set/get controls. Luckily we don't need to in that case.
+		 */
+		if (keep_controls || !dev->colorspace)
+			break;
+		if (bt->standards & V4L2_DV_BT_STD_CEA861) {
+			if (bt->width == 720 && bt->height <= 576)
+				v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SMPTE170M);
+			else
+				v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_REC709);
+			v4l2_ctrl_s_ctrl(dev->real_rgb_range_cap, 1);
+		} else {
+			v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SRGB);
+			v4l2_ctrl_s_ctrl(dev->real_rgb_range_cap, 0);
+		}
+		tpg_s_rgb_range(&dev->tpg, v4l2_ctrl_g_ctrl(dev->rgb_range_cap));
+		break;
+	}
+	vivid_update_quality(dev);
+	tpg_reset_source(&dev->tpg, dev->src_rect.width, dev->src_rect.height, dev->field_cap);
+	dev->crop_cap = dev->src_rect;
+	dev->crop_bounds_cap = dev->src_rect;
+	dev->compose_cap = dev->crop_cap;
+	if (V4L2_FIELD_HAS_T_OR_B(dev->field_cap))
+		dev->compose_cap.height /= 2;
+	dev->fmt_cap_rect = dev->compose_cap;
+	tpg_s_video_aspect(&dev->tpg, vivid_get_video_aspect(dev));
+	tpg_s_pixel_aspect(&dev->tpg, vivid_get_pixel_aspect(dev));
+	tpg_update_mv_step(&dev->tpg);
+}
+
+/* Map the field to something that is valid for the current input */
+static enum v4l2_field vivid_field_cap(struct vivid_dev *dev, enum v4l2_field field)
+{
+	if (vivid_is_sdtv_cap(dev)) {
+		switch (field) {
+		case V4L2_FIELD_INTERLACED_TB:
+		case V4L2_FIELD_INTERLACED_BT:
+		case V4L2_FIELD_SEQ_TB:
+		case V4L2_FIELD_SEQ_BT:
+		case V4L2_FIELD_TOP:
+		case V4L2_FIELD_BOTTOM:
+		case V4L2_FIELD_ALTERNATE:
+			return field;
+		case V4L2_FIELD_INTERLACED:
+		default:
+			return V4L2_FIELD_INTERLACED;
+		}
+	}
+	if (vivid_is_hdmi_cap(dev))
+		return dev->dv_timings_cap.bt.interlaced ? V4L2_FIELD_ALTERNATE :
+						       V4L2_FIELD_NONE;
+	return V4L2_FIELD_NONE;
+}
+
+static unsigned vivid_colorspace_cap(struct vivid_dev *dev)
+{
+	if (!dev->loop_video || vivid_is_webcam(dev) || vivid_is_tv_cap(dev))
+		return tpg_g_colorspace(&dev->tpg);
+	return dev->colorspace_out;
+}
+
+int vivid_g_fmt_vid_cap(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	unsigned p;
+
+	mp->width        = dev->fmt_cap_rect.width;
+	mp->height       = dev->fmt_cap_rect.height;
+	mp->field        = dev->field_cap;
+	mp->pixelformat  = dev->fmt_cap->fourcc;
+	mp->colorspace   = vivid_colorspace_cap(dev);
+	mp->num_planes = dev->fmt_cap->planes;
+	for (p = 0; p < mp->num_planes; p++) {
+		mp->plane_fmt[p].bytesperline = tpg_g_bytesperline(&dev->tpg, p);
+		mp->plane_fmt[p].sizeimage =
+			mp->plane_fmt[p].bytesperline * mp->height +
+			dev->fmt_cap->data_offset[p];
+	}
+	return 0;
+}
+
+int vivid_try_fmt_vid_cap(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	struct v4l2_plane_pix_format *pfmt = mp->plane_fmt;
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct vivid_fmt *fmt;
+	unsigned bytesperline, max_bpl;
+	unsigned factor = 1;
+	unsigned w, h;
+	unsigned p;
+
+	fmt = get_format(dev, mp->pixelformat);
+	if (!fmt) {
+		dprintk(dev, 1, "Fourcc format (0x%08x) unknown.\n",
+			mp->pixelformat);
+		mp->pixelformat = V4L2_PIX_FMT_YUYV;
+		fmt = get_format(dev, mp->pixelformat);
+	}
+
+	mp->field = vivid_field_cap(dev, mp->field);
+	if (vivid_is_webcam(dev)) {
+		const struct v4l2_frmsize_discrete *sz =
+			v4l2_find_nearest_format(&webcam_probe, mp->width, mp->height);
+
+		w = sz->width;
+		h = sz->height;
+	} else if (vivid_is_sdtv_cap(dev)) {
+		w = 720;
+		h = (dev->std_cap & V4L2_STD_525_60) ? 480 : 576;
+	} else {
+		w = dev->src_rect.width;
+		h = dev->src_rect.height;
+	}
+	if (V4L2_FIELD_HAS_T_OR_B(mp->field))
+		factor = 2;
+	if (vivid_is_webcam(dev) ||
+	    (!dev->has_scaler_cap && !dev->has_crop_cap && !dev->has_compose_cap)) {
+		mp->width = w;
+		mp->height = h / factor;
+	} else {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height * factor };
+
+		rect_set_min_size(&r, &vivid_min_rect);
+		rect_set_max_size(&r, &vivid_max_rect);
+		if (dev->has_scaler_cap && !dev->has_compose_cap) {
+			struct v4l2_rect max_r = { 0, 0, MAX_ZOOM * w, MAX_ZOOM * h };
+
+			rect_set_max_size(&r, &max_r);
+		} else if (!dev->has_scaler_cap && dev->has_crop_cap && !dev->has_compose_cap) {
+			rect_set_max_size(&r, &dev->src_rect);
+		} else if (!dev->has_scaler_cap && !dev->has_crop_cap) {
+			rect_set_min_size(&r, &dev->src_rect);
+		}
+		mp->width = r.width;
+		mp->height = r.height / factor;
+	}
+
+	/* This driver supports custom bytesperline values */
+
+	/* Calculate the minimum supported bytesperline value */
+	bytesperline = (mp->width * fmt->depth) >> 3;
+	/* Calculate the maximum supported bytesperline value */
+	max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->depth) >> 3;
+	mp->num_planes = fmt->planes;
+	for (p = 0; p < mp->num_planes; p++) {
+		if (pfmt[p].bytesperline > max_bpl)
+			pfmt[p].bytesperline = max_bpl;
+		if (pfmt[p].bytesperline < bytesperline)
+			pfmt[p].bytesperline = bytesperline;
+		pfmt[p].sizeimage = pfmt[p].bytesperline * mp->height +
+			fmt->data_offset[p];
+		memset(pfmt[p].reserved, 0, sizeof(pfmt[p].reserved));
+	}
+	mp->colorspace = vivid_colorspace_cap(dev);
+	memset(mp->reserved, 0, sizeof(mp->reserved));
+	return 0;
+}
+
+int vivid_s_fmt_vid_cap(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_rect *crop = &dev->crop_cap;
+	struct v4l2_rect *compose = &dev->compose_cap;
+	struct vb2_queue *q = &dev->vb_vid_cap_q;
+	int ret = vivid_try_fmt_vid_cap(file, priv, f);
+	unsigned factor = 1;
+	unsigned i;
+
+	if (ret < 0)
+		return ret;
+
+	if (vb2_is_busy(q)) {
+		dprintk(dev, 1, "%s device busy\n", __func__);
+		return -EBUSY;
+	}
+
+	if (dev->overlay_cap_owner && dev->fb_cap.fmt.pixelformat != mp->pixelformat) {
+		dprintk(dev, 1, "overlay is active, can't change pixelformat\n");
+		return -EBUSY;
+	}
+
+	dev->fmt_cap = get_format(dev, mp->pixelformat);
+	if (V4L2_FIELD_HAS_T_OR_B(mp->field))
+		factor = 2;
+
+	/* Note: the webcam input doesn't support scaling, cropping or composing */
+
+	if (!vivid_is_webcam(dev) &&
+	    (dev->has_scaler_cap || dev->has_crop_cap || dev->has_compose_cap)) {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height };
+
+		if (dev->has_scaler_cap) {
+			if (dev->has_compose_cap)
+				rect_map_inside(compose, &r);
+			else
+				*compose = r;
+			if (dev->has_crop_cap && !dev->has_compose_cap) {
+				struct v4l2_rect min_r = {
+					0, 0,
+					r.width / MAX_ZOOM,
+					factor * r.height / MAX_ZOOM
+				};
+				struct v4l2_rect max_r = {
+					0, 0,
+					r.width * MAX_ZOOM,
+					factor * r.height * MAX_ZOOM
+				};
+
+				rect_set_min_size(crop, &min_r);
+				rect_set_max_size(crop, &max_r);
+				rect_map_inside(crop, &dev->crop_bounds_cap);
+			} else if (dev->has_crop_cap) {
+				struct v4l2_rect min_r = {
+					0, 0,
+					compose->width / MAX_ZOOM,
+					factor * compose->height / MAX_ZOOM
+				};
+				struct v4l2_rect max_r = {
+					0, 0,
+					compose->width * MAX_ZOOM,
+					factor * compose->height * MAX_ZOOM
+				};
+
+				rect_set_min_size(crop, &min_r);
+				rect_set_max_size(crop, &max_r);
+				rect_map_inside(crop, &dev->crop_bounds_cap);
+			}
+		} else if (dev->has_crop_cap && !dev->has_compose_cap) {
+			r.height *= factor;
+			rect_set_size_to(crop, &r);
+			rect_map_inside(crop, &dev->crop_bounds_cap);
+			r = *crop;
+			r.height /= factor;
+			rect_set_size_to(compose, &r);
+		} else if (!dev->has_crop_cap) {
+			rect_map_inside(compose, &r);
+		} else {
+			r.height *= factor;
+			rect_set_max_size(crop, &r);
+			rect_map_inside(crop, &dev->crop_bounds_cap);
+			compose->top *= factor;
+			compose->height *= factor;
+			rect_set_size_to(compose, crop);
+			rect_map_inside(compose, &r);
+			compose->top /= factor;
+			compose->height /= factor;
+		}
+	} else if (vivid_is_webcam(dev)) {
+		/* Guaranteed to be a match */
+		for (i = 0; i < ARRAY_SIZE(webcam_sizes); i++)
+			if (webcam_sizes[i].width == mp->width &&
+					webcam_sizes[i].height == mp->height)
+				break;
+		dev->webcam_size_idx = i;
+		if (dev->webcam_ival_idx >= 2 * (VIVID_WEBCAM_SIZES - i))
+			dev->webcam_ival_idx = 2 * (VIVID_WEBCAM_SIZES - i) - 1;
+		vivid_update_format_cap(dev, false);
+	} else {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height };
+
+		rect_set_size_to(compose, &r);
+		r.height *= factor;
+		rect_set_size_to(crop, &r);
+	}
+
+	dev->fmt_cap_rect.width = mp->width;
+	dev->fmt_cap_rect.height = mp->height;
+	tpg_s_buf_height(&dev->tpg, mp->height);
+	tpg_s_bytesperline(&dev->tpg, 0, mp->plane_fmt[0].bytesperline);
+	if (tpg_g_planes(&dev->tpg) > 1)
+		tpg_s_bytesperline(&dev->tpg, 1, mp->plane_fmt[1].bytesperline);
+	dev->field_cap = mp->field;
+	tpg_s_field(&dev->tpg, dev->field_cap);
+	tpg_s_crop_compose(&dev->tpg, &dev->crop_cap, &dev->compose_cap);
+	tpg_s_fourcc(&dev->tpg, dev->fmt_cap->fourcc);
+	if (vivid_is_sdtv_cap(dev))
+		dev->tv_field_cap = mp->field;
+	tpg_update_mv_step(&dev->tpg);
+	return 0;
+}
+
+int vidioc_g_fmt_vid_cap_mplane(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_g_fmt_vid_cap(file, priv, f);
+}
+
+int vidioc_try_fmt_vid_cap_mplane(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_try_fmt_vid_cap(file, priv, f);
+}
+
+int vidioc_s_fmt_vid_cap_mplane(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_s_fmt_vid_cap(file, priv, f);
+}
+
+int vidioc_g_fmt_vid_cap(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_g_fmt_vid_cap);
+}
+
+int vidioc_try_fmt_vid_cap(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_try_fmt_vid_cap);
+}
+
+int vidioc_s_fmt_vid_cap(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_s_fmt_vid_cap);
+}
+
+int vivid_vid_cap_g_selection(struct file *file, void *priv,
+			      struct v4l2_selection *sel)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->has_crop_cap && !dev->has_compose_cap)
+		return -ENOTTY;
+	if (sel->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+	if (vivid_is_webcam(dev))
+		return -EINVAL;
+
+	sel->r.left = sel->r.top = 0;
+	switch (sel->target) {
+	case V4L2_SEL_TGT_CROP:
+		if (!dev->has_crop_cap)
+			return -EINVAL;
+		sel->r = dev->crop_cap;
+		break;
+	case V4L2_SEL_TGT_CROP_DEFAULT:
+	case V4L2_SEL_TGT_CROP_BOUNDS:
+		if (!dev->has_crop_cap)
+			return -EINVAL;
+		sel->r = dev->src_rect;
+		break;
+	case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+		if (!dev->has_compose_cap)
+			return -EINVAL;
+		sel->r = vivid_max_rect;
+		break;
+	case V4L2_SEL_TGT_COMPOSE:
+		if (!dev->has_compose_cap)
+			return -EINVAL;
+		sel->r = dev->compose_cap;
+		break;
+	case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+		if (!dev->has_compose_cap)
+			return -EINVAL;
+		sel->r = dev->fmt_cap_rect;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection *s)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_rect *crop = &dev->crop_cap;
+	struct v4l2_rect *compose = &dev->compose_cap;
+	unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
+	int ret;
+
+	if (!dev->has_crop_cap && !dev->has_compose_cap)
+		return -ENOTTY;
+	if (s->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+	if (vivid_is_webcam(dev))
+		return -EINVAL;
+
+	switch (s->target) {
+	case V4L2_SEL_TGT_CROP:
+		if (!dev->has_crop_cap)
+			return -EINVAL;
+		ret = vivid_vid_adjust_sel(s->flags, &s->r);
+		if (ret)
+			return ret;
+		rect_set_min_size(&s->r, &vivid_min_rect);
+		rect_set_max_size(&s->r, &dev->src_rect);
+		rect_map_inside(&s->r, &dev->crop_bounds_cap);
+		s->r.top /= factor;
+		s->r.height /= factor;
+		if (dev->has_scaler_cap) {
+			struct v4l2_rect fmt = dev->fmt_cap_rect;
+			struct v4l2_rect max_rect = {
+				0, 0,
+				s->r.width * MAX_ZOOM,
+				s->r.height * MAX_ZOOM
+			};
+			struct v4l2_rect min_rect = {
+				0, 0,
+				s->r.width / MAX_ZOOM,
+				s->r.height / MAX_ZOOM
+			};
+
+			rect_set_min_size(&fmt, &min_rect);
+			if (!dev->has_compose_cap)
+				rect_set_max_size(&fmt, &max_rect);
+			if (!rect_same_size(&dev->fmt_cap_rect, &fmt) &&
+			    vb2_is_busy(&dev->vb_vid_cap_q))
+				return -EBUSY;
+			if (dev->has_compose_cap) {
+				rect_set_min_size(compose, &min_rect);
+				rect_set_max_size(compose, &max_rect);
+			}
+			dev->fmt_cap_rect = fmt;
+			tpg_s_buf_height(&dev->tpg, fmt.height);
+		} else if (dev->has_compose_cap) {
+			struct v4l2_rect fmt = dev->fmt_cap_rect;
+
+			rect_set_min_size(&fmt, &s->r);
+			if (!rect_same_size(&dev->fmt_cap_rect, &fmt) &&
+			    vb2_is_busy(&dev->vb_vid_cap_q))
+				return -EBUSY;
+			dev->fmt_cap_rect = fmt;
+			tpg_s_buf_height(&dev->tpg, fmt.height);
+			rect_set_size_to(compose, &s->r);
+			rect_map_inside(compose, &dev->fmt_cap_rect);
+		} else {
+			if (!rect_same_size(&s->r, &dev->fmt_cap_rect) &&
+			    vb2_is_busy(&dev->vb_vid_cap_q))
+				return -EBUSY;
+			rect_set_size_to(&dev->fmt_cap_rect, &s->r);
+			rect_set_size_to(compose, &s->r);
+			rect_map_inside(compose, &dev->fmt_cap_rect);
+			tpg_s_buf_height(&dev->tpg, dev->fmt_cap_rect.height);
+		}
+		s->r.top *= factor;
+		s->r.height *= factor;
+		*crop = s->r;
+		break;
+	case V4L2_SEL_TGT_COMPOSE:
+		if (!dev->has_compose_cap)
+			return -EINVAL;
+		ret = vivid_vid_adjust_sel(s->flags, &s->r);
+		if (ret)
+			return ret;
+		rect_set_min_size(&s->r, &vivid_min_rect);
+		rect_set_max_size(&s->r, &dev->fmt_cap_rect);
+		if (dev->has_scaler_cap) {
+			struct v4l2_rect max_rect = {
+				0, 0,
+				dev->src_rect.width * MAX_ZOOM,
+				(dev->src_rect.height / factor) * MAX_ZOOM
+			};
+
+			rect_set_max_size(&s->r, &max_rect);
+			if (dev->has_crop_cap) {
+				struct v4l2_rect min_rect = {
+					0, 0,
+					s->r.width / MAX_ZOOM,
+					(s->r.height * factor) / MAX_ZOOM
+				};
+				struct v4l2_rect max_rect = {
+					0, 0,
+					s->r.width * MAX_ZOOM,
+					(s->r.height * factor) * MAX_ZOOM
+				};
+
+				rect_set_min_size(crop, &min_rect);
+				rect_set_max_size(crop, &max_rect);
+				rect_map_inside(crop, &dev->crop_bounds_cap);
+			}
+		} else if (dev->has_crop_cap) {
+			s->r.top *= factor;
+			s->r.height *= factor;
+			rect_set_max_size(&s->r, &dev->src_rect);
+			rect_set_size_to(crop, &s->r);
+			rect_map_inside(crop, &dev->crop_bounds_cap);
+			s->r.top /= factor;
+			s->r.height /= factor;
+		} else {
+			rect_set_size_to(&s->r, &dev->src_rect);
+			s->r.height /= factor;
+		}
+		rect_map_inside(&s->r, &dev->fmt_cap_rect);
+		if (dev->bitmap_cap && (compose->width != s->r.width ||
+					compose->height != s->r.height)) {
+			kfree(dev->bitmap_cap);
+			dev->bitmap_cap = NULL;
+		}
+		*compose = s->r;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	tpg_s_crop_compose(&dev->tpg, crop, compose);
+	return 0;
+}
+
+int vivid_vid_cap_cropcap(struct file *file, void *priv,
+			      struct v4l2_cropcap *cap)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (cap->type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+		return -EINVAL;
+
+	switch (vivid_get_pixel_aspect(dev)) {
+	case TPG_PIXEL_ASPECT_NTSC:
+		cap->pixelaspect.numerator = 11;
+		cap->pixelaspect.denominator = 10;
+		break;
+	case TPG_PIXEL_ASPECT_PAL:
+		cap->pixelaspect.numerator = 54;
+		cap->pixelaspect.denominator = 59;
+		break;
+	case TPG_PIXEL_ASPECT_SQUARE:
+		cap->pixelaspect.numerator = 1;
+		cap->pixelaspect.denominator = 1;
+		break;
+	}
+	return 0;
+}
+
+int vidioc_enum_fmt_vid_overlay(struct file *file, void  *priv,
+					struct v4l2_fmtdesc *f)
+{
+	const struct vivid_fmt *fmt;
+
+	if (f->index >= ARRAY_SIZE(formats_ovl))
+		return -EINVAL;
+
+	fmt = &formats_ovl[f->index];
+
+	strlcpy(f->description, fmt->name, sizeof(f->description));
+	f->pixelformat = fmt->fourcc;
+	return 0;
+}
+
+int vidioc_g_fmt_vid_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_cap;
+	struct v4l2_window *win = &f->fmt.win;
+	unsigned clipcount = win->clipcount;
+
+	win->w.top = dev->overlay_cap_top;
+	win->w.left = dev->overlay_cap_left;
+	win->w.width = compose->width;
+	win->w.height = compose->height;
+	win->field = dev->overlay_cap_field;
+	win->clipcount = dev->clipcount_cap;
+	if (clipcount > dev->clipcount_cap)
+		clipcount = dev->clipcount_cap;
+	if (dev->bitmap_cap == NULL)
+		win->bitmap = NULL;
+	else if (win->bitmap) {
+		if (copy_to_user(win->bitmap, dev->bitmap_cap,
+		    ((compose->width + 7) / 8) * compose->height))
+			return -EFAULT;
+	}
+	if (clipcount && win->clips) {
+		if (copy_to_user(win->clips, dev->clips_cap,
+				 clipcount * sizeof(dev->clips_cap[0])))
+			return -EFAULT;
+	}
+	return 0;
+}
+
+int vidioc_try_fmt_vid_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_cap;
+	struct v4l2_window *win = &f->fmt.win;
+	int i, j;
+
+	win->w.left = clamp_t(int, win->w.left,
+			      -dev->fb_cap.fmt.width, dev->fb_cap.fmt.width);
+	win->w.top = clamp_t(int, win->w.top,
+			     -dev->fb_cap.fmt.height, dev->fb_cap.fmt.height);
+	win->w.width = compose->width;
+	win->w.height = compose->height;
+	if (win->field != V4L2_FIELD_BOTTOM && win->field != V4L2_FIELD_TOP)
+		win->field = V4L2_FIELD_ANY;
+	win->chromakey = 0;
+	win->global_alpha = 0;
+	if (win->clipcount && !win->clips)
+		win->clipcount = 0;
+	if (win->clipcount > MAX_CLIPS)
+		win->clipcount = MAX_CLIPS;
+	if (win->clipcount) {
+		if (copy_from_user(dev->try_clips_cap, win->clips,
+				   win->clipcount * sizeof(dev->clips_cap[0])))
+			return -EFAULT;
+		for (i = 0; i < win->clipcount; i++) {
+			struct v4l2_rect *r = &dev->try_clips_cap[i].c;
+
+			r->top = clamp_t(s32, r->top, 0, dev->fb_cap.fmt.height - 1);
+			r->height = clamp_t(s32, r->height, 1, dev->fb_cap.fmt.height - r->top);
+			r->left = clamp_t(u32, r->left, 0, dev->fb_cap.fmt.width - 1);
+			r->width = clamp_t(u32, r->width, 1, dev->fb_cap.fmt.width - r->left);
+		}
+		/*
+		 * Yeah, so sue me, it's an O(n^2) algorithm. But n is a small
+		 * number and it's typically a one-time deal.
+		 */
+		for (i = 0; i < win->clipcount - 1; i++) {
+			struct v4l2_rect *r1 = &dev->try_clips_cap[i].c;
+
+			for (j = i + 1; j < win->clipcount; j++) {
+				struct v4l2_rect *r2 = &dev->try_clips_cap[j].c;
+
+				if (rect_overlap(r1, r2))
+					return -EINVAL;
+			}
+		}
+		if (copy_to_user(win->clips, dev->try_clips_cap,
+				 win->clipcount * sizeof(dev->clips_cap[0])))
+			return -EFAULT;
+	}
+	return 0;
+}
+
+int vidioc_s_fmt_vid_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_cap;
+	struct v4l2_window *win = &f->fmt.win;
+	int ret = vidioc_try_fmt_vid_overlay(file, priv, f);
+	unsigned bitmap_size = ((compose->width + 7) / 8) * compose->height;
+	unsigned clips_size = win->clipcount * sizeof(dev->clips_cap[0]);
+	void *new_bitmap = NULL;
+
+	if (ret)
+		return ret;
+
+	if (win->bitmap) {
+		new_bitmap = vzalloc(bitmap_size);
+
+		if (new_bitmap == NULL)
+			return -ENOMEM;
+		if (copy_from_user(new_bitmap, win->bitmap, bitmap_size)) {
+			vfree(new_bitmap);
+			return -EFAULT;
+		}
+	}
+
+	dev->overlay_cap_top = win->w.top;
+	dev->overlay_cap_left = win->w.left;
+	dev->overlay_cap_field = win->field;
+	vfree(dev->bitmap_cap);
+	dev->bitmap_cap = new_bitmap;
+	dev->clipcount_cap = win->clipcount;
+	if (dev->clipcount_cap)
+		memcpy(dev->clips_cap, dev->try_clips_cap, clips_size);
+	return 0;
+}
+
+int vivid_vid_cap_overlay(struct file *file, void *fh, unsigned i)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (i && dev->fb_vbase_cap == NULL)
+		return -EINVAL;
+
+	if (i && dev->fb_cap.fmt.pixelformat != dev->fmt_cap->fourcc) {
+		dprintk(dev, 1, "mismatch between overlay and video capture pixelformats\n");
+		return -EINVAL;
+	}
+
+	if (dev->overlay_cap_owner && dev->overlay_cap_owner != fh)
+		return -EBUSY;
+	dev->overlay_cap_owner = i ? fh : NULL;
+	return 0;
+}
+
+int vivid_vid_cap_g_fbuf(struct file *file, void *fh,
+				struct v4l2_framebuffer *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	*a = dev->fb_cap;
+	a->capability = V4L2_FBUF_CAP_BITMAP_CLIPPING |
+			V4L2_FBUF_CAP_LIST_CLIPPING;
+	a->flags = V4L2_FBUF_FLAG_PRIMARY;
+	a->fmt.field = V4L2_FIELD_NONE;
+	a->fmt.colorspace = V4L2_COLORSPACE_SRGB;
+	a->fmt.priv = 0;
+	return 0;
+}
+
+int vivid_vid_cap_s_fbuf(struct file *file, void *fh,
+				const struct v4l2_framebuffer *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct vivid_fmt *fmt;
+
+	if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RAWIO))
+		return -EPERM;
+
+	if (dev->overlay_cap_owner)
+		return -EBUSY;
+
+	if (a->base == NULL) {
+		dev->fb_cap.base = NULL;
+		dev->fb_vbase_cap = NULL;
+		return 0;
+	}
+
+	if (a->fmt.width < 48 || a->fmt.height < 32)
+		return -EINVAL;
+	fmt = get_format(dev, a->fmt.pixelformat);
+	if (!fmt || !fmt->can_do_overlay)
+		return -EINVAL;
+	if (a->fmt.bytesperline < (a->fmt.width * fmt->depth) / 8)
+		return -EINVAL;
+	if (a->fmt.height * a->fmt.bytesperline < a->fmt.sizeimage)
+		return -EINVAL;
+
+	dev->fb_vbase_cap = phys_to_virt((unsigned long)a->base);
+	dev->fb_cap = *a;
+	dev->overlay_cap_left = clamp_t(int, dev->overlay_cap_left,
+				    -dev->fb_cap.fmt.width, dev->fb_cap.fmt.width);
+	dev->overlay_cap_top = clamp_t(int, dev->overlay_cap_top,
+				   -dev->fb_cap.fmt.height, dev->fb_cap.fmt.height);
+	return 0;
+}
+
+static const struct v4l2_audio vivid_audio_inputs[] = {
+	{ 0, "TV", V4L2_AUDCAP_STEREO },
+	{ 1, "Line-In", V4L2_AUDCAP_STEREO },
+};
+
+int vidioc_enum_input(struct file *file, void *priv,
+				struct v4l2_input *inp)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (inp->index >= dev->num_inputs)
+		return -EINVAL;
+
+	inp->type = V4L2_INPUT_TYPE_CAMERA;
+	switch (dev->input_type[inp->index]) {
+	case WEBCAM:
+		snprintf(inp->name, sizeof(inp->name), "Webcam %u",
+				dev->input_name_counter[inp->index]);
+		inp->capabilities = 0;
+		break;
+	case TV:
+		snprintf(inp->name, sizeof(inp->name), "TV %u",
+				dev->input_name_counter[inp->index]);
+		inp->type = V4L2_INPUT_TYPE_TUNER;
+		inp->std = V4L2_STD_ALL;
+		if (dev->has_audio_inputs)
+			inp->audioset = (1 << ARRAY_SIZE(vivid_audio_inputs)) - 1;
+		inp->capabilities = V4L2_IN_CAP_STD;
+		break;
+	case SVID:
+		snprintf(inp->name, sizeof(inp->name), "S-Video %u",
+				dev->input_name_counter[inp->index]);
+		inp->std = V4L2_STD_ALL;
+		if (dev->has_audio_inputs)
+			inp->audioset = (1 << ARRAY_SIZE(vivid_audio_inputs)) - 1;
+		inp->capabilities = V4L2_IN_CAP_STD;
+		break;
+	case HDMI:
+		snprintf(inp->name, sizeof(inp->name), "HDMI %u",
+				dev->input_name_counter[inp->index]);
+		inp->capabilities = V4L2_IN_CAP_DV_TIMINGS;
+		if (dev->edid_blocks == 0 ||
+		    dev->dv_timings_signal_mode == NO_SIGNAL)
+			inp->status |= V4L2_IN_ST_NO_SIGNAL;
+		else if (dev->dv_timings_signal_mode == NO_LOCK ||
+			 dev->dv_timings_signal_mode == OUT_OF_RANGE)
+			inp->status |= V4L2_IN_ST_NO_H_LOCK;
+		break;
+	}
+	if (dev->sensor_hflip)
+		inp->status |= V4L2_IN_ST_HFLIP;
+	if (dev->sensor_vflip)
+		inp->status |= V4L2_IN_ST_VFLIP;
+	if (dev->input == inp->index && vivid_is_sdtv_cap(dev)) {
+		if (dev->std_signal_mode == NO_SIGNAL) {
+			inp->status |= V4L2_IN_ST_NO_SIGNAL;
+		} else if (dev->std_signal_mode == NO_LOCK) {
+			inp->status |= V4L2_IN_ST_NO_H_LOCK;
+		} else if (vivid_is_tv_cap(dev)) {
+			switch (tpg_g_quality(&dev->tpg)) {
+			case TPG_QUAL_GRAY:
+				inp->status |= V4L2_IN_ST_COLOR_KILL;
+				break;
+			case TPG_QUAL_NOISE:
+				inp->status |= V4L2_IN_ST_NO_H_LOCK;
+				break;
+			default:
+				break;
+			}
+		}
+	}
+	return 0;
+}
+
+int vidioc_g_input(struct file *file, void *priv, unsigned *i)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	*i = dev->input;
+	return 0;
+}
+
+int vidioc_s_input(struct file *file, void *priv, unsigned i)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_bt_timings *bt = &dev->dv_timings_cap.bt;
+	unsigned brightness;
+
+	if (i >= dev->num_inputs)
+		return -EINVAL;
+
+	if (i == dev->input)
+		return 0;
+
+	if (vb2_is_busy(&dev->vb_vid_cap_q) || vb2_is_busy(&dev->vb_vbi_cap_q))
+		return -EBUSY;
+
+	dev->input = i;
+	dev->vid_cap_dev.tvnorms = 0;
+	if (dev->input_type[i] == TV || dev->input_type[i] == SVID) {
+		dev->tv_audio_input = (dev->input_type[i] == TV) ? 0 : 1;
+		dev->vid_cap_dev.tvnorms = V4L2_STD_ALL;
+	}
+	dev->vbi_cap_dev.tvnorms = dev->vid_cap_dev.tvnorms;
+	vivid_update_format_cap(dev, false);
+
+	if (dev->colorspace) {
+		switch (dev->input_type[i]) {
+		case WEBCAM:
+			v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SRGB);
+			break;
+		case TV:
+		case SVID:
+			v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SMPTE170M);
+			break;
+		case HDMI:
+			if (bt->standards & V4L2_DV_BT_STD_CEA861) {
+				if (dev->src_rect.width == 720 && dev->src_rect.height <= 576)
+					v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SMPTE170M);
+				else
+					v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_REC709);
+			} else {
+				v4l2_ctrl_s_ctrl(dev->colorspace, V4L2_COLORSPACE_SRGB);
+			}
+			break;
+		}
+	}
+
+	/*
+	 * Modify the brightness range depending on the input.
+	 * This makes it easy to use vivid to test whether applications can
+	 * handle control range modifications. It also mirrors real hardware,
+	 * where different inputs may be hooked up to different receivers
+	 * with different control ranges.
+	 */
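+	/*
+	 * For example, for input 2 the brightness range becomes 256-511
+	 * with a default of 384.
+	 */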
+	brightness = 128 * i + dev->input_brightness[i];
+	v4l2_ctrl_modify_range(dev->brightness,
+			128 * i, 255 + 128 * i, 1, 128 + 128 * i);
+	v4l2_ctrl_s_ctrl(dev->brightness, brightness);
+	return 0;
+}
+
+int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *vin)
+{
+	if (vin->index >= ARRAY_SIZE(vivid_audio_inputs))
+		return -EINVAL;
+	*vin = vivid_audio_inputs[vin->index];
+	return 0;
+}
+
+int vidioc_g_audio(struct file *file, void *fh, struct v4l2_audio *vin)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_sdtv_cap(dev))
+		return -EINVAL;
+	*vin = vivid_audio_inputs[dev->tv_audio_input];
+	return 0;
+}
+
+int vidioc_s_audio(struct file *file, void *fh, const struct v4l2_audio *vin)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_sdtv_cap(dev))
+		return -EINVAL;
+	if (vin->index >= ARRAY_SIZE(vivid_audio_inputs))
+		return -EINVAL;
+	dev->tv_audio_input = vin->index;
+	return 0;
+}
+
+int vivid_video_g_frequency(struct file *file, void *fh, struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (vf->tuner != 0)
+		return -EINVAL;
+	vf->frequency = dev->tv_freq;
+	return 0;
+}
+
+int vivid_video_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (vf->tuner != 0)
+		return -EINVAL;
+	dev->tv_freq = clamp_t(unsigned, vf->frequency, MIN_TV_FREQ, MAX_TV_FREQ);
+	if (vivid_is_tv_cap(dev))
+		vivid_update_quality(dev);
+	return 0;
+}
+
+int vivid_video_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (vt->index != 0)
+		return -EINVAL;
+	if (vt->audmode > V4L2_TUNER_MODE_LANG1_LANG2)
+		return -EINVAL;
+	dev->tv_audmode = vt->audmode;
+	return 0;
+}
+
+int vivid_video_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	enum tpg_quality qual;
+
+	if (vt->index != 0)
+		return -EINVAL;
+
+	vt->capability = V4L2_TUNER_CAP_NORM | V4L2_TUNER_CAP_STEREO |
+			 V4L2_TUNER_CAP_LANG1 | V4L2_TUNER_CAP_LANG2;
+	vt->audmode = dev->tv_audmode;
+	vt->rangelow = MIN_TV_FREQ;
+	vt->rangehigh = MAX_TV_FREQ;
+	qual = vivid_get_quality(dev, &vt->afc);
+	if (qual == TPG_QUAL_COLOR)
+		vt->signal = 0xffff;
+	else if (qual == TPG_QUAL_GRAY)
+		vt->signal = 0x8000;
+	else
+		vt->signal = 0;
+	if (qual == TPG_QUAL_NOISE) {
+		vt->rxsubchans = 0;
+	} else if (qual == TPG_QUAL_GRAY) {
+		vt->rxsubchans = V4L2_TUNER_SUB_MONO;
+	} else {
+		unsigned channel_nr = dev->tv_freq / (6 * 16);
+		unsigned options = (dev->std_cap & V4L2_STD_NTSC_M) ? 4 : 3;
+
+		switch (channel_nr % options) {
+		case 0:
+			vt->rxsubchans = V4L2_TUNER_SUB_MONO;
+			break;
+		case 1:
+			vt->rxsubchans = V4L2_TUNER_SUB_STEREO;
+			break;
+		case 2:
+			if (dev->std_cap & V4L2_STD_NTSC_M)
+				vt->rxsubchans = V4L2_TUNER_SUB_MONO | V4L2_TUNER_SUB_SAP;
+			else
+				vt->rxsubchans = V4L2_TUNER_SUB_LANG1 | V4L2_TUNER_SUB_LANG2;
+			break;
+		case 3:
+			vt->rxsubchans = V4L2_TUNER_SUB_STEREO | V4L2_TUNER_SUB_SAP;
+			break;
+		}
+	}
+	strlcpy(vt->name, "TV Tuner", sizeof(vt->name));
+	return 0;
+}
+
+/* Must remain in sync with the vivid_ctrl_standard_strings array */
+const v4l2_std_id vivid_standard[] = {
+	V4L2_STD_NTSC_M,
+	V4L2_STD_NTSC_M_JP,
+	V4L2_STD_NTSC_M_KR,
+	V4L2_STD_NTSC_443,
+	V4L2_STD_PAL_BG | V4L2_STD_PAL_H,
+	V4L2_STD_PAL_I,
+	V4L2_STD_PAL_DK,
+	V4L2_STD_PAL_M,
+	V4L2_STD_PAL_N,
+	V4L2_STD_PAL_Nc,
+	V4L2_STD_PAL_60,
+	V4L2_STD_SECAM_B | V4L2_STD_SECAM_G | V4L2_STD_SECAM_H,
+	V4L2_STD_SECAM_DK,
+	V4L2_STD_SECAM_L,
+	V4L2_STD_SECAM_LC,
+	V4L2_STD_UNKNOWN
+};
+
+/* Must remain in sync with the vivid_standard array */
+const char * const vivid_ctrl_standard_strings[] = {
+	"NTSC-M",
+	"NTSC-M-JP",
+	"NTSC-M-KR",
+	"NTSC-443",
+	"PAL-BGH",
+	"PAL-I",
+	"PAL-DK",
+	"PAL-M",
+	"PAL-N",
+	"PAL-Nc",
+	"PAL-60",
+	"SECAM-BGH",
+	"SECAM-DK",
+	"SECAM-L",
+	"SECAM-Lc",
+	NULL,
+};
+
+int vidioc_querystd(struct file *file, void *priv, v4l2_std_id *id)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_sdtv_cap(dev))
+		return -ENODATA;
+	if (dev->std_signal_mode == NO_SIGNAL ||
+	    dev->std_signal_mode == NO_LOCK) {
+		*id = V4L2_STD_UNKNOWN;
+		return 0;
+	}
+	if (vivid_is_tv_cap(dev) && tpg_g_quality(&dev->tpg) == TPG_QUAL_NOISE) {
+		*id = V4L2_STD_UNKNOWN;
+	} else if (dev->std_signal_mode == CURRENT_STD) {
+		*id = dev->std_cap;
+	} else if (dev->std_signal_mode == SELECTED_STD) {
+		*id = dev->query_std;
+	} else {
+		*id = vivid_standard[dev->query_std_last];
+		dev->query_std_last = (dev->query_std_last + 1) % ARRAY_SIZE(vivid_standard);
+	}
+
+	return 0;
+}
+
+int vivid_vid_cap_s_std(struct file *file, void *priv, v4l2_std_id id)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_sdtv_cap(dev))
+		return -ENODATA;
+	if (dev->std_cap == id)
+		return 0;
+	if (vb2_is_busy(&dev->vb_vid_cap_q) || vb2_is_busy(&dev->vb_vbi_cap_q))
+		return -EBUSY;
+	dev->std_cap = id;
+	vivid_update_format_cap(dev, false);
+	return 0;
+}
+
+int vivid_vid_cap_s_dv_timings(struct file *file, void *_fh,
+				    struct v4l2_dv_timings *timings)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_hdmi_cap(dev))
+		return -ENODATA;
+	if (vb2_is_busy(&dev->vb_vid_cap_q))
+		return -EBUSY;
+	if (!v4l2_find_dv_timings_cap(timings, &vivid_dv_timings_cap,
+				0, NULL, NULL))
+		return -EINVAL;
+	if (v4l2_match_dv_timings(timings, &dev->dv_timings_cap, 0))
+		return 0;
+	dev->dv_timings_cap = *timings;
+	vivid_update_format_cap(dev, false);
+	return 0;
+}
+
+int vidioc_query_dv_timings(struct file *file, void *_fh,
+				    struct v4l2_dv_timings *timings)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_hdmi_cap(dev))
+		return -ENODATA;
+	if (dev->dv_timings_signal_mode == NO_SIGNAL ||
+	    dev->edid_blocks == 0)
+		return -ENOLINK;
+	if (dev->dv_timings_signal_mode == NO_LOCK)
+		return -ENOLCK;
+	if (dev->dv_timings_signal_mode == OUT_OF_RANGE) {
+		timings->bt.pixelclock = vivid_dv_timings_cap.bt.max_pixelclock * 2;
+		return -ERANGE;
+	}
+	if (dev->dv_timings_signal_mode == CURRENT_DV_TIMINGS) {
+		*timings = dev->dv_timings_cap;
+	} else if (dev->dv_timings_signal_mode == SELECTED_DV_TIMINGS) {
+		*timings = v4l2_dv_timings_presets[dev->query_dv_timings];
+	} else {
+		*timings = v4l2_dv_timings_presets[dev->query_dv_timings_last];
+		dev->query_dv_timings_last = (dev->query_dv_timings_last + 1) %
+						dev->query_dv_timings_size;
+	}
+	return 0;
+}
+
+int vidioc_s_edid(struct file *file, void *_fh,
+			 struct v4l2_edid *edid)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	memset(edid->reserved, 0, sizeof(edid->reserved));
+	if (edid->pad >= dev->num_inputs)
+		return -EINVAL;
+	if (dev->input_type[edid->pad] != HDMI || edid->start_block)
+		return -EINVAL;
+	if (edid->blocks == 0) {
+		dev->edid_blocks = 0;
+		return 0;
+	}
+	if (edid->blocks > dev->edid_max_blocks) {
+		edid->blocks = dev->edid_max_blocks;
+		return -E2BIG;
+	}
+	dev->edid_blocks = edid->blocks;
+	memcpy(dev->edid, edid->edid, edid->blocks * 128);
+	return 0;
+}
+
+int vidioc_enum_framesizes(struct file *file, void *fh,
+					 struct v4l2_frmsizeenum *fsize)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_webcam(dev) && !dev->has_scaler_cap)
+		return -EINVAL;
+	if (get_format(dev, fsize->pixel_format) == NULL)
+		return -EINVAL;
+	if (vivid_is_webcam(dev)) {
+		if (fsize->index >= ARRAY_SIZE(webcam_sizes))
+			return -EINVAL;
+		fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
+		fsize->discrete = webcam_sizes[fsize->index];
+		return 0;
+	}
+	if (fsize->index)
+		return -EINVAL;
+	fsize->type = V4L2_FRMSIZE_TYPE_STEPWISE;
+	fsize->stepwise.min_width = MIN_WIDTH;
+	fsize->stepwise.max_width = MAX_WIDTH * MAX_ZOOM;
+	fsize->stepwise.step_width = 2;
+	fsize->stepwise.min_height = MIN_HEIGHT;
+	fsize->stepwise.max_height = MAX_HEIGHT * MAX_ZOOM;
+	fsize->stepwise.step_height = 2;
+	return 0;
+}
+
+/* timeperframe is arbitrary and continuous for non-webcam inputs */
+int vidioc_enum_frameintervals(struct file *file, void *priv,
+					     struct v4l2_frmivalenum *fival)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct vivid_fmt *fmt;
+	int i;
+
+	fmt = get_format(dev, fival->pixel_format);
+	if (!fmt)
+		return -EINVAL;
+
+	if (!vivid_is_webcam(dev)) {
+		static const struct v4l2_fract step = { 1, 1 };
+
+		if (fival->index)
+			return -EINVAL;
+		if (fival->width < MIN_WIDTH || fival->width > MAX_WIDTH * MAX_ZOOM)
+			return -EINVAL;
+		if (fival->height < MIN_HEIGHT || fival->height > MAX_HEIGHT * MAX_ZOOM)
+			return -EINVAL;
+		fival->type = V4L2_FRMIVAL_TYPE_CONTINUOUS;
+		fival->stepwise.min = tpf_min;
+		fival->stepwise.max = tpf_max;
+		fival->stepwise.step = step;
+		return 0;
+	}
+
+	for (i = 0; i < ARRAY_SIZE(webcam_sizes); i++)
+		if (fival->width == webcam_sizes[i].width &&
+		    fival->height == webcam_sizes[i].height)
+			break;
+	if (i == ARRAY_SIZE(webcam_sizes))
+		return -EINVAL;
+	if (fival->index >= 2 * (VIVID_WEBCAM_SIZES - i))
+		return -EINVAL;
+	fival->type = V4L2_FRMIVAL_TYPE_DISCRETE;
+	fival->discrete = webcam_intervals[fival->index];
+	return 0;
+}
+
+int vivid_vid_cap_g_parm(struct file *file, void *priv,
+			  struct v4l2_streamparm *parm)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (parm->type != (dev->multiplanar ?
+			   V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE :
+			   V4L2_BUF_TYPE_VIDEO_CAPTURE))
+		return -EINVAL;
+
+	parm->parm.capture.capability   = V4L2_CAP_TIMEPERFRAME;
+	parm->parm.capture.timeperframe = dev->timeperframe_vid_cap;
+	parm->parm.capture.readbuffers  = 1;
+	return 0;
+}
+
+#define FRACT_CMP(a, OP, b)	\
+	((u64)(a).numerator * (b).denominator  OP  (u64)(b).numerator * (a).denominator)
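+/* E.g. for a = 1/30 and b = 1/25, FRACT_CMP(a, <, b) is true: 1 * 25 < 1 * 30. */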
+
+int vivid_vid_cap_s_parm(struct file *file, void *priv,
+			  struct v4l2_streamparm *parm)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	unsigned ival_sz = 2 * (VIVID_WEBCAM_SIZES - dev->webcam_size_idx);
+	struct v4l2_fract tpf;
+	unsigned i;
+
+	if (parm->type != (dev->multiplanar ?
+			   V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE :
+			   V4L2_BUF_TYPE_VIDEO_CAPTURE))
+		return -EINVAL;
+	if (!vivid_is_webcam(dev))
+		return vivid_vid_cap_g_parm(file, priv, parm);
+
+	tpf = parm->parm.capture.timeperframe;
+
+	if (tpf.denominator == 0)
+		tpf = webcam_intervals[ival_sz - 1];
+	for (i = 0; i < ival_sz; i++)
+		if (FRACT_CMP(tpf, >=, webcam_intervals[i]))
+			break;
+	if (i == ival_sz)
+		i = ival_sz - 1;
+	dev->webcam_ival_idx = i;
+	tpf = webcam_intervals[dev->webcam_ival_idx];
+	tpf = FRACT_CMP(tpf, <, tpf_min) ? tpf_min : tpf;
+	tpf = FRACT_CMP(tpf, >, tpf_max) ? tpf_max : tpf;
+
+	/* resync the thread's timings */
+	dev->cap_seq_resync = true;
+	dev->timeperframe_vid_cap = tpf;
+	parm->parm.capture.timeperframe = tpf;
+	parm->parm.capture.readbuffers  = 1;
+	return 0;
+}
diff --git a/drivers/media/platform/vivid/vivid-vid-cap.h b/drivers/media/platform/vivid/vivid-vid-cap.h
new file mode 100644
index 0000000..9407981
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-cap.h
@@ -0,0 +1,71 @@
+/*
+ * vivid-vid-cap.h - video capture support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VID_CAP_H_
+#define _VIVID_VID_CAP_H_
+
+void vivid_update_quality(struct vivid_dev *dev);
+void vivid_update_format_cap(struct vivid_dev *dev, bool keep_controls);
+enum tpg_video_aspect vivid_get_video_aspect(const struct vivid_dev *dev);
+
+extern const v4l2_std_id vivid_standard[];
+extern const char * const vivid_ctrl_standard_strings[];
+
+extern const struct vb2_ops vivid_vid_cap_qops;
+
+int vivid_g_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_try_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_s_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_g_fmt_vid_cap_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_cap_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_cap_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_g_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_cap(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_vid_cap_g_selection(struct file *file, void *priv, struct v4l2_selection *sel);
+int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection *s);
+int vivid_vid_cap_cropcap(struct file *file, void *priv, struct v4l2_cropcap *cap);
+int vidioc_enum_fmt_vid_overlay(struct file *file, void  *priv, struct v4l2_fmtdesc *f);
+int vidioc_g_fmt_vid_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_vid_cap_overlay(struct file *file, void *fh, unsigned i);
+int vivid_vid_cap_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *a);
+int vivid_vid_cap_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a);
+int vidioc_enum_input(struct file *file, void *priv, struct v4l2_input *inp);
+int vidioc_g_input(struct file *file, void *priv, unsigned *i);
+int vidioc_s_input(struct file *file, void *priv, unsigned i);
+int vidioc_enumaudio(struct file *file, void *fh, struct v4l2_audio *vin);
+int vidioc_g_audio(struct file *file, void *fh, struct v4l2_audio *vin);
+int vidioc_s_audio(struct file *file, void *fh, const struct v4l2_audio *vin);
+int vivid_video_g_frequency(struct file *file, void *fh, struct v4l2_frequency *vf);
+int vivid_video_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf);
+int vivid_video_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt);
+int vivid_video_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt);
+int vidioc_querystd(struct file *file, void *priv, v4l2_std_id *id);
+int vivid_vid_cap_s_std(struct file *file, void *priv, v4l2_std_id id);
+int vivid_vid_cap_s_dv_timings(struct file *file, void *_fh, struct v4l2_dv_timings *timings);
+int vidioc_query_dv_timings(struct file *file, void *_fh, struct v4l2_dv_timings *timings);
+int vidioc_s_edid(struct file *file, void *_fh, struct v4l2_edid *edid);
+int vidioc_enum_framesizes(struct file *file, void *fh, struct v4l2_frmsizeenum *fsize);
+int vidioc_enum_frameintervals(struct file *file, void *priv, struct v4l2_frmivalenum *fival);
+int vivid_vid_cap_g_parm(struct file *file, void *priv, struct v4l2_streamparm *parm);
+int vivid_vid_cap_s_parm(struct file *file, void *priv, struct v4l2_streamparm *parm);
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-vid-common.c b/drivers/media/platform/vivid/vivid-vid-common.c
new file mode 100644
index 0000000..7b981c1
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-common.c
@@ -0,0 +1,571 @@
+/*
+ * vivid-vid-common.c - common video support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+
+const struct v4l2_dv_timings_cap vivid_dv_timings_cap = {
+	.type = V4L2_DV_BT_656_1120,
+	/* keep this initialization for compatibility with GCC < 4.4.6 */
+	.reserved = { 0 },
+	V4L2_INIT_BT_TIMINGS(0, MAX_WIDTH, 0, MAX_HEIGHT, 25000000, 600000000,
+		V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT,
+		V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_INTERLACED)
+};
+
+/* ------------------------------------------------------------------
+	Basic structures
+   ------------------------------------------------------------------*/
+
+struct vivid_fmt vivid_formats[] = {
+	{
+		.name     = "4:2:2, packed, YUYV",
+		.fourcc   = V4L2_PIX_FMT_YUYV,
+		.depth    = 16,
+		.is_yuv   = true,
+		.planes   = 1,
+		.data_offset = { PLANE0_DATA_OFFSET, 0 },
+	},
+	{
+		.name     = "4:2:2, packed, UYVY",
+		.fourcc   = V4L2_PIX_FMT_UYVY,
+		.depth    = 16,
+		.is_yuv   = true,
+		.planes   = 1,
+	},
+	{
+		.name     = "4:2:2, packed, YVYU",
+		.fourcc   = V4L2_PIX_FMT_YVYU,
+		.depth    = 16,
+		.is_yuv   = true,
+		.planes   = 1,
+	},
+	{
+		.name     = "4:2:2, packed, VYUY",
+		.fourcc   = V4L2_PIX_FMT_VYUY,
+		.depth    = 16,
+		.is_yuv   = true,
+		.planes   = 1,
+	},
+	{
+		.name     = "RGB565 (LE)",
+		.fourcc   = V4L2_PIX_FMT_RGB565, /* gggbbbbb rrrrrggg */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+	},
+	{
+		.name     = "RGB565 (BE)",
+		.fourcc   = V4L2_PIX_FMT_RGB565X, /* rrrrrggg gggbbbbb */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+	},
+	{
+		.name     = "RGB555 (LE)",
+		.fourcc   = V4L2_PIX_FMT_RGB555, /* gggbbbbb arrrrrgg */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+	},
+	{
+		.name     = "XRGB555 (LE)",
+		.fourcc   = V4L2_PIX_FMT_XRGB555, /* gggbbbbb arrrrrgg */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+	},
+	{
+		.name     = "ARGB555 (LE)",
+		.fourcc   = V4L2_PIX_FMT_ARGB555, /* gggbbbbb arrrrrgg */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+		.alpha_mask = 0x8000,
+	},
+	{
+		.name     = "RGB555 (BE)",
+		.fourcc   = V4L2_PIX_FMT_RGB555X, /* arrrrrgg gggbbbbb */
+		.depth    = 16,
+		.planes   = 1,
+		.can_do_overlay = true,
+	},
+	{
+		.name     = "RGB24 (LE)",
+		.fourcc   = V4L2_PIX_FMT_RGB24, /* rgb */
+		.depth    = 24,
+		.planes   = 1,
+	},
+	{
+		.name     = "RGB24 (BE)",
+		.fourcc   = V4L2_PIX_FMT_BGR24, /* bgr */
+		.depth    = 24,
+		.planes   = 1,
+	},
+	{
+		.name     = "RGB32 (LE)",
+		.fourcc   = V4L2_PIX_FMT_RGB32, /* argb */
+		.depth    = 32,
+		.planes   = 1,
+	},
+	{
+		.name     = "RGB32 (BE)",
+		.fourcc   = V4L2_PIX_FMT_BGR32, /* bgra */
+		.depth    = 32,
+		.planes   = 1,
+	},
+	{
+		.name     = "XRGB32 (LE)",
+		.fourcc   = V4L2_PIX_FMT_XRGB32, /* argb */
+		.depth    = 32,
+		.planes   = 1,
+	},
+	{
+		.name     = "XRGB32 (BE)",
+		.fourcc   = V4L2_PIX_FMT_XBGR32, /* bgra */
+		.depth    = 32,
+		.planes   = 1,
+	},
+	{
+		.name     = "ARGB32 (LE)",
+		.fourcc   = V4L2_PIX_FMT_ARGB32, /* argb */
+		.depth    = 32,
+		.planes   = 1,
+		.alpha_mask = 0x000000ff,
+	},
+	{
+		.name     = "ARGB32 (BE)",
+		.fourcc   = V4L2_PIX_FMT_ABGR32, /* bgra */
+		.depth    = 32,
+		.planes   = 1,
+		.alpha_mask = 0xff000000,
+	},
+	{
+		.name     = "4:2:2, planar, YUV",
+		.fourcc   = V4L2_PIX_FMT_NV16M,
+		.depth    = 8,
+		.is_yuv   = true,
+		.planes   = 2,
+		.data_offset = { PLANE0_DATA_OFFSET, 0 },
+	},
+	{
+		.name     = "4:2:2, planar, YVU",
+		.fourcc   = V4L2_PIX_FMT_NV61M,
+		.depth    = 8,
+		.is_yuv   = true,
+		.planes   = 2,
+		.data_offset = { 0, PLANE0_DATA_OFFSET },
+	},
+};
+
+/* There are 2 multiplanar formats in the list */
+#define VIVID_MPLANAR_FORMATS 2
+
+const struct vivid_fmt *get_format(struct vivid_dev *dev, u32 pixelformat)
+{
+	const struct vivid_fmt *fmt;
+	unsigned k;
+
+	for (k = 0; k < ARRAY_SIZE(vivid_formats); k++) {
+		fmt = &vivid_formats[k];
+		if (fmt->fourcc == pixelformat)
+			if (fmt->planes == 1 || dev->multiplanar)
+				return fmt;
+	}
+
+	return NULL;
+}
+
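+/*
+ * Video loopback from output to capture is only possible if the capture and
+ * output sides use the same resolution, pixelformat and field setting, and,
+ * for S-Video, the same 50/60 Hz standard.
+ */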
+bool vivid_vid_can_loop(struct vivid_dev *dev)
+{
+	if (dev->src_rect.width != dev->sink_rect.width ||
+	    dev->src_rect.height != dev->sink_rect.height)
+		return false;
+	if (dev->fmt_cap->fourcc != dev->fmt_out->fourcc)
+		return false;
+	if (dev->field_cap != dev->field_out)
+		return false;
+	if (vivid_is_svid_cap(dev) && vivid_is_svid_out(dev)) {
+		if (!(dev->std_cap & V4L2_STD_525_60) !=
+		    !(dev->std_out & V4L2_STD_525_60))
+			return false;
+		return true;
+	}
+	if (vivid_is_hdmi_cap(dev) && vivid_is_hdmi_out(dev))
+		return true;
+	return false;
+}
+
+void vivid_send_source_change(struct vivid_dev *dev, unsigned type)
+{
+	struct v4l2_event ev = {
+		.type = V4L2_EVENT_SOURCE_CHANGE,
+		.u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION,
+	};
+	unsigned i;
+
+	for (i = 0; i < dev->num_inputs; i++) {
+		ev.id = i;
+		if (dev->input_type[i] == type) {
+			if (video_is_registered(&dev->vid_cap_dev) && dev->has_vid_cap)
+				v4l2_event_queue(&dev->vid_cap_dev, &ev);
+			if (video_is_registered(&dev->vbi_cap_dev) && dev->has_vbi_cap)
+				v4l2_event_queue(&dev->vbi_cap_dev, &ev);
+		}
+	}
+}
+
+/*
+ * Conversion function that converts a single-planar format to a
+ * single-plane multiplanar format.
+ */
+void fmt_sp2mp(const struct v4l2_format *sp_fmt, struct v4l2_format *mp_fmt)
+{
+	struct v4l2_pix_format_mplane *mp = &mp_fmt->fmt.pix_mp;
+	struct v4l2_plane_pix_format *ppix = &mp->plane_fmt[0];
+	const struct v4l2_pix_format *pix = &sp_fmt->fmt.pix;
+	bool is_out = sp_fmt->type == V4L2_BUF_TYPE_VIDEO_OUTPUT;
+
+	memset(mp->reserved, 0, sizeof(mp->reserved));
+	mp_fmt->type = is_out ? V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE :
+			   V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
+	mp->width = pix->width;
+	mp->height = pix->height;
+	mp->pixelformat = pix->pixelformat;
+	mp->field = pix->field;
+	mp->colorspace = pix->colorspace;
+	mp->num_planes = 1;
+	mp->flags = pix->flags;
+	ppix->sizeimage = pix->sizeimage;
+	ppix->bytesperline = pix->bytesperline;
+	memset(ppix->reserved, 0, sizeof(ppix->reserved));
+}
+
+int fmt_sp2mp_func(struct file *file, void *priv,
+		struct v4l2_format *f, fmtfunc func)
+{
+	struct v4l2_format fmt;
+	struct v4l2_pix_format_mplane *mp = &fmt.fmt.pix_mp;
+	struct v4l2_plane_pix_format *ppix = &mp->plane_fmt[0];
+	struct v4l2_pix_format *pix = &f->fmt.pix;
+	int ret;
+
+	/* Converts to a mplane format */
+	fmt_sp2mp(f, &fmt);
+	/* Passes it to the generic mplane format function */
+	ret = func(file, priv, &fmt);
+	/* Copies back the mplane data to the single plane format */
+	pix->width = mp->width;
+	pix->height = mp->height;
+	pix->pixelformat = mp->pixelformat;
+	pix->field = mp->field;
+	pix->colorspace = mp->colorspace;
+	pix->sizeimage = ppix->sizeimage;
+	pix->bytesperline = ppix->bytesperline;
+	pix->flags = mp->flags;
+	return ret;
+}
+
+/* v4l2_rect helper function: copy the width/height values */
+void rect_set_size_to(struct v4l2_rect *r, const struct v4l2_rect *size)
+{
+	r->width = size->width;
+	r->height = size->height;
+}
+
+/* v4l2_rect helper function: width and height of r should be >= min_size */
+void rect_set_min_size(struct v4l2_rect *r, const struct v4l2_rect *min_size)
+{
+	if (r->width < min_size->width)
+		r->width = min_size->width;
+	if (r->height < min_size->height)
+		r->height = min_size->height;
+}
+
+/* v4l2_rect helper function: width and height of r should be <= max_size */
+void rect_set_max_size(struct v4l2_rect *r, const struct v4l2_rect *max_size)
+{
+	if (r->width > max_size->width)
+		r->width = max_size->width;
+	if (r->height > max_size->height)
+		r->height = max_size->height;
+}
+
+/* v4l2_rect helper function: r should be inside boundary */
+void rect_map_inside(struct v4l2_rect *r, const struct v4l2_rect *boundary)
+{
+	rect_set_max_size(r, boundary);
+	if (r->left < boundary->left)
+		r->left = boundary->left;
+	if (r->top < boundary->top)
+		r->top = boundary->top;
+	if (r->left + r->width > boundary->width)
+		r->left = boundary->width - r->width;
+	if (r->top + r->height > boundary->height)
+		r->top = boundary->height - r->height;
+}
+
+/* v4l2_rect helper function: return true if r1 has the same size as r2 */
+bool rect_same_size(const struct v4l2_rect *r1, const struct v4l2_rect *r2)
+{
+	return r1->width == r2->width && r1->height == r2->height;
+}
+
+/* v4l2_rect helper function: calculate the intersection of two rects */
+struct v4l2_rect rect_intersect(const struct v4l2_rect *a, const struct v4l2_rect *b)
+{
+	struct v4l2_rect r;
+	int right, bottom;
+
+	r.top = max(a->top, b->top);
+	r.left = max(a->left, b->left);
+	bottom = min(a->top + a->height, b->top + b->height);
+	right = min(a->left + a->width, b->left + b->width);
+	r.height = max(0, bottom - r.top);
+	r.width = max(0, right - r.left);
+	return r;
+}
+
+/*
+ * v4l2_rect helper function: scale rect r by to->width / from->width and
+ * to->height / from->height.
+ */
+void rect_scale(struct v4l2_rect *r, const struct v4l2_rect *from,
+				     const struct v4l2_rect *to)
+{
+	if (from->width == 0 || from->height == 0) {
+		r->left = r->top = r->width = r->height = 0;
+		return;
+	}
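+	/* Horizontal positions and sizes are rounded down to even values. */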
+	r->left = (((r->left - from->left) * to->width) / from->width) & ~1;
+	r->width = ((r->width * to->width) / from->width) & ~1;
+	r->top = ((r->top - from->top) * to->height) / from->height;
+	r->height = (r->height * to->height) / from->height;
+}
+
+bool rect_overlap(const struct v4l2_rect *r1, const struct v4l2_rect *r2)
+{
+	/*
+	 * IF the left side of r1 is to the right of the right side of r2 OR
+	 *    the left side of r2 is to the right of the right side of r1 THEN
+	 * they do not overlap.
+	 */
+	if (r1->left >= r2->left + r2->width ||
+	    r2->left >= r1->left + r1->width)
+		return false;
+	/*
+	 * IF the top side of r1 is below the bottom of r2 OR
+	 *    the top side of r2 is below the bottom of r1 THEN
+	 * they do not overlap.
+	 */
+	if (r1->top >= r2->top + r2->height ||
+	    r2->top >= r1->top + r1->height)
+		return false;
+	return true;
+}
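+
+/*
+ * Adjust a selection rectangle according to the V4L2_SEL_FLAG_GE/LE
+ * constraint flags: width and height are rounded to even values, clamped
+ * to the 2x2 - MAX_WIDTHxMAX_HEIGHT range and the rectangle is moved to
+ * fit inside the maximum frame. Returns -ERANGE if the constraints cannot
+ * be satisfied.
+ */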
+int vivid_vid_adjust_sel(unsigned flags, struct v4l2_rect *r)
+{
+	unsigned w = r->width;
+	unsigned h = r->height;
+
+	if (!(flags & V4L2_SEL_FLAG_LE)) {
+		w++;
+		h++;
+		if (w < 2)
+			w = 2;
+		if (h < 2)
+			h = 2;
+	}
+	if (!(flags & V4L2_SEL_FLAG_GE)) {
+		if (w > MAX_WIDTH)
+			w = MAX_WIDTH;
+		if (h > MAX_HEIGHT)
+			h = MAX_HEIGHT;
+	}
+	w = w & ~1;
+	h = h & ~1;
+	if (w < 2 || h < 2)
+		return -ERANGE;
+	if (w > MAX_WIDTH || h > MAX_HEIGHT)
+		return -ERANGE;
+	if (r->top < 0)
+		r->top = 0;
+	if (r->left < 0)
+		r->left = 0;
+	r->left &= ~1;
+	r->top &= ~1;
+	if (r->left + w > MAX_WIDTH)
+		r->left = MAX_WIDTH - w;
+	if (r->top + h > MAX_HEIGHT)
+		r->top = MAX_HEIGHT - h;
+	if ((flags & (V4L2_SEL_FLAG_GE | V4L2_SEL_FLAG_LE)) ==
+			(V4L2_SEL_FLAG_GE | V4L2_SEL_FLAG_LE) &&
+	    (r->width != w || r->height != h))
+		return -ERANGE;
+	r->width = w;
+	r->height = h;
+	return 0;
+}
+
+int vivid_enum_fmt_vid(struct file *file, void  *priv,
+					struct v4l2_fmtdesc *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct vivid_fmt *fmt;
+
+	if (f->index >= ARRAY_SIZE(vivid_formats) -
+	    (dev->multiplanar ? 0 : VIVID_MPLANAR_FORMATS))
+		return -EINVAL;
+
+	fmt = &vivid_formats[f->index];
+
+	strlcpy(f->description, fmt->name, sizeof(f->description));
+	f->pixelformat = fmt->fourcc;
+	return 0;
+}
+
+int vidioc_enum_fmt_vid_mplane(struct file *file, void  *priv,
+					struct v4l2_fmtdesc *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_enum_fmt_vid(file, priv, f);
+}
+
+int vidioc_enum_fmt_vid(struct file *file, void  *priv,
+					struct v4l2_fmtdesc *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return vivid_enum_fmt_vid(file, priv, f);
+}
+
+int vidioc_g_std(struct file *file, void *priv, v4l2_std_id *id)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (!vivid_is_sdtv_cap(dev))
+			return -ENODATA;
+		*id = dev->std_cap;
+	} else {
+		if (!vivid_is_svid_out(dev))
+			return -ENODATA;
+		*id = dev->std_out;
+	}
+	return 0;
+}
+
+int vidioc_g_dv_timings(struct file *file, void *_fh,
+				    struct v4l2_dv_timings *timings)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (!vivid_is_hdmi_cap(dev))
+			return -ENODATA;
+		*timings = dev->dv_timings_cap;
+	} else {
+		if (!vivid_is_hdmi_out(dev))
+			return -ENODATA;
+		*timings = dev->dv_timings_out;
+	}
+	return 0;
+}
+
+int vidioc_enum_dv_timings(struct file *file, void *_fh,
+				    struct v4l2_enum_dv_timings *timings)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (!vivid_is_hdmi_cap(dev))
+			return -ENODATA;
+	} else {
+		if (!vivid_is_hdmi_out(dev))
+			return -ENODATA;
+	}
+	return v4l2_enum_dv_timings_cap(timings, &vivid_dv_timings_cap,
+			NULL, NULL);
+}
+
+int vidioc_dv_timings_cap(struct file *file, void *_fh,
+				    struct v4l2_dv_timings_cap *cap)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (!vivid_is_hdmi_cap(dev))
+			return -ENODATA;
+	} else {
+		if (!vivid_is_hdmi_out(dev))
+			return -ENODATA;
+	}
+	*cap = vivid_dv_timings_cap;
+	return 0;
+}
+
+int vidioc_g_edid(struct file *file, void *_fh,
+			 struct v4l2_edid *edid)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	memset(edid->reserved, 0, sizeof(edid->reserved));
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (edid->pad >= dev->num_inputs)
+			return -EINVAL;
+		if (dev->input_type[edid->pad] != HDMI)
+			return -EINVAL;
+	} else {
+		if (edid->pad >= dev->num_outputs)
+			return -EINVAL;
+		if (dev->output_type[edid->pad] != HDMI)
+			return -EINVAL;
+	}
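+	/*
+	 * A request with both start_block and blocks set to zero just
+	 * reports the total number of EDID blocks without copying data.
+	 */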
+	if (edid->start_block == 0 && edid->blocks == 0) {
+		edid->blocks = dev->edid_blocks;
+		return 0;
+	}
+	if (dev->edid_blocks == 0)
+		return -ENODATA;
+	if (edid->start_block >= dev->edid_blocks)
+		return -EINVAL;
+	if (edid->start_block + edid->blocks > dev->edid_blocks)
+		edid->blocks = dev->edid_blocks - edid->start_block;
+	memcpy(edid->edid, dev->edid, edid->blocks * 128);
+	return 0;
+}
diff --git a/drivers/media/platform/vivid/vivid-vid-common.h b/drivers/media/platform/vivid/vivid-vid-common.h
new file mode 100644
index 0000000..9563c32
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-common.h
@@ -0,0 +1,61 @@
+/*
+ * vivid-vid-common.h - common video support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VID_COMMON_H_
+#define _VIVID_VID_COMMON_H_
+
+typedef int (*fmtfunc)(struct file *file, void *priv, struct v4l2_format *f);
+
+/*
+ * Conversion function that converts a single-planar format to a
+ * single-plane multiplanar format.
+ */
+void fmt_sp2mp(const struct v4l2_format *sp_fmt, struct v4l2_format *mp_fmt);
+int fmt_sp2mp_func(struct file *file, void *priv,
+		struct v4l2_format *f, fmtfunc func);
+
+extern const struct v4l2_dv_timings_cap vivid_dv_timings_cap;
+
+const struct vivid_fmt *get_format(struct vivid_dev *dev, u32 pixelformat);
+
+bool vivid_vid_can_loop(struct vivid_dev *dev);
+void vivid_send_source_change(struct vivid_dev *dev, unsigned type);
+
+bool rect_overlap(const struct v4l2_rect *r1, const struct v4l2_rect *r2);
+void rect_set_size_to(struct v4l2_rect *r, const struct v4l2_rect *size);
+void rect_set_min_size(struct v4l2_rect *r, const struct v4l2_rect *min_size);
+void rect_set_max_size(struct v4l2_rect *r, const struct v4l2_rect *max_size);
+void rect_map_inside(struct v4l2_rect *r, const struct v4l2_rect *boundary);
+bool rect_same_size(const struct v4l2_rect *r1, const struct v4l2_rect *r2);
+struct v4l2_rect rect_intersect(const struct v4l2_rect *a, const struct v4l2_rect *b);
+void rect_scale(struct v4l2_rect *r, const struct v4l2_rect *from,
+				     const struct v4l2_rect *to);
+int vivid_vid_adjust_sel(unsigned flags, struct v4l2_rect *r);
+
+int vivid_enum_fmt_vid(struct file *file, void  *priv, struct v4l2_fmtdesc *f);
+int vidioc_enum_fmt_vid_mplane(struct file *file, void  *priv, struct v4l2_fmtdesc *f);
+int vidioc_enum_fmt_vid(struct file *file, void  *priv, struct v4l2_fmtdesc *f);
+int vidioc_g_std(struct file *file, void *priv, v4l2_std_id *id);
+int vidioc_g_dv_timings(struct file *file, void *_fh, struct v4l2_dv_timings *timings);
+int vidioc_enum_dv_timings(struct file *file, void *_fh, struct v4l2_enum_dv_timings *timings);
+int vidioc_dv_timings_cap(struct file *file, void *_fh, struct v4l2_dv_timings_cap *cap);
+int vidioc_g_edid(struct file *file, void *_fh, struct v4l2_edid *edid);
+int vidioc_subscribe_event(struct v4l2_fh *fh, const struct v4l2_event_subscription *sub);
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
new file mode 100644
index 0000000..3078bd2
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-out.c
@@ -0,0 +1,1205 @@
+/*
+ * vivid-vid-out.c - video output support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+#include "vivid-kthread-out.h"
+#include "vivid-vid-out.h"
+
+static int vid_out_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+		       unsigned *nbuffers, unsigned *nplanes,
+		       unsigned sizes[], void *alloc_ctxs[])
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	unsigned planes = dev->fmt_out->planes;
+	unsigned h = dev->fmt_out_rect.height;
+	unsigned size = dev->bytesperline_out[0] * h;
+
+	if (dev->field_out == V4L2_FIELD_ALTERNATE) {
+		/*
+		 * You cannot use write() with FIELD_ALTERNATE since the field
+		 * information (TOP/BOTTOM) cannot be passed to the kernel.
+		 */
+		if (vb2_fileio_is_active(vq))
+			return -EINVAL;
+	}
+
+	if (dev->queue_setup_error) {
+		/*
+		 * Error injection: test what happens if queue_setup() returns
+		 * an error.
+		 */
+		dev->queue_setup_error = false;
+		return -EINVAL;
+	}
+
+	if (fmt) {
+		const struct v4l2_pix_format_mplane *mp;
+		struct v4l2_format mp_fmt;
+
+		if (!V4L2_TYPE_IS_MULTIPLANAR(fmt->type)) {
+			fmt_sp2mp(fmt, &mp_fmt);
+			fmt = &mp_fmt;
+		}
+		mp = &fmt->fmt.pix_mp;
+		/*
+		 * Check if the number of planes in the specified format matches
+		 * the number of planes in the current format. You can't mix that.
+		 */
+		if (mp->num_planes != planes)
+			return -EINVAL;
+		sizes[0] = mp->plane_fmt[0].sizeimage;
+		if (planes == 2) {
+			sizes[1] = mp->plane_fmt[1].sizeimage;
+			if (sizes[0] < dev->bytesperline_out[0] * h ||
+			    sizes[1] < dev->bytesperline_out[1] * h)
+				return -EINVAL;
+		} else if (sizes[0] < size) {
+			return -EINVAL;
+		}
+	} else {
+		if (planes == 2) {
+			sizes[0] = dev->bytesperline_out[0] * h;
+			sizes[1] = dev->bytesperline_out[1] * h;
+		} else {
+			sizes[0] = size;
+		}
+	}
+
+	if (vq->num_buffers + *nbuffers < 2)
+		*nbuffers = 2 - vq->num_buffers;
+
+	*nplanes = planes;
+
+	/*
+	 * videobuf2-vmalloc allocator is context-less so no need to set
+	 * alloc_ctxs array.
+	 */
+
+	if (planes == 2)
+		dprintk(dev, 1, "%s, count=%d, sizes=%u, %u\n", __func__,
+			*nbuffers, sizes[0], sizes[1]);
+	else
+		dprintk(dev, 1, "%s, count=%d, size=%u\n", __func__,
+			*nbuffers, sizes[0]);
+	return 0;
+}
+
+static int vid_out_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	unsigned long size;
+	unsigned planes;
+	unsigned p;
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (WARN_ON(NULL == dev->fmt_out))
+		return -EINVAL;
+
+	planes = dev->fmt_out->planes;
+
+	if (dev->buf_prepare_error) {
+		/*
+		 * Error injection: test what happens if buf_prepare() returns
+		 * an error.
+		 */
+		dev->buf_prepare_error = false;
+		return -EINVAL;
+	}
+
+	if (dev->field_out != V4L2_FIELD_ALTERNATE)
+		vb->v4l2_buf.field = dev->field_out;
+	else if (vb->v4l2_buf.field != V4L2_FIELD_TOP &&
+		 vb->v4l2_buf.field != V4L2_FIELD_BOTTOM)
+		return -EINVAL;
+
+	for (p = 0; p < planes; p++) {
+		size = dev->bytesperline_out[p] * dev->fmt_out_rect.height +
+			vb->v4l2_planes[p].data_offset;
+
+		if (vb2_get_plane_payload(vb, p) < size) {
+			dprintk(dev, 1, "%s the payload is too small for plane %u (%lu < %lu)\n",
+					__func__, p, vb2_get_plane_payload(vb, p), size);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+static void vid_out_buf_queue(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct vivid_buffer *buf = container_of(vb, struct vivid_buffer, vb);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	spin_lock(&dev->slock);
+	list_add_tail(&buf->list, &dev->vid_out_active);
+	spin_unlock(&dev->slock);
+}
+
+static int vid_out_start_streaming(struct vb2_queue *vq, unsigned count)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	int err;
+
+	if (vb2_is_streaming(&dev->vb_vid_cap_q))
+		dev->can_loop_video = vivid_vid_can_loop(dev);
+
+	if (dev->kthread_vid_out)
+		return 0;
+
+	dev->vid_out_seq_count = 0;
+	dprintk(dev, 1, "%s\n", __func__);
+	if (dev->start_streaming_error) {
+		dev->start_streaming_error = false;
+		err = -EINVAL;
+	} else {
+		err = vivid_start_generating_vid_out(dev, &dev->vid_out_streaming);
+	}
+	if (err) {
+		struct vivid_buffer *buf, *tmp;
+
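+		/*
+		 * On a start_streaming() failure all queued buffers have to
+		 * be given back to vb2 in the QUEUED state.
+		 */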
+		list_for_each_entry_safe(buf, tmp, &dev->vid_out_active, list) {
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+		}
+	}
+	return err;
+}
+
+/* abort streaming and wait for last buffer */
+static void vid_out_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	dprintk(dev, 1, "%s\n", __func__);
+	vivid_stop_generating_vid_out(dev, &dev->vid_out_streaming);
+	dev->can_loop_video = false;
+}
+
+const struct vb2_ops vivid_vid_out_qops = {
+	.queue_setup		= vid_out_queue_setup,
+	.buf_prepare		= vid_out_buf_prepare,
+	.buf_queue		= vid_out_buf_queue,
+	.start_streaming	= vid_out_start_streaming,
+	.stop_streaming		= vid_out_stop_streaming,
+	.wait_prepare		= vivid_unlock,
+	.wait_finish		= vivid_lock,
+};
+
+/*
+ * Called whenever the format has to be reset, which can occur when
+ * changing outputs, standard, timings, etc.
+ */
+void vivid_update_format_out(struct vivid_dev *dev)
+{
+	struct v4l2_bt_timings *bt = &dev->dv_timings_out.bt;
+	unsigned size;
+
+	switch (dev->output_type[dev->output]) {
+	case SVID:
+	default:
+		dev->field_out = dev->tv_field_out;
+		dev->sink_rect.width = 720;
+		if (dev->std_out & V4L2_STD_525_60) {
+			dev->sink_rect.height = 480;
+			dev->timeperframe_vid_out = (struct v4l2_fract) { 1001, 30000 };
+			dev->service_set_out = V4L2_SLICED_CAPTION_525;
+		} else {
+			dev->sink_rect.height = 576;
+			dev->timeperframe_vid_out = (struct v4l2_fract) { 1000, 25000 };
+			dev->service_set_out = V4L2_SLICED_WSS_625;
+		}
+		dev->colorspace_out = V4L2_COLORSPACE_SMPTE170M;
+		break;
+	case HDMI:
+		dev->sink_rect.width = bt->width;
+		dev->sink_rect.height = bt->height;
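+		/*
+		 * The frame period is the total frame size (including
+		 * blanking) divided by the pixelclock. Both are divided by
+		 * 100 so the 64-bit pixelclock fits in the 32-bit fraction.
+		 */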
+		size = V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt);
+		dev->timeperframe_vid_out = (struct v4l2_fract) {
+			size / 100, (u32)bt->pixelclock / 100
+		};
+		if (bt->interlaced)
+			dev->field_out = V4L2_FIELD_ALTERNATE;
+		else
+			dev->field_out = V4L2_FIELD_NONE;
+		if (!dev->dvi_d_out && (bt->standards & V4L2_DV_BT_STD_CEA861)) {
+			if (bt->width == 720 && bt->height <= 576)
+				dev->colorspace_out = V4L2_COLORSPACE_SMPTE170M;
+			else
+				dev->colorspace_out = V4L2_COLORSPACE_REC709;
+		} else {
+			dev->colorspace_out = V4L2_COLORSPACE_SRGB;
+		}
+		break;
+	}
+	dev->compose_out = dev->sink_rect;
+	dev->compose_bounds_out = dev->sink_rect;
+	dev->crop_out = dev->compose_out;
+	if (V4L2_FIELD_HAS_T_OR_B(dev->field_out))
+		dev->crop_out.height /= 2;
+	dev->fmt_out_rect = dev->crop_out;
+	dev->bytesperline_out[0] = (dev->sink_rect.width * dev->fmt_out->depth) / 8;
+	if (dev->fmt_out->planes == 2)
+		dev->bytesperline_out[1] = (dev->sink_rect.width * dev->fmt_out->depth) / 8;
+}
+
+/* Map the field to something that is valid for the current output */
+static enum v4l2_field vivid_field_out(struct vivid_dev *dev, enum v4l2_field field)
+{
+	if (vivid_is_svid_out(dev)) {
+		switch (field) {
+		case V4L2_FIELD_INTERLACED_TB:
+		case V4L2_FIELD_INTERLACED_BT:
+		case V4L2_FIELD_SEQ_TB:
+		case V4L2_FIELD_SEQ_BT:
+		case V4L2_FIELD_ALTERNATE:
+			return field;
+		case V4L2_FIELD_INTERLACED:
+		default:
+			return V4L2_FIELD_INTERLACED;
+		}
+	}
+	if (vivid_is_hdmi_out(dev))
+		return dev->dv_timings_out.bt.interlaced ? V4L2_FIELD_ALTERNATE :
+						       V4L2_FIELD_NONE;
+	return V4L2_FIELD_NONE;
+}
+
+static enum tpg_pixel_aspect vivid_get_pixel_aspect(const struct vivid_dev *dev)
+{
+	if (vivid_is_svid_out(dev))
+		return (dev->std_out & V4L2_STD_525_60) ?
+			TPG_PIXEL_ASPECT_NTSC : TPG_PIXEL_ASPECT_PAL;
+
+	if (vivid_is_hdmi_out(dev) &&
+	    dev->sink_rect.width == 720 && dev->sink_rect.height <= 576)
+		return dev->sink_rect.height == 480 ?
+			TPG_PIXEL_ASPECT_NTSC : TPG_PIXEL_ASPECT_PAL;
+
+	return TPG_PIXEL_ASPECT_SQUARE;
+}
+
+int vivid_g_fmt_vid_out(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	unsigned p;
+
+	mp->width        = dev->fmt_out_rect.width;
+	mp->height       = dev->fmt_out_rect.height;
+	mp->field        = dev->field_out;
+	mp->pixelformat  = dev->fmt_out->fourcc;
+	mp->colorspace   = dev->colorspace_out;
+	mp->num_planes = dev->fmt_out->planes;
+	for (p = 0; p < mp->num_planes; p++) {
+		mp->plane_fmt[p].bytesperline = dev->bytesperline_out[p];
+		mp->plane_fmt[p].sizeimage =
+			mp->plane_fmt[p].bytesperline * mp->height;
+	}
+	return 0;
+}
+
+int vivid_try_fmt_vid_out(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_bt_timings *bt = &dev->dv_timings_out.bt;
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	struct v4l2_plane_pix_format *pfmt = mp->plane_fmt;
+	const struct vivid_fmt *fmt;
+	unsigned bytesperline, max_bpl;
+	unsigned factor = 1;
+	unsigned w, h;
+	unsigned p;
+
+	fmt = get_format(dev, mp->pixelformat);
+	if (!fmt) {
+		dprintk(dev, 1, "Fourcc format (0x%08x) unknown.\n",
+			mp->pixelformat);
+		mp->pixelformat = V4L2_PIX_FMT_YUYV;
+		fmt = get_format(dev, mp->pixelformat);
+	}
+
+	mp->field = vivid_field_out(dev, mp->field);
+	if (vivid_is_svid_out(dev)) {
+		w = 720;
+		h = (dev->std_out & V4L2_STD_525_60) ? 480 : 576;
+	} else {
+		w = dev->sink_rect.width;
+		h = dev->sink_rect.height;
+	}
+	if (V4L2_FIELD_HAS_T_OR_B(mp->field))
+		factor = 2;
+	if (!dev->has_scaler_out && !dev->has_crop_out && !dev->has_compose_out) {
+		mp->width = w;
+		mp->height = h / factor;
+	} else {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height * factor };
+
+		rect_set_min_size(&r, &vivid_min_rect);
+		rect_set_max_size(&r, &vivid_max_rect);
+		if (dev->has_scaler_out && !dev->has_crop_out) {
+			struct v4l2_rect max_r = { 0, 0, MAX_ZOOM * w, MAX_ZOOM * h };
+
+			rect_set_max_size(&r, &max_r);
+		} else if (!dev->has_scaler_out && dev->has_compose_out && !dev->has_crop_out) {
+			rect_set_max_size(&r, &dev->sink_rect);
+		} else if (!dev->has_scaler_out && !dev->has_compose_out) {
+			rect_set_min_size(&r, &dev->sink_rect);
+		}
+		mp->width = r.width;
+		mp->height = r.height / factor;
+	}
+
+	/* This driver supports custom bytesperline values */
+
+	/* Calculate the minimum supported bytesperline value */
+	bytesperline = (mp->width * fmt->depth) >> 3;
+	/* Calculate the maximum supported bytesperline value */
+	max_bpl = (MAX_ZOOM * MAX_WIDTH * fmt->depth) >> 3;
+	mp->num_planes = fmt->planes;
+	for (p = 0; p < mp->num_planes; p++) {
+		if (pfmt[p].bytesperline > max_bpl)
+			pfmt[p].bytesperline = max_bpl;
+		if (pfmt[p].bytesperline < bytesperline)
+			pfmt[p].bytesperline = bytesperline;
+		pfmt[p].sizeimage = pfmt[p].bytesperline * mp->height;
+		memset(pfmt[p].reserved, 0, sizeof(pfmt[p].reserved));
+	}
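+	/*
+	 * Select the colorspace: S-Video is always SMPTE 170M, DVI-D outputs
+	 * and non-CEA-861 timings use sRGB, CEA-861 SDTV timings use
+	 * SMPTE 170M, and anything else defaults to Rec. 709 unless the
+	 * application requested one of the supported colorspaces.
+	 */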
+	if (vivid_is_svid_out(dev))
+		mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+	else if (dev->dvi_d_out || !(bt->standards & V4L2_DV_BT_STD_CEA861))
+		mp->colorspace = V4L2_COLORSPACE_SRGB;
+	else if (bt->width == 720 && bt->height <= 576)
+		mp->colorspace = V4L2_COLORSPACE_SMPTE170M;
+	else if (mp->colorspace != V4L2_COLORSPACE_SMPTE170M &&
+		 mp->colorspace != V4L2_COLORSPACE_REC709 &&
+		 mp->colorspace != V4L2_COLORSPACE_SRGB)
+		mp->colorspace = V4L2_COLORSPACE_REC709;
+	memset(mp->reserved, 0, sizeof(mp->reserved));
+	return 0;
+}
+
+int vivid_s_fmt_vid_out(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct v4l2_pix_format_mplane *mp = &f->fmt.pix_mp;
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_rect *crop = &dev->crop_out;
+	struct v4l2_rect *compose = &dev->compose_out;
+	struct vb2_queue *q = &dev->vb_vid_out_q;
+	int ret = vivid_try_fmt_vid_out(file, priv, f);
+	unsigned factor = 1;
+
+	if (ret < 0)
+		return ret;
+
+	if (vb2_is_busy(q) &&
+	    (vivid_is_svid_out(dev) ||
+	     mp->width != dev->fmt_out_rect.width ||
+	     mp->height != dev->fmt_out_rect.height ||
+	     mp->pixelformat != dev->fmt_out->fourcc ||
+	     mp->field != dev->field_out)) {
+		dprintk(dev, 1, "%s device busy\n", __func__);
+		return -EBUSY;
+	}
+
+	/*
+	 * Allow for changing the colorspace on the fly. Useful for testing
+	 * purposes, and it is something that HDMI transmitters are able
+	 * to do.
+	 */
+	if (vb2_is_busy(q))
+		goto set_colorspace;
+
+	dev->fmt_out = get_format(dev, mp->pixelformat);
+	if (V4L2_FIELD_HAS_T_OR_B(mp->field))
+		factor = 2;
+
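+	/*
+	 * Update the crop and compose rectangles so they stay consistent
+	 * with the new format, depending on which combination of scaler,
+	 * crop and compose is available for this output.
+	 */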
+	if (dev->has_scaler_out || dev->has_crop_out || dev->has_compose_out) {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height };
+
+		if (dev->has_scaler_out) {
+			if (dev->has_crop_out)
+				rect_map_inside(crop, &r);
+			else
+				*crop = r;
+			if (dev->has_compose_out && !dev->has_crop_out) {
+				struct v4l2_rect min_r = {
+					0, 0,
+					r.width / MAX_ZOOM,
+					factor * r.height / MAX_ZOOM
+				};
+				struct v4l2_rect max_r = {
+					0, 0,
+					r.width * MAX_ZOOM,
+					factor * r.height * MAX_ZOOM
+				};
+
+				rect_set_min_size(compose, &min_r);
+				rect_set_max_size(compose, &max_r);
+				rect_map_inside(compose, &dev->compose_bounds_out);
+			} else if (dev->has_compose_out) {
+				struct v4l2_rect min_r = {
+					0, 0,
+					crop->width / MAX_ZOOM,
+					factor * crop->height / MAX_ZOOM
+				};
+				struct v4l2_rect max_r = {
+					0, 0,
+					crop->width * MAX_ZOOM,
+					factor * crop->height * MAX_ZOOM
+				};
+
+				rect_set_min_size(compose, &min_r);
+				rect_set_max_size(compose, &max_r);
+				rect_map_inside(compose, &dev->compose_bounds_out);
+			}
+		} else if (dev->has_compose_out && !dev->has_crop_out) {
+			rect_set_size_to(crop, &r);
+			r.height *= factor;
+			rect_set_size_to(compose, &r);
+			rect_map_inside(compose, &dev->compose_bounds_out);
+		} else if (!dev->has_compose_out) {
+			rect_map_inside(crop, &r);
+			r.height /= factor;
+			rect_set_size_to(compose, &r);
+		} else {
+			r.height *= factor;
+			rect_set_max_size(compose, &r);
+			rect_map_inside(compose, &dev->compose_bounds_out);
+			crop->top *= factor;
+			crop->height *= factor;
+			rect_set_size_to(crop, compose);
+			rect_map_inside(crop, &r);
+			crop->top /= factor;
+			crop->height /= factor;
+		}
+	} else {
+		struct v4l2_rect r = { 0, 0, mp->width, mp->height };
+
+		rect_set_size_to(crop, &r);
+		r.height /= factor;
+		rect_set_size_to(compose, &r);
+	}
+
+	dev->fmt_out_rect.width = mp->width;
+	dev->fmt_out_rect.height = mp->height;
+	dev->bytesperline_out[0] = mp->plane_fmt[0].bytesperline;
+	if (mp->num_planes > 1)
+		dev->bytesperline_out[1] = mp->plane_fmt[1].bytesperline;
+	dev->field_out = mp->field;
+	if (vivid_is_svid_out(dev))
+		dev->tv_field_out = mp->field;
+
+set_colorspace:
+	dev->colorspace_out = mp->colorspace;
+	if (dev->loop_video) {
+		vivid_send_source_change(dev, SVID);
+		vivid_send_source_change(dev, HDMI);
+	}
+	return 0;
+}
+
+int vidioc_g_fmt_vid_out_mplane(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_g_fmt_vid_out(file, priv, f);
+}
+
+int vidioc_try_fmt_vid_out_mplane(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_try_fmt_vid_out(file, priv, f);
+}
+
+int vidioc_s_fmt_vid_out_mplane(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->multiplanar)
+		return -ENOTTY;
+	return vivid_s_fmt_vid_out(file, priv, f);
+}
+
+int vidioc_g_fmt_vid_out(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_g_fmt_vid_out);
+}
+
+int vidioc_try_fmt_vid_out(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_try_fmt_vid_out);
+}
+
+int vidioc_s_fmt_vid_out(struct file *file, void *priv,
+			struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (dev->multiplanar)
+		return -ENOTTY;
+	return fmt_sp2mp_func(file, priv, f, vivid_s_fmt_vid_out);
+}
+
+int vivid_vid_out_g_selection(struct file *file, void *priv,
+			      struct v4l2_selection *sel)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!dev->has_crop_out && !dev->has_compose_out)
+		return -ENOTTY;
+	if (sel->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+		return -EINVAL;
+
+	sel->r.left = sel->r.top = 0;
+	switch (sel->target) {
+	case V4L2_SEL_TGT_CROP:
+		if (!dev->has_crop_out)
+			return -EINVAL;
+		sel->r = dev->crop_out;
+		break;
+	case V4L2_SEL_TGT_CROP_DEFAULT:
+		if (!dev->has_crop_out)
+			return -EINVAL;
+		sel->r = dev->fmt_out_rect;
+		break;
+	case V4L2_SEL_TGT_CROP_BOUNDS:
+		if (!dev->has_compose_out)
+			return -EINVAL;
+		sel->r = vivid_max_rect;
+		break;
+	case V4L2_SEL_TGT_COMPOSE:
+		if (!dev->has_compose_out)
+			return -EINVAL;
+		sel->r = dev->compose_out;
+		break;
+	case V4L2_SEL_TGT_COMPOSE_DEFAULT:
+	case V4L2_SEL_TGT_COMPOSE_BOUNDS:
+		if (!dev->has_compose_out)
+			return -EINVAL;
+		sel->r = dev->sink_rect;
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection *s)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_rect *crop = &dev->crop_out;
+	struct v4l2_rect *compose = &dev->compose_out;
+	unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_out) ? 2 : 1;
+	int ret;
+
+	if (!dev->has_crop_out && !dev->has_compose_out)
+		return -ENOTTY;
+	if (s->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+		return -EINVAL;
+
+	switch (s->target) {
+	case V4L2_SEL_TGT_CROP:
+		if (!dev->has_crop_out)
+			return -EINVAL;
+		ret = vivid_vid_adjust_sel(s->flags, &s->r);
+		if (ret)
+			return ret;
+		rect_set_min_size(&s->r, &vivid_min_rect);
+		rect_set_max_size(&s->r, &dev->fmt_out_rect);
+		if (dev->has_scaler_out) {
+			struct v4l2_rect max_rect = {
+				0, 0,
+				dev->sink_rect.width * MAX_ZOOM,
+				(dev->sink_rect.height / factor) * MAX_ZOOM
+			};
+
+			rect_set_max_size(&s->r, &max_rect);
+			if (dev->has_compose_out) {
+				struct v4l2_rect min_rect = {
+					0, 0,
+					s->r.width / MAX_ZOOM,
+					(s->r.height * factor) / MAX_ZOOM
+				};
+				struct v4l2_rect max_rect = {
+					0, 0,
+					s->r.width * MAX_ZOOM,
+					(s->r.height * factor) * MAX_ZOOM
+				};
+
+				rect_set_min_size(compose, &min_rect);
+				rect_set_max_size(compose, &max_rect);
+				rect_map_inside(compose, &dev->compose_bounds_out);
+			}
+		} else if (dev->has_compose_out) {
+			s->r.top *= factor;
+			s->r.height *= factor;
+			rect_set_max_size(&s->r, &dev->sink_rect);
+			rect_set_size_to(compose, &s->r);
+			rect_map_inside(compose, &dev->compose_bounds_out);
+			s->r.top /= factor;
+			s->r.height /= factor;
+		} else {
+			rect_set_size_to(&s->r, &dev->sink_rect);
+			s->r.height /= factor;
+		}
+		rect_map_inside(&s->r, &dev->fmt_out_rect);
+		*crop = s->r;
+		break;
+	case V4L2_SEL_TGT_COMPOSE:
+		if (!dev->has_compose_out)
+			return -EINVAL;
+		ret = vivid_vid_adjust_sel(s->flags, &s->r);
+		if (ret)
+			return ret;
+		rect_set_min_size(&s->r, &vivid_min_rect);
+		rect_set_max_size(&s->r, &dev->sink_rect);
+		rect_map_inside(&s->r, &dev->compose_bounds_out);
+		s->r.top /= factor;
+		s->r.height /= factor;
+		if (dev->has_scaler_out) {
+			struct v4l2_rect fmt = dev->fmt_out_rect;
+			struct v4l2_rect max_rect = {
+				0, 0,
+				s->r.width * MAX_ZOOM,
+				s->r.height * MAX_ZOOM
+			};
+			struct v4l2_rect min_rect = {
+				0, 0,
+				s->r.width / MAX_ZOOM,
+				s->r.height / MAX_ZOOM
+			};
+
+			rect_set_min_size(&fmt, &min_rect);
+			if (!dev->has_crop_out)
+				rect_set_max_size(&fmt, &max_rect);
+			if (!rect_same_size(&dev->fmt_out_rect, &fmt) &&
+			    vb2_is_busy(&dev->vb_vid_out_q))
+				return -EBUSY;
+			if (dev->has_crop_out) {
+				rect_set_min_size(crop, &min_rect);
+				rect_set_max_size(crop, &max_rect);
+			}
+			dev->fmt_out_rect = fmt;
+		} else if (dev->has_crop_out) {
+			struct v4l2_rect fmt = dev->fmt_out_rect;
+
+			rect_set_min_size(&fmt, &s->r);
+			if (!rect_same_size(&dev->fmt_out_rect, &fmt) &&
+			    vb2_is_busy(&dev->vb_vid_out_q))
+				return -EBUSY;
+			dev->fmt_out_rect = fmt;
+			rect_set_size_to(crop, &s->r);
+			rect_map_inside(crop, &dev->fmt_out_rect);
+		} else {
+			if (!rect_same_size(&s->r, &dev->fmt_out_rect) &&
+			    vb2_is_busy(&dev->vb_vid_out_q))
+				return -EBUSY;
+			rect_set_size_to(&dev->fmt_out_rect, &s->r);
+			rect_set_size_to(crop, &s->r);
+			crop->height /= factor;
+			rect_map_inside(crop, &dev->fmt_out_rect);
+		}
+		s->r.top *= factor;
+		s->r.height *= factor;
+		if (dev->bitmap_out && (compose->width != s->r.width ||
+					compose->height != s->r.height)) {
+			kfree(dev->bitmap_out);
+			dev->bitmap_out = NULL;
+		}
+		*compose = s->r;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int vivid_vid_out_cropcap(struct file *file, void *priv,
+			      struct v4l2_cropcap *cap)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (cap->type != V4L2_BUF_TYPE_VIDEO_OUTPUT)
+		return -EINVAL;
+
+	switch (vivid_get_pixel_aspect(dev)) {
+	case TPG_PIXEL_ASPECT_NTSC:
+		cap->pixelaspect.numerator = 11;
+		cap->pixelaspect.denominator = 10;
+		break;
+	case TPG_PIXEL_ASPECT_PAL:
+		cap->pixelaspect.numerator = 54;
+		cap->pixelaspect.denominator = 59;
+		break;
+	case TPG_PIXEL_ASPECT_SQUARE:
+		cap->pixelaspect.numerator = 1;
+		cap->pixelaspect.denominator = 1;
+		break;
+	}
+	return 0;
+}
+
+int vidioc_g_fmt_vid_out_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_out;
+	struct v4l2_window *win = &f->fmt.win;
+	unsigned clipcount = win->clipcount;
+
+	if (!dev->has_fb)
+		return -EINVAL;
+	win->w.top = dev->overlay_out_top;
+	win->w.left = dev->overlay_out_left;
+	win->w.width = compose->width;
+	win->w.height = compose->height;
+	win->clipcount = dev->clipcount_out;
+	win->field = V4L2_FIELD_ANY;
+	win->chromakey = dev->chromakey_out;
+	win->global_alpha = dev->global_alpha_out;
+	if (clipcount > dev->clipcount_out)
+		clipcount = dev->clipcount_out;
+	if (dev->bitmap_out == NULL)
+		win->bitmap = NULL;
+	else if (win->bitmap) {
+		if (copy_to_user(win->bitmap, dev->bitmap_out,
+		    ((dev->compose_out.width + 7) / 8) * dev->compose_out.height))
+			return -EFAULT;
+	}
+	if (clipcount && win->clips) {
+		if (copy_to_user(win->clips, dev->clips_out,
+				 clipcount * sizeof(dev->clips_out[0])))
+			return -EFAULT;
+	}
+	return 0;
+}
+
+int vidioc_try_fmt_vid_out_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_out;
+	struct v4l2_window *win = &f->fmt.win;
+	int i, j;
+
+	if (!dev->has_fb)
+		return -EINVAL;
+	win->w.left = clamp_t(int, win->w.left,
+			      -dev->display_width, dev->display_width);
+	win->w.top = clamp_t(int, win->w.top,
+			     -dev->display_height, dev->display_height);
+	win->w.width = compose->width;
+	win->w.height = compose->height;
+	/*
+	 * It makes no sense for an OSD to overlay only top or bottom fields,
+	 * so always set this to ANY.
+	 */
+	win->field = V4L2_FIELD_ANY;
+	if (win->clipcount && !win->clips)
+		win->clipcount = 0;
+	if (win->clipcount > MAX_CLIPS)
+		win->clipcount = MAX_CLIPS;
+	if (win->clipcount) {
+		if (copy_from_user(dev->try_clips_out, win->clips,
+				   win->clipcount * sizeof(dev->clips_out[0])))
+			return -EFAULT;
+		for (i = 0; i < win->clipcount; i++) {
+			struct v4l2_rect *r = &dev->try_clips_out[i].c;
+
+			r->top = clamp_t(s32, r->top, 0, dev->display_height - 1);
+			r->height = clamp_t(s32, r->height, 1, dev->display_height - r->top);
+			r->left = clamp_t(u32, r->left, 0, dev->display_width - 1);
+			r->width = clamp_t(u32, r->width, 1, dev->display_width - r->left);
+		}
+		/*
+		 * Yeah, so sue me, it's an O(n^2) algorithm. But n is a small
+		 * number and it's typically a one-time deal.
+		 */
+		for (i = 0; i < win->clipcount - 1; i++) {
+			struct v4l2_rect *r1 = &dev->try_clips_out[i].c;
+
+			for (j = i + 1; j < win->clipcount; j++) {
+				struct v4l2_rect *r2 = &dev->try_clips_out[j].c;
+
+				if (rect_overlap(r1, r2))
+					return -EINVAL;
+			}
+		}
+		if (copy_to_user(win->clips, dev->try_clips_out,
+				 win->clipcount * sizeof(dev->clips_out[0])))
+			return -EFAULT;
+	}
+	return 0;
+}
+
+int vidioc_s_fmt_vid_out_overlay(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const struct v4l2_rect *compose = &dev->compose_out;
+	struct v4l2_window *win = &f->fmt.win;
+	int ret = vidioc_try_fmt_vid_out_overlay(file, priv, f);
+	unsigned bitmap_size = ((compose->width + 7) / 8) * compose->height;
+	unsigned clips_size = win->clipcount * sizeof(dev->clips_out[0]);
+	void *new_bitmap = NULL;
+
+	if (ret)
+		return ret;
+
+	if (win->bitmap) {
+		new_bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+
+		if (new_bitmap == NULL)
+			return -ENOMEM;
+		if (copy_from_user(new_bitmap, win->bitmap, bitmap_size)) {
+			kfree(new_bitmap);
+			return -EFAULT;
+		}
+	}
+
+	dev->overlay_out_top = win->w.top;
+	dev->overlay_out_left = win->w.left;
+	kfree(dev->bitmap_out);
+	dev->bitmap_out = new_bitmap;
+	dev->clipcount_out = win->clipcount;
+	if (dev->clipcount_out)
+		memcpy(dev->clips_out, dev->try_clips_out, clips_size);
+	dev->chromakey_out = win->chromakey;
+	dev->global_alpha_out = win->global_alpha;
+	return ret;
+}
+
+int vivid_vid_out_overlay(struct file *file, void *fh, unsigned i)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (i && !dev->fmt_out->can_do_overlay) {
+		dprintk(dev, 1, "unsupported output format for output overlay\n");
+		return -EINVAL;
+	}
+
+	dev->overlay_out_enabled = i;
+	return 0;
+}
+
+int vivid_vid_out_g_fbuf(struct file *file, void *fh,
+				struct v4l2_framebuffer *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	a->capability = V4L2_FBUF_CAP_EXTERNOVERLAY |
+			V4L2_FBUF_CAP_BITMAP_CLIPPING |
+			V4L2_FBUF_CAP_LIST_CLIPPING |
+			V4L2_FBUF_CAP_CHROMAKEY |
+			V4L2_FBUF_CAP_SRC_CHROMAKEY |
+			V4L2_FBUF_CAP_GLOBAL_ALPHA |
+			V4L2_FBUF_CAP_LOCAL_ALPHA |
+			V4L2_FBUF_CAP_LOCAL_INV_ALPHA;
+	a->flags = V4L2_FBUF_FLAG_OVERLAY | dev->fbuf_out_flags;
+	a->base = (void *)dev->video_pbase;
+	a->fmt.width = dev->display_width;
+	a->fmt.height = dev->display_height;
+	if (dev->fb_defined.green.length == 5)
+		a->fmt.pixelformat = V4L2_PIX_FMT_ARGB555;
+	else
+		a->fmt.pixelformat = V4L2_PIX_FMT_RGB565;
+	a->fmt.bytesperline = dev->display_byte_stride;
+	a->fmt.sizeimage = a->fmt.height * a->fmt.bytesperline;
+	a->fmt.field = V4L2_FIELD_NONE;
+	a->fmt.colorspace = V4L2_COLORSPACE_SRGB;
+	a->fmt.priv = 0;
+	return 0;
+}
+
+int vivid_vid_out_s_fbuf(struct file *file, void *fh,
+				const struct v4l2_framebuffer *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	const unsigned chroma_flags = V4L2_FBUF_FLAG_CHROMAKEY |
+				      V4L2_FBUF_FLAG_SRC_CHROMAKEY;
+	const unsigned alpha_flags = V4L2_FBUF_FLAG_GLOBAL_ALPHA |
+				     V4L2_FBUF_FLAG_LOCAL_ALPHA |
+				     V4L2_FBUF_FLAG_LOCAL_INV_ALPHA;
+
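+	/*
+	 * The two chromakey flags are mutually exclusive, and at most one
+	 * of the three alpha flags may be set.
+	 */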
+	if ((a->flags & chroma_flags) == chroma_flags)
+		return -EINVAL;
+	switch (a->flags & alpha_flags) {
+	case 0:
+	case V4L2_FBUF_FLAG_GLOBAL_ALPHA:
+	case V4L2_FBUF_FLAG_LOCAL_ALPHA:
+	case V4L2_FBUF_FLAG_LOCAL_INV_ALPHA:
+		break;
+	default:
+		return -EINVAL;
+	}
+	dev->fbuf_out_flags &= ~(chroma_flags | alpha_flags);
+	dev->fbuf_out_flags |= a->flags & (chroma_flags | alpha_flags);
+	return 0;
+}
+
+static const struct v4l2_audioout vivid_audio_outputs[] = {
+	{ 0, "Line-Out 1" },
+	{ 1, "Line-Out 2" },
+};
+
+int vidioc_enum_output(struct file *file, void *priv,
+				struct v4l2_output *out)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (out->index >= dev->num_outputs)
+		return -EINVAL;
+
+	out->type = V4L2_OUTPUT_TYPE_ANALOG;
+	switch (dev->output_type[out->index]) {
+	case SVID:
+		snprintf(out->name, sizeof(out->name), "S-Video %u",
+				dev->output_name_counter[out->index]);
+		out->std = V4L2_STD_ALL;
+		if (dev->has_audio_outputs)
+			out->audioset = (1 << ARRAY_SIZE(vivid_audio_outputs)) - 1;
+		out->capabilities = V4L2_OUT_CAP_STD;
+		break;
+	case HDMI:
+		snprintf(out->name, sizeof(out->name), "HDMI %u",
+				dev->output_name_counter[out->index]);
+		out->capabilities = V4L2_OUT_CAP_DV_TIMINGS;
+		break;
+	}
+	return 0;
+}
+
+int vidioc_g_output(struct file *file, void *priv, unsigned *o)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	*o = dev->output;
+	return 0;
+}
+
+int vidioc_s_output(struct file *file, void *priv, unsigned o)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (o >= dev->num_outputs)
+		return -EINVAL;
+
+	if (o == dev->output)
+		return 0;
+
+	if (vb2_is_busy(&dev->vb_vid_out_q) || vb2_is_busy(&dev->vb_vbi_out_q))
+		return -EBUSY;
+
+	dev->output = o;
+	dev->tv_audio_output = 0;
+	if (dev->output_type[o] == SVID)
+		dev->vid_out_dev.tvnorms = V4L2_STD_ALL;
+	else
+		dev->vid_out_dev.tvnorms = 0;
+
+	dev->vbi_out_dev.tvnorms = dev->vid_out_dev.tvnorms;
+	vivid_update_format_out(dev);
+	return 0;
+}
+
+int vidioc_enumaudout(struct file *file, void *fh, struct v4l2_audioout *vout)
+{
+	if (vout->index >= ARRAY_SIZE(vivid_audio_outputs))
+		return -EINVAL;
+	*vout = vivid_audio_outputs[vout->index];
+	return 0;
+}
+
+int vidioc_g_audout(struct file *file, void *fh, struct v4l2_audioout *vout)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_svid_out(dev))
+		return -EINVAL;
+	*vout = vivid_audio_outputs[dev->tv_audio_output];
+	return 0;
+}
+
+int vidioc_s_audout(struct file *file, void *fh, const struct v4l2_audioout *vout)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_svid_out(dev))
+		return -EINVAL;
+	if (vout->index >= ARRAY_SIZE(vivid_audio_outputs))
+		return -EINVAL;
+	dev->tv_audio_output = vout->index;
+	return 0;
+}
+
+int vivid_vid_out_s_std(struct file *file, void *priv, v4l2_std_id id)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_svid_out(dev))
+		return -ENODATA;
+	if (dev->std_out == id)
+		return 0;
+	if (vb2_is_busy(&dev->vb_vid_out_q) || vb2_is_busy(&dev->vb_vbi_out_q))
+		return -EBUSY;
+	dev->std_out = id;
+	vivid_update_format_out(dev);
+	return 0;
+}
+
+int vivid_vid_out_s_dv_timings(struct file *file, void *_fh,
+				    struct v4l2_dv_timings *timings)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (!vivid_is_hdmi_out(dev))
+		return -ENODATA;
+	if (vb2_is_busy(&dev->vb_vid_out_q))
+		return -EBUSY;
+	if (!v4l2_find_dv_timings_cap(timings, &vivid_dv_timings_cap,
+				0, NULL, NULL))
+		return -EINVAL;
+	if (v4l2_match_dv_timings(timings, &dev->dv_timings_out, 0))
+		return 0;
+	dev->dv_timings_out = *timings;
+	vivid_update_format_out(dev);
+	return 0;
+}
+
+int vivid_vid_out_g_edid(struct file *file, void *_fh,
+			 struct v4l2_edid *edid)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+
+	memset(edid->reserved, 0, sizeof(edid->reserved));
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		if (edid->pad >= dev->num_inputs)
+			return -EINVAL;
+		if (dev->input_type[edid->pad] != HDMI)
+			return -EINVAL;
+	} else {
+		if (edid->pad >= dev->num_outputs)
+			return -EINVAL;
+		if (dev->output_type[edid->pad] != HDMI)
+			return -EINVAL;
+	}
+	if (edid->start_block == 0 && edid->blocks == 0) {
+		edid->blocks = dev->edid_blocks;
+		return 0;
+	}
+	if (dev->edid_blocks == 0)
+		return -ENODATA;
+	if (edid->start_block >= dev->edid_blocks)
+		return -EINVAL;
+	if (edid->start_block + edid->blocks > dev->edid_blocks)
+		edid->blocks = dev->edid_blocks - edid->start_block;
+	memcpy(edid->edid, dev->edid, edid->blocks * 128);
+	return 0;
+}
+
+int vivid_vid_out_s_edid(struct file *file, void *_fh,
+			 struct v4l2_edid *edid)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	memset(edid->reserved, 0, sizeof(edid->reserved));
+	if (edid->pad >= dev->num_inputs)
+		return -EINVAL;
+	if (dev->input_type[edid->pad] != HDMI || edid->start_block)
+		return -EINVAL;
+	if (edid->blocks == 0) {
+		dev->edid_blocks = 0;
+		return 0;
+	}
+	if (edid->blocks > dev->edid_max_blocks) {
+		edid->blocks = dev->edid_max_blocks;
+		return -E2BIG;
+	}
+	dev->edid_blocks = edid->blocks;
+	memcpy(dev->edid, edid->edid, edid->blocks * 128);
+	return 0;
+}
+
+int vivid_vid_out_g_parm(struct file *file, void *priv,
+			  struct v4l2_streamparm *parm)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (parm->type != (dev->multiplanar ?
+			   V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE :
+			   V4L2_BUF_TYPE_VIDEO_OUTPUT))
+		return -EINVAL;
+
+	parm->parm.output.capability   = V4L2_CAP_TIMEPERFRAME;
+	parm->parm.output.timeperframe = dev->timeperframe_vid_out;
+	parm->parm.output.writebuffers  = 1;
+	return 0;
+}
+
+int vidioc_subscribe_event(struct v4l2_fh *fh,
+			const struct v4l2_event_subscription *sub)
+{
+	switch (sub->type) {
+	case V4L2_EVENT_CTRL:
+		return v4l2_ctrl_subscribe_event(fh, sub);
+	case V4L2_EVENT_SOURCE_CHANGE:
+		if (fh->vdev->vfl_dir == VFL_DIR_RX)
+			return v4l2_src_change_event_subscribe(fh, sub);
+		break;
+	default:
+		break;
+	}
+	return -EINVAL;
+}
diff --git a/drivers/media/platform/vivid/vivid-vid-out.h b/drivers/media/platform/vivid/vivid-vid-out.h
new file mode 100644
index 0000000..a237465
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vid-out.h
@@ -0,0 +1,57 @@
+/*
+ * vivid-vid-out.h - video output support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VID_OUT_H_
+#define _VIVID_VID_OUT_H_
+
+extern const struct vb2_ops vivid_vid_out_qops;
+
+void vivid_update_format_out(struct vivid_dev *dev);
+
+int vivid_g_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_try_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_s_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_g_fmt_vid_out_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_out_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_out_mplane(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_g_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_out(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_vid_out_g_selection(struct file *file, void *priv, struct v4l2_selection *sel);
+int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection *s);
+int vivid_vid_out_cropcap(struct file *file, void *fh, struct v4l2_cropcap *cap);
+int vidioc_enum_fmt_vid_out_overlay(struct file *file, void  *priv, struct v4l2_fmtdesc *f);
+int vidioc_g_fmt_vid_out_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_try_fmt_vid_out_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vidioc_s_fmt_vid_out_overlay(struct file *file, void *priv, struct v4l2_format *f);
+int vivid_vid_out_overlay(struct file *file, void *fh, unsigned i);
+int vivid_vid_out_g_fbuf(struct file *file, void *fh, struct v4l2_framebuffer *a);
+int vivid_vid_out_s_fbuf(struct file *file, void *fh, const struct v4l2_framebuffer *a);
+int vidioc_enum_output(struct file *file, void *priv, struct v4l2_output *out);
+int vidioc_g_output(struct file *file, void *priv, unsigned *i);
+int vidioc_s_output(struct file *file, void *priv, unsigned i);
+int vidioc_enumaudout(struct file *file, void *fh, struct v4l2_audioout *vout);
+int vidioc_g_audout(struct file *file, void *fh, struct v4l2_audioout *vout);
+int vidioc_s_audout(struct file *file, void *fh, const struct v4l2_audioout *vout);
+int vivid_vid_out_s_std(struct file *file, void *priv, v4l2_std_id id);
+int vivid_vid_out_s_dv_timings(struct file *file, void *_fh, struct v4l2_dv_timings *timings);
+int vivid_vid_out_g_edid(struct file *file, void *_fh, struct v4l2_edid *edid);
+int vivid_vid_out_g_parm(struct file *file, void *priv, struct v4l2_streamparm *parm);
+
+#endif
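
For anyone who wants to poke at the partial-block handling in
vivid_vid_out_g_edid() above from userspace, a minimal sketch follows.
It is illustrative only and not part of the patch: the /dev/video1 node
name and pad 0 are assumptions, and error handling is kept to a minimum.

/*
 * Sketch: query the EDID size with start_block = blocks = 0 first,
 * then read the reported number of blocks in a second call.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_edid edid;
	unsigned char *data;
	int fd = open("/dev/video1", O_RDWR);	/* assumed vivid output node */

	if (fd < 0)
		return 1;
	memset(&edid, 0, sizeof(edid));
	edid.pad = 0;	/* assumed HDMI output pad */
	/* start_block == blocks == 0: only ask how many blocks there are */
	if (ioctl(fd, VIDIOC_G_EDID, &edid) < 0)
		return 1;
	printf("EDID blocks: %u\n", edid.blocks);
	if (edid.blocks) {
		data = malloc(edid.blocks * 128);
		edid.start_block = 0;
		edid.edid = data;
		if (ioctl(fd, VIDIOC_G_EDID, &edid) == 0)
			printf("read %u block(s)\n", edid.blocks);
		free(data);
	}
	close(fd);
	return 0;
}
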
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 06/12] vivid: add VBI capture and output code
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (4 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 05/12] vivid: add the video capture and output parts Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 07/12] vivid: add the kthread code that controls the video rate Hans Verkuil
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

This adds support for VBI capture (raw and sliced) and VBI output
(raw and sliced) to the vivid driver. In addition, a VBI generator
is added that produces simple VBI data in either sliced or raw
format.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-vbi-cap.c | 356 +++++++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-vbi-cap.h |  40 +++
 drivers/media/platform/vivid/vivid-vbi-gen.c | 248 +++++++++++++++++++
 drivers/media/platform/vivid/vivid-vbi-gen.h |  33 +++
 drivers/media/platform/vivid/vivid-vbi-out.c | 247 +++++++++++++++++++
 drivers/media/platform/vivid/vivid-vbi-out.h |  34 +++
 6 files changed, 958 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-cap.c
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-cap.h
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-gen.c
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-gen.h
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-out.c
 create mode 100644 drivers/media/platform/vivid/vivid-vbi-out.h
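
As a quick way to exercise the sliced path by hand, a userspace sketch
like the one below can be pointed at the vivid VBI capture node. It is
only a sketch: the /dev/vbi0 name is an assumption, and a real
application would also request V4L2_SLICED_WSS_625 when a 50 Hz
standard is selected.

/*
 * Sketch: switch the VBI node to sliced CC capture via S_FMT, then
 * read one buffer and print the CC byte pairs found on line 21.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_format fmt;
	struct v4l2_sliced_vbi_data *d;
	unsigned i, n;
	int fd = open("/dev/vbi0", O_RDWR);	/* assumed vivid VBI node */

	if (fd < 0)
		return 1;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_SLICED_VBI_CAPTURE;
	fmt.fmt.sliced.service_set = V4L2_SLICED_CAPTION_525;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return 1;
	n = fmt.fmt.sliced.io_size / sizeof(*d);
	d = calloc(n, sizeof(*d));
	if (read(fd, d, fmt.fmt.sliced.io_size) > 0)
		for (i = 0; i < n; i++)
			if (d[i].id == V4L2_SLICED_CAPTION_525)
				printf("line %u field %u: 0x%02x 0x%02x\n",
				       d[i].line, d[i].field,
				       d[i].data[0], d[i].data[1]);
	free(d);
	close(fd);
	return 0;
}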

diff --git a/drivers/media/platform/vivid/vivid-vbi-cap.c b/drivers/media/platform/vivid/vivid-vbi-cap.c
new file mode 100644
index 0000000..e239cfd
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-cap.c
@@ -0,0 +1,356 @@
+/*
+ * vivid-vbi-cap.c - vbi capture support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-common.h>
+
+#include "vivid-core.h"
+#include "vivid-kthread-cap.h"
+#include "vivid-vbi-cap.h"
+#include "vivid-vbi-gen.h"
+
+static void vivid_sliced_vbi_cap_fill(struct vivid_dev *dev, unsigned seqnr)
+{
+	struct vivid_vbi_gen_data *vbi_gen = &dev->vbi_gen;
+	bool is_60hz = dev->std_cap & V4L2_STD_525_60;
+
+	vivid_vbi_gen_sliced(vbi_gen, is_60hz, seqnr);
+
+	if (!is_60hz) {
+		if (dev->loop_video) {
+			if (dev->vbi_out_have_wss) {
+				vbi_gen->data[0].data[0] = dev->vbi_out_wss[0];
+				vbi_gen->data[0].data[1] = dev->vbi_out_wss[1];
+			} else {
+				vbi_gen->data[0].id = 0;
+			}
+		} else {
+			switch (tpg_g_video_aspect(&dev->tpg)) {
+			case TPG_VIDEO_ASPECT_14X9_CENTRE:
+				vbi_gen->data[0].data[0] = 0x01;
+				break;
+			case TPG_VIDEO_ASPECT_16X9_CENTRE:
+				vbi_gen->data[0].data[0] = 0x0b;
+				break;
+			case TPG_VIDEO_ASPECT_16X9_ANAMORPHIC:
+				vbi_gen->data[0].data[0] = 0x07;
+				break;
+			case TPG_VIDEO_ASPECT_4X3:
+			default:
+				vbi_gen->data[0].data[0] = 0x08;
+				break;
+			}
+		}
+	} else if (dev->loop_video && is_60hz) {
+		if (dev->vbi_out_have_cc[0]) {
+			vbi_gen->data[0].data[0] = dev->vbi_out_cc[0][0];
+			vbi_gen->data[0].data[1] = dev->vbi_out_cc[0][1];
+		} else {
+			vbi_gen->data[0].id = 0;
+		}
+		if (dev->vbi_out_have_cc[1]) {
+			vbi_gen->data[1].data[0] = dev->vbi_out_cc[1][0];
+			vbi_gen->data[1].data[1] = dev->vbi_out_cc[1][1];
+		} else {
+			vbi_gen->data[1].id = 0;
+		}
+	}
+}
+
+static void vivid_g_fmt_vbi_cap(struct vivid_dev *dev, struct v4l2_vbi_format *vbi)
+{
+	bool is_60hz = dev->std_cap & V4L2_STD_525_60;
+
+	vbi->sampling_rate = 27000000;
+	vbi->offset = 24;
+	vbi->samples_per_line = 1440;
+	vbi->sample_format = V4L2_PIX_FMT_GREY;
+	vbi->start[0] = is_60hz ? 10 : 6;
+	vbi->start[1] = is_60hz ? 273 : 318;
+	vbi->count[0] = vbi->count[1] = is_60hz ? 12 : 18;
+	vbi->flags = dev->vbi_cap_interlaced ? V4L2_VBI_INTERLACED : 0;
+	vbi->reserved[0] = 0;
+	vbi->reserved[1] = 0;
+}
+
+void vivid_raw_vbi_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	struct v4l2_vbi_format vbi;
+	u8 *vbuf = vb2_plane_vaddr(&buf->vb, 0);
+
+	vivid_g_fmt_vbi_cap(dev, &vbi);
+	buf->vb.v4l2_buf.sequence = dev->vbi_cap_seq_count;
+	if (dev->field_cap == V4L2_FIELD_ALTERNATE)
+		buf->vb.v4l2_buf.sequence /= 2;
+
+	vivid_sliced_vbi_cap_fill(dev, buf->vb.v4l2_buf.sequence);
+
+	memset(vbuf, 0x10, vb2_plane_size(&buf->vb, 0));
+
+	if (!VIVID_INVALID_SIGNAL(dev->std_signal_mode))
+		vivid_vbi_gen_raw(&dev->vbi_gen, &vbi, vbuf);
+
+	v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
+	buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+}
+
+
+void vivid_sliced_vbi_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	struct v4l2_sliced_vbi_data *vbuf = vb2_plane_vaddr(&buf->vb, 0);
+
+	buf->vb.v4l2_buf.sequence = dev->vbi_cap_seq_count;
+	if (dev->field_cap == V4L2_FIELD_ALTERNATE)
+		buf->vb.v4l2_buf.sequence /= 2;
+
+	vivid_sliced_vbi_cap_fill(dev, buf->vb.v4l2_buf.sequence);
+
+	memset(vbuf, 0, vb2_plane_size(&buf->vb, 0));
+	if (!VIVID_INVALID_SIGNAL(dev->std_signal_mode)) {
+		vbuf[0] = dev->vbi_gen.data[0];
+		vbuf[1] = dev->vbi_gen.data[1];
+	}
+
+	v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
+	buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+}
+
+static int vbi_cap_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+		       unsigned *nbuffers, unsigned *nplanes,
+		       unsigned sizes[], void *alloc_ctxs[])
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	bool is_60hz = dev->std_cap & V4L2_STD_525_60;
+	unsigned size = vq->type == V4L2_BUF_TYPE_SLICED_VBI_CAPTURE ?
+		36 * sizeof(struct v4l2_sliced_vbi_data) :
+		1440 * 2 * (is_60hz ? 12 : 18);
+
+	if (!vivid_is_sdtv_cap(dev))
+		return -EINVAL;
+
+	sizes[0] = size;
+
+	if (vq->num_buffers + *nbuffers < 2)
+		*nbuffers = 2 - vq->num_buffers;
+
+	*nplanes = 1;
+	return 0;
+}
+
+static int vbi_cap_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	bool is_60hz = dev->std_cap & V4L2_STD_525_60;
+	unsigned size = vb->vb2_queue->type == V4L2_BUF_TYPE_SLICED_VBI_CAPTURE ?
+		36 * sizeof(struct v4l2_sliced_vbi_data) :
+		1440 * 2 * (is_60hz ? 12 : 18);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->buf_prepare_error) {
+		/*
+		 * Error injection: test what happens if buf_prepare() returns
+		 * an error.
+		 */
+		dev->buf_prepare_error = false;
+		return -EINVAL;
+	}
+	if (vb2_plane_size(vb, 0) < size) {
+		dprintk(dev, 1, "%s data will not fit into plane (%lu < %u)\n",
+				__func__, vb2_plane_size(vb, 0), size);
+		return -EINVAL;
+	}
+	vb2_set_plane_payload(vb, 0, size);
+
+	return 0;
+}
+
+static void vbi_cap_buf_queue(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct vivid_buffer *buf = container_of(vb, struct vivid_buffer, vb);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	spin_lock(&dev->slock);
+	list_add_tail(&buf->list, &dev->vbi_cap_active);
+	spin_unlock(&dev->slock);
+}
+
+static int vbi_cap_start_streaming(struct vb2_queue *vq, unsigned count)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	int err;
+
+	dprintk(dev, 1, "%s\n", __func__);
+	dev->vbi_cap_seq_count = 0;
+	if (dev->start_streaming_error) {
+		dev->start_streaming_error = false;
+		err = -EINVAL;
+	} else {
+		err = vivid_start_generating_vid_cap(dev, &dev->vbi_cap_streaming);
+	}
+	if (err) {
+		struct vivid_buffer *buf, *tmp;
+
+		list_for_each_entry_safe(buf, tmp, &dev->vbi_cap_active, list) {
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+		}
+	}
+	return err;
+}
+
+/* abort streaming and wait for last buffer */
+static void vbi_cap_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	dprintk(dev, 1, "%s\n", __func__);
+	vivid_stop_generating_vid_cap(dev, &dev->vbi_cap_streaming);
+}
+
+const struct vb2_ops vivid_vbi_cap_qops = {
+	.queue_setup		= vbi_cap_queue_setup,
+	.buf_prepare		= vbi_cap_buf_prepare,
+	.buf_queue		= vbi_cap_buf_queue,
+	.start_streaming	= vbi_cap_start_streaming,
+	.stop_streaming		= vbi_cap_stop_streaming,
+	.wait_prepare		= vivid_unlock,
+	.wait_finish		= vivid_lock,
+};
+
+int vidioc_g_fmt_vbi_cap(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_vbi_format *vbi = &f->fmt.vbi;
+
+	if (!vivid_is_sdtv_cap(dev) || !dev->has_raw_vbi_cap)
+		return -EINVAL;
+
+	vivid_g_fmt_vbi_cap(dev, vbi);
+	return 0;
+}
+
+int vidioc_s_fmt_vbi_cap(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	int ret = vidioc_g_fmt_vbi_cap(file, priv, f);
+
+	if (ret)
+		return ret;
+	if (dev->stream_sliced_vbi_cap && vb2_is_busy(&dev->vb_vbi_cap_q))
+		return -EBUSY;
+	dev->stream_sliced_vbi_cap = false;
+	dev->vbi_cap_dev.queue->type = V4L2_BUF_TYPE_VBI_CAPTURE;
+	return 0;
+}
+
+void vivid_fill_service_lines(struct v4l2_sliced_vbi_format *vbi, u32 service_set)
+{
+	vbi->io_size = sizeof(struct v4l2_sliced_vbi_data) * 36;
+	vbi->service_set = service_set;
+	memset(vbi->service_lines, 0, sizeof(vbi->service_lines));
+	memset(vbi->reserved, 0, sizeof(vbi->reserved));
+
+	if (vbi->service_set == 0)
+		return;
+
+	if (vbi->service_set & V4L2_SLICED_CAPTION_525) {
+		vbi->service_lines[0][21] = V4L2_SLICED_CAPTION_525;
+		vbi->service_lines[1][21] = V4L2_SLICED_CAPTION_525;
+	}
+	if (vbi->service_set & V4L2_SLICED_WSS_625)
+		vbi->service_lines[0][23] = V4L2_SLICED_WSS_625;
+}
+
+int vidioc_g_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+
+	if (!vivid_is_sdtv_cap(dev) || !dev->has_sliced_vbi_cap)
+		return -EINVAL;
+
+	vivid_fill_service_lines(vbi, dev->service_set_cap);
+	return 0;
+}
+
+int vidioc_try_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+	bool is_60hz = dev->std_cap & V4L2_STD_525_60;
+	u32 service_set = vbi->service_set;
+
+	if (!vivid_is_sdtv_cap(dev) || !dev->has_sliced_vbi_cap)
+		return -EINVAL;
+
+	service_set &= is_60hz ? V4L2_SLICED_CAPTION_525 : V4L2_SLICED_WSS_625;
+	vivid_fill_service_lines(vbi, service_set);
+	return 0;
+}
+
+int vidioc_s_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+	int ret = vidioc_try_fmt_sliced_vbi_cap(file, fh, fmt);
+
+	if (ret)
+		return ret;
+	if (!dev->stream_sliced_vbi_cap && vb2_is_busy(&dev->vb_vbi_cap_q))
+		return -EBUSY;
+	dev->service_set_cap = vbi->service_set;
+	dev->stream_sliced_vbi_cap = true;
+	dev->vbi_cap_dev.queue->type = V4L2_BUF_TYPE_SLICED_VBI_CAPTURE;
+	return 0;
+}
+
+int vidioc_g_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_sliced_vbi_cap *cap)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct video_device *vdev = video_devdata(file);
+	bool is_60hz;
+
+	if (vdev->vfl_dir == VFL_DIR_RX) {
+		is_60hz = dev->std_cap & V4L2_STD_525_60;
+		if (!vivid_is_sdtv_cap(dev) || !dev->has_sliced_vbi_cap ||
+		    cap->type != V4L2_BUF_TYPE_SLICED_VBI_CAPTURE)
+			return -EINVAL;
+	} else {
+		is_60hz = dev->std_out & V4L2_STD_525_60;
+		if (!vivid_is_svid_out(dev) || !dev->has_sliced_vbi_out ||
+		    cap->type != V4L2_BUF_TYPE_SLICED_VBI_OUTPUT)
+			return -EINVAL;
+	}
+
+	cap->service_set = is_60hz ? V4L2_SLICED_CAPTION_525 : V4L2_SLICED_WSS_625;
+	if (is_60hz) {
+		cap->service_lines[0][21] = V4L2_SLICED_CAPTION_525;
+		cap->service_lines[1][21] = V4L2_SLICED_CAPTION_525;
+	} else {
+		cap->service_lines[0][23] = V4L2_SLICED_WSS_625;
+	}
+	return 0;
+}
diff --git a/drivers/media/platform/vivid/vivid-vbi-cap.h b/drivers/media/platform/vivid/vivid-vbi-cap.h
new file mode 100644
index 0000000..2d8ea0b
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-cap.h
@@ -0,0 +1,40 @@
+/*
+ * vivid-vbi-cap.h - vbi capture support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VBI_CAP_H_
+#define _VIVID_VBI_CAP_H_
+
+void vivid_fill_time_of_day_packet(u8 *packet);
+void vivid_raw_vbi_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf);
+void vivid_sliced_vbi_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf);
+void vivid_sliced_vbi_out_process(struct vivid_dev *dev, struct vivid_buffer *buf);
+int vidioc_g_fmt_vbi_cap(struct file *file, void *priv,
+					struct v4l2_format *f);
+int vidioc_s_fmt_vbi_cap(struct file *file, void *priv,
+					struct v4l2_format *f);
+int vidioc_g_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt);
+int vidioc_try_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt);
+int vidioc_s_fmt_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_format *fmt);
+int vidioc_g_sliced_vbi_cap(struct file *file, void *fh, struct v4l2_sliced_vbi_cap *cap);
+
+void vivid_fill_service_lines(struct v4l2_sliced_vbi_format *vbi, u32 service_set);
+
+extern const struct vb2_ops vivid_vbi_cap_qops;
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-vbi-gen.c b/drivers/media/platform/vivid/vivid-vbi-gen.c
new file mode 100644
index 0000000..22f4bcc
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-gen.c
@@ -0,0 +1,248 @@
+/*
+ * vivid-vbi-gen.c - vbi generator support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/ktime.h>
+#include <linux/videodev2.h>
+
+#include "vivid-vbi-gen.h"
+
+static void wss_insert(u8 *wss, u32 val, unsigned size)
+{
+	while (size--)
+		*wss++ = (val & (1 << size)) ? 0xc0 : 0x10;
+}
+
+static void vivid_vbi_gen_wss_raw(const struct v4l2_sliced_vbi_data *data,
+		u8 *buf, unsigned sampling_rate)
+{
+	const unsigned rate = 5000000;	/* WSS has a 5 MHz transmission rate */
+	u8 wss[29 + 24 + 24 + 24 + 18 + 18] = { 0 };
+	const unsigned zero = 0x07;
+	const unsigned one = 0x38;
+	unsigned bit = 0;
+	u16 wss_data;
+	int i;
+
+	wss_insert(wss + bit, 0x1f1c71c7, 29); bit += 29;
+	wss_insert(wss + bit, 0x1e3c1f, 24); bit += 24;
+
+	wss_data = (data->data[1] << 8) | data->data[0];
+	for (i = 0; i <= 13; i++, bit += 6)
+		wss_insert(wss + bit, (wss_data & (1 << i)) ? one : zero, 6);
+
+	for (i = 0, bit = 0; bit < sizeof(wss); bit++) {
+		unsigned n = ((bit + 1) * sampling_rate) / rate;
+
+		while (i < n)
+			buf[i++] = wss[bit];
+	}
+}
+
+static void cc_insert(u8 *cc, u8 ch)
+{
+	unsigned tot = 0;
+	unsigned i;
+
+	for (i = 0; i < 7; i++) {
+		cc[2 * i] = cc[2 * i + 1] = (ch & (1 << i)) ? 1 : 0;
+		tot += cc[2 * i];
+	}
+	cc[14] = cc[15] = !(tot & 1);
+}
+
+#define CC_PREAMBLE_BITS (14 + 4 + 2)
+
+static void vivid_vbi_gen_cc_raw(const struct v4l2_sliced_vbi_data *data,
+		u8 *buf, unsigned sampling_rate)
+{
+	const unsigned rate = 1000000;	/* CC has a 1 MHz transmission rate */
+
+	u8 cc[CC_PREAMBLE_BITS + 2 * 16] = {
+		/* Clock run-in: 7 cycles */
+		0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1,
+		/* 2 cycles of 0 */
+		0, 0, 0, 0,
+		/* Start bit of 1 (each bit is two cycles) */
+		1, 1
+	};
+	unsigned bit, i;
+
+	cc_insert(cc + CC_PREAMBLE_BITS, data->data[0]);
+	cc_insert(cc + CC_PREAMBLE_BITS + 16, data->data[1]);
+
+	for (i = 0, bit = 0; bit < sizeof(cc); bit++) {
+		unsigned n = ((bit + 1) * sampling_rate) / rate;
+
+		while (i < n)
+			buf[i++] = cc[bit] ? 0xc0 : 0x10;
+	}
+}
+
+void vivid_vbi_gen_raw(const struct vivid_vbi_gen_data *vbi,
+		const struct v4l2_vbi_format *vbi_fmt, u8 *buf)
+{
+	unsigned idx;
+
+	for (idx = 0; idx < 2; idx++) {
+		const struct v4l2_sliced_vbi_data *data = vbi->data + idx;
+		unsigned start_2nd_field;
+		unsigned line = data->line;
+		u8 *linebuf = buf;
+
+		start_2nd_field = (data->id & V4L2_SLICED_VBI_525) ? 263 : 313;
+		if (data->field)
+			line += start_2nd_field;
+		line -= vbi_fmt->start[data->field];
+
+		if (vbi_fmt->flags & V4L2_VBI_INTERLACED)
+			linebuf += (line * 2 + data->field) *
+				vbi_fmt->samples_per_line;
+		else
+			linebuf += (line + data->field * vbi_fmt->count[0]) *
+				vbi_fmt->samples_per_line;
+		if (data->id == V4L2_SLICED_CAPTION_525)
+			vivid_vbi_gen_cc_raw(data, linebuf, vbi_fmt->sampling_rate);
+		else if (data->id == V4L2_SLICED_WSS_625)
+			vivid_vbi_gen_wss_raw(data, linebuf, vbi_fmt->sampling_rate);
+	}
+}
+
+static const u8 vivid_cc_sequence1[30] = {
+	0x14, 0x20,	/* Resume Caption Loading */
+	'H',  'e',
+	'l',  'l',
+	'o',  ' ',
+	'w',  'o',
+	'r',  'l',
+	'd',  '!',
+	0x14, 0x2f,	/* End of Caption */
+};
+
+static const u8 vivid_cc_sequence2[30] = {
+	0x14, 0x20,	/* Resume Caption Loading */
+	'C',  'l',
+	'o',  's',
+	'e',  'd',
+	' ',  'c',
+	'a',  'p',
+	't',  'i',
+	'o',  'n',
+	's',  ' ',
+	't',  'e',
+	's',  't',
+	0x14, 0x2f,	/* End of Caption */
+};
+
+static u8 calc_parity(u8 val)
+{
+	unsigned i;
+	unsigned tot = 0;
+
+	for (i = 0; i < 7; i++)
+		tot += (val & (1 << i)) ? 1 : 0;
+	return val | ((tot & 1) ? 0 : 0x80);
+}
+
+static void vivid_vbi_gen_set_time_of_day(u8 *packet)
+{
+	struct tm tm;
+	u8 checksum, i;
+
+	time_to_tm(get_seconds(), 0, &tm);
+	packet[0] = calc_parity(0x07);
+	packet[1] = calc_parity(0x01);
+	packet[2] = calc_parity(0x40 | tm.tm_min);
+	packet[3] = calc_parity(0x40 | tm.tm_hour);
+	packet[4] = calc_parity(0x40 | tm.tm_mday);
+	if (tm.tm_mday == 1 && tm.tm_mon == 2 &&
+	    sys_tz.tz_minuteswest > tm.tm_min + tm.tm_hour * 60)
+		packet[4] = calc_parity(0x60 | tm.tm_mday);
+	packet[5] = calc_parity(0x40 | (1 + tm.tm_mon));
+	packet[6] = calc_parity(0x40 | (1 + tm.tm_wday));
+	packet[7] = calc_parity(0x40 | ((tm.tm_year - 90) & 0x3f));
+	packet[8] = calc_parity(0x0f);
+	for (checksum = i = 0; i <= 8; i++)
+		checksum += packet[i] & 0x7f;
+	packet[9] = calc_parity(0x100 - checksum);
+	checksum = 0;
+	packet[10] = calc_parity(0x07);
+	packet[11] = calc_parity(0x04);
+	if (sys_tz.tz_minuteswest >= 0)
+		packet[12] = calc_parity(0x40 | ((sys_tz.tz_minuteswest / 60) & 0x1f));
+	else
+		packet[12] = calc_parity(0x40 | ((24 + sys_tz.tz_minuteswest / 60) & 0x1f));
+	packet[13] = calc_parity(0);
+	packet[14] = calc_parity(0x0f);
+	for (checksum = 0, i = 10; i <= 14; i++)
+		checksum += packet[i] & 0x7f;
+	packet[15] = calc_parity(0x100 - checksum);
+}
+
+void vivid_vbi_gen_sliced(struct vivid_vbi_gen_data *vbi,
+		bool is_60hz, unsigned seqnr)
+{
+	struct v4l2_sliced_vbi_data *data0 = vbi->data;
+	struct v4l2_sliced_vbi_data *data1 = vbi->data + 1;
+	unsigned frame = seqnr % 60;
+
+	memset(vbi->data, 0, sizeof(vbi->data));
+
+	if (!is_60hz) {
+		data0->id = V4L2_SLICED_WSS_625;
+		data0->line = 23;
+		/* 4x3 video aspect ratio */
+		data0->data[0] = 0x08;
+		return;
+	}
+
+	data0->id = V4L2_SLICED_CAPTION_525;
+	data0->line = 21;
+	data1->id = V4L2_SLICED_CAPTION_525;
+	data1->field = 1;
+	data1->line = 21;
+
+	if (frame < 15) {
+		data0->data[0] = calc_parity(vivid_cc_sequence1[2 * frame]);
+		data0->data[1] = calc_parity(vivid_cc_sequence1[2 * frame + 1]);
+	} else if (frame >= 30 && frame < 45) {
+		frame -= 30;
+		data0->data[0] = calc_parity(vivid_cc_sequence2[2 * frame]);
+		data0->data[1] = calc_parity(vivid_cc_sequence2[2 * frame + 1]);
+	} else {
+		data0->data[0] = calc_parity(0);
+		data0->data[1] = calc_parity(0);
+	}
+
+	frame = seqnr % (30 * 60);
+	switch (frame) {
+	case 0:
+		vivid_vbi_gen_set_time_of_day(vbi->time_of_day_packet);
+		/* fall through */
+	case 1 ... 7:
+		data1->data[0] = vbi->time_of_day_packet[frame * 2];
+		data1->data[1] = vbi->time_of_day_packet[frame * 2 + 1];
+		break;
+	default:
+		data1->data[0] = calc_parity(0);
+		data1->data[1] = calc_parity(0);
+		break;
+	}
+}
diff --git a/drivers/media/platform/vivid/vivid-vbi-gen.h b/drivers/media/platform/vivid/vivid-vbi-gen.h
new file mode 100644
index 0000000..401dd47
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-gen.h
@@ -0,0 +1,33 @@
+/*
+ * vivid-vbi-gen.h - vbi generator support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VBI_GEN_H_
+#define _VIVID_VBI_GEN_H_
+
+struct vivid_vbi_gen_data {
+	struct v4l2_sliced_vbi_data data[2];
+	u8 time_of_day_packet[16];
+};
+
+void vivid_vbi_gen_sliced(struct vivid_vbi_gen_data *vbi,
+		bool is_60hz, unsigned seqnr);
+void vivid_vbi_gen_raw(const struct vivid_vbi_gen_data *vbi,
+		const struct v4l2_vbi_format *vbi_fmt, u8 *buf);
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-vbi-out.c b/drivers/media/platform/vivid/vivid-vbi-out.c
new file mode 100644
index 0000000..039316d
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-out.c
@@ -0,0 +1,247 @@
+/*
+ * vivid-vbi-out.c - vbi output support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-common.h>
+
+#include "vivid-core.h"
+#include "vivid-kthread-out.h"
+#include "vivid-vbi-out.h"
+#include "vivid-vbi-cap.h"
+
+static int vbi_out_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+		       unsigned *nbuffers, unsigned *nplanes,
+		       unsigned sizes[], void *alloc_ctxs[])
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	bool is_60hz = dev->std_out & V4L2_STD_525_60;
+	unsigned size = vq->type == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT ?
+		36 * sizeof(struct v4l2_sliced_vbi_data) :
+		1440 * 2 * (is_60hz ? 12 : 18);
+
+	if (!vivid_is_svid_out(dev))
+		return -EINVAL;
+
+	sizes[0] = size;
+
+	if (vq->num_buffers + *nbuffers < 2)
+		*nbuffers = 2 - vq->num_buffers;
+
+	*nplanes = 1;
+	return 0;
+}
+
+static int vbi_out_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	bool is_60hz = dev->std_out & V4L2_STD_525_60;
+	unsigned size = vb->vb2_queue->type == V4L2_BUF_TYPE_SLICED_VBI_OUTPUT ?
+		36 * sizeof(struct v4l2_sliced_vbi_data) :
+		1440 * 2 * (is_60hz ? 12 : 18);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->buf_prepare_error) {
+		/*
+		 * Error injection: test what happens if buf_prepare() returns
+		 * an error.
+		 */
+		dev->buf_prepare_error = false;
+		return -EINVAL;
+	}
+	if (vb2_plane_size(vb, 0) < size) {
+		dprintk(dev, 1, "%s data will not fit into plane (%lu < %u)\n",
+				__func__, vb2_plane_size(vb, 0), size);
+		return -EINVAL;
+	}
+	vb2_set_plane_payload(vb, 0, size);
+
+	return 0;
+}
+
+static void vbi_out_buf_queue(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct vivid_buffer *buf = container_of(vb, struct vivid_buffer, vb);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	spin_lock(&dev->slock);
+	list_add_tail(&buf->list, &dev->vbi_out_active);
+	spin_unlock(&dev->slock);
+}
+
+static int vbi_out_start_streaming(struct vb2_queue *vq, unsigned count)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	int err;
+
+	dprintk(dev, 1, "%s\n", __func__);
+	dev->vbi_out_seq_count = 0;
+	if (dev->start_streaming_error) {
+		dev->start_streaming_error = false;
+		err = -EINVAL;
+	} else {
+		err = vivid_start_generating_vid_out(dev, &dev->vbi_out_streaming);
+	}
+	if (err) {
+		struct vivid_buffer *buf, *tmp;
+
+		list_for_each_entry_safe(buf, tmp, &dev->vbi_out_active, list) {
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+		}
+	}
+	return err;
+}
+
+/* abort streaming and wait for last buffer */
+static void vbi_out_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	dprintk(dev, 1, "%s\n", __func__);
+	vivid_stop_generating_vid_out(dev, &dev->vbi_out_streaming);
+	dev->vbi_out_have_wss = false;
+	dev->vbi_out_have_cc[0] = false;
+	dev->vbi_out_have_cc[1] = false;
+}
+
+const struct vb2_ops vivid_vbi_out_qops = {
+	.queue_setup		= vbi_out_queue_setup,
+	.buf_prepare		= vbi_out_buf_prepare,
+	.buf_queue		= vbi_out_buf_queue,
+	.start_streaming	= vbi_out_start_streaming,
+	.stop_streaming		= vbi_out_stop_streaming,
+	.wait_prepare		= vivid_unlock,
+	.wait_finish		= vivid_lock,
+};
+
+int vidioc_g_fmt_vbi_out(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_vbi_format *vbi = &f->fmt.vbi;
+	bool is_60hz = dev->std_out & V4L2_STD_525_60;
+
+	if (!vivid_is_svid_out(dev) || !dev->has_raw_vbi_out)
+		return -EINVAL;
+
+	vbi->sampling_rate = 25000000;
+	vbi->offset = 24;
+	vbi->samples_per_line = 1440;
+	vbi->sample_format = V4L2_PIX_FMT_GREY;
+	vbi->start[0] = is_60hz ? 10 : 6;
+	vbi->start[1] = is_60hz ? 273 : 318;
+	vbi->count[0] = vbi->count[1] = is_60hz ? 12 : 18;
+	vbi->flags = dev->vbi_cap_interlaced ? V4L2_VBI_INTERLACED : 0;
+	vbi->reserved[0] = 0;
+	vbi->reserved[1] = 0;
+	return 0;
+}
+
+int vidioc_s_fmt_vbi_out(struct file *file, void *priv,
+					struct v4l2_format *f)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	int ret = vidioc_g_fmt_vbi_out(file, priv, f);
+
+	if (ret)
+		return ret;
+	if (vb2_is_busy(&dev->vb_vbi_out_q))
+		return -EBUSY;
+	dev->stream_sliced_vbi_out = false;
+	dev->vbi_out_dev.queue->type = V4L2_BUF_TYPE_VBI_OUTPUT;
+	return 0;
+}
+
+int vidioc_g_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+
+	if (!vivid_is_svid_out(dev) || !dev->has_sliced_vbi_out)
+		return -EINVAL;
+
+	vivid_fill_service_lines(vbi, dev->service_set_out);
+	return 0;
+}
+
+int vidioc_try_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+	bool is_60hz = dev->std_out & V4L2_STD_525_60;
+	u32 service_set = vbi->service_set;
+
+	if (!vivid_is_svid_out(dev) || !dev->has_sliced_vbi_out)
+		return -EINVAL;
+
+	service_set &= is_60hz ? V4L2_SLICED_CAPTION_525 : V4L2_SLICED_WSS_625;
+	vivid_fill_service_lines(vbi, service_set);
+	return 0;
+}
+
+int vidioc_s_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_sliced_vbi_format *vbi = &fmt->fmt.sliced;
+	int ret = vidioc_try_fmt_sliced_vbi_out(file, fh, fmt);
+
+	if (ret)
+		return ret;
+	if (vb2_is_busy(&dev->vb_vbi_out_q))
+		return -EBUSY;
+	dev->service_set_out = vbi->service_set;
+	dev->stream_sliced_vbi_out = true;
+	dev->vbi_out_dev.queue->type = V4L2_BUF_TYPE_SLICED_VBI_OUTPUT;
+	return 0;
+}
+
+void vivid_sliced_vbi_out_process(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	struct v4l2_sliced_vbi_data *vbi = vb2_plane_vaddr(&buf->vb, 0);
+	unsigned elems = vb2_get_plane_payload(&buf->vb, 0) / sizeof(*vbi);
+
+	dev->vbi_out_have_cc[0] = false;
+	dev->vbi_out_have_cc[1] = false;
+	dev->vbi_out_have_wss = false;
+	while (elems--) {
+		switch (vbi->id) {
+		case V4L2_SLICED_CAPTION_525:
+			if ((dev->std_out & V4L2_STD_525_60) && vbi->line == 21) {
+				dev->vbi_out_have_cc[!!vbi->field] = true;
+				dev->vbi_out_cc[!!vbi->field][0] = vbi->data[0];
+				dev->vbi_out_cc[!!vbi->field][1] = vbi->data[1];
+			}
+			break;
+		case V4L2_SLICED_WSS_625:
+			if ((dev->std_out & V4L2_STD_625_50) &&
+			    vbi->field == 0 && vbi->line == 23) {
+				dev->vbi_out_have_wss = true;
+				dev->vbi_out_wss[0] = vbi->data[0];
+				dev->vbi_out_wss[1] = vbi->data[1];
+			}
+			break;
+		}
+		vbi++;
+	}
+}
diff --git a/drivers/media/platform/vivid/vivid-vbi-out.h b/drivers/media/platform/vivid/vivid-vbi-out.h
new file mode 100644
index 0000000..6555ba9
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-vbi-out.h
@@ -0,0 +1,34 @@
+/*
+ * vivid-vbi-out.h - vbi output support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_VBI_OUT_H_
+#define _VIVID_VBI_OUT_H_
+
+void vivid_sliced_vbi_out_process(struct vivid_dev *dev, struct vivid_buffer *buf);
+int vidioc_g_fmt_vbi_out(struct file *file, void *priv,
+					struct v4l2_format *f);
+int vidioc_s_fmt_vbi_out(struct file *file, void *priv,
+					struct v4l2_format *f);
+int vidioc_g_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt);
+int vidioc_try_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt);
+int vidioc_s_fmt_sliced_vbi_out(struct file *file, void *fh, struct v4l2_format *fmt);
+
+extern const struct vb2_ops vivid_vbi_out_qops;
+
+#endif
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 07/12] vivid: add the kthread code that controls the video rate
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (5 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 06/12] vivid: add VBI capture and output code Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 08/12] vivid: add a simple framebuffer device for overlay testing Hans Verkuil
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

Add the kthread handlers for video/vbi capture and video/vbi output.
These carefully control the rate at which frames are generated (video
capture) and accepted (video output). While the short-term jitter is
on the order of a jiffy, in the long term the rate matches the
configured framerate exactly.

The capture thread handler also takes care of the video looping and
of capture and overlay support. This is probably the most complex part
of this driver due to the many combinations of crop, compose and scaling
on the input and output, and the blending that has to be done if
overlay support is enabled as well.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-kthread-cap.c | 885 +++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-kthread-cap.h |  26 +
 drivers/media/platform/vivid/vivid-kthread-out.c | 304 ++++++++
 drivers/media/platform/vivid/vivid-kthread-out.h |  26 +
 4 files changed, 1241 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-kthread-cap.c
 create mode 100644 drivers/media/platform/vivid/vivid-kthread-cap.h
 create mode 100644 drivers/media/platform/vivid/vivid-kthread-out.c
 create mode 100644 drivers/media/platform/vivid/vivid-kthread-out.h
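
The pacing idea described above can be mimicked in userspace to see why
the long-term rate stays exact even though each individual sleep is
quantized: every deadline is derived from the stream start time and the
frame index rather than from the previous wakeup, so wakeup jitter never
accumulates. This is a sketch of the idea only, not the kthread code;
the 30000/1001 frame rate and the use of clock_nanosleep() are
illustrative assumptions.

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	/* timeperframe of 1001/30000 s, i.e. ~29.97 fps (assumption) */
	uint64_t tpf_num = 1001, tpf_den = 30000;
	struct timespec start, next;
	unsigned frame;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (frame = 1; frame <= 300; frame++) {
		/* absolute deadline of frame N: start + N * num / den */
		uint64_t off = (frame * tpf_num * NSEC_PER_SEC) / tpf_den;

		next.tv_sec = start.tv_sec + off / NSEC_PER_SEC;
		next.tv_nsec = start.tv_nsec + off % NSEC_PER_SEC;
		if (next.tv_nsec >= NSEC_PER_SEC) {
			next.tv_sec++;
			next.tv_nsec -= NSEC_PER_SEC;
		}
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
		/* ...generate or accept one frame here... */
	}
	printf("paced %u frames\n", frame - 1);
	return 0;
}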

diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.c b/drivers/media/platform/vivid/vivid-kthread-cap.c
new file mode 100644
index 0000000..33ab1df
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-kthread-cap.c
@@ -0,0 +1,885 @@
+/*
+ * vivid-kthread-cap.c - video/vbi capture thread support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/font.h>
+#include <linux/mutex.h>
+#include <linux/videodev2.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+#include <linux/random.h>
+#include <linux/v4l2-dv-timings.h>
+#include <asm/div64.h>
+#include <media/videobuf2-vmalloc.h>
+#include <media/v4l2-dv-timings.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-event.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+#include "vivid-vid-cap.h"
+#include "vivid-vid-out.h"
+#include "vivid-radio-common.h"
+#include "vivid-radio-rx.h"
+#include "vivid-radio-tx.h"
+#include "vivid-sdr-cap.h"
+#include "vivid-vbi-cap.h"
+#include "vivid-vbi-out.h"
+#include "vivid-osd.h"
+#include "vivid-ctrls.h"
+
+static inline v4l2_std_id vivid_get_std_cap(const struct vivid_dev *dev)
+{
+	if (vivid_is_sdtv_cap(dev))
+		return dev->std_cap;
+	return 0;
+}
+
+static void copy_pix(struct vivid_dev *dev, int win_y, int win_x,
+			u16 *cap, const u16 *osd)
+{
+	u16 out;
+	int left = dev->overlay_out_left;
+	int top = dev->overlay_out_top;
+	int fb_x = win_x + left;
+	int fb_y = win_y + top;
+	int i;
+
+	out = *cap;
+	*cap = *osd;
+	if (dev->bitmap_out) {
+		const u8 *p = dev->bitmap_out;
+		unsigned stride = (dev->compose_out.width + 7) / 8;
+
+		win_x -= dev->compose_out.left;
+		win_y -= dev->compose_out.top;
+		if (!(p[stride * win_y + win_x / 8] & (1 << (win_x & 7))))
+			return;
+	}
+
+	for (i = 0; i < dev->clipcount_out; i++) {
+		struct v4l2_rect *r = &dev->clips_out[i].c;
+
+		if (fb_y >= r->top && fb_y < r->top + r->height &&
+		    fb_x >= r->left && fb_x < r->left + r->width)
+			return;
+	}
+	if ((dev->fbuf_out_flags & V4L2_FBUF_FLAG_CHROMAKEY) &&
+	    *osd != dev->chromakey_out)
+		return;
+	if ((dev->fbuf_out_flags & V4L2_FBUF_FLAG_SRC_CHROMAKEY) &&
+	    out == dev->chromakey_out)
+		return;
+	if (dev->fmt_cap->alpha_mask) {
+		if ((dev->fbuf_out_flags & V4L2_FBUF_FLAG_GLOBAL_ALPHA) &&
+		    dev->global_alpha_out)
+			return;
+		if ((dev->fbuf_out_flags & V4L2_FBUF_FLAG_LOCAL_ALPHA) &&
+		    *cap & dev->fmt_cap->alpha_mask)
+			return;
+		if ((dev->fbuf_out_flags & V4L2_FBUF_FLAG_LOCAL_INV_ALPHA) &&
+		    !(*cap & dev->fmt_cap->alpha_mask))
+			return;
+	}
+	*cap = out;
+}
+
+static void blend_line(struct vivid_dev *dev, unsigned y_offset, unsigned x_offset,
+		u8 *vcapbuf, const u8 *vosdbuf,
+		unsigned width, unsigned pixsize)
+{
+	unsigned x;
+
+	for (x = 0; x < width; x++, vcapbuf += pixsize, vosdbuf += pixsize) {
+		copy_pix(dev, y_offset, x_offset + x,
+			 (u16 *)vcapbuf, (const u16 *)vosdbuf);
+	}
+}
+
+static void scale_line(const u8 *src, u8 *dst, unsigned srcw, unsigned dstw, unsigned twopixsize)
+{
+	/* Coarse scaling with Bresenham */
+	unsigned int_part;
+	unsigned fract_part;
+	unsigned src_x = 0;
+	unsigned error = 0;
+	unsigned x;
+
+	/*
+	 * We always combine two pixels to prevent color bleed in the packed
+	 * yuv case.
+	 */
+	srcw /= 2;
+	dstw /= 2;
+	int_part = srcw / dstw;
+	fract_part = srcw % dstw;
+	for (x = 0; x < dstw; x++, dst += twopixsize) {
+		memcpy(dst, src + src_x * twopixsize, twopixsize);
+		src_x += int_part;
+		error += fract_part;
+		if (error >= dstw) {
+			error -= dstw;
+			src_x++;
+		}
+	}
+}
+
+/*
+ * Precalculate the rectangles needed to perform video looping:
+ *
+ * The nominal pipeline is that the video output buffer is cropped by
+ * crop_out, scaled to compose_out, overlaid with the output overlay,
+ * cropped on the capture side by crop_cap and scaled again to the video
+ * capture buffer using compose_cap.
+ *
+ * To keep things efficient we calculate the intersection of compose_out
+ * and crop_cap (since that's the only part of the video that will
+ * actually end up in the capture buffer), determine which part of the
+ * video output buffer that is and which part of the video capture buffer
+ * so we can scale the video straight from the output buffer to the capture
+ * buffer without any intermediate steps.
+ *
+ * If we need to deal with an output overlay, then there is no choice and
+ * that intermediate step still has to be taken. For the output overlay
+ * support we calculate the intersection of the framebuffer and the overlay
+ * window (which may be partially or wholly outside of the framebuffer
+ * itself) and the intersection of that with loop_vid_copy (i.e. the part of
+ * the actual looped video that will be overlaid). The result is calculated
+ * both in framebuffer coordinates (loop_fb_copy) and compose_out coordinates
+ * (loop_vid_overlay). Finally calculate the part of the capture buffer that
+ * will receive that overlaid video.
+ */
+static void vivid_precalc_copy_rects(struct vivid_dev *dev)
+{
+	/* Framebuffer rectangle */
+	struct v4l2_rect r_fb = {
+		0, 0, dev->display_width, dev->display_height
+	};
+	/* Overlay window rectangle in framebuffer coordinates */
+	struct v4l2_rect r_overlay = {
+		dev->overlay_out_left, dev->overlay_out_top,
+		dev->compose_out.width, dev->compose_out.height
+	};
+
+	dev->loop_vid_copy = rect_intersect(&dev->crop_cap, &dev->compose_out);
+
+	dev->loop_vid_out = dev->loop_vid_copy;
+	rect_scale(&dev->loop_vid_out, &dev->compose_out, &dev->crop_out);
+	dev->loop_vid_out.left += dev->crop_out.left;
+	dev->loop_vid_out.top += dev->crop_out.top;
+
+	dev->loop_vid_cap = dev->loop_vid_copy;
+	rect_scale(&dev->loop_vid_cap, &dev->crop_cap, &dev->compose_cap);
+
+	dprintk(dev, 1,
+		"loop_vid_copy: %dx%d@%dx%d loop_vid_out: %dx%d@%dx%d loop_vid_cap: %dx%d@%dx%d\n",
+		dev->loop_vid_copy.width, dev->loop_vid_copy.height,
+		dev->loop_vid_copy.left, dev->loop_vid_copy.top,
+		dev->loop_vid_out.width, dev->loop_vid_out.height,
+		dev->loop_vid_out.left, dev->loop_vid_out.top,
+		dev->loop_vid_cap.width, dev->loop_vid_cap.height,
+		dev->loop_vid_cap.left, dev->loop_vid_cap.top);
+
+	r_overlay = rect_intersect(&r_fb, &r_overlay);
+
+	/* shift r_overlay to the same origin as compose_out */
+	r_overlay.left += dev->compose_out.left - dev->overlay_out_left;
+	r_overlay.top += dev->compose_out.top - dev->overlay_out_top;
+
+	dev->loop_vid_overlay = rect_intersect(&r_overlay, &dev->loop_vid_copy);
+	dev->loop_fb_copy = dev->loop_vid_overlay;
+
+	/* shift dev->loop_fb_copy back again to the fb origin */
+	dev->loop_fb_copy.left -= dev->compose_out.left - dev->overlay_out_left;
+	dev->loop_fb_copy.top -= dev->compose_out.top - dev->overlay_out_top;
+
+	dev->loop_vid_overlay_cap = dev->loop_vid_overlay;
+	rect_scale(&dev->loop_vid_overlay_cap, &dev->crop_cap, &dev->compose_cap);
+
+	dprintk(dev, 1,
+		"loop_fb_copy: %dx%d@%dx%d loop_vid_overlay: %dx%d@%dx%d loop_vid_overlay_cap: %dx%d@%dx%d\n",
+		dev->loop_fb_copy.width, dev->loop_fb_copy.height,
+		dev->loop_fb_copy.left, dev->loop_fb_copy.top,
+		dev->loop_vid_overlay.width, dev->loop_vid_overlay.height,
+		dev->loop_vid_overlay.left, dev->loop_vid_overlay.top,
+		dev->loop_vid_overlay_cap.width, dev->loop_vid_overlay_cap.height,
+		dev->loop_vid_overlay_cap.left, dev->loop_vid_overlay_cap.top);
+}
+
+static int vivid_copy_buffer(struct vivid_dev *dev, unsigned p, u8 *vcapbuf,
+		struct vivid_buffer *vid_cap_buf)
+{
+	bool blank = dev->must_blank[vid_cap_buf->vb.v4l2_buf.index];
+	struct tpg_data *tpg = &dev->tpg;
+	struct vivid_buffer *vid_out_buf = NULL;
+	unsigned pixsize = tpg_g_twopixelsize(tpg, p) / 2;
+	unsigned img_width = dev->compose_cap.width;
+	unsigned img_height = dev->compose_cap.height;
+	unsigned stride_cap = tpg->bytesperline[p];
+	unsigned stride_out = dev->bytesperline_out[p];
+	unsigned stride_osd = dev->display_byte_stride;
+	unsigned hmax = (img_height * tpg->perc_fill) / 100;
+	u8 *voutbuf;
+	u8 *vosdbuf = NULL;
+	unsigned y;
+	bool blend = dev->bitmap_out || dev->clipcount_out || dev->fbuf_out_flags;
+	/* Coarse scaling with Bresenham */
+	unsigned vid_out_int_part;
+	unsigned vid_out_fract_part;
+	unsigned vid_out_y = 0;
+	unsigned vid_out_error = 0;
+	unsigned vid_overlay_int_part = 0;
+	unsigned vid_overlay_fract_part = 0;
+	unsigned vid_overlay_y = 0;
+	unsigned vid_overlay_error = 0;
+	unsigned vid_cap_right;
+	bool quick;
+
+	vid_out_int_part = dev->loop_vid_out.height / dev->loop_vid_cap.height;
+	vid_out_fract_part = dev->loop_vid_out.height % dev->loop_vid_cap.height;
+
+	if (!list_empty(&dev->vid_out_active))
+		vid_out_buf = list_entry(dev->vid_out_active.next,
+					 struct vivid_buffer, list);
+	if (vid_out_buf == NULL)
+		return -ENODATA;
+
+	vid_cap_buf->vb.v4l2_buf.field = vid_out_buf->vb.v4l2_buf.field;
+
+	voutbuf = vb2_plane_vaddr(&vid_out_buf->vb, p) +
+				  vid_out_buf->vb.v4l2_planes[p].data_offset;
+	voutbuf += dev->loop_vid_out.left * pixsize + dev->loop_vid_out.top * stride_out;
+	vcapbuf += dev->compose_cap.left * pixsize + dev->compose_cap.top * stride_cap;
+
+	if (dev->loop_vid_copy.width == 0 || dev->loop_vid_copy.height == 0) {
+		/*
+		 * If there is nothing to copy, then just fill the capture window
+		 * with black.
+		 */
+		for (y = 0; y < hmax; y++, vcapbuf += stride_cap)
+			memcpy(vcapbuf, tpg->black_line[p], img_width * pixsize);
+		return 0;
+	}
+
+	if (dev->overlay_out_enabled &&
+	    dev->loop_vid_overlay.width && dev->loop_vid_overlay.height) {
+		vosdbuf = dev->video_vbase;
+		vosdbuf += dev->loop_fb_copy.left * pixsize +
+			   dev->loop_fb_copy.top * stride_osd;
+		vid_overlay_int_part = dev->loop_vid_overlay.height /
+				       dev->loop_vid_overlay_cap.height;
+		vid_overlay_fract_part = dev->loop_vid_overlay.height %
+					 dev->loop_vid_overlay_cap.height;
+	}
+
+	vid_cap_right = dev->loop_vid_cap.left + dev->loop_vid_cap.width;
+	/* quick is true if no video scaling is needed */
+	quick = dev->loop_vid_out.width == dev->loop_vid_cap.width;
+
+	dev->cur_scaled_line = dev->loop_vid_out.height;
+	for (y = 0; y < hmax; y++, vcapbuf += stride_cap) {
+		/* osdline is true if this line requires overlay blending */
+		bool osdline = vosdbuf && y >= dev->loop_vid_overlay_cap.top &&
+			  y < dev->loop_vid_overlay_cap.top + dev->loop_vid_overlay_cap.height;
+
+		/*
+		 * If this line of the capture buffer doesn't get any video, then
+		 * just fill with black.
+		 */
+		if (y < dev->loop_vid_cap.top ||
+		    y >= dev->loop_vid_cap.top + dev->loop_vid_cap.height) {
+			memcpy(vcapbuf, tpg->black_line[p], img_width * pixsize);
+			continue;
+		}
+
+		/* fill the left border with black */
+		if (dev->loop_vid_cap.left)
+			memcpy(vcapbuf, tpg->black_line[p], dev->loop_vid_cap.left * pixsize);
+
+		/* fill the right border with black */
+		if (vid_cap_right < img_width)
+			memcpy(vcapbuf + vid_cap_right * pixsize,
+				tpg->black_line[p], (img_width - vid_cap_right) * pixsize);
+
+		if (quick && !osdline) {
+			memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
+			       voutbuf + vid_out_y * stride_out,
+			       dev->loop_vid_cap.width * pixsize);
+			goto update_vid_out_y;
+		}
+		if (dev->cur_scaled_line == vid_out_y) {
+			memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
+			       dev->scaled_line,
+			       dev->loop_vid_cap.width * pixsize);
+			goto update_vid_out_y;
+		}
+		if (!osdline) {
+			scale_line(voutbuf + vid_out_y * stride_out, dev->scaled_line,
+				dev->loop_vid_out.width, dev->loop_vid_cap.width,
+				tpg_g_twopixelsize(tpg, p));
+		} else {
+			/*
+			 * Offset in bytes within loop_vid_copy to the start of the
+			 * loop_vid_overlay rectangle.
+			 */
+			unsigned offset =
+				(dev->loop_vid_overlay.left - dev->loop_vid_copy.left) * pixsize;
+			u8 *osd = vosdbuf + vid_overlay_y * stride_osd;
+
+			scale_line(voutbuf + vid_out_y * stride_out, dev->blended_line,
+				dev->loop_vid_out.width, dev->loop_vid_copy.width,
+				tpg_g_twopixelsize(tpg, p));
+			if (blend)
+				blend_line(dev, vid_overlay_y + dev->loop_vid_overlay.top,
+					   dev->loop_vid_overlay.left,
+					   dev->blended_line + offset, osd,
+					   dev->loop_vid_overlay.width, pixsize);
+			else
+				memcpy(dev->blended_line + offset,
+				       osd, dev->loop_vid_overlay.width * pixsize);
+			scale_line(dev->blended_line, dev->scaled_line,
+					dev->loop_vid_copy.width, dev->loop_vid_cap.width,
+					tpg_g_twopixelsize(tpg, p));
+		}
+		dev->cur_scaled_line = vid_out_y;
+		memcpy(vcapbuf + dev->loop_vid_cap.left * pixsize,
+		       dev->scaled_line,
+		       dev->loop_vid_cap.width * pixsize);
+
+update_vid_out_y:
+		if (osdline) {
+			vid_overlay_y += vid_overlay_int_part;
+			vid_overlay_error += vid_overlay_fract_part;
+			if (vid_overlay_error >= dev->loop_vid_overlay_cap.height) {
+				vid_overlay_error -= dev->loop_vid_overlay_cap.height;
+				vid_overlay_y++;
+			}
+		}
+		vid_out_y += vid_out_int_part;
+		vid_out_error += vid_out_fract_part;
+		if (vid_out_error >= dev->loop_vid_cap.height) {
+			vid_out_error -= dev->loop_vid_cap.height;
+			vid_out_y++;
+		}
+	}
+
+	if (!blank)
+		return 0;
+	for (; y < img_height; y++, vcapbuf += stride_cap)
+		memcpy(vcapbuf, tpg->contrast_line[p], img_width * pixsize);
+	return 0;
+}
+
+static void vivid_fillbuff(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	unsigned factor = V4L2_FIELD_HAS_T_OR_B(dev->field_cap) ? 2 : 1;
+	unsigned line_height = 16 / factor;
+	bool is_tv = vivid_is_sdtv_cap(dev);
+	bool is_60hz = is_tv && (dev->std_cap & V4L2_STD_525_60);
+	unsigned p;
+	int line = 1;
+	u8 *basep[TPG_MAX_PLANES][2];
+	unsigned ms;
+	char str[100];
+	s32 gain;
+	bool is_loop = false;
+
+	if (dev->loop_video && dev->can_loop_video &&
+	    ((vivid_is_svid_cap(dev) && !VIVID_INVALID_SIGNAL(dev->std_signal_mode)) ||
+	     (vivid_is_hdmi_cap(dev) && !VIVID_INVALID_SIGNAL(dev->dv_timings_signal_mode))))
+		is_loop = true;
+
+	buf->vb.v4l2_buf.sequence = dev->vid_cap_seq_count;
+	/*
+	 * Take the timestamp now if the timestamp source is set to
+	 * "Start of Exposure".
+	 */
+	if (dev->tstamp_src_is_soe)
+		v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
+	if (dev->field_cap == V4L2_FIELD_ALTERNATE) {
+		/*
+		 * 60 Hz standards start with the bottom field, 50 Hz standards
+		 * with the top field. So if the 0-based seq_count is even,
+		 * then the field is TOP for 50 Hz and BOTTOM for 60 Hz
+		 * standards.
+		 */
+		buf->vb.v4l2_buf.field = ((dev->vid_cap_seq_count & 1) ^ is_60hz) ?
+			V4L2_FIELD_TOP : V4L2_FIELD_BOTTOM;
+		/*
+		 * The sequence counter counts frames, not fields. So divide
+		 * by two.
+		 */
+		buf->vb.v4l2_buf.sequence /= 2;
+	} else {
+		buf->vb.v4l2_buf.field = dev->field_cap;
+	}
+	tpg_s_field(&dev->tpg, buf->vb.v4l2_buf.field);
+	tpg_s_perc_fill_blank(&dev->tpg, dev->must_blank[buf->vb.v4l2_buf.index]);
+
+	vivid_precalc_copy_rects(dev);
+
+	for (p = 0; p < tpg_g_planes(&dev->tpg); p++) {
+		void *vbuf = vb2_plane_vaddr(&buf->vb, p);
+
+		/*
+		 * The first plane of a multiplanar format has a non-zero
+		 * data_offset. This helps testing whether the application
+		 * correctly supports non-zero data offsets.
+		 */
+		if (dev->fmt_cap->data_offset[p]) {
+			memset(vbuf, dev->fmt_cap->data_offset[p] & 0xff,
+			       dev->fmt_cap->data_offset[p]);
+			vbuf += dev->fmt_cap->data_offset[p];
+		}
+		tpg_calc_text_basep(&dev->tpg, basep, p, vbuf);
+		if (!is_loop || vivid_copy_buffer(dev, p, vbuf, buf))
+			tpg_fillbuffer(&dev->tpg, vivid_get_std_cap(dev), p, vbuf);
+	}
+	dev->must_blank[buf->vb.v4l2_buf.index] = false;
+
+	/* Updates stream time, only update at the start of a new frame. */
+	if (dev->field_cap != V4L2_FIELD_ALTERNATE || (buf->vb.v4l2_buf.sequence & 1) == 0)
+		dev->ms_vid_cap = jiffies_to_msecs(jiffies - dev->jiffies_vid_cap);
+
+	ms = dev->ms_vid_cap;
+	if (dev->osd_mode <= 1) {
+		snprintf(str, sizeof(str), " %02d:%02d:%02d:%03d %u%s",
+				(ms / (60 * 60 * 1000)) % 24,
+				(ms / (60 * 1000)) % 60,
+				(ms / 1000) % 60,
+				ms % 1000,
+				buf->vb.v4l2_buf.sequence,
+				(dev->field_cap == V4L2_FIELD_ALTERNATE) ?
+					(buf->vb.v4l2_buf.field == V4L2_FIELD_TOP ?
+					 " top" : " bottom") : "");
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+	}
+	if (dev->osd_mode == 0) {
+		snprintf(str, sizeof(str), " %dx%d, input %d ",
+				dev->src_rect.width, dev->src_rect.height, dev->input);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+
+		gain = v4l2_ctrl_g_ctrl(dev->gain);
+		mutex_lock(dev->ctrl_hdl_user_vid.lock);
+		snprintf(str, sizeof(str),
+			" brightness %3d, contrast %3d, saturation %3d, hue %d ",
+			dev->brightness->cur.val,
+			dev->contrast->cur.val,
+			dev->saturation->cur.val,
+			dev->hue->cur.val);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		snprintf(str, sizeof(str),
+			" autogain %d, gain %3d, alpha 0x%02x ",
+			dev->autogain->cur.val, gain, dev->alpha->cur.val);
+		mutex_unlock(dev->ctrl_hdl_user_vid.lock);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		mutex_lock(dev->ctrl_hdl_user_aud.lock);
+		snprintf(str, sizeof(str),
+			" volume %3d, mute %d ",
+			dev->volume->cur.val, dev->mute->cur.val);
+		mutex_unlock(dev->ctrl_hdl_user_aud.lock);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		mutex_lock(dev->ctrl_hdl_user_gen.lock);
+		snprintf(str, sizeof(str), " int32 %d, int64 %lld, bitmask %08x ",
+			dev->int32->cur.val,
+			*dev->int64->p_cur.p_s64,
+			dev->bitmask->cur.val);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		snprintf(str, sizeof(str), " boolean %d, menu %s, string \"%s\" ",
+			dev->boolean->cur.val,
+			dev->menu->qmenu[dev->menu->cur.val],
+			dev->string->p_cur.p_char);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		snprintf(str, sizeof(str), " integer_menu %lld, value %d ",
+			dev->int_menu->qmenu_int[dev->int_menu->cur.val],
+			dev->int_menu->cur.val);
+		mutex_unlock(dev->ctrl_hdl_user_gen.lock);
+		tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		if (dev->button_pressed) {
+			dev->button_pressed--;
+			snprintf(str, sizeof(str), " button pressed!");
+			tpg_gen_text(&dev->tpg, basep, line++ * line_height, 16, str);
+		}
+	}
+
+	/*
+	 * If "End of Frame" is specified as the timestamp source, then take
+	 * the timestamp now.
+	 */
+	if (!dev->tstamp_src_is_soe)
+		v4l2_get_timestamp(&buf->vb.v4l2_buf.timestamp);
+	buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+}
+
+/*
+ * Return true if this pixel coordinate is a valid video pixel.
+ */
+static bool valid_pix(struct vivid_dev *dev, int win_y, int win_x, int fb_y, int fb_x)
+{
+	int i;
+
+	if (dev->bitmap_cap) {
+		/*
+		 * Only if the corresponding bit in the bitmap is set can
+		 * the video pixel be shown. Coordinates are relative to
+		 * the overlay window set by VIDIOC_S_FMT.
+		 */
+		const u8 *p = dev->bitmap_cap;
+		unsigned stride = (dev->compose_cap.width + 7) / 8;
+
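+		/*
+		 * Example: a 720 pixel wide overlay window uses a bitmap
+		 * stride of (720 + 7) / 8 = 90 bytes per line.
+		 */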
+		if (!(p[stride * win_y + win_x / 8] & (1 << (win_x & 7))))
+			return false;
+	}
+
+	for (i = 0; i < dev->clipcount_cap; i++) {
+		/*
+		 * Only if the framebuffer coordinate is not in any of the
+		 * clip rectangles will the video pixel be shown.
+		 */
+		struct v4l2_rect *r = &dev->clips_cap[i].c;
+
+		if (fb_y >= r->top && fb_y < r->top + r->height &&
+		    fb_x >= r->left && fb_x < r->left + r->width)
+			return false;
+	}
+	return true;
+}
+
+/*
+ * Draw the image into the overlay buffer.
+ * Note that the combination of overlay and multiplanar is not supported.
+ */
+static void vivid_overlay(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	struct tpg_data *tpg = &dev->tpg;
+	unsigned pixsize = tpg_g_twopixelsize(tpg, 0) / 2;
+	void *vbase = dev->fb_vbase_cap;
+	void *vbuf = vb2_plane_vaddr(&buf->vb, 0);
+	unsigned img_width = dev->compose_cap.width;
+	unsigned img_height = dev->compose_cap.height;
+	unsigned stride = tpg->bytesperline[0];
+	/* if quick is true, then valid_pix() doesn't have to be called */
+	bool quick = dev->bitmap_cap == NULL && dev->clipcount_cap == 0;
+	int x, y, w, out_x = 0;
+
+	if ((dev->overlay_cap_field == V4L2_FIELD_TOP ||
+	     dev->overlay_cap_field == V4L2_FIELD_BOTTOM) &&
+	    dev->overlay_cap_field != buf->vb.v4l2_buf.field)
+		return;
+
+	vbuf += dev->compose_cap.left * pixsize + dev->compose_cap.top * stride;
+	x = dev->overlay_cap_left;
+	w = img_width;
+	if (x < 0) {
+		out_x = -x;
+		w = w - out_x;
+		x = 0;
+	} else {
+		w = dev->fb_cap.fmt.width - x;
+		if (w > img_width)
+			w = img_width;
+	}
+	if (w <= 0)
+		return;
+	if (dev->overlay_cap_top >= 0)
+		vbase += dev->overlay_cap_top * dev->fb_cap.fmt.bytesperline;
+	for (y = dev->overlay_cap_top;
+	     y < dev->overlay_cap_top + (int)img_height;
+	     y++, vbuf += stride) {
+		int px;
+
+		if (y < 0 || y >= dev->fb_cap.fmt.height)
+			continue;
+		if (quick) {
+			memcpy(vbase + x * pixsize,
+			       vbuf + out_x * pixsize, w * pixsize);
+			vbase += dev->fb_cap.fmt.bytesperline;
+			continue;
+		}
+		for (px = 0; px < w; px++) {
+			if (!valid_pix(dev, y - dev->overlay_cap_top,
+				       px + out_x, y, px + x))
+				continue;
+			memcpy(vbase + (px + x) * pixsize,
+			       vbuf + (px + out_x) * pixsize,
+			       pixsize);
+		}
+		vbase += dev->fb_cap.fmt.bytesperline;
+	}
+}
+
+static void vivid_thread_vid_cap_tick(struct vivid_dev *dev, int dropped_bufs)
+{
+	struct vivid_buffer *vid_cap_buf = NULL;
+	struct vivid_buffer *vbi_cap_buf = NULL;
+
+	dprintk(dev, 1, "Video Capture Thread Tick\n");
+
+	while (dropped_bufs-- > 1)
+		tpg_update_mv_count(&dev->tpg,
+				dev->field_cap == V4L2_FIELD_NONE ||
+				dev->field_cap == V4L2_FIELD_ALTERNATE);
+
+	/* Drop a certain percentage of buffers. */
+	if (dev->perc_dropped_buffers &&
+	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+		goto update_mv;
+
+	spin_lock(&dev->slock);
+	if (!list_empty(&dev->vid_cap_active)) {
+		vid_cap_buf = list_entry(dev->vid_cap_active.next, struct vivid_buffer, list);
+		list_del(&vid_cap_buf->list);
+	}
+	if (!list_empty(&dev->vbi_cap_active)) {
+		if (dev->field_cap != V4L2_FIELD_ALTERNATE ||
+		    (dev->vbi_cap_seq_count & 1)) {
+			vbi_cap_buf = list_entry(dev->vbi_cap_active.next,
+						 struct vivid_buffer, list);
+			list_del(&vbi_cap_buf->list);
+		}
+	}
+	spin_unlock(&dev->slock);
+
+	if (!vid_cap_buf && !vbi_cap_buf)
+		goto update_mv;
+
+	if (vid_cap_buf) {
+		/* Fill buffer */
+		vivid_fillbuff(dev, vid_cap_buf);
+		dprintk(dev, 1, "filled buffer %d\n",
+			vid_cap_buf->vb.v4l2_buf.index);
+
+		/* Handle overlay */
+		if (dev->overlay_cap_owner && dev->fb_cap.base &&
+				dev->fb_cap.fmt.pixelformat == dev->fmt_cap->fourcc)
+			vivid_overlay(dev, vid_cap_buf);
+
+		vb2_buffer_done(&vid_cap_buf->vb, dev->dqbuf_error ?
+				VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
+		dprintk(dev, 2, "vid_cap buffer %d done\n",
+				vid_cap_buf->vb.v4l2_buf.index);
+	}
+
+	if (vbi_cap_buf) {
+		if (dev->stream_sliced_vbi_cap)
+			vivid_sliced_vbi_cap_process(dev, vbi_cap_buf);
+		else
+			vivid_raw_vbi_cap_process(dev, vbi_cap_buf);
+		vb2_buffer_done(&vbi_cap_buf->vb, dev->dqbuf_error ?
+				VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
+		dprintk(dev, 2, "vbi_cap %d done\n",
+				vbi_cap_buf->vb.v4l2_buf.index);
+	}
+	dev->dqbuf_error = false;
+
+update_mv:
+	/* Update the test pattern movement counters */
+	tpg_update_mv_count(&dev->tpg, dev->field_cap == V4L2_FIELD_NONE ||
+				       dev->field_cap == V4L2_FIELD_ALTERNATE);
+}
+
+static int vivid_thread_vid_cap(void *data)
+{
+	struct vivid_dev *dev = data;
+	u64 numerators_since_start;
+	u64 buffers_since_start;
+	u64 next_jiffies_since_start;
+	unsigned long jiffies_since_start;
+	unsigned long cur_jiffies;
+	unsigned wait_jiffies;
+	unsigned numerator;
+	unsigned denominator;
+	int dropped_bufs;
+
+	dprintk(dev, 1, "Video Capture Thread Start\n");
+
+	set_freezable();
+
+	/* Resets frame counters */
+	dev->cap_seq_offset = 0;
+	dev->cap_seq_count = 0;
+	dev->cap_seq_resync = false;
+	dev->jiffies_vid_cap = jiffies;
+
+	for (;;) {
+		try_to_freeze();
+		if (kthread_should_stop())
+			break;
+
+		mutex_lock(&dev->mutex);
+		cur_jiffies = jiffies;
+		if (dev->cap_seq_resync) {
+			dev->jiffies_vid_cap = cur_jiffies;
+			dev->cap_seq_offset = dev->cap_seq_count + 1;
+			dev->cap_seq_count = 0;
+			dev->cap_seq_resync = false;
+		}
+		numerator = dev->timeperframe_vid_cap.numerator;
+		denominator = dev->timeperframe_vid_cap.denominator;
+
+		if (dev->field_cap == V4L2_FIELD_ALTERNATE)
+			denominator *= 2;
+
+		/* Calculate the number of jiffies since we started streaming */
+		jiffies_since_start = cur_jiffies - dev->jiffies_vid_cap;
+		/* Get the number of buffers streamed since the start */
+		buffers_since_start = (u64)jiffies_since_start * denominator +
+				      (HZ * numerator) / 2;
+		do_div(buffers_since_start, HZ * numerator);
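+		/*
+		 * Example with made-up numbers: at HZ == 1000 and a 1/30 s
+		 * timeperframe, 500 elapsed jiffies give
+		 * (500 * 30 + 500) / 1000 = 15 buffers (rounded to nearest).
+		 */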
+
+		/*
+		 * After more than 0xf0000000 jiffies (rounded down to a
+		 * multiple of 'jiffies-per-day' to ease the jiffies_to_msecs
+		 * calculation) have passed since we started streaming, reset
+		 * the counters and keep track of the sequence offset.
+		 */
+		if (jiffies_since_start > JIFFIES_RESYNC) {
+			dev->jiffies_vid_cap = cur_jiffies;
+			dev->cap_seq_offset = buffers_since_start;
+			buffers_since_start = 0;
+		}
+		dropped_bufs = buffers_since_start + dev->cap_seq_offset - dev->cap_seq_count;
+		dev->cap_seq_count = buffers_since_start + dev->cap_seq_offset;
+		dev->vid_cap_seq_count = dev->cap_seq_count - dev->vid_cap_seq_start;
+		dev->vbi_cap_seq_count = dev->cap_seq_count - dev->vbi_cap_seq_start;
+
+		vivid_thread_vid_cap_tick(dev, dropped_bufs);
+
+		/*
+		 * Calculate the number of 'numerators' streamed since we started,
+		 * including the current buffer.
+		 */
+		numerators_since_start = ++buffers_since_start * numerator;
+
+		/* And the number of jiffies since we started */
+		jiffies_since_start = jiffies - dev->jiffies_vid_cap;
+
+		mutex_unlock(&dev->mutex);
+
+		/*
+		 * Calculate when that next buffer is supposed to start
+		 * in jiffies since we started streaming.
+		 */
+		next_jiffies_since_start = numerators_since_start * HZ +
+					   denominator / 2;
+		do_div(next_jiffies_since_start, denominator);
+		/* If it is in the past, then just schedule asap */
+		if (next_jiffies_since_start < jiffies_since_start)
+			next_jiffies_since_start = jiffies_since_start;
+
+		wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+		schedule_timeout_interruptible(wait_jiffies ? wait_jiffies : 1);
+	}
+	dprintk(dev, 1, "Video Capture Thread End\n");
+	return 0;
+}
+
+static void vivid_grab_controls(struct vivid_dev *dev, bool grab)
+{
+	v4l2_ctrl_grab(dev->ctrl_has_crop_cap, grab);
+	v4l2_ctrl_grab(dev->ctrl_has_compose_cap, grab);
+	v4l2_ctrl_grab(dev->ctrl_has_scaler_cap, grab);
+}
+
+int vivid_start_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming)
+{
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->kthread_vid_cap) {
+		u32 seq_count = dev->cap_seq_count + dev->seq_wrap * 128;
+
+		if (pstreaming == &dev->vid_cap_streaming)
+			dev->vid_cap_seq_start = seq_count;
+		else
+			dev->vbi_cap_seq_start = seq_count;
+		*pstreaming = true;
+		return 0;
+	}
+
+	/* Resets frame counters */
+	tpg_init_mv_count(&dev->tpg);
+
+	dev->vid_cap_seq_start = dev->seq_wrap * 128;
+	dev->vbi_cap_seq_start = dev->seq_wrap * 128;
+
+	dev->kthread_vid_cap = kthread_run(vivid_thread_vid_cap, dev,
+			"%s-vid-cap", dev->v4l2_dev.name);
+
+	if (IS_ERR(dev->kthread_vid_cap)) {
+		v4l2_err(&dev->v4l2_dev, "kthread_run() failed\n");
+		return PTR_ERR(dev->kthread_vid_cap);
+	}
+	*pstreaming = true;
+	vivid_grab_controls(dev, true);
+
+	dprintk(dev, 1, "returning from %s\n", __func__);
+	return 0;
+}
+
+void vivid_stop_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming)
+{
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->kthread_vid_cap == NULL)
+		return;
+
+	*pstreaming = false;
+	if (pstreaming == &dev->vid_cap_streaming) {
+		/* Release all active buffers */
+		while (!list_empty(&dev->vid_cap_active)) {
+			struct vivid_buffer *buf;
+
+			buf = list_entry(dev->vid_cap_active.next,
+					 struct vivid_buffer, list);
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+			dprintk(dev, 2, "vid_cap buffer %d done\n",
+				buf->vb.v4l2_buf.index);
+		}
+	}
+
+	if (pstreaming == &dev->vbi_cap_streaming) {
+		while (!list_empty(&dev->vbi_cap_active)) {
+			struct vivid_buffer *buf;
+
+			buf = list_entry(dev->vbi_cap_active.next,
+					 struct vivid_buffer, list);
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+			dprintk(dev, 2, "vbi_cap buffer %d done\n",
+				buf->vb.v4l2_buf.index);
+		}
+	}
+
+	if (dev->vid_cap_streaming || dev->vbi_cap_streaming)
+		return;
+
+	/* shut down the capture thread */
+	vivid_grab_controls(dev, false);
+	mutex_unlock(&dev->mutex);
+	kthread_stop(dev->kthread_vid_cap);
+	dev->kthread_vid_cap = NULL;
+	mutex_lock(&dev->mutex);
+}
diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.h b/drivers/media/platform/vivid/vivid-kthread-cap.h
new file mode 100644
index 0000000..5b92fc9
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-kthread-cap.h
@@ -0,0 +1,26 @@
+/*
+ * vivid-kthread-cap.h - video/vbi capture thread support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_KTHREAD_CAP_H_
+#define _VIVID_KTHREAD_CAP_H_
+
+int vivid_start_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming);
+void vivid_stop_generating_vid_cap(struct vivid_dev *dev, bool *pstreaming);
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-kthread-out.c b/drivers/media/platform/vivid/vivid-kthread-out.c
new file mode 100644
index 0000000..1905755
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-kthread-out.c
@@ -0,0 +1,304 @@
+/*
+ * vivid-kthread-out.c - video/vbi output thread support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/font.h>
+#include <linux/mutex.h>
+#include <linux/videodev2.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+#include <linux/random.h>
+#include <linux/v4l2-dv-timings.h>
+#include <asm/div64.h>
+#include <media/videobuf2-vmalloc.h>
+#include <media/v4l2-dv-timings.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-event.h>
+
+#include "vivid-core.h"
+#include "vivid-vid-common.h"
+#include "vivid-vid-cap.h"
+#include "vivid-vid-out.h"
+#include "vivid-radio-common.h"
+#include "vivid-radio-rx.h"
+#include "vivid-radio-tx.h"
+#include "vivid-sdr-cap.h"
+#include "vivid-vbi-cap.h"
+#include "vivid-vbi-out.h"
+#include "vivid-osd.h"
+#include "vivid-ctrls.h"
+
+static void vivid_thread_vid_out_tick(struct vivid_dev *dev)
+{
+	struct vivid_buffer *vid_out_buf = NULL;
+	struct vivid_buffer *vbi_out_buf = NULL;
+
+	dprintk(dev, 1, "Video Output Thread Tick\n");
+
+	/* Drop a certain percentage of buffers. */
+	if (dev->perc_dropped_buffers &&
+	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+		return;
+
+	spin_lock(&dev->slock);
+	/*
+	 * Only dequeue a buffer if there is at least one more pending.
+	 * This makes video loopback possible.
+	 */
+	if (!list_empty(&dev->vid_out_active) &&
+	    !list_is_singular(&dev->vid_out_active)) {
+		vid_out_buf = list_entry(dev->vid_out_active.next,
+					 struct vivid_buffer, list);
+		list_del(&vid_out_buf->list);
+	}
+	if (!list_empty(&dev->vbi_out_active) &&
+	    (dev->field_out != V4L2_FIELD_ALTERNATE ||
+	     (dev->vbi_out_seq_count & 1))) {
+		vbi_out_buf = list_entry(dev->vbi_out_active.next,
+					 struct vivid_buffer, list);
+		list_del(&vbi_out_buf->list);
+	}
+	spin_unlock(&dev->slock);
+
+	if (!vid_out_buf && !vbi_out_buf)
+		return;
+
+	if (vid_out_buf) {
+		vid_out_buf->vb.v4l2_buf.sequence = dev->vid_out_seq_count;
+		if (dev->field_out == V4L2_FIELD_ALTERNATE) {
+			/*
+			 * The sequence counter counts frames, not fields. So divide
+			 * by two.
+			 */
+			vid_out_buf->vb.v4l2_buf.sequence /= 2;
+		}
+		v4l2_get_timestamp(&vid_out_buf->vb.v4l2_buf.timestamp);
+		vid_out_buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+		vb2_buffer_done(&vid_out_buf->vb, dev->dqbuf_error ?
+				VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
+		dprintk(dev, 2, "vid_out buffer %d done\n",
+			vid_out_buf->vb.v4l2_buf.index);
+	}
+
+	if (vbi_out_buf) {
+		if (dev->stream_sliced_vbi_out)
+			vivid_sliced_vbi_out_process(dev, vbi_out_buf);
+
+		vbi_out_buf->vb.v4l2_buf.sequence = dev->vbi_out_seq_count;
+		v4l2_get_timestamp(&vbi_out_buf->vb.v4l2_buf.timestamp);
+		vbi_out_buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+		vb2_buffer_done(&vbi_out_buf->vb, dev->dqbuf_error ?
+				VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
+		dprintk(dev, 2, "vbi_out buffer %d done\n",
+			vbi_out_buf->vb.v4l2_buf.index);
+	}
+	dev->dqbuf_error = false;
+}
+
+static int vivid_thread_vid_out(void *data)
+{
+	struct vivid_dev *dev = data;
+	u64 numerators_since_start;
+	u64 buffers_since_start;
+	u64 next_jiffies_since_start;
+	unsigned long jiffies_since_start;
+	unsigned long cur_jiffies;
+	unsigned wait_jiffies;
+	unsigned numerator;
+	unsigned denominator;
+
+	dprintk(dev, 1, "Video Output Thread Start\n");
+
+	set_freezable();
+
+	/* Resets frame counters */
+	dev->out_seq_offset = 0;
+	if (dev->seq_wrap)
+		dev->out_seq_count = 0xffffff80U;
+	dev->jiffies_vid_out = jiffies;
+	dev->vid_out_seq_start = dev->vbi_out_seq_start = 0;
+	dev->out_seq_resync = false;
+
+	for (;;) {
+		try_to_freeze();
+		if (kthread_should_stop())
+			break;
+
+		mutex_lock(&dev->mutex);
+		cur_jiffies = jiffies;
+		if (dev->out_seq_resync) {
+			dev->jiffies_vid_out = cur_jiffies;
+			dev->out_seq_offset = dev->out_seq_count + 1;
+			dev->out_seq_count = 0;
+			dev->out_seq_resync = false;
+		}
+		numerator = dev->timeperframe_vid_out.numerator;
+		denominator = dev->timeperframe_vid_out.denominator;
+
+		if (dev->field_out == V4L2_FIELD_ALTERNATE)
+			denominator *= 2;
+
+		/* Calculate the number of jiffies since we started streaming */
+		jiffies_since_start = cur_jiffies - dev->jiffies_vid_out;
+		/* Get the number of buffers streamed since the start */
+		buffers_since_start = (u64)jiffies_since_start * denominator +
+				      (HZ * numerator) / 2;
+		do_div(buffers_since_start, HZ * numerator);
+
+		/*
+		 * After more than 0xf0000000 jiffies (rounded down to a
+		 * multiple of 'jiffies-per-day' to ease the jiffies_to_msecs
+		 * calculation) have passed since we started streaming, reset
+		 * the counters and keep track of the sequence offset.
+		 */
+		if (jiffies_since_start > JIFFIES_RESYNC) {
+			dev->jiffies_vid_out = cur_jiffies;
+			dev->out_seq_offset = buffers_since_start;
+			buffers_since_start = 0;
+		}
+		dev->out_seq_count = buffers_since_start + dev->out_seq_offset;
+		dev->vid_out_seq_count = dev->out_seq_count - dev->vid_out_seq_start;
+		dev->vbi_out_seq_count = dev->out_seq_count - dev->vbi_out_seq_start;
+
+		vivid_thread_vid_out_tick(dev);
+		mutex_unlock(&dev->mutex);
+
+		/*
+		 * Calculate the number of 'numerators' streamed since we started,
+		 * not including the current buffer.
+		 */
+		numerators_since_start = buffers_since_start * numerator;
+
+		/* And the number of jiffies since we started */
+		jiffies_since_start = jiffies - dev->jiffies_vid_out;
+
+		/* Increase by the 'numerator' of one buffer */
+		numerators_since_start += numerator;
+		/*
+		 * Calculate when that next buffer is supposed to start
+		 * in jiffies since we started streaming.
+		 */
+		next_jiffies_since_start = numerators_since_start * HZ +
+					   denominator / 2;
+		do_div(next_jiffies_since_start, denominator);
+		/* If it is in the past, then just schedule asap */
+		if (next_jiffies_since_start < jiffies_since_start)
+			next_jiffies_since_start = jiffies_since_start;
+
+		wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+		schedule_timeout_interruptible(wait_jiffies ? wait_jiffies : 1);
+	}
+	dprintk(dev, 1, "Video Output Thread End\n");
+	return 0;
+}
+
+static void vivid_grab_controls(struct vivid_dev *dev, bool grab)
+{
+	v4l2_ctrl_grab(dev->ctrl_has_crop_out, grab);
+	v4l2_ctrl_grab(dev->ctrl_has_compose_out, grab);
+	v4l2_ctrl_grab(dev->ctrl_has_scaler_out, grab);
+	v4l2_ctrl_grab(dev->ctrl_tx_mode, grab);
+	v4l2_ctrl_grab(dev->ctrl_tx_rgb_range, grab);
+}
+
+int vivid_start_generating_vid_out(struct vivid_dev *dev, bool *pstreaming)
+{
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->kthread_vid_out) {
+		u32 seq_count = dev->out_seq_count + dev->seq_wrap * 128;
+
+		if (pstreaming == &dev->vid_out_streaming)
+			dev->vid_out_seq_start = seq_count;
+		else
+			dev->vbi_out_seq_start = seq_count;
+		*pstreaming = true;
+		return 0;
+	}
+
+	/* Resets frame counters */
+	dev->jiffies_vid_out = jiffies;
+	dev->vid_out_seq_start = dev->seq_wrap * 128;
+	dev->vbi_out_seq_start = dev->seq_wrap * 128;
+
+	dev->kthread_vid_out = kthread_run(vivid_thread_vid_out, dev,
+			"%s-vid-out", dev->v4l2_dev.name);
+
+	if (IS_ERR(dev->kthread_vid_out)) {
+		v4l2_err(&dev->v4l2_dev, "kthread_run() failed\n");
+		return PTR_ERR(dev->kthread_vid_out);
+	}
+	*pstreaming = true;
+	vivid_grab_controls(dev, true);
+
+	dprintk(dev, 1, "returning from %s\n", __func__);
+	return 0;
+}
+
+void vivid_stop_generating_vid_out(struct vivid_dev *dev, bool *pstreaming)
+{
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->kthread_vid_out == NULL)
+		return;
+
+	*pstreaming = false;
+	if (pstreaming == &dev->vid_out_streaming) {
+		/* Release all active buffers */
+		while (!list_empty(&dev->vid_out_active)) {
+			struct vivid_buffer *buf;
+
+			buf = list_entry(dev->vid_out_active.next,
+					 struct vivid_buffer, list);
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+			dprintk(dev, 2, "vid_out buffer %d done\n",
+				buf->vb.v4l2_buf.index);
+		}
+	}
+
+	if (pstreaming == &dev->vbi_out_streaming) {
+		while (!list_empty(&dev->vbi_out_active)) {
+			struct vivid_buffer *buf;
+
+			buf = list_entry(dev->vbi_out_active.next,
+					 struct vivid_buffer, list);
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+			dprintk(dev, 2, "vbi_out buffer %d done\n",
+				buf->vb.v4l2_buf.index);
+		}
+	}
+
+	if (dev->vid_out_streaming || dev->vbi_out_streaming)
+		return;
+
+	/* shut down the output thread */
+	vivid_grab_controls(dev, false);
+	mutex_unlock(&dev->mutex);
+	kthread_stop(dev->kthread_vid_out);
+	dev->kthread_vid_out = NULL;
+	mutex_lock(&dev->mutex);
+}
diff --git a/drivers/media/platform/vivid/vivid-kthread-out.h b/drivers/media/platform/vivid/vivid-kthread-out.h
new file mode 100644
index 0000000..2bf04a1
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-kthread-out.h
@@ -0,0 +1,26 @@
+/*
+ * vivid-kthread-out.h - video/vbi output thread support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_KTHREAD_OUT_H_
+#define _VIVID_KTHREAD_OUT_H_
+
+int vivid_start_generating_vid_out(struct vivid_dev *dev, bool *pstreaming);
+void vivid_stop_generating_vid_out(struct vivid_dev *dev, bool *pstreaming);
+
+#endif
-- 
2.0.1



* [PATCHv2 08/12] vivid: add a simple framebuffer device for overlay testing
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (6 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 07/12] vivid: add the kthread code that controls the video rate Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 09/12] vivid: add the Test Pattern Generator Hans Verkuil
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

In order to test capture and output overlays, a simple framebuffer
device is created. It's bare-bones, but it does the job.
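
As a rough illustration (not part of the patch), a user-space test could
point the capture overlay at the framebuffer registered by this driver
roughly as follows. The device path is only an example and a real test
would also set up the overlay window via VIDIOC_S_FMT first:

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		struct v4l2_framebuffer fb;
		int on = 1;
		/* vivid capture video node (example path) */
		int fd = open("/dev/video0", O_RDWR);

		if (fd < 0)
			return 1;
		/* read back the framebuffer parameters the driver advertises */
		if (ioctl(fd, VIDIOC_G_FBUF, &fb))
			return 1;
		/* hand them back; needs CAP_SYS_ADMIN or CAP_SYS_RAWIO */
		if (ioctl(fd, VIDIOC_S_FBUF, &fb))
			return 1;
		/* enable the destructive capture overlay */
		return ioctl(fd, VIDIOC_OVERLAY, &on) ? 1 : 0;
	}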

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-osd.c | 400 +++++++++++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-osd.h |  27 +++
 2 files changed, 427 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-osd.c
 create mode 100644 drivers/media/platform/vivid/vivid-osd.h

diff --git a/drivers/media/platform/vivid/vivid-osd.c b/drivers/media/platform/vivid/vivid-osd.c
new file mode 100644
index 0000000..084d346
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-osd.c
@@ -0,0 +1,400 @@
+/*
+ * vivid-osd.c - osd support for testing overlays.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/font.h>
+#include <linux/mutex.h>
+#include <linux/videodev2.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+#include <linux/fb.h>
+#include <media/videobuf2-vmalloc.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-fh.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-common.h>
+
+#include "vivid-core.h"
+#include "vivid-osd.h"
+
+#define MAX_OSD_WIDTH  720
+#define MAX_OSD_HEIGHT 576
+
+/*
+ * Order: white, yellow, cyan, green, magenta, red, blue, black,
+ * and same again with the alpha bit set (if any)
+ */
+static const u16 rgb555[16] = {
+	0x7fff, 0x7fe0, 0x03ff, 0x03e0, 0x7c1f, 0x7c00, 0x001f, 0x0000,
+	0xffff, 0xffe0, 0x83ff, 0x83e0, 0xfc1f, 0xfc00, 0x801f, 0x8000
+};
+
+static const u16 rgb565[16] = {
+	0xffff, 0xffe0, 0x07ff, 0x07e0, 0xf81f, 0xf800, 0x001f, 0x0000,
+	0xffff, 0xffe0, 0x07ff, 0x07e0, 0xf81f, 0xf800, 0x001f, 0x0000
+};
+
+void vivid_clear_fb(struct vivid_dev *dev)
+{
+	void *p = dev->video_vbase;
+	const u16 *rgb = rgb555;
+	unsigned x, y;
+
+	if (dev->fb_defined.green.length == 6)
+		rgb = rgb565;
+
+	for (y = 0; y < dev->display_height; y++) {
+		u16 *d = p;
+
+		for (x = 0; x < dev->display_width; x++)
+			d[x] = rgb[(y / 16 + x / 16) % 16];
+		p += dev->display_byte_stride;
+	}
+}
+
+/* --------------------------------------------------------------------- */
+
+static int vivid_fb_ioctl(struct fb_info *info, unsigned cmd, unsigned long arg)
+{
+	struct vivid_dev *dev = (struct vivid_dev *)info->par;
+
+	switch (cmd) {
+	case FBIOGET_VBLANK: {
+		struct fb_vblank vblank;
+
+		/* zero the struct so the reserved fields don't leak stack data */
+		memset(&vblank, 0, sizeof(vblank));
+		vblank.flags = FB_VBLANK_HAVE_COUNT | FB_VBLANK_HAVE_VCOUNT |
+			FB_VBLANK_HAVE_VSYNC;
+		vblank.count = 0;
+		vblank.vcount = 0;
+		vblank.hcount = 0;
+		if (copy_to_user((void __user *)arg, &vblank, sizeof(vblank)))
+			return -EFAULT;
+		return 0;
+	}
+
+	default:
+		dprintk(dev, 1, "Unknown ioctl %08x\n", cmd);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/* Framebuffer device handling */
+
+static int vivid_fb_set_var(struct vivid_dev *dev, struct fb_var_screeninfo *var)
+{
+	dprintk(dev, 1, "vivid_fb_set_var\n");
+
+	if (var->bits_per_pixel != 16) {
+		dprintk(dev, 1, "vivid_fb_set_var - Invalid bpp\n");
+		return -EINVAL;
+	}
+	dev->display_byte_stride = var->xres * dev->bytes_per_pixel;
+
+	return 0;
+}
+
+static int vivid_fb_get_fix(struct vivid_dev *dev, struct fb_fix_screeninfo *fix)
+{
+	dprintk(dev, 1, "vivid_fb_get_fix\n");
+	memset(fix, 0, sizeof(struct fb_fix_screeninfo));
+	strlcpy(fix->id, "vioverlay fb", sizeof(fix->id));
+	fix->smem_start = dev->video_pbase;
+	fix->smem_len = dev->video_buffer_size;
+	fix->type = FB_TYPE_PACKED_PIXELS;
+	fix->visual = FB_VISUAL_TRUECOLOR;
+	fix->xpanstep = 1;
+	fix->ypanstep = 1;
+	fix->ywrapstep = 0;
+	fix->line_length = dev->display_byte_stride;
+	fix->accel = FB_ACCEL_NONE;
+	return 0;
+}
+
+/* Check the requested display mode, returning -EINVAL if we can't
+   handle it. */
+
+static int _vivid_fb_check_var(struct fb_var_screeninfo *var, struct vivid_dev *dev)
+{
+	dprintk(dev, 1, "vivid_fb_check_var\n");
+
+	var->bits_per_pixel = 16;
+	if (var->green.length == 5) {
+		var->red.offset = 10;
+		var->red.length = 5;
+		var->green.offset = 5;
+		var->green.length = 5;
+		var->blue.offset = 0;
+		var->blue.length = 5;
+		var->transp.offset = 15;
+		var->transp.length = 1;
+	} else {
+		var->red.offset = 11;
+		var->red.length = 5;
+		var->green.offset = 5;
+		var->green.length = 6;
+		var->blue.offset = 0;
+		var->blue.length = 5;
+		var->transp.offset = 0;
+		var->transp.length = 0;
+	}
+	var->xoffset = var->yoffset = 0;
+	var->left_margin = var->upper_margin = 0;
+	var->nonstd = 0;
+
+	var->vmode &= ~FB_VMODE_MASK;
+	var->vmode = FB_VMODE_NONINTERLACED;
+
+	/* Dummy values */
+	var->hsync_len = 24;
+	var->vsync_len = 2;
+	var->pixclock = 84316;
+	var->right_margin = 776;
+	var->lower_margin = 591;
+	return 0;
+}
+
+static int vivid_fb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+	struct vivid_dev *dev = (struct vivid_dev *) info->par;
+
+	dprintk(dev, 1, "vivid_fb_check_var\n");
+	return _vivid_fb_check_var(var, dev);
+}
+
+static int vivid_fb_pan_display(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+	return 0;
+}
+
+static int vivid_fb_set_par(struct fb_info *info)
+{
+	int rc = 0;
+	struct vivid_dev *dev = (struct vivid_dev *) info->par;
+
+	dprintk(dev, 1, "vivid_fb_set_par\n");
+
+	rc = vivid_fb_set_var(dev, &info->var);
+	vivid_fb_get_fix(dev, &info->fix);
+	return rc;
+}
+
+static int vivid_fb_setcolreg(unsigned regno, unsigned red, unsigned green,
+				unsigned blue, unsigned transp,
+				struct fb_info *info)
+{
+	u32 color, *palette;
+
+	if (regno >= info->cmap.len)
+		return -EINVAL;
+
+	color = ((transp & 0xFF00) << 16) | ((red & 0xFF00) << 8) |
+		 (green & 0xFF00) | ((blue & 0xFF00) >> 8);
+	if (regno >= 16)
+		return -EINVAL;
+
+	palette = info->pseudo_palette;
+	if (info->var.bits_per_pixel == 16) {
+		switch (info->var.green.length) {
+		case 6:
+			color = (red & 0xf800) |
+				((green & 0xfc00) >> 5) |
+				((blue & 0xf800) >> 11);
+			break;
+		case 5:
+			color = ((red & 0xf800) >> 1) |
+				((green & 0xf800) >> 6) |
+				((blue & 0xf800) >> 11) |
+				(transp ? 0x8000 : 0);
+			break;
+		}
+	}
+	palette[regno] = color;
+	return 0;
+}
+
+/* We don't really support blanking. All this does is enable or
+   disable the OSD. */
+static int vivid_fb_blank(int blank_mode, struct fb_info *info)
+{
+	struct vivid_dev *dev = (struct vivid_dev *)info->par;
+
+	dprintk(dev, 1, "Set blanking mode : %d\n", blank_mode);
+	switch (blank_mode) {
+	case FB_BLANK_UNBLANK:
+		break;
+	case FB_BLANK_NORMAL:
+	case FB_BLANK_HSYNC_SUSPEND:
+	case FB_BLANK_VSYNC_SUSPEND:
+	case FB_BLANK_POWERDOWN:
+		break;
+	}
+	return 0;
+}
+
+static struct fb_ops vivid_fb_ops = {
+	.owner = THIS_MODULE,
+	.fb_check_var   = vivid_fb_check_var,
+	.fb_set_par     = vivid_fb_set_par,
+	.fb_setcolreg   = vivid_fb_setcolreg,
+	.fb_fillrect    = cfb_fillrect,
+	.fb_copyarea    = cfb_copyarea,
+	.fb_imageblit   = cfb_imageblit,
+	.fb_cursor      = NULL,
+	.fb_ioctl       = vivid_fb_ioctl,
+	.fb_pan_display = vivid_fb_pan_display,
+	.fb_blank       = vivid_fb_blank,
+};
+
+/* Initialization */
+
+
+/* Setup our initial video mode */
+static int vivid_fb_init_vidmode(struct vivid_dev *dev)
+{
+	struct v4l2_rect start_window;
+
+	/* Color mode */
+
+	dev->bits_per_pixel = 16;
+	dev->bytes_per_pixel = dev->bits_per_pixel / 8;
+
+	start_window.width = MAX_OSD_WIDTH;
+	start_window.left = 0;
+
+	dev->display_byte_stride = start_window.width * dev->bytes_per_pixel;
+
+	/* Vertical size & position */
+
+	start_window.height = MAX_OSD_HEIGHT;
+	start_window.top = 0;
+
+	dev->display_width = start_window.width;
+	dev->display_height = start_window.height;
+
+	/* Generate a valid fb_var_screeninfo */
+
+	dev->fb_defined.xres = dev->display_width;
+	dev->fb_defined.yres = dev->display_height;
+	dev->fb_defined.xres_virtual = dev->display_width;
+	dev->fb_defined.yres_virtual = dev->display_height;
+	dev->fb_defined.bits_per_pixel = dev->bits_per_pixel;
+	dev->fb_defined.vmode = FB_VMODE_NONINTERLACED;
+	dev->fb_defined.left_margin = start_window.left + 1;
+	dev->fb_defined.upper_margin = start_window.top + 1;
+	dev->fb_defined.accel_flags = FB_ACCEL_NONE;
+	dev->fb_defined.nonstd = 0;
+	/* set default to 1:5:5:5 */
+	dev->fb_defined.green.length = 5;
+
+	/* We've filled in most of the data; let the usual mode check
+	   routine fill in the rest. */
+	_vivid_fb_check_var(&dev->fb_defined, dev);
+
+	/* Generate valid fb_fix_screeninfo */
+
+	vivid_fb_get_fix(dev, &dev->fb_fix);
+
+	/* Generate valid fb_info */
+
+	dev->fb_info.node = -1;
+	dev->fb_info.flags = FBINFO_FLAG_DEFAULT;
+	dev->fb_info.fbops = &vivid_fb_ops;
+	dev->fb_info.par = dev;
+	dev->fb_info.var = dev->fb_defined;
+	dev->fb_info.fix = dev->fb_fix;
+	dev->fb_info.screen_base = (u8 __iomem *)dev->video_vbase;
+
+	/* Supply some monitor specs. Bogus values will do for now */
+	dev->fb_info.monspecs.hfmin = 8000;
+	dev->fb_info.monspecs.hfmax = 70000;
+	dev->fb_info.monspecs.vfmin = 10;
+	dev->fb_info.monspecs.vfmax = 100;
+
+	/* Allocate color map */
+	if (fb_alloc_cmap(&dev->fb_info.cmap, 256, 1)) {
+		pr_err("abort, unable to alloc cmap\n");
+		return -ENOMEM;
+	}
+
+	/* Allocate the pseudo palette */
+	dev->fb_info.pseudo_palette = kmalloc_array(16, sizeof(u32), GFP_KERNEL);
+
+	return dev->fb_info.pseudo_palette ? 0 : -ENOMEM;
+}
+
+/* Release any memory we've grabbed */
+void vivid_fb_release_buffers(struct vivid_dev *dev)
+{
+	if (dev->video_vbase == NULL)
+		return;
+
+	/* Release cmap */
+	if (dev->fb_info.cmap.len)
+		fb_dealloc_cmap(&dev->fb_info.cmap);
+
+	/* Release pseudo palette */
+	kfree(dev->fb_info.pseudo_palette);
+	kfree((void *)dev->video_vbase);
+}
+
+/* Initialize the specified card */
+
+int vivid_fb_init(struct vivid_dev *dev)
+{
+	int ret;
+
+	dev->video_buffer_size = MAX_OSD_HEIGHT * MAX_OSD_WIDTH * 2;
+	dev->video_vbase = kzalloc(dev->video_buffer_size, GFP_KERNEL | GFP_DMA32);
+	if (dev->video_vbase == NULL)
+		return -ENOMEM;
+	dev->video_pbase = virt_to_phys(dev->video_vbase);
+
+	pr_info("Framebuffer at 0x%lx, mapped to 0x%p, size %dk\n",
+			dev->video_pbase, dev->video_vbase,
+			dev->video_buffer_size / 1024);
+
+	/* Set the startup video mode information */
+	ret = vivid_fb_init_vidmode(dev);
+	if (ret) {
+		vivid_fb_release_buffers(dev);
+		return ret;
+	}
+
+	vivid_clear_fb(dev);
+
+	/* Register the framebuffer */
+	if (register_framebuffer(&dev->fb_info) < 0) {
+		vivid_fb_release_buffers(dev);
+		return -EINVAL;
+	}
+
+	/* Set the card to the requested mode */
+	vivid_fb_set_par(&dev->fb_info);
+	return 0;
+
+}
diff --git a/drivers/media/platform/vivid/vivid-osd.h b/drivers/media/platform/vivid/vivid-osd.h
new file mode 100644
index 0000000..57c9daa
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-osd.h
@@ -0,0 +1,27 @@
+/*
+ * vivid-osd.h - output overlay support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_OSD_H_
+#define _VIVID_OSD_H_
+
+int vivid_fb_init(struct vivid_dev *dev);
+void vivid_fb_release_buffers(struct vivid_dev *dev);
+void vivid_clear_fb(struct vivid_dev *dev);
+
+#endif
-- 
2.0.1



* [PATCHv2 09/12] vivid: add the Test Pattern Generator
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (7 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 08/12] vivid: add a simple framebuffer device for overlay testing Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 10/12] vivid: add support for radio receivers and transmitters Hans Verkuil
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

The test patterns for video capture are generated by this code. All patterns
are precalculated, taking into account colorspace information, pixel and
video aspect ratios, and scaling information.
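
For reference, a minimal kernel-side sketch of the TPG setup lifecycle
(not part of the patch; the resolution, maximum line width and pixel
format are arbitrary examples, and a real driver would also configure
the buffer layout and colorspace before rendering):

	#include "vivid-tpg.h"

	static int example_tpg_setup(struct tpg_data *tpg)
	{
		int ret;

		/* source width/height */
		tpg_init(tpg, 1280, 720);
		/* precalculate pattern lines up to this width */
		ret = tpg_alloc(tpg, 1920);
		if (ret)
			return ret;
		if (!tpg_s_fourcc(tpg, V4L2_PIX_FMT_YUYV)) {
			/* pixel format not handled by the TPG */
			tpg_free(tpg);
			return -EINVAL;
		}
		/*
		 * Per frame: call tpg_fillbuffer() for each plane, then
		 * tpg_update_mv_count() to advance the moving patterns.
		 * Call tpg_free() when the device is torn down.
		 */
		return 0;
	}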

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-tpg-colors.c |  310 +++++
 drivers/media/platform/vivid/vivid-tpg-colors.h |   64 +
 drivers/media/platform/vivid/vivid-tpg.c        | 1439 +++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-tpg.h        |  438 +++++++
 4 files changed, 2251 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-tpg-colors.c
 create mode 100644 drivers/media/platform/vivid/vivid-tpg-colors.h
 create mode 100644 drivers/media/platform/vivid/vivid-tpg.c
 create mode 100644 drivers/media/platform/vivid/vivid-tpg.h

diff --git a/drivers/media/platform/vivid/vivid-tpg-colors.c b/drivers/media/platform/vivid/vivid-tpg-colors.c
new file mode 100644
index 0000000..2adddc0
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-tpg-colors.c
@@ -0,0 +1,310 @@
+/*
+ * vivid-tpg-colors.c - A table that converts colors to various colorspaces
+ *
+ * The test pattern generator uses the tpg_colors for its test patterns.
+ * For testing colorspaces the first 8 colors of that table need to be
+ * converted to their equivalent in the target colorspace.
+ *
+ * The tpg_csc_colors[] table is the result of that conversion and since
+ * it is precalculated the colorspace conversion is just a simple table
+ * lookup.
+ *
+ * This source also contains the code used to generate the tpg_csc_colors
+ * table. Run the following command to compile it:
+ *
+ *	gcc vivid-tpg-colors.c -DCOMPILE_APP -o gen-colors -lm
+ *
+ * and run the utility.
+ *
+ * Note that the converted colors are in the range 0x000-0xff0 (so times 16)
+ * in order to preserve precision.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/videodev2.h>
+
+#include "vivid-tpg-colors.h"
+
+/* sRGB colors with range [0-255] */
+const struct color tpg_colors[TPG_COLOR_MAX] = {
+	/*
+	 * Colors to test colorspace conversion: converting these colors
+	 * to other colorspaces will never lead to out-of-gamut colors.
+	 */
+	{ 191, 191, 191 }, /* TPG_COLOR_CSC_WHITE */
+	{ 191, 191,  50 }, /* TPG_COLOR_CSC_YELLOW */
+	{  50, 191, 191 }, /* TPG_COLOR_CSC_CYAN */
+	{  50, 191,  50 }, /* TPG_COLOR_CSC_GREEN */
+	{ 191,  50, 191 }, /* TPG_COLOR_CSC_MAGENTA */
+	{ 191,  50,  50 }, /* TPG_COLOR_CSC_RED */
+	{  50,  50, 191 }, /* TPG_COLOR_CSC_BLUE */
+	{  50,  50,  50 }, /* TPG_COLOR_CSC_BLACK */
+
+	/* 75% colors */
+	{ 191, 191,   0 }, /* TPG_COLOR_75_YELLOW */
+	{   0, 191, 191 }, /* TPG_COLOR_75_CYAN */
+	{   0, 191,   0 }, /* TPG_COLOR_75_GREEN */
+	{ 191,   0, 191 }, /* TPG_COLOR_75_MAGENTA */
+	{ 191,   0,   0 }, /* TPG_COLOR_75_RED */
+	{   0,   0, 191 }, /* TPG_COLOR_75_BLUE */
+
+	/* 100% colors */
+	{ 255, 255, 255 }, /* TPG_COLOR_100_WHITE */
+	{ 255, 255,   0 }, /* TPG_COLOR_100_YELLOW */
+	{   0, 255, 255 }, /* TPG_COLOR_100_CYAN */
+	{   0, 255,   0 }, /* TPG_COLOR_100_GREEN */
+	{ 255,   0, 255 }, /* TPG_COLOR_100_MAGENTA */
+	{ 255,   0,   0 }, /* TPG_COLOR_100_RED */
+	{   0,   0, 255 }, /* TPG_COLOR_100_BLUE */
+	{   0,   0,   0 }, /* TPG_COLOR_100_BLACK */
+
+	{   0,   0,   0 }, /* TPG_COLOR_RANDOM placeholder */
+};
+
+#ifndef COMPILE_APP
+
+/* Generated table */
+const struct color16 tpg_csc_colors[V4L2_COLORSPACE_SRGB + 1][TPG_COLOR_CSC_BLACK + 1] = {
+	[V4L2_COLORSPACE_SMPTE170M][0] = { 2953, 2939, 2939 },
+	[V4L2_COLORSPACE_SMPTE170M][1] = { 2954, 2963, 585 },
+	[V4L2_COLORSPACE_SMPTE170M][2] = { 84, 2967, 2937 },
+	[V4L2_COLORSPACE_SMPTE170M][3] = { 93, 2990, 575 },
+	[V4L2_COLORSPACE_SMPTE170M][4] = { 3030, 259, 2933 },
+	[V4L2_COLORSPACE_SMPTE170M][5] = { 3031, 406, 557 },
+	[V4L2_COLORSPACE_SMPTE170M][6] = { 544, 428, 2931 },
+	[V4L2_COLORSPACE_SMPTE170M][7] = { 551, 547, 547 },
+	[V4L2_COLORSPACE_SMPTE240M][0] = { 2926, 2926, 2926 },
+	[V4L2_COLORSPACE_SMPTE240M][1] = { 2926, 2926, 857 },
+	[V4L2_COLORSPACE_SMPTE240M][2] = { 1594, 2901, 2901 },
+	[V4L2_COLORSPACE_SMPTE240M][3] = { 1594, 2901, 774 },
+	[V4L2_COLORSPACE_SMPTE240M][4] = { 2484, 618, 2858 },
+	[V4L2_COLORSPACE_SMPTE240M][5] = { 2484, 618, 617 },
+	[V4L2_COLORSPACE_SMPTE240M][6] = { 507, 507, 2832 },
+	[V4L2_COLORSPACE_SMPTE240M][7] = { 507, 507, 507 },
+	[V4L2_COLORSPACE_REC709][0] = { 2939, 2939, 2939 },
+	[V4L2_COLORSPACE_REC709][1] = { 2939, 2939, 547 },
+	[V4L2_COLORSPACE_REC709][2] = { 547, 2939, 2939 },
+	[V4L2_COLORSPACE_REC709][3] = { 547, 2939, 547 },
+	[V4L2_COLORSPACE_REC709][4] = { 2939, 547, 2939 },
+	[V4L2_COLORSPACE_REC709][5] = { 2939, 547, 547 },
+	[V4L2_COLORSPACE_REC709][6] = { 547, 547, 2939 },
+	[V4L2_COLORSPACE_REC709][7] = { 547, 547, 547 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][0] = { 2894, 2988, 2808 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][1] = { 2847, 3070, 843 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][2] = { 1656, 2962, 2783 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][3] = { 1572, 3045, 763 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][4] = { 2477, 229, 2743 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][5] = { 2422, 672, 614 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][6] = { 725, 63, 2718 },
+	[V4L2_COLORSPACE_470_SYSTEM_M][7] = { 534, 561, 509 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][0] = { 2939, 2939, 2939 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][1] = { 2939, 2939, 621 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][2] = { 786, 2939, 2939 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][3] = { 786, 2939, 621 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][4] = { 2879, 547, 2923 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][5] = { 2879, 547, 547 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][6] = { 547, 547, 2923 },
+	[V4L2_COLORSPACE_470_SYSTEM_BG][7] = { 547, 547, 547 },
+	[V4L2_COLORSPACE_SRGB][0] = { 3056, 3056, 3056 },
+	[V4L2_COLORSPACE_SRGB][1] = { 3056, 3056, 800 },
+	[V4L2_COLORSPACE_SRGB][2] = { 800, 3056, 3056 },
+	[V4L2_COLORSPACE_SRGB][3] = { 800, 3056, 800 },
+	[V4L2_COLORSPACE_SRGB][4] = { 3056, 800, 3056 },
+	[V4L2_COLORSPACE_SRGB][5] = { 3056, 800, 800 },
+	[V4L2_COLORSPACE_SRGB][6] = { 800, 800, 3056 },
+	[V4L2_COLORSPACE_SRGB][7] = { 800, 800, 800 },
+};
+
+#else
+
+/* This code generates the table above */
+
+#include <math.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+static const double rec709_to_ntsc1953[3][3] = {
+	{ 0.6698, 0.2678,  0.0323 },
+	{ 0.0185, 1.0742, -0.0603 },
+	{ 0.0162, 0.0432,  0.8551 }
+};
+
+static const double rec709_to_ebu[3][3] = {
+	{ 0.9578, 0.0422, 0      },
+	{ 0     , 1     , 0      },
+	{ 0     , 0.0118, 0.9882 }
+};
+
+static const double rec709_to_170m[3][3] = {
+	{  1.0654, -0.0554, -0.0010 },
+	{ -0.0196,  1.0364, -0.0167 },
+	{  0.0016,  0.0044,  0.9940 }
+};
+
+static const double rec709_to_240m[3][3] = {
+	{ 0.7151, 0.2849, 0      },
+	{ 0.0179, 0.9821, 0      },
+	{ 0.0177, 0.0472, 0.9350 }
+};
+
+
+static void mult_matrix(double *r, double *g, double *b, const double m[3][3])
+{
+	double ir, ig, ib;
+
+	ir = m[0][0] * (*r) + m[0][1] * (*g) + m[0][2] * (*b);
+	ig = m[1][0] * (*r) + m[1][1] * (*g) + m[1][2] * (*b);
+	ib = m[2][0] * (*r) + m[2][1] * (*g) + m[2][2] * (*b);
+	*r = ir;
+	*g = ig;
+	*b = ib;
+}
+
+static double transfer_srgb_to_rgb(double v)
+{
+	return (v <= 0.03928) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
+}
+
+static double transfer_rgb_to_smpte240m(double v)
+{
+	return (v <= 0.0228) ? v * 4.0 : 1.1115 * pow(v, 0.45) - 0.1115;
+}
+
+static double transfer_rgb_to_rec709(double v)
+{
+	return (v < 0.018) ? v * 4.5 : 1.099 * pow(v, 0.45) - 0.099;
+}
+
+static double transfer_srgb_to_rec709(double v)
+{
+	return transfer_rgb_to_rec709(transfer_srgb_to_rgb(v));
+}
+
+static void csc(enum v4l2_colorspace colorspace, double *r, double *g, double *b)
+{
+	/* Convert the primaries of Rec. 709 Linear RGB */
+	switch (colorspace) {
+	case V4L2_COLORSPACE_SMPTE240M:
+		*r = transfer_srgb_to_rgb(*r);
+		*g = transfer_srgb_to_rgb(*g);
+		*b = transfer_srgb_to_rgb(*b);
+		mult_matrix(r, g, b, rec709_to_240m);
+		break;
+	case V4L2_COLORSPACE_SMPTE170M:
+		*r = transfer_srgb_to_rgb(*r);
+		*g = transfer_srgb_to_rgb(*g);
+		*b = transfer_srgb_to_rgb(*b);
+		mult_matrix(r, g, b, rec709_to_170m);
+		break;
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		*r = transfer_srgb_to_rgb(*r);
+		*g = transfer_srgb_to_rgb(*g);
+		*b = transfer_srgb_to_rgb(*b);
+		mult_matrix(r, g, b, rec709_to_ebu);
+		break;
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+		*r = transfer_srgb_to_rgb(*r);
+		*g = transfer_srgb_to_rgb(*g);
+		*b = transfer_srgb_to_rgb(*b);
+		mult_matrix(r, g, b, rec709_to_ntsc1953);
+		break;
+	case V4L2_COLORSPACE_SRGB:
+	case V4L2_COLORSPACE_REC709:
+	default:
+		break;
+	}
+
+	*r = ((*r) < 0) ? 0 : (((*r) > 1) ? 1 : (*r));
+	*g = ((*g) < 0) ? 0 : (((*g) > 1) ? 1 : (*g));
+	*b = ((*b) < 0) ? 0 : (((*b) > 1) ? 1 : (*b));
+
+	/* Encode to gamma corrected colorspace */
+	switch (colorspace) {
+	case V4L2_COLORSPACE_SMPTE240M:
+		*r = transfer_rgb_to_smpte240m(*r);
+		*g = transfer_rgb_to_smpte240m(*g);
+		*b = transfer_rgb_to_smpte240m(*b);
+		break;
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		*r = transfer_rgb_to_rec709(*r);
+		*g = transfer_rgb_to_rec709(*g);
+		*b = transfer_rgb_to_rec709(*b);
+		break;
+	case V4L2_COLORSPACE_SRGB:
+		break;
+	case V4L2_COLORSPACE_REC709:
+	default:
+		*r = transfer_srgb_to_rec709(*r);
+		*g = transfer_srgb_to_rec709(*g);
+		*b = transfer_srgb_to_rec709(*b);
+		break;
+	}
+}
+
+int main(int argc, char **argv)
+{
+	static const unsigned colorspaces[] = {
+		0,
+		V4L2_COLORSPACE_SMPTE170M,
+		V4L2_COLORSPACE_SMPTE240M,
+		V4L2_COLORSPACE_REC709,
+		0,
+		V4L2_COLORSPACE_470_SYSTEM_M,
+		V4L2_COLORSPACE_470_SYSTEM_BG,
+		0,
+		V4L2_COLORSPACE_SRGB,
+	};
+	static const char * const colorspace_names[] = {
+		"",
+		"V4L2_COLORSPACE_SMPTE170M",
+		"V4L2_COLORSPACE_SMPTE240M",
+		"V4L2_COLORSPACE_REC709",
+		"",
+		"V4L2_COLORSPACE_470_SYSTEM_M",
+		"V4L2_COLORSPACE_470_SYSTEM_BG",
+		"",
+		"V4L2_COLORSPACE_SRGB",
+	};
+	int i;
+	int c;
+
+	printf("/* Generated table */\n");
+	printf("const struct color16 tpg_csc_colors[V4L2_COLORSPACE_SRGB + 1][TPG_COLOR_CSC_BLACK + 1] = {\n");
+	for (c = 0; c <= V4L2_COLORSPACE_SRGB; c++) {
+		for (i = 0; i <= TPG_COLOR_CSC_BLACK; i++) {
+			double r, g, b;
+
+			if (colorspaces[c] == 0)
+				continue;
+
+			r = tpg_colors[i].r / 255.0;
+			g = tpg_colors[i].g / 255.0;
+			b = tpg_colors[i].b / 255.0;
+
+			csc(c, &r, &g, &b);
+
+			printf("\t[%s][%d] = { %d, %d, %d },\n", colorspace_names[c], i,
+				(int)(r * 4080), (int)(g * 4080), (int)(b * 4080));
+		}
+	}
+	printf("};\n\n");
+	return 0;
+}
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-tpg-colors.h b/drivers/media/platform/vivid/vivid-tpg-colors.h
new file mode 100644
index 0000000..a2678fb
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-tpg-colors.h
@@ -0,0 +1,64 @@
+/*
+ * vivid-tpg-colors.h - Color definitions for the test pattern generator
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_COLORS_H_
+#define _VIVID_COLORS_H_
+
+struct color {
+	unsigned char r, g, b;
+};
+
+struct color16 {
+	int r, g, b;
+};
+
+enum tpg_color {
+	TPG_COLOR_CSC_WHITE,
+	TPG_COLOR_CSC_YELLOW,
+	TPG_COLOR_CSC_CYAN,
+	TPG_COLOR_CSC_GREEN,
+	TPG_COLOR_CSC_MAGENTA,
+	TPG_COLOR_CSC_RED,
+	TPG_COLOR_CSC_BLUE,
+	TPG_COLOR_CSC_BLACK,
+	TPG_COLOR_75_YELLOW,
+	TPG_COLOR_75_CYAN,
+	TPG_COLOR_75_GREEN,
+	TPG_COLOR_75_MAGENTA,
+	TPG_COLOR_75_RED,
+	TPG_COLOR_75_BLUE,
+	TPG_COLOR_100_WHITE,
+	TPG_COLOR_100_YELLOW,
+	TPG_COLOR_100_CYAN,
+	TPG_COLOR_100_GREEN,
+	TPG_COLOR_100_MAGENTA,
+	TPG_COLOR_100_RED,
+	TPG_COLOR_100_BLUE,
+	TPG_COLOR_100_BLACK,
+	TPG_COLOR_TEXTFG,
+	TPG_COLOR_TEXTBG,
+	TPG_COLOR_RANDOM,
+	TPG_COLOR_RAMP,
+	TPG_COLOR_MAX = TPG_COLOR_RAMP + 256
+};
+
+extern const struct color tpg_colors[TPG_COLOR_MAX];
+extern const struct color16 tpg_csc_colors[V4L2_COLORSPACE_SRGB + 1][TPG_COLOR_CSC_BLACK + 1];
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-tpg.c b/drivers/media/platform/vivid/vivid-tpg.c
new file mode 100644
index 0000000..8576b2c
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-tpg.c
@@ -0,0 +1,1439 @@
+/*
+ * vivid-tpg.c - Test Pattern Generator
+ *
+ * Note: gen_twopix and tpg_gen_text are based on code from vivi.c. See the
+ * vivi.c source for the copyright information of those functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include "vivid-tpg.h"
+
+/* Must remain in sync with enum tpg_pattern */
+const char * const tpg_pattern_strings[] = {
+	"75% Colorbar",
+	"100% Colorbar",
+	"CSC Colorbar",
+	"Horizontal 100% Colorbar",
+	"100% Color Squares",
+	"100% Black",
+	"100% White",
+	"100% Red",
+	"100% Green",
+	"100% Blue",
+	"16x16 Checkers",
+	"1x1 Checkers",
+	"Alternating Hor Lines",
+	"Alternating Vert Lines",
+	"One Pixel Wide Cross",
+	"Two Pixels Wide Cross",
+	"Ten Pixels Wide Cross",
+	"Gray Ramp",
+	"Noise",
+	NULL
+};
+
+/* Must remain in sync with enum tpg_aspect */
+const char * const tpg_aspect_strings[] = {
+	"Source Width x Height",
+	"4x3",
+	"14x9",
+	"16x9",
+	"16x9 Anamorphic",
+	NULL
+};
+
+/*
+ * Sine table: sin[0] = 127 * sin(-180 degrees)
+ *             sin[128] = 127 * sin(0 degrees)
+ *             sin[256] = 127 * sin(180 degrees)
+ */
+static const s8 sin[257] = {
+	   0,   -4,   -7,  -11,  -13,  -18,  -20,  -22,  -26,  -29,  -33,  -35,  -37,  -41,  -43,  -48,
+	 -50,  -52,  -56,  -58,  -62,  -63,  -65,  -69,  -71,  -75,  -76,  -78,  -82,  -83,  -87,  -88,
+	 -90,  -93,  -94,  -97,  -99, -101, -103, -104, -107, -108, -110, -111, -112, -114, -115, -117,
+	-118, -119, -120, -121, -122, -123, -123, -124, -125, -125, -126, -126, -127, -127, -127, -127,
+	-127, -127, -127, -127, -126, -126, -125, -125, -124, -124, -123, -122, -121, -120, -119, -118,
+	-117, -116, -114, -113, -111, -110, -109, -107, -105, -103, -101, -100,  -97,  -96,  -93,  -91,
+	 -90,  -87,  -85,  -82,  -80,  -76,  -75,  -73,  -69,  -67,  -63,  -62,  -60,  -56,  -54,  -50,
+	 -48,  -46,  -41,  -39,  -35,  -33,  -31,  -26,  -24,  -20,  -18,  -15,  -11,   -9,   -4,   -2,
+	   0,    2,    4,    9,   11,   15,   18,   20,   24,   26,   31,   33,   35,   39,   41,   46,
+	  48,   50,   54,   56,   60,   62,   64,   67,   69,   73,   75,   76,   80,   82,   85,   87,
+	  90,   91,   93,   96,   97,  100,  101,  103,  105,  107,  109,  110,  111,  113,  114,  116,
+	 117,  118,  119,  120,  121,  122,  123,  124,  124,  125,  125,  126,  126,  127,  127,  127,
+	 127,  127,  127,  127,  127,  126,  126,  125,  125,  124,  123,  123,  122,  121,  120,  119,
+	 118,  117,  115,  114,  112,  111,  110,  108,  107,  104,  103,  101,   99,   97,   94,   93,
+	  90,   88,   87,   83,   82,   78,   76,   75,   71,   69,   65,   64,   62,   58,   56,   52,
+	  50,   48,   43,   41,   37,   35,   33,   29,   26,   22,   20,   18,   13,   11,    7,    4,
+	   0,
+};
+
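+/*
+ * cos() reuses the sine table with a 64-entry (90 degree) offset. One table
+ * entry corresponds to 360/256 degrees, so indexing with sin[128 + hue]
+ * maps hue values of -128..128 onto -180..+180 degrees.
+ */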
+#define cos(idx) sin[((idx) + 64) % sizeof(sin)]
+
+/* Global font descriptor */
+static const u8 *font8x16;
+
+void tpg_set_font(const u8 *f)
+{
+	font8x16 = f;
+}
+
+void tpg_init(struct tpg_data *tpg, unsigned w, unsigned h)
+{
+	memset(tpg, 0, sizeof(*tpg));
+	tpg->scaled_width = tpg->src_width = w;
+	tpg->src_height = tpg->buf_height = h;
+	tpg->crop.width = tpg->compose.width = w;
+	tpg->crop.height = tpg->compose.height = h;
+	tpg->recalc_colors = true;
+	tpg->recalc_square_border = true;
+	tpg->brightness = 128;
+	tpg->contrast = 128;
+	tpg->saturation = 128;
+	tpg->hue = 0;
+	tpg->mv_hor_mode = TPG_MOVE_NONE;
+	tpg->mv_vert_mode = TPG_MOVE_NONE;
+	tpg->field = V4L2_FIELD_NONE;
+	tpg_s_fourcc(tpg, V4L2_PIX_FMT_RGB24);
+	tpg->colorspace = V4L2_COLORSPACE_SRGB;
+	tpg->perc_fill = 100;
+}
+
+int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
+{
+	unsigned pat;
+	unsigned plane;
+
+	tpg->max_line_width = max_w;
+	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++) {
+		for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+			unsigned pixelsz = plane ? 1 : 4;
+
+			tpg->lines[pat][plane] = vzalloc(max_w * 2 * pixelsz);
+			if (!tpg->lines[pat][plane])
+				return -ENOMEM;
+		}
+	}
+	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+		unsigned pixelsz = plane ? 1 : 4;
+
+		tpg->contrast_line[plane] = vzalloc(max_w * pixelsz);
+		if (!tpg->contrast_line[plane])
+			return -ENOMEM;
+		tpg->black_line[plane] = vzalloc(max_w * pixelsz);
+		if (!tpg->black_line[plane])
+			return -ENOMEM;
+		tpg->random_line[plane] = vzalloc(max_w * pixelsz);
+		if (!tpg->random_line[plane])
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+void tpg_free(struct tpg_data *tpg)
+{
+	unsigned pat;
+	unsigned plane;
+
+	for (pat = 0; pat < TPG_MAX_PAT_LINES; pat++)
+		for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+			vfree(tpg->lines[pat][plane]);
+			tpg->lines[pat][plane] = NULL;
+		}
+	for (plane = 0; plane < TPG_MAX_PLANES; plane++) {
+		vfree(tpg->contrast_line[plane]);
+		vfree(tpg->black_line[plane]);
+		vfree(tpg->random_line[plane]);
+		tpg->contrast_line[plane] = NULL;
+		tpg->black_line[plane] = NULL;
+		tpg->random_line[plane] = NULL;
+	}
+}
+
+bool tpg_s_fourcc(struct tpg_data *tpg, u32 fourcc)
+{
+	tpg->fourcc = fourcc;
+	tpg->planes = 1;
+	tpg->recalc_colors = true;
+	switch (fourcc) {
+	case V4L2_PIX_FMT_RGB565:
+	case V4L2_PIX_FMT_RGB565X:
+	case V4L2_PIX_FMT_RGB555:
+	case V4L2_PIX_FMT_XRGB555:
+	case V4L2_PIX_FMT_ARGB555:
+	case V4L2_PIX_FMT_RGB555X:
+	case V4L2_PIX_FMT_RGB24:
+	case V4L2_PIX_FMT_BGR24:
+	case V4L2_PIX_FMT_RGB32:
+	case V4L2_PIX_FMT_BGR32:
+	case V4L2_PIX_FMT_XRGB32:
+	case V4L2_PIX_FMT_XBGR32:
+	case V4L2_PIX_FMT_ARGB32:
+	case V4L2_PIX_FMT_ABGR32:
+		tpg->is_yuv = 0;
+		break;
+	case V4L2_PIX_FMT_NV16M:
+	case V4L2_PIX_FMT_NV61M:
+		tpg->planes = 2;
+		/* fall-through */
+	case V4L2_PIX_FMT_YUYV:
+	case V4L2_PIX_FMT_UYVY:
+	case V4L2_PIX_FMT_YVYU:
+	case V4L2_PIX_FMT_VYUY:
+		tpg->is_yuv = 1;
+		break;
+	default:
+		return false;
+	}
+
+	switch (fourcc) {
+	case V4L2_PIX_FMT_RGB565:
+	case V4L2_PIX_FMT_RGB565X:
+	case V4L2_PIX_FMT_RGB555:
+	case V4L2_PIX_FMT_XRGB555:
+	case V4L2_PIX_FMT_ARGB555:
+	case V4L2_PIX_FMT_RGB555X:
+	case V4L2_PIX_FMT_YUYV:
+	case V4L2_PIX_FMT_UYVY:
+	case V4L2_PIX_FMT_YVYU:
+	case V4L2_PIX_FMT_VYUY:
+		tpg->twopixelsize[0] = 2 * 2;
+		break;
+	case V4L2_PIX_FMT_RGB24:
+	case V4L2_PIX_FMT_BGR24:
+		tpg->twopixelsize[0] = 2 * 3;
+		break;
+	case V4L2_PIX_FMT_RGB32:
+	case V4L2_PIX_FMT_BGR32:
+	case V4L2_PIX_FMT_XRGB32:
+	case V4L2_PIX_FMT_XBGR32:
+	case V4L2_PIX_FMT_ARGB32:
+	case V4L2_PIX_FMT_ABGR32:
+		tpg->twopixelsize[0] = 2 * 4;
+		break;
+	case V4L2_PIX_FMT_NV16M:
+	case V4L2_PIX_FMT_NV61M:
+		tpg->twopixelsize[0] = 2;
+		tpg->twopixelsize[1] = 2;
+		break;
+	}
+	return true;
+}
+
+void tpg_s_crop_compose(struct tpg_data *tpg, const struct v4l2_rect *crop,
+		const struct v4l2_rect *compose)
+{
+	tpg->crop = *crop;
+	tpg->compose = *compose;
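+	/*
+	 * The scaled line width is the source width scaled by the
+	 * compose-to-crop width ratio, rounded up, made even and clamped
+	 * to the range [2, max_line_width].
+	 */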
+	tpg->scaled_width = (tpg->src_width * tpg->compose.width +
+				 tpg->crop.width - 1) / tpg->crop.width;
+	tpg->scaled_width &= ~1;
+	if (tpg->scaled_width > tpg->max_line_width)
+		tpg->scaled_width = tpg->max_line_width;
+	if (tpg->scaled_width < 2)
+		tpg->scaled_width = 2;
+	tpg->recalc_lines = true;
+}
+
+void tpg_reset_source(struct tpg_data *tpg, unsigned width, unsigned height,
+		       enum v4l2_field field)
+{
+	unsigned p;
+
+	tpg->src_width = width;
+	tpg->src_height = height;
+	tpg->field = field;
+	tpg->buf_height = height;
+	if (V4L2_FIELD_HAS_T_OR_B(field))
+		tpg->buf_height /= 2;
+	tpg->scaled_width = width;
+	tpg->crop.top = tpg->crop.left = 0;
+	tpg->crop.width = width;
+	tpg->crop.height = height;
+	tpg->compose.top = tpg->compose.left = 0;
+	tpg->compose.width = width;
+	tpg->compose.height = tpg->buf_height;
+	for (p = 0; p < tpg->planes; p++)
+		tpg->bytesperline[p] = width * tpg->twopixelsize[p] / 2;
+	tpg->recalc_square_border = true;
+}
+
+static enum tpg_color tpg_get_textbg_color(struct tpg_data *tpg)
+{
+	switch (tpg->pattern) {
+	case TPG_PAT_BLACK:
+		return TPG_COLOR_100_WHITE;
+	case TPG_PAT_CSC_COLORBAR:
+		return TPG_COLOR_CSC_BLACK;
+	default:
+		return TPG_COLOR_100_BLACK;
+	}
+}
+
+static enum tpg_color tpg_get_textfg_color(struct tpg_data *tpg)
+{
+	switch (tpg->pattern) {
+	case TPG_PAT_75_COLORBAR:
+	case TPG_PAT_CSC_COLORBAR:
+		return TPG_COLOR_CSC_WHITE;
+	case TPG_PAT_BLACK:
+		return TPG_COLOR_100_BLACK;
+	default:
+		return TPG_COLOR_100_WHITE;
+	}
+}
+
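+/*
+ * RGB <-> YCbCr helpers: the color_to_* functions below use conversion
+ * coefficients scaled by 65536 (hence the >> 16), the ycbcr_to_* functions
+ * use coefficients scaled by 4096 (hence the >> 12). The r, g and b values
+ * are expected in the 4-bit upscaled 0 - 0xff0 range.
+ */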
+static u16 color_to_y(struct tpg_data *tpg, int r, int g, int b)
+{
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		return ((16829 * r + 33039 * g + 6416 * b + 16 * 32768) >> 16) + (16 << 4);
+	case V4L2_COLORSPACE_SMPTE240M:
+		return ((11932 * r + 39455 * g + 4897 * b + 16 * 32768) >> 16) + (16 << 4);
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		return ((11966 * r + 40254 * g + 4064 * b + 16 * 32768) >> 16) + (16 << 4);
+	}
+}
+
+static u16 color_to_cb(struct tpg_data *tpg, int r, int g, int b)
+{
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		return ((-9714 * r - 19070 * g + 28784 * b + 16 * 32768) >> 16) + (128 << 4);
+	case V4L2_COLORSPACE_SMPTE240M:
+		return ((-6684 * r - 22100 * g + 28784 * b + 16 * 32768) >> 16) + (128 << 4);
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		return ((-6596 * r - 22189 * g + 28784 * b + 16 * 32768) >> 16) + (128 << 4);
+	}
+}
+
+static u16 color_to_cr(struct tpg_data *tpg, int r, int g, int b)
+{
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		return ((28784 * r - 24103 * g - 4681 * b + 16 * 32768) >> 16) + (128 << 4);
+	case V4L2_COLORSPACE_SMPTE240M:
+		return ((28784 * r - 25606 * g - 3178 * b + 16 * 32768) >> 16) + (128 << 4);
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		return ((28784 * r - 26145 * g - 2639 * b + 16 * 32768) >> 16) + (128 << 4);
+	}
+}
+
+static u16 ycbcr_to_r(struct tpg_data *tpg, int y, int cb, int cr)
+{
+	int r;
+
+	y -= 16 << 4;
+	cb -= 128 << 4;
+	cr -= 128 << 4;
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		r = 4769 * y + 6537 * cr;
+		break;
+	case V4L2_COLORSPACE_SMPTE240M:
+		r = 4769 * y + 7376 * cr;
+		break;
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		r = 4769 * y + 7343 * cr;
+		break;
+	}
+	return clamp(r >> 12, 0, 0xff0);
+}
+
+static u16 ycbcr_to_g(struct tpg_data *tpg, int y, int cb, int cr)
+{
+	int g;
+
+	y -= 16 << 4;
+	cb -= 128 << 4;
+	cr -= 128 << 4;
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		g = 4769 * y - 1605 * cb - 3330 * cr;
+		break;
+	case V4L2_COLORSPACE_SMPTE240M:
+		g = 4769 * y - 1055 * cb - 2341 * cr;
+		break;
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		g = 4769 * y - 873 * cb - 2183 * cr;
+		break;
+	}
+	return clamp(g >> 12, 0, 0xff0);
+}
+
+static u16 ycbcr_to_b(struct tpg_data *tpg, int y, int cb, int cr)
+{
+	int b;
+
+	y -= 16 << 4;
+	cb -= 128 << 4;
+	cr -= 128 << 4;
+	switch (tpg->colorspace) {
+	case V4L2_COLORSPACE_SMPTE170M:
+	case V4L2_COLORSPACE_470_SYSTEM_M:
+	case V4L2_COLORSPACE_470_SYSTEM_BG:
+		b = 4769 * y + 7343 * cb;
+		break;
+	case V4L2_COLORSPACE_SMPTE240M:
+		b = 4769 * y + 8552 * cb;
+		break;
+	case V4L2_COLORSPACE_REC709:
+	case V4L2_COLORSPACE_SRGB:
+	default:
+		b = 4769 * y + 8652 * cb;
+		break;
+	}
+	return clamp(b >> 12, 0, 0xff0);
+}
+
+/* precalculate color bar values to speed up rendering */
+static void precalculate_color(struct tpg_data *tpg, int k)
+{
+	int col = k;
+	int r = tpg_colors[col].r;
+	int g = tpg_colors[col].g;
+	int b = tpg_colors[col].b;
+
+	if (k == TPG_COLOR_TEXTBG) {
+		col = tpg_get_textbg_color(tpg);
+
+		r = tpg_colors[col].r;
+		g = tpg_colors[col].g;
+		b = tpg_colors[col].b;
+	} else if (k == TPG_COLOR_TEXTFG) {
+		col = tpg_get_textfg_color(tpg);
+
+		r = tpg_colors[col].r;
+		g = tpg_colors[col].g;
+		b = tpg_colors[col].b;
+	} else if (tpg->pattern == TPG_PAT_NOISE) {
+		r = g = b = prandom_u32_max(256);
+	} else if (k == TPG_COLOR_RANDOM) {
+		r = g = b = tpg->qual_offset + prandom_u32_max(196);
+	} else if (k >= TPG_COLOR_RAMP) {
+		r = g = b = k - TPG_COLOR_RAMP;
+	}
+
+	if (tpg->pattern == TPG_PAT_CSC_COLORBAR && col <= TPG_COLOR_CSC_BLACK) {
+		r = tpg_csc_colors[tpg->colorspace][col].r;
+		g = tpg_csc_colors[tpg->colorspace][col].g;
+		b = tpg_csc_colors[tpg->colorspace][col].b;
+	} else {
+		r <<= 4;
+		g <<= 4;
+		b <<= 4;
+	}
+	if (tpg->qual == TPG_QUAL_GRAY)
+		r = g = b = color_to_y(tpg, r, g, b);
+
+	/*
+	 * The assumption is that the RGB output is always full range,
+	 * so only if the rgb_range overrides the 'real' rgb range do
+	 * we need to convert the RGB values.
+	 *
+	 * Currently there is no way of signalling to userspace if you
+	 * are actually giving it limited range RGB (or full range
+	 * YUV for that matter).
+	 *
+	 * Remember that r, g and b are still in the 0 - 0xff0 range.
+	 */
+	if (tpg->real_rgb_range == V4L2_DV_RGB_RANGE_LIMITED &&
+	    tpg->rgb_range == V4L2_DV_RGB_RANGE_FULL) {
+		/*
+		 * Convert from full range (which is what r, g and b are)
+		 * to limited range (which is the 'real' RGB range), which
+		 * is then interpreted as full range.
+		 */
+		r = (r * 219) / 255 + (16 << 4);
+		g = (g * 219) / 255 + (16 << 4);
+		b = (b * 219) / 255 + (16 << 4);
+	} else if (tpg->real_rgb_range != V4L2_DV_RGB_RANGE_LIMITED &&
+		   tpg->rgb_range == V4L2_DV_RGB_RANGE_LIMITED) {
+		/*
+		 * Clamp r, g and b to the limited range and convert to full
+		 * range since that's what we deliver.
+		 */
+		r = clamp(r, 16 << 4, 235 << 4);
+		g = clamp(g, 16 << 4, 235 << 4);
+		b = clamp(b, 16 << 4, 235 << 4);
+		r = (r - (16 << 4)) * 255 / 219;
+		g = (g - (16 << 4)) * 255 / 219;
+		b = (b - (16 << 4)) * 255 / 219;
+	}
+
+	if (tpg->brightness != 128 || tpg->contrast != 128 ||
+	    tpg->saturation != 128 || tpg->hue) {
+		/* Apply brightness, contrast, saturation and hue in the YCbCr domain */
+
+		/* First convert to YCbCr */
+		int y = color_to_y(tpg, r, g, b);	/* Luma */
+		int cb = color_to_cb(tpg, r, g, b);	/* Cb */
+		int cr = color_to_cr(tpg, r, g, b);	/* Cr */
+		int tmp_cb, tmp_cr;
+
+		y = (16 << 4) + ((y - (16 << 4)) * tpg->contrast) / 128;
+		y += (tpg->brightness << 4) - (128 << 4);
+
+		cb -= 128 << 4;
+		cr -= 128 << 4;
+		tmp_cb = (cb * cos(128 + tpg->hue)) / 127 + (cr * sin[128 + tpg->hue]) / 127;
+		tmp_cr = (cr * cos(128 + tpg->hue)) / 127 - (cb * sin[128 + tpg->hue]) / 127;
+
+		cb = (128 << 4) + (tmp_cb * tpg->contrast * tpg->saturation) / (128 * 128);
+		cr = (128 << 4) + (tmp_cr * tpg->contrast * tpg->saturation) / (128 * 128);
+		if (tpg->is_yuv) {
+			tpg->colors[k][0] = clamp(y >> 4, 1, 254);
+			tpg->colors[k][1] = clamp(cb >> 4, 1, 254);
+			tpg->colors[k][2] = clamp(cr >> 4, 1, 254);
+			return;
+		}
+		r = ycbcr_to_r(tpg, y, cb, cr);
+		g = ycbcr_to_g(tpg, y, cb, cr);
+		b = ycbcr_to_b(tpg, y, cb, cr);
+	}
+
+	if (tpg->is_yuv) {
+		/* Convert to YCbCr */
+		u16 y = color_to_y(tpg, r, g, b);	/* Luma */
+		u16 cb = color_to_cb(tpg, r, g, b);	/* Cb */
+		u16 cr = color_to_cr(tpg, r, g, b);	/* Cr */
+
+		tpg->colors[k][0] = clamp(y >> 4, 1, 254);
+		tpg->colors[k][1] = clamp(cb >> 4, 1, 254);
+		tpg->colors[k][2] = clamp(cr >> 4, 1, 254);
+	} else {
+		switch (tpg->fourcc) {
+		case V4L2_PIX_FMT_RGB565:
+		case V4L2_PIX_FMT_RGB565X:
+			r >>= 7;
+			g >>= 6;
+			b >>= 7;
+			break;
+		case V4L2_PIX_FMT_RGB555:
+		case V4L2_PIX_FMT_XRGB555:
+		case V4L2_PIX_FMT_ARGB555:
+		case V4L2_PIX_FMT_RGB555X:
+			r >>= 7;
+			g >>= 7;
+			b >>= 7;
+			break;
+		default:
+			r >>= 4;
+			g >>= 4;
+			b >>= 4;
+			break;
+		}
+
+		tpg->colors[k][0] = r;
+		tpg->colors[k][1] = g;
+		tpg->colors[k][2] = b;
+	}
+}
+
+static void tpg_precalculate_colors(struct tpg_data *tpg)
+{
+	int k;
+
+	for (k = 0; k < TPG_COLOR_MAX; k++)
+		precalculate_color(tpg, k);
+}
+
+/* 'odd' is true for pixels 1, 3, 5, etc. and false for pixels 0, 2, 4, etc. */
+static void gen_twopix(struct tpg_data *tpg,
+		u8 buf[TPG_MAX_PLANES][8], int color, bool odd)
+{
+	unsigned offset = odd * tpg->twopixelsize[0] / 2;
+	u8 alpha = tpg->alpha_component;
+	u8 r_y, g_u, b_v;
+
+	if (tpg->alpha_red_only && color != TPG_COLOR_CSC_RED &&
+				   color != TPG_COLOR_100_RED &&
+				   color != TPG_COLOR_75_RED)
+		alpha = 0;
+	if (color == TPG_COLOR_RANDOM)
+		precalculate_color(tpg, color);
+	r_y = tpg->colors[color][0]; /* R or precalculated Y */
+	g_u = tpg->colors[color][1]; /* G or precalculated U */
+	b_v = tpg->colors[color][2]; /* B or precalculated V */
+
+	switch (tpg->fourcc) {
+	case V4L2_PIX_FMT_NV16M:
+		buf[0][offset] = r_y;
+		buf[1][offset] = odd ? b_v : g_u;
+		break;
+	case V4L2_PIX_FMT_NV61M:
+		buf[0][offset] = r_y;
+		buf[1][offset] = odd ? g_u : b_v;
+		break;
+
+	case V4L2_PIX_FMT_YUYV:
+		buf[0][offset] = r_y;
+		buf[0][offset + 1] = odd ? b_v : g_u;
+		break;
+	case V4L2_PIX_FMT_UYVY:
+		buf[0][offset] = odd ? b_v : g_u;
+		buf[0][offset + 1] = r_y;
+		break;
+	case V4L2_PIX_FMT_YVYU:
+		buf[0][offset] = r_y;
+		buf[0][offset + 1] = odd ? g_u : b_v;
+		break;
+	case V4L2_PIX_FMT_VYUY:
+		buf[0][offset] = odd ? g_u : b_v;
+		buf[0][offset + 1] = r_y;
+		break;
+	case V4L2_PIX_FMT_RGB565:
+		buf[0][offset] = (g_u << 5) | b_v;
+		buf[0][offset + 1] = (r_y << 3) | (g_u >> 3);
+		break;
+	case V4L2_PIX_FMT_RGB565X:
+		buf[0][offset] = (r_y << 3) | (g_u >> 3);
+		buf[0][offset + 1] = (g_u << 5) | b_v;
+		break;
+	case V4L2_PIX_FMT_RGB555:
+	case V4L2_PIX_FMT_XRGB555:
+		alpha = 0;
+		/* fall through */
+	case V4L2_PIX_FMT_ARGB555:
+		buf[0][offset] = (g_u << 5) | b_v;
+		buf[0][offset + 1] = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
+		break;
+	case V4L2_PIX_FMT_RGB555X:
+		buf[0][offset] = (alpha & 0x80) | (r_y << 2) | (g_u >> 3);
+		buf[0][offset + 1] = (g_u << 5) | b_v;
+		break;
+	case V4L2_PIX_FMT_RGB24:
+		buf[0][offset] = r_y;
+		buf[0][offset + 1] = g_u;
+		buf[0][offset + 2] = b_v;
+		break;
+	case V4L2_PIX_FMT_BGR24:
+		buf[0][offset] = b_v;
+		buf[0][offset + 1] = g_u;
+		buf[0][offset + 2] = r_y;
+		break;
+	case V4L2_PIX_FMT_RGB32:
+	case V4L2_PIX_FMT_XRGB32:
+		alpha = 0;
+		/* fall through */
+	case V4L2_PIX_FMT_ARGB32:
+		buf[0][offset] = alpha;
+		buf[0][offset + 1] = r_y;
+		buf[0][offset + 2] = g_u;
+		buf[0][offset + 3] = b_v;
+		break;
+	case V4L2_PIX_FMT_BGR32:
+	case V4L2_PIX_FMT_XBGR32:
+		alpha = 0;
+		/* fall through */
+	case V4L2_PIX_FMT_ABGR32:
+		buf[0][offset] = b_v;
+		buf[0][offset + 1] = g_u;
+		buf[0][offset + 2] = r_y;
+		buf[0][offset + 3] = alpha;
+		break;
+	}
+}
+
+/* Return how many pattern lines are used by the current pattern. */
+static unsigned tpg_get_pat_lines(struct tpg_data *tpg)
+{
+	switch (tpg->pattern) {
+	case TPG_PAT_CHECKERS_16X16:
+	case TPG_PAT_CHECKERS_1X1:
+	case TPG_PAT_ALTERNATING_HLINES:
+	case TPG_PAT_CROSS_1_PIXEL:
+	case TPG_PAT_CROSS_2_PIXELS:
+	case TPG_PAT_CROSS_10_PIXELS:
+		return 2;
+	case TPG_PAT_100_COLORSQUARES:
+	case TPG_PAT_100_HCOLORBAR:
+		return 8;
+	default:
+		return 1;
+	}
+}
+
+/* Which pattern line should be used for the given frame line. */
+static unsigned tpg_get_pat_line(struct tpg_data *tpg, unsigned line)
+{
+	switch (tpg->pattern) {
+	case TPG_PAT_CHECKERS_16X16:
+		return (line >> 4) & 1;
+	case TPG_PAT_CHECKERS_1X1:
+	case TPG_PAT_ALTERNATING_HLINES:
+		return line & 1;
+	case TPG_PAT_100_COLORSQUARES:
+	case TPG_PAT_100_HCOLORBAR:
+		return (line * 8) / tpg->src_height;
+	case TPG_PAT_CROSS_1_PIXEL:
+		return line == tpg->src_height / 2;
+	case TPG_PAT_CROSS_2_PIXELS:
+		return (line + 1) / 2 == tpg->src_height / 4;
+	case TPG_PAT_CROSS_10_PIXELS:
+		return (line + 10) / 20 == tpg->src_height / 40;
+	default:
+		return 0;
+	}
+}
+
+/*
+ * Which color should be used for the given pattern line and X coordinate.
+ * Note: x is in the range 0 to 2 * tpg->src_width.
+ */
+static enum tpg_color tpg_get_color(struct tpg_data *tpg, unsigned pat_line, unsigned x)
+{
+	/* The maximum number of bars is TPG_COLOR_MAX; otherwise, the input
+	   print code should be modified. */
+	static const enum tpg_color bars[3][8] = {
+		/* Standard ITU-R 75% color bar sequence */
+		{ TPG_COLOR_CSC_WHITE,   TPG_COLOR_75_YELLOW,
+		  TPG_COLOR_75_CYAN,     TPG_COLOR_75_GREEN,
+		  TPG_COLOR_75_MAGENTA,  TPG_COLOR_75_RED,
+		  TPG_COLOR_75_BLUE,     TPG_COLOR_100_BLACK, },
+		/* Standard ITU-R 100% color bar sequence */
+		{ TPG_COLOR_100_WHITE,   TPG_COLOR_100_YELLOW,
+		  TPG_COLOR_100_CYAN,    TPG_COLOR_100_GREEN,
+		  TPG_COLOR_100_MAGENTA, TPG_COLOR_100_RED,
+		  TPG_COLOR_100_BLUE,    TPG_COLOR_100_BLACK, },
+		/* Color bar sequence suitable to test CSC */
+		{ TPG_COLOR_CSC_WHITE,   TPG_COLOR_CSC_YELLOW,
+		  TPG_COLOR_CSC_CYAN,    TPG_COLOR_CSC_GREEN,
+		  TPG_COLOR_CSC_MAGENTA, TPG_COLOR_CSC_RED,
+		  TPG_COLOR_CSC_BLUE,    TPG_COLOR_CSC_BLACK, },
+	};
+
+	switch (tpg->pattern) {
+	case TPG_PAT_75_COLORBAR:
+	case TPG_PAT_100_COLORBAR:
+	case TPG_PAT_CSC_COLORBAR:
+		return bars[tpg->pattern][((x * 8) / tpg->src_width) % 8];
+	case TPG_PAT_100_COLORSQUARES:
+		return bars[1][(pat_line + (x * 8) / tpg->src_width) % 8];
+	case TPG_PAT_100_HCOLORBAR:
+		return bars[1][pat_line];
+	case TPG_PAT_BLACK:
+		return TPG_COLOR_100_BLACK;
+	case TPG_PAT_WHITE:
+		return TPG_COLOR_100_WHITE;
+	case TPG_PAT_RED:
+		return TPG_COLOR_100_RED;
+	case TPG_PAT_GREEN:
+		return TPG_COLOR_100_GREEN;
+	case TPG_PAT_BLUE:
+		return TPG_COLOR_100_BLUE;
+	case TPG_PAT_CHECKERS_16X16:
+		return (((x >> 4) & 1) ^ (pat_line & 1)) ?
+			TPG_COLOR_100_BLACK : TPG_COLOR_100_WHITE;
+	case TPG_PAT_CHECKERS_1X1:
+		return ((x & 1) ^ (pat_line & 1)) ?
+			TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
+	case TPG_PAT_ALTERNATING_HLINES:
+		return pat_line ? TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
+	case TPG_PAT_ALTERNATING_VLINES:
+		return (x & 1) ? TPG_COLOR_100_WHITE : TPG_COLOR_100_BLACK;
+	case TPG_PAT_CROSS_1_PIXEL:
+		if (pat_line || (x % tpg->src_width) == tpg->src_width / 2)
+			return TPG_COLOR_100_BLACK;
+		return TPG_COLOR_100_WHITE;
+	case TPG_PAT_CROSS_2_PIXELS:
+		if (pat_line || ((x % tpg->src_width) + 1) / 2 == tpg->src_width / 4)
+			return TPG_COLOR_100_BLACK;
+		return TPG_COLOR_100_WHITE;
+	case TPG_PAT_CROSS_10_PIXELS:
+		if (pat_line || ((x % tpg->src_width) + 10) / 20 == tpg->src_width / 40)
+			return TPG_COLOR_100_BLACK;
+		return TPG_COLOR_100_WHITE;
+	case TPG_PAT_GRAY_RAMP:
+		return TPG_COLOR_RAMP + ((x % tpg->src_width) * 256) / tpg->src_width;
+	default:
+		return TPG_COLOR_100_RED;
+	}
+}
+
+/*
+ * Given the pixel aspect ratio and video aspect ratio calculate the
+ * coordinates of a centered square and the coordinates of the border of
+ * the active video area. The coordinates are relative to the source
+ * frame rectangle.
+ */
+static void tpg_calculate_square_border(struct tpg_data *tpg)
+{
+	unsigned w = tpg->src_width;
+	unsigned h = tpg->src_height;
+	unsigned sq_w, sq_h;
+
+	sq_w = (w * 2 / 5) & ~1;
+	if (((w - sq_w) / 2) & 1)
+		sq_w += 2;
+	sq_h = sq_w;
+	tpg->square.width = sq_w;
+	if (tpg->vid_aspect == TPG_VIDEO_ASPECT_16X9_ANAMORPHIC) {
+		unsigned ana_sq_w = (sq_w / 4) * 3;
+
+		if (((w - ana_sq_w) / 2) & 1)
+			ana_sq_w += 2;
+		tpg->square.width = ana_sq_w;
+	}
+	tpg->square.left = (w - tpg->square.width) / 2;
+	if (tpg->pix_aspect == TPG_PIXEL_ASPECT_NTSC)
+		sq_h = sq_w * 10 / 11;
+	else if (tpg->pix_aspect == TPG_PIXEL_ASPECT_PAL)
+		sq_h = sq_w * 59 / 54;
+	tpg->square.height = sq_h;
+	tpg->square.top = (h - sq_h) / 2;
+	tpg->border.left = 0;
+	tpg->border.width = w;
+	tpg->border.top = 0;
+	tpg->border.height = h;
+	switch (tpg->vid_aspect) {
+	case TPG_VIDEO_ASPECT_4X3:
+		if (tpg->pix_aspect)
+			return;
+		if (3 * w >= 4 * h) {
+			tpg->border.width = ((4 * h) / 3) & ~1;
+			if (((w - tpg->border.width) / 2) & ~1)
+				tpg->border.width -= 2;
+			tpg->border.left = (w - tpg->border.width) / 2;
+			break;
+		}
+		tpg->border.height = ((3 * w) / 4) & ~1;
+		tpg->border.top = (h - tpg->border.height) / 2;
+		break;
+	case TPG_VIDEO_ASPECT_14X9_CENTRE:
+		if (tpg->pix_aspect) {
+			tpg->border.height = tpg->pix_aspect == TPG_PIXEL_ASPECT_NTSC ? 420 : 506;
+			tpg->border.top = (h - tpg->border.height) / 2;
+			break;
+		}
+		if (9 * w >= 14 * h) {
+			tpg->border.width = ((14 * h) / 9) & ~1;
+			if (((w - tpg->border.width) / 2) & ~1)
+				tpg->border.width -= 2;
+			tpg->border.left = (w - tpg->border.width) / 2;
+			break;
+		}
+		tpg->border.height = ((9 * w) / 14) & ~1;
+		tpg->border.top = (h - tpg->border.height) / 2;
+		break;
+	case TPG_VIDEO_ASPECT_16X9_CENTRE:
+		if (tpg->pix_aspect) {
+			tpg->border.height = tpg->pix_aspect == TPG_PIXEL_ASPECT_NTSC ? 368 : 442;
+			tpg->border.top = (h - tpg->border.height) / 2;
+			break;
+		}
+		if (9 * w >= 16 * h) {
+			tpg->border.width = ((16 * h) / 9) & ~1;
+			if (((w - tpg->border.width) / 2) & ~1)
+				tpg->border.width -= 2;
+			tpg->border.left = (w - tpg->border.width) / 2;
+			break;
+		}
+		tpg->border.height = ((9 * w) / 16) & ~1;
+		tpg->border.top = (h - tpg->border.height) / 2;
+		break;
+	default:
+		break;
+	}
+}
+
+static void tpg_precalculate_line(struct tpg_data *tpg)
+{
+	enum tpg_color contrast;
+	unsigned pat;
+	unsigned p;
+	unsigned x;
+
+	switch (tpg->pattern) {
+	case TPG_PAT_GREEN:
+		contrast = TPG_COLOR_100_RED;
+		break;
+	case TPG_PAT_CSC_COLORBAR:
+		contrast = TPG_COLOR_CSC_GREEN;
+		break;
+	default:
+		contrast = TPG_COLOR_100_GREEN;
+		break;
+	}
+
+	for (pat = 0; pat < tpg_get_pat_lines(tpg); pat++) {
+		/* Coarse scaling with Bresenham */
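+		/*
+		 * Each output pixel advances src_x by src_width / scaled_width;
+		 * the remainder accumulates in 'error', and whenever it reaches
+		 * scaled_width one extra source pixel is skipped.
+		 */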
+		unsigned int_part = tpg->src_width / tpg->scaled_width;
+		unsigned fract_part = tpg->src_width % tpg->scaled_width;
+		unsigned src_x = 0;
+		unsigned error = 0;
+
+		for (x = 0; x < tpg->scaled_width * 2; x += 2) {
+			unsigned real_x = src_x;
+			enum tpg_color color1, color2;
+			u8 pix[TPG_MAX_PLANES][8];
+
+			real_x = tpg->hflip ? tpg->src_width * 2 - real_x - 2 : real_x;
+			color1 = tpg_get_color(tpg, pat, real_x);
+
+			src_x += int_part;
+			error += fract_part;
+			if (error >= tpg->scaled_width) {
+				error -= tpg->scaled_width;
+				src_x++;
+			}
+
+			real_x = src_x;
+			real_x = tpg->hflip ? tpg->src_width * 2 - real_x - 2 : real_x;
+			color2 = tpg_get_color(tpg, pat, real_x);
+
+			src_x += int_part;
+			error += fract_part;
+			if (error >= tpg->scaled_width) {
+				error -= tpg->scaled_width;
+				src_x++;
+			}
+
+			gen_twopix(tpg, pix, tpg->hflip ? color2 : color1, 0);
+			gen_twopix(tpg, pix, tpg->hflip ? color1 : color2, 1);
+			for (p = 0; p < tpg->planes; p++) {
+				unsigned twopixsize = tpg->twopixelsize[p];
+				u8 *pos = tpg->lines[pat][p] + x * twopixsize / 2;
+
+				memcpy(pos, pix[p], twopixsize);
+			}
+		}
+	}
+	for (x = 0; x < tpg->scaled_width; x += 2) {
+		u8 pix[TPG_MAX_PLANES][8];
+
+		gen_twopix(tpg, pix, contrast, 0);
+		gen_twopix(tpg, pix, contrast, 1);
+		for (p = 0; p < tpg->planes; p++) {
+			unsigned twopixsize = tpg->twopixelsize[p];
+			u8 *pos = tpg->contrast_line[p] + x * twopixsize / 2;
+
+			memcpy(pos, pix[p], twopixsize);
+		}
+	}
+	for (x = 0; x < tpg->scaled_width; x += 2) {
+		u8 pix[TPG_MAX_PLANES][8];
+
+		gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 0);
+		gen_twopix(tpg, pix, TPG_COLOR_100_BLACK, 1);
+		for (p = 0; p < tpg->planes; p++) {
+			unsigned twopixsize = tpg->twopixelsize[p];
+			u8 *pos = tpg->black_line[p] + x * twopixsize / 2;
+
+			memcpy(pos, pix[p], twopixsize);
+		}
+	}
+	for (x = 0; x < tpg->scaled_width * 2; x += 2) {
+		u8 pix[TPG_MAX_PLANES][8];
+
+		gen_twopix(tpg, pix, TPG_COLOR_RANDOM, 0);
+		gen_twopix(tpg, pix, TPG_COLOR_RANDOM, 1);
+		for (p = 0; p < tpg->planes; p++) {
+			unsigned twopixsize = tpg->twopixelsize[p];
+			u8 *pos = tpg->random_line[p] + x * twopixsize / 2;
+
+			memcpy(pos, pix[p], twopixsize);
+		}
+	}
+	gen_twopix(tpg, tpg->textbg, TPG_COLOR_TEXTBG, 0);
+	gen_twopix(tpg, tpg->textbg, TPG_COLOR_TEXTBG, 1);
+	gen_twopix(tpg, tpg->textfg, TPG_COLOR_TEXTFG, 0);
+	gen_twopix(tpg, tpg->textfg, TPG_COLOR_TEXTFG, 1);
+}
+
+/* need this to do rgb24 rendering */
+typedef struct { u16 __; u8 _; } __packed x24;
+
+void tpg_gen_text(struct tpg_data *tpg, u8 *basep[TPG_MAX_PLANES][2],
+		int y, int x, char *text)
+{
+	int line;
+	unsigned step = V4L2_FIELD_HAS_T_OR_B(tpg->field) ? 2 : 1;
+	unsigned div = step;
+	unsigned first = 0;
+	unsigned len = strlen(text);
+	unsigned p;
+
+	if (font8x16 == NULL || basep == NULL)
+		return;
+
+	/* Check whether there is room to show the string at this position */
+	if (y + 16 >= tpg->compose.height || x + 8 >= tpg->compose.width)
+		return;
+
+	if (len > (tpg->compose.width - x) / 8)
+		len = (tpg->compose.width - x) / 8;
+	if (tpg->vflip)
+		y = tpg->compose.height - y - 16;
+	if (tpg->hflip)
+		x = tpg->compose.width - x - 8;
+	y += tpg->compose.top;
+	x += tpg->compose.left;
+	if (tpg->field == V4L2_FIELD_BOTTOM)
+		first = 1;
+	else if (tpg->field == V4L2_FIELD_SEQ_TB || tpg->field == V4L2_FIELD_SEQ_BT)
+		div = 2;
+
+	for (p = 0; p < tpg->planes; p++) {
+		/* Render the string using the 8x16 font for this plane's pixel size */
+#define PRINTSTR(PIXTYPE) do {	\
+	PIXTYPE fg;	\
+	PIXTYPE bg;	\
+	memcpy(&fg, tpg->textfg[p], sizeof(PIXTYPE));	\
+	memcpy(&bg, tpg->textbg[p], sizeof(PIXTYPE));	\
+	\
+	for (line = first; line < 16; line += step) {	\
+		int l = tpg->vflip ? 15 - line : line; \
+		PIXTYPE *pos = (PIXTYPE *)(basep[p][line & 1] + \
+			       ((y * step + l) / div) * tpg->bytesperline[p] + \
+			       x * sizeof(PIXTYPE));	\
+		unsigned s;	\
+	\
+		for (s = 0; s < len; s++) {	\
+			u8 chr = font8x16[text[s] * 16 + line];	\
+	\
+			if (tpg->hflip) { \
+				pos[7] = (chr & (0x01 << 7) ? fg : bg);	\
+				pos[6] = (chr & (0x01 << 6) ? fg : bg);	\
+				pos[5] = (chr & (0x01 << 5) ? fg : bg);	\
+				pos[4] = (chr & (0x01 << 4) ? fg : bg);	\
+				pos[3] = (chr & (0x01 << 3) ? fg : bg);	\
+				pos[2] = (chr & (0x01 << 2) ? fg : bg);	\
+				pos[1] = (chr & (0x01 << 1) ? fg : bg);	\
+				pos[0] = (chr & (0x01 << 0) ? fg : bg);	\
+			} else { \
+				pos[0] = (chr & (0x01 << 7) ? fg : bg);	\
+				pos[1] = (chr & (0x01 << 6) ? fg : bg);	\
+				pos[2] = (chr & (0x01 << 5) ? fg : bg);	\
+				pos[3] = (chr & (0x01 << 4) ? fg : bg);	\
+				pos[4] = (chr & (0x01 << 3) ? fg : bg);	\
+				pos[5] = (chr & (0x01 << 2) ? fg : bg);	\
+				pos[6] = (chr & (0x01 << 1) ? fg : bg);	\
+				pos[7] = (chr & (0x01 << 0) ? fg : bg);	\
+			} \
+	\
+			pos += tpg->hflip ? -8 : 8;	\
+		}	\
+	}	\
+} while (0)
+
+		switch (tpg->twopixelsize[p]) {
+		case 2:
+			PRINTSTR(u8); break;
+		case 4:
+			PRINTSTR(u16); break;
+		case 6:
+			PRINTSTR(x24); break;
+		case 8:
+			PRINTSTR(u32); break;
+		}
+	}
+}
+
+void tpg_update_mv_step(struct tpg_data *tpg)
+{
+	int factor = tpg->mv_hor_mode > TPG_MOVE_NONE ? -1 : 1;
+
+	if (tpg->hflip)
+		factor = -factor;
+	switch (tpg->mv_hor_mode) {
+	case TPG_MOVE_NEG_FAST:
+	case TPG_MOVE_POS_FAST:
+		tpg->mv_hor_step = ((tpg->src_width + 319) / 320) * 4;
+		break;
+	case TPG_MOVE_NEG:
+	case TPG_MOVE_POS:
+		tpg->mv_hor_step = ((tpg->src_width + 639) / 640) * 4;
+		break;
+	case TPG_MOVE_NEG_SLOW:
+	case TPG_MOVE_POS_SLOW:
+		tpg->mv_hor_step = 2;
+		break;
+	case TPG_MOVE_NONE:
+		tpg->mv_hor_step = 0;
+		break;
+	}
+	if (factor < 0)
+		tpg->mv_hor_step = tpg->src_width - tpg->mv_hor_step;
+
+	factor = tpg->mv_vert_mode > TPG_MOVE_NONE ? -1 : 1;
+	switch (tpg->mv_vert_mode) {
+	case TPG_MOVE_NEG_FAST:
+	case TPG_MOVE_POS_FAST:
+		tpg->mv_vert_step = ((tpg->src_width + 319) / 320) * 4;
+		break;
+	case TPG_MOVE_NEG:
+	case TPG_MOVE_POS:
+		tpg->mv_vert_step = ((tpg->src_width + 639) / 640) * 4;
+		break;
+	case TPG_MOVE_NEG_SLOW:
+	case TPG_MOVE_POS_SLOW:
+		tpg->mv_vert_step = 1;
+		break;
+	case TPG_MOVE_NONE:
+		tpg->mv_vert_step = 0;
+		break;
+	}
+	if (factor < 0)
+		tpg->mv_vert_step = tpg->src_height - tpg->mv_vert_step;
+}
+
+/* Map the line number relative to the crop rectangle to a frame line number */
+static unsigned tpg_calc_frameline(struct tpg_data *tpg, unsigned src_y,
+				    unsigned field)
+{
+	switch (field) {
+	case V4L2_FIELD_TOP:
+		return tpg->crop.top + src_y * 2;
+	case V4L2_FIELD_BOTTOM:
+		return tpg->crop.top + src_y * 2 + 1;
+	default:
+		return src_y + tpg->crop.top;
+	}
+}
+
+/*
+ * Map the line number relative to the compose rectangle to a destination
+ * buffer line number.
+ */
+static unsigned tpg_calc_buffer_line(struct tpg_data *tpg, unsigned y,
+				    unsigned field)
+{
+	y += tpg->compose.top;
+	switch (field) {
+	case V4L2_FIELD_SEQ_TB:
+		if (y & 1)
+			return tpg->buf_height / 2 + y / 2;
+		return y / 2;
+	case V4L2_FIELD_SEQ_BT:
+		if (y & 1)
+			return y / 2;
+		return tpg->buf_height / 2 + y / 2;
+	default:
+		return y;
+	}
+}
+
+static void tpg_recalc(struct tpg_data *tpg)
+{
+	if (tpg->recalc_colors) {
+		tpg->recalc_colors = false;
+		tpg->recalc_lines = true;
+		tpg_precalculate_colors(tpg);
+	}
+	if (tpg->recalc_square_border) {
+		tpg->recalc_square_border = false;
+		tpg_calculate_square_border(tpg);
+	}
+	if (tpg->recalc_lines) {
+		tpg->recalc_lines = false;
+		tpg_precalculate_line(tpg);
+	}
+}
+
+void tpg_calc_text_basep(struct tpg_data *tpg,
+		u8 *basep[TPG_MAX_PLANES][2], unsigned p, u8 *vbuf)
+{
+	unsigned stride = tpg->bytesperline[p];
+
+	tpg_recalc(tpg);
+
+	basep[p][0] = vbuf;
+	basep[p][1] = vbuf;
+	if (tpg->field == V4L2_FIELD_SEQ_TB)
+		basep[p][1] += tpg->buf_height * stride / 2;
+	else if (tpg->field == V4L2_FIELD_SEQ_BT)
+		basep[p][0] += tpg->buf_height * stride / 2;
+}
+
+void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf)
+{
+	bool is_tv = std;
+	bool is_60hz = is_tv && (std & V4L2_STD_525_60);
+	unsigned mv_hor_old = tpg->mv_hor_count % tpg->src_width;
+	unsigned mv_hor_new = (tpg->mv_hor_count + tpg->mv_hor_step) % tpg->src_width;
+	unsigned mv_vert_old = tpg->mv_vert_count % tpg->src_height;
+	unsigned mv_vert_new = (tpg->mv_vert_count + tpg->mv_vert_step) % tpg->src_height;
+	unsigned wss_width;
+	unsigned f;
+	int hmax = (tpg->compose.height * tpg->perc_fill) / 100;
+	int h;
+	unsigned twopixsize = tpg->twopixelsize[p];
+	unsigned img_width = tpg->compose.width * twopixsize / 2;
+	unsigned line_offset;
+	unsigned left_pillar_width = 0;
+	unsigned right_pillar_start = img_width;
+	unsigned stride = tpg->bytesperline[p];
+	unsigned factor = V4L2_FIELD_HAS_T_OR_B(tpg->field) ? 2 : 1;
+	u8 *orig_vbuf = vbuf;
+
+	/* Coarse scaling with Bresenham */
+	unsigned int_part = (tpg->crop.height / factor) / tpg->compose.height;
+	unsigned fract_part = (tpg->crop.height / factor) % tpg->compose.height;
+	unsigned src_y = 0;
+	unsigned error = 0;
+
+	tpg_recalc(tpg);
+
+	mv_hor_old = (mv_hor_old * tpg->scaled_width / tpg->src_width) & ~1;
+	mv_hor_new = (mv_hor_new * tpg->scaled_width / tpg->src_width) & ~1;
+	wss_width = tpg->crop.left < tpg->src_width / 2 ?
+			tpg->src_width / 2 - tpg->crop.left : 0;
+	if (wss_width > tpg->crop.width)
+		wss_width = tpg->crop.width;
+	wss_width = wss_width * tpg->scaled_width / tpg->src_width;
+
+	vbuf += tpg->compose.left * twopixsize / 2;
+	line_offset = tpg->crop.left * tpg->scaled_width / tpg->src_width;
+	line_offset = (line_offset & ~1) * twopixsize / 2;
+	if (tpg->crop.left < tpg->border.left) {
+		left_pillar_width = tpg->border.left - tpg->crop.left;
+		if (left_pillar_width > tpg->crop.width)
+			left_pillar_width = tpg->crop.width;
+		left_pillar_width = (left_pillar_width * tpg->scaled_width) / tpg->src_width;
+		left_pillar_width = (left_pillar_width & ~1) * twopixsize / 2;
+	}
+	if (tpg->crop.left + tpg->crop.width > tpg->border.left + tpg->border.width) {
+		right_pillar_start = tpg->border.left + tpg->border.width - tpg->crop.left;
+		right_pillar_start = (right_pillar_start * tpg->scaled_width) / tpg->src_width;
+		right_pillar_start = (right_pillar_start & ~1) * twopixsize / 2;
+		if (right_pillar_start > img_width)
+			right_pillar_start = img_width;
+	}
+
+	f = tpg->field == (is_60hz ? V4L2_FIELD_TOP : V4L2_FIELD_BOTTOM);
+
+	for (h = 0; h < tpg->compose.height; h++) {
+		bool even;
+		bool fill_blank = false;
+		unsigned frame_line;
+		unsigned buf_line;
+		unsigned pat_line_old;
+		unsigned pat_line_new;
+		u8 *linestart_older;
+		u8 *linestart_newer;
+		u8 *linestart_top;
+		u8 *linestart_bottom;
+
+		frame_line = tpg_calc_frameline(tpg, src_y, tpg->field);
+		even = !(frame_line & 1);
+		buf_line = tpg_calc_buffer_line(tpg, h, tpg->field);
+		src_y += int_part;
+		error += fract_part;
+		if (error >= tpg->compose.height) {
+			error -= tpg->compose.height;
+			src_y++;
+		}
+
+		if (h >= hmax) {
+			if (hmax == tpg->compose.height)
+				continue;
+			if (!tpg->perc_fill_blank)
+				continue;
+			fill_blank = true;
+		}
+
+		if (tpg->vflip)
+			frame_line = tpg->src_height - frame_line - 1;
+
+		if (fill_blank) {
+			linestart_older = tpg->contrast_line[p];
+			linestart_newer = tpg->contrast_line[p];
+		} else if (tpg->qual != TPG_QUAL_NOISE &&
+			   (frame_line < tpg->border.top ||
+			    frame_line >= tpg->border.top + tpg->border.height)) {
+			linestart_older = tpg->black_line[p];
+			linestart_newer = tpg->black_line[p];
+		} else if (tpg->pattern == TPG_PAT_NOISE || tpg->qual == TPG_QUAL_NOISE) {
+			linestart_older = tpg->random_line[p] +
+					  twopixsize * prandom_u32_max(tpg->src_width / 2);
+			linestart_newer = tpg->random_line[p] +
+					  twopixsize * prandom_u32_max(tpg->src_width / 2);
+		} else {
+			pat_line_old = tpg_get_pat_line(tpg,
+						(frame_line + mv_vert_old) % tpg->src_height);
+			pat_line_new = tpg_get_pat_line(tpg,
+						(frame_line + mv_vert_new) % tpg->src_height);
+			linestart_older = tpg->lines[pat_line_old][p] +
+					  mv_hor_old * twopixsize / 2;
+			linestart_newer = tpg->lines[pat_line_new][p] +
+					  mv_hor_new * twopixsize / 2;
+			linestart_older += line_offset;
+			linestart_newer += line_offset;
+		}
+		if (is_60hz) {
+			linestart_top = linestart_newer;
+			linestart_bottom = linestart_older;
+		} else {
+			linestart_top = linestart_older;
+			linestart_bottom = linestart_newer;
+		}
+
+		switch (tpg->field) {
+		case V4L2_FIELD_INTERLACED:
+		case V4L2_FIELD_INTERLACED_TB:
+		case V4L2_FIELD_SEQ_TB:
+		case V4L2_FIELD_SEQ_BT:
+			if (even)
+				memcpy(vbuf + buf_line * stride, linestart_top, img_width);
+			else
+				memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
+			break;
+		case V4L2_FIELD_INTERLACED_BT:
+			if (even)
+				memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
+			else
+				memcpy(vbuf + buf_line * stride, linestart_top, img_width);
+			break;
+		case V4L2_FIELD_TOP:
+			memcpy(vbuf + buf_line * stride, linestart_top, img_width);
+			break;
+		case V4L2_FIELD_BOTTOM:
+			memcpy(vbuf + buf_line * stride, linestart_bottom, img_width);
+			break;
+		case V4L2_FIELD_NONE:
+		default:
+			memcpy(vbuf + buf_line * stride, linestart_older, img_width);
+			break;
+		}
+
+		if (is_tv && !is_60hz && frame_line == 0 && wss_width) {
+			/*
+			 * Replace the first half of the top line of a 50 Hz frame
+			 * with random data to simulate a WSS signal.
+			 */
+			u8 *wss = tpg->random_line[p] +
+				  twopixsize * prandom_u32_max(tpg->src_width / 2);
+
+			memcpy(vbuf + buf_line * stride, wss, wss_width * twopixsize / 2);
+		}
+	}
+
+	vbuf = orig_vbuf;
+	vbuf += tpg->compose.left * twopixsize / 2;
+	src_y = 0;
+	error = 0;
+	for (h = 0; h < tpg->compose.height; h++) {
+		unsigned frame_line = tpg_calc_frameline(tpg, src_y, tpg->field);
+		unsigned buf_line = tpg_calc_buffer_line(tpg, h, tpg->field);
+		const struct v4l2_rect *sq = &tpg->square;
+		const struct v4l2_rect *b = &tpg->border;
+		const struct v4l2_rect *c = &tpg->crop;
+
+		src_y += int_part;
+		error += fract_part;
+		if (error >= tpg->compose.height) {
+			error -= tpg->compose.height;
+			src_y++;
+		}
+
+		if (tpg->show_border && frame_line >= b->top &&
+		    frame_line < b->top + b->height) {
+			unsigned bottom = b->top + b->height - 1;
+			unsigned left = left_pillar_width;
+			unsigned right = right_pillar_start;
+
+			if (frame_line == b->top || frame_line == b->top + 1 ||
+			    frame_line == bottom || frame_line == bottom - 1) {
+				memcpy(vbuf + buf_line * stride + left, tpg->contrast_line[p],
+						right - left);
+			} else {
+				if (b->left >= c->left &&
+				    b->left < c->left + c->width)
+					memcpy(vbuf + buf_line * stride + left,
+						tpg->contrast_line[p], twopixsize);
+				if (b->left + b->width > c->left &&
+				    b->left + b->width <= c->left + c->width)
+					memcpy(vbuf + buf_line * stride + right - twopixsize,
+						tpg->contrast_line[p], twopixsize);
+			}
+		}
+		if (tpg->qual != TPG_QUAL_NOISE && frame_line >= b->top &&
+		    frame_line < b->top + b->height) {
+			memcpy(vbuf + buf_line * stride, tpg->black_line[p], left_pillar_width);
+			memcpy(vbuf + buf_line * stride + right_pillar_start, tpg->black_line[p],
+			       img_width - right_pillar_start);
+		}
+		if (tpg->show_square && frame_line >= sq->top &&
+		    frame_line < sq->top + sq->height &&
+		    sq->left < c->left + c->width &&
+		    sq->left + sq->width >= c->left) {
+			unsigned left = sq->left;
+			unsigned width = sq->width;
+
+			if (c->left > left) {
+				width -= c->left - left;
+				left = c->left;
+			}
+			if (c->left + c->width < left + width)
+				width -= left + width - c->left - c->width;
+			left -= c->left;
+			left = (left * tpg->scaled_width) / tpg->src_width;
+			left = (left & ~1) * twopixsize / 2;
+			width = (width * tpg->scaled_width) / tpg->src_width;
+			width = (width & ~1) * twopixsize / 2;
+			memcpy(vbuf + buf_line * stride + left, tpg->contrast_line[p], width);
+		}
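+		/*
+		 * Optionally insert ITU-R BT.656-style SAV/EAV codes
+		 * (ff 00 00 xy, where xy holds the F/V/H flags and the
+		 * protection bits) at fixed positions in the line.
+		 */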
+		if (tpg->insert_sav) {
+			unsigned offset = (tpg->compose.width / 6) * twopixsize;
+			u8 *sav = vbuf + buf_line * stride + offset;
+			unsigned vact = 0, hact = 0;
+
+			sav[0] = 0xff;
+			sav[1] = 0;
+			sav[2] = 0;
+			sav[3] = 0x80 | (f << 6) | (vact << 5) | (hact << 4) |
+				((hact ^ vact) << 3) |
+				((hact ^ f) << 2) |
+				((f ^ vact) << 1) |
+				(hact ^ vact ^ f);
+		}
+		if (tpg->insert_eav) {
+			unsigned offset = (tpg->compose.width / 6) * 2 * twopixsize;
+			u8 *eav = vbuf + buf_line * stride + offset;
+			unsigned vact = 0, hact = 1;
+
+			eav[0] = 0xff;
+			eav[1] = 0;
+			eav[2] = 0;
+			eav[3] = 0x80 | (f << 6) | (vact << 5) | (hact << 4) |
+				((hact ^ vact) << 3) |
+				((hact ^ f) << 2) |
+				((f ^ vact) << 1) |
+				(hact ^ vact ^ f);
+		}
+	}
+}
diff --git a/drivers/media/platform/vivid/vivid-tpg.h b/drivers/media/platform/vivid/vivid-tpg.h
new file mode 100644
index 0000000..51ef7d1
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-tpg.h
@@ -0,0 +1,438 @@
+/*
+ * vivid-tpg.h - Test Pattern Generator
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_TPG_H_
+#define _VIVID_TPG_H_
+
+#include <linux/version.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/videodev2.h>
+
+#include "vivid-tpg-colors.h"
+
+enum tpg_pattern {
+	TPG_PAT_75_COLORBAR,
+	TPG_PAT_100_COLORBAR,
+	TPG_PAT_CSC_COLORBAR,
+	TPG_PAT_100_HCOLORBAR,
+	TPG_PAT_100_COLORSQUARES,
+	TPG_PAT_BLACK,
+	TPG_PAT_WHITE,
+	TPG_PAT_RED,
+	TPG_PAT_GREEN,
+	TPG_PAT_BLUE,
+	TPG_PAT_CHECKERS_16X16,
+	TPG_PAT_CHECKERS_1X1,
+	TPG_PAT_ALTERNATING_HLINES,
+	TPG_PAT_ALTERNATING_VLINES,
+	TPG_PAT_CROSS_1_PIXEL,
+	TPG_PAT_CROSS_2_PIXELS,
+	TPG_PAT_CROSS_10_PIXELS,
+	TPG_PAT_GRAY_RAMP,
+
+	/* Must be the last pattern */
+	TPG_PAT_NOISE,
+};
+
+extern const char * const tpg_pattern_strings[];
+
+enum tpg_quality {
+	TPG_QUAL_COLOR,
+	TPG_QUAL_GRAY,
+	TPG_QUAL_NOISE
+};
+
+enum tpg_video_aspect {
+	TPG_VIDEO_ASPECT_IMAGE,
+	TPG_VIDEO_ASPECT_4X3,
+	TPG_VIDEO_ASPECT_14X9_CENTRE,
+	TPG_VIDEO_ASPECT_16X9_CENTRE,
+	TPG_VIDEO_ASPECT_16X9_ANAMORPHIC,
+};
+
+enum tpg_pixel_aspect {
+	TPG_PIXEL_ASPECT_SQUARE,
+	TPG_PIXEL_ASPECT_NTSC,
+	TPG_PIXEL_ASPECT_PAL,
+};
+
+enum tpg_move_mode {
+	TPG_MOVE_NEG_FAST,
+	TPG_MOVE_NEG,
+	TPG_MOVE_NEG_SLOW,
+	TPG_MOVE_NONE,
+	TPG_MOVE_POS_SLOW,
+	TPG_MOVE_POS,
+	TPG_MOVE_POS_FAST,
+};
+
+extern const char * const tpg_aspect_strings[];
+
+#define TPG_MAX_PLANES 2
+#define TPG_MAX_PAT_LINES 8
+
+struct tpg_data {
+	/* Source frame size */
+	unsigned			src_width, src_height;
+	/* Buffer height */
+	unsigned			buf_height;
+	/* Scaled output frame size */
+	unsigned			scaled_width;
+	u32				field;
+	/* crop coordinates are frame-based */
+	struct v4l2_rect		crop;
+	/* compose coordinates are format-based */
+	struct v4l2_rect		compose;
+	/* border and square coordinates are frame-based */
+	struct v4l2_rect		border;
+	struct v4l2_rect		square;
+
+	/* Color-related fields */
+	enum tpg_quality		qual;
+	unsigned			qual_offset;
+	u8				alpha_component;
+	bool				alpha_red_only;
+	u8				brightness;
+	u8				contrast;
+	u8				saturation;
+	s16				hue;
+	u32				fourcc;
+	bool				is_yuv;
+	u32				colorspace;
+	enum tpg_video_aspect		vid_aspect;
+	enum tpg_pixel_aspect		pix_aspect;
+	unsigned			rgb_range;
+	unsigned			real_rgb_range;
+	unsigned			planes;
+	/* Used to store the colors in native format, either RGB or YUV */
+	u8				colors[TPG_COLOR_MAX][3];
+	u8				textfg[TPG_MAX_PLANES][8], textbg[TPG_MAX_PLANES][8];
+	/* size in bytes for two pixels in each plane */
+	unsigned			twopixelsize[TPG_MAX_PLANES];
+	unsigned			bytesperline[TPG_MAX_PLANES];
+
+	/* Configuration */
+	enum tpg_pattern		pattern;
+	bool				hflip;
+	bool				vflip;
+	unsigned			perc_fill;
+	bool				perc_fill_blank;
+	bool				show_border;
+	bool				show_square;
+	bool				insert_sav;
+	bool				insert_eav;
+
+	/* Test pattern movement */
+	enum tpg_move_mode		mv_hor_mode;
+	int				mv_hor_count;
+	int				mv_hor_step;
+	enum tpg_move_mode		mv_vert_mode;
+	int				mv_vert_count;
+	int				mv_vert_step;
+
+	bool				recalc_colors;
+	bool				recalc_lines;
+	bool				recalc_square_border;
+
+	/* Used to store TPG_MAX_PAT_LINES lines, each with up to two planes */
+	unsigned			max_line_width;
+	u8				*lines[TPG_MAX_PAT_LINES][TPG_MAX_PLANES];
+	u8				*random_line[TPG_MAX_PLANES];
+	u8				*contrast_line[TPG_MAX_PLANES];
+	u8				*black_line[TPG_MAX_PLANES];
+};
+
+void tpg_init(struct tpg_data *tpg, unsigned w, unsigned h);
+int tpg_alloc(struct tpg_data *tpg, unsigned max_w);
+void tpg_free(struct tpg_data *tpg);
+void tpg_reset_source(struct tpg_data *tpg, unsigned width, unsigned height,
+		       enum v4l2_field field);
+
+void tpg_set_font(const u8 *f);
+void tpg_gen_text(struct tpg_data *tpg,
+		u8 *basep[TPG_MAX_PLANES][2], int y, int x, char *text);
+void tpg_calc_text_basep(struct tpg_data *tpg,
+		u8 *basep[TPG_MAX_PLANES][2], unsigned p, u8 *vbuf);
+void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf);
+bool tpg_s_fourcc(struct tpg_data *tpg, u32 fourcc);
+void tpg_s_crop_compose(struct tpg_data *tpg, const struct v4l2_rect *crop,
+		const struct v4l2_rect *compose);
+
+static inline void tpg_s_pattern(struct tpg_data *tpg, enum tpg_pattern pattern)
+{
+	if (tpg->pattern == pattern)
+		return;
+	tpg->pattern = pattern;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_quality(struct tpg_data *tpg,
+				    enum tpg_quality qual, unsigned qual_offset)
+{
+	if (tpg->qual == qual && tpg->qual_offset == qual_offset)
+		return;
+	tpg->qual = qual;
+	tpg->qual_offset = qual_offset;
+	tpg->recalc_colors = true;
+}
+
+static inline enum tpg_quality tpg_g_quality(const struct tpg_data *tpg)
+{
+	return tpg->qual;
+}
+
+static inline void tpg_s_alpha_component(struct tpg_data *tpg,
+					    u8 alpha_component)
+{
+	if (tpg->alpha_component == alpha_component)
+		return;
+	tpg->alpha_component = alpha_component;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_alpha_mode(struct tpg_data *tpg,
+					    bool red_only)
+{
+	if (tpg->alpha_red_only == red_only)
+		return;
+	tpg->alpha_red_only = red_only;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_brightness(struct tpg_data *tpg,
+					u8 brightness)
+{
+	if (tpg->brightness == brightness)
+		return;
+	tpg->brightness = brightness;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_contrast(struct tpg_data *tpg,
+					u8 contrast)
+{
+	if (tpg->contrast == contrast)
+		return;
+	tpg->contrast = contrast;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_saturation(struct tpg_data *tpg,
+					u8 saturation)
+{
+	if (tpg->saturation == saturation)
+		return;
+	tpg->saturation = saturation;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_hue(struct tpg_data *tpg,
+					s16 hue)
+{
+	if (tpg->hue == hue)
+		return;
+	tpg->hue = hue;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_rgb_range(struct tpg_data *tpg,
+					unsigned rgb_range)
+{
+	if (tpg->rgb_range == rgb_range)
+		return;
+	tpg->rgb_range = rgb_range;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_real_rgb_range(struct tpg_data *tpg,
+					unsigned rgb_range)
+{
+	if (tpg->real_rgb_range == rgb_range)
+		return;
+	tpg->real_rgb_range = rgb_range;
+	tpg->recalc_colors = true;
+}
+
+static inline void tpg_s_colorspace(struct tpg_data *tpg, u32 colorspace)
+{
+	if (tpg->colorspace == colorspace)
+		return;
+	tpg->colorspace = colorspace;
+	tpg->recalc_colors = true;
+}
+
+static inline u32 tpg_g_colorspace(const struct tpg_data *tpg)
+{
+	return tpg->colorspace;
+}
+
+static inline unsigned tpg_g_planes(const struct tpg_data *tpg)
+{
+	return tpg->planes;
+}
+
+static inline unsigned tpg_g_twopixelsize(const struct tpg_data *tpg, unsigned plane)
+{
+	return tpg->twopixelsize[plane];
+}
+
+static inline unsigned tpg_g_bytesperline(const struct tpg_data *tpg, unsigned plane)
+{
+	return tpg->bytesperline[plane];
+}
+
+static inline void tpg_s_bytesperline(struct tpg_data *tpg, unsigned plane, unsigned bpl)
+{
+	tpg->bytesperline[plane] = bpl;
+}
+
+static inline void tpg_s_buf_height(struct tpg_data *tpg, unsigned h)
+{
+	tpg->buf_height = h;
+}
+
+static inline void tpg_s_field(struct tpg_data *tpg, unsigned field)
+{
+	tpg->field = field;
+}
+
+static inline void tpg_s_perc_fill(struct tpg_data *tpg,
+				      unsigned perc_fill)
+{
+	tpg->perc_fill = perc_fill;
+}
+
+static inline unsigned tpg_g_perc_fill(const struct tpg_data *tpg)
+{
+	return tpg->perc_fill;
+}
+
+static inline void tpg_s_perc_fill_blank(struct tpg_data *tpg,
+					 bool perc_fill_blank)
+{
+	tpg->perc_fill_blank = perc_fill_blank;
+}
+
+static inline void tpg_s_video_aspect(struct tpg_data *tpg,
+					enum tpg_video_aspect vid_aspect)
+{
+	if (tpg->vid_aspect == vid_aspect)
+		return;
+	tpg->vid_aspect = vid_aspect;
+	tpg->recalc_square_border = true;
+}
+
+static inline enum tpg_video_aspect tpg_g_video_aspect(const struct tpg_data *tpg)
+{
+	return tpg->vid_aspect;
+}
+
+static inline void tpg_s_pixel_aspect(struct tpg_data *tpg,
+					enum tpg_pixel_aspect pix_aspect)
+{
+	if (tpg->pix_aspect == pix_aspect)
+		return;
+	tpg->pix_aspect = pix_aspect;
+	tpg->recalc_square_border = true;
+}
+
+static inline void tpg_s_show_border(struct tpg_data *tpg,
+					bool show_border)
+{
+	tpg->show_border = show_border;
+}
+
+static inline void tpg_s_show_square(struct tpg_data *tpg,
+					bool show_square)
+{
+	tpg->show_square = show_square;
+}
+
+static inline void tpg_s_insert_sav(struct tpg_data *tpg, bool insert_sav)
+{
+	tpg->insert_sav = insert_sav;
+}
+
+static inline void tpg_s_insert_eav(struct tpg_data *tpg, bool insert_eav)
+{
+	tpg->insert_eav = insert_eav;
+}
+
+void tpg_update_mv_step(struct tpg_data *tpg);
+
+static inline void tpg_s_mv_hor_mode(struct tpg_data *tpg,
+				enum tpg_move_mode mv_hor_mode)
+{
+	tpg->mv_hor_mode = mv_hor_mode;
+	tpg_update_mv_step(tpg);
+}
+
+static inline void tpg_s_mv_vert_mode(struct tpg_data *tpg,
+				enum tpg_move_mode mv_vert_mode)
+{
+	tpg->mv_vert_mode = mv_vert_mode;
+	tpg_update_mv_step(tpg);
+}
+
+static inline void tpg_init_mv_count(struct tpg_data *tpg)
+{
+	tpg->mv_hor_count = tpg->mv_vert_count = 0;
+}
+
+static inline void tpg_update_mv_count(struct tpg_data *tpg, bool frame_is_field)
+{
+	tpg->mv_hor_count += tpg->mv_hor_step * (frame_is_field ? 1 : 2);
+	tpg->mv_vert_count += tpg->mv_vert_step * (frame_is_field ? 1 : 2);
+}
+
+static inline void tpg_s_hflip(struct tpg_data *tpg, bool hflip)
+{
+	if (tpg->hflip == hflip)
+		return;
+	tpg->hflip = hflip;
+	tpg_update_mv_step(tpg);
+	tpg->recalc_lines = true;
+}
+
+static inline bool tpg_g_hflip(const struct tpg_data *tpg)
+{
+	return tpg->hflip;
+}
+
+static inline void tpg_s_vflip(struct tpg_data *tpg, bool vflip)
+{
+	tpg->vflip = vflip;
+}
+
+static inline bool tpg_g_vflip(const struct tpg_data *tpg)
+{
+	return tpg->vflip;
+}
+
+static inline bool tpg_pattern_is_static(const struct tpg_data *tpg)
+{
+	return tpg->pattern != TPG_PAT_NOISE &&
+	       tpg->mv_hor_mode == TPG_MOVE_NONE &&
+	       tpg->mv_vert_mode == TPG_MOVE_NONE;
+}
+
+#endif
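
For reference, a minimal sketch of how a driver might drive the TPG API above
(illustration only, not part of the patch): the demo_one_frame() helper, the
640x360 YUYV format, the vzalloc'ed scratch buffer and the use of the built-in
VGA8x16 font are assumptions for the example; a real driver would typically
render into one buffer per plane from its streaming worker.

#include <linux/vmalloc.h>
#include <linux/font.h>
#include "vivid-tpg.h"

static int demo_one_frame(void)
{
	const struct font_desc *font = find_font("VGA8x16");
	u8 *basep[TPG_MAX_PLANES][2];
	struct tpg_data tpg;
	char str[] = "hello";
	u8 *buf;

	if (font)
		tpg_set_font(font->data);

	tpg_init(&tpg, 640, 360);
	if (tpg_alloc(&tpg, 1920))		/* worst-case line width */
		return -ENOMEM;
	tpg_s_fourcc(&tpg, V4L2_PIX_FMT_YUYV);
	tpg_reset_source(&tpg, 640, 360, V4L2_FIELD_NONE);
	tpg_s_pattern(&tpg, TPG_PAT_75_COLORBAR);

	buf = vzalloc(tpg_g_bytesperline(&tpg, 0) * 360);
	if (!buf) {
		tpg_free(&tpg);
		return -ENOMEM;
	}

	/* Per frame: fill plane 0, overlay some text, advance the movement counters */
	tpg_calc_text_basep(&tpg, basep, 0, buf);
	tpg_fillbuffer(&tpg, 0, 0, buf);	/* std == 0: not a TV input */
	tpg_gen_text(&tpg, basep, 16, 16, str);
	tpg_update_mv_count(&tpg, false);

	vfree(buf);
	tpg_free(&tpg);
	return 0;
}
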
-- 
2.0.1



* [PATCHv2 10/12] vivid: add support for radio receivers and transmitters
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (8 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 09/12] vivid: add the Test Pattern Generator Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 11/12] vivid: add support for software defined radio Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 12/12] vivid: enable the vivid driver Hans Verkuil
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

This adds radio receiver and transmitter support. The code that is common
to both is placed in the radio-common source.

Both drivers also support RDS. In order to generate valid RDS data, a
simple RDS generator is implemented in rds-gen.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-radio-common.c | 189 ++++++++++++++
 drivers/media/platform/vivid/vivid-radio-common.h |  40 +++
 drivers/media/platform/vivid/vivid-radio-rx.c     | 287 ++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-radio-rx.h     |  31 +++
 drivers/media/platform/vivid/vivid-radio-tx.c     | 141 +++++++++++
 drivers/media/platform/vivid/vivid-radio-tx.h     |  29 +++
 drivers/media/platform/vivid/vivid-rds-gen.c      | 165 +++++++++++++
 drivers/media/platform/vivid/vivid-rds-gen.h      |  53 ++++
 8 files changed, 935 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-radio-common.c
 create mode 100644 drivers/media/platform/vivid/vivid-radio-common.h
 create mode 100644 drivers/media/platform/vivid/vivid-radio-rx.c
 create mode 100644 drivers/media/platform/vivid/vivid-radio-rx.h
 create mode 100644 drivers/media/platform/vivid/vivid-radio-tx.c
 create mode 100644 drivers/media/platform/vivid/vivid-radio-tx.h
 create mode 100644 drivers/media/platform/vivid/vivid-rds-gen.c
 create mode 100644 drivers/media/platform/vivid/vivid-rds-gen.h

diff --git a/drivers/media/platform/vivid/vivid-radio-common.c b/drivers/media/platform/vivid/vivid-radio-common.c
new file mode 100644
index 0000000..78c1e92
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-common.c
@@ -0,0 +1,189 @@
+/*
+ * vivid-radio-common.c - common radio rx/tx support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/videodev2.h>
+
+#include "vivid-core.h"
+#include "vivid-ctrls.h"
+#include "vivid-radio-common.h"
+#include "vivid-rds-gen.h"
+
+/*
+ * These functions are shared between the vivid receiver and transmitter
+ * since both use the same frequency bands.
+ */
+
+const struct v4l2_frequency_band vivid_radio_bands[TOT_BANDS] = {
+	/* Band FM */
+	{
+		.type = V4L2_TUNER_RADIO,
+		.index = 0,
+		.capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_STEREO |
+			      V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   = FM_FREQ_RANGE_LOW,
+		.rangehigh  = FM_FREQ_RANGE_HIGH,
+		.modulation = V4L2_BAND_MODULATION_FM,
+	},
+	/* Band AM */
+	{
+		.type = V4L2_TUNER_RADIO,
+		.index = 1,
+		.capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   = AM_FREQ_RANGE_LOW,
+		.rangehigh  = AM_FREQ_RANGE_HIGH,
+		.modulation = V4L2_BAND_MODULATION_AM,
+	},
+	/* Band SW */
+	{
+		.type = V4L2_TUNER_RADIO,
+		.index = 2,
+		.capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   = SW_FREQ_RANGE_LOW,
+		.rangehigh  = SW_FREQ_RANGE_HIGH,
+		.modulation = V4L2_BAND_MODULATION_AM,
+	},
+};
+
+/*
+ * Initialize the RDS generator. If we can loop, then the RDS generator
+ * is set up with the values from the RDS TX controls; otherwise it
+ * will fill in standard values, alternating between two variants.
+ */
+void vivid_radio_rds_init(struct vivid_dev *dev)
+{
+	struct vivid_rds_gen *rds = &dev->rds_gen;
+	bool alt = dev->radio_rx_rds_use_alternates;
+
+	/* Do nothing, blocks will be filled by the transmitter */
+	if (dev->radio_rds_loop && !dev->radio_tx_rds_controls)
+		return;
+
+	if (dev->radio_rds_loop) {
+		v4l2_ctrl_lock(dev->radio_tx_rds_pi);
+		rds->picode = dev->radio_tx_rds_pi->cur.val;
+		rds->pty = dev->radio_tx_rds_pty->cur.val;
+		rds->mono_stereo = dev->radio_tx_rds_mono_stereo->cur.val;
+		rds->art_head = dev->radio_tx_rds_art_head->cur.val;
+		rds->compressed = dev->radio_tx_rds_compressed->cur.val;
+		rds->dyn_pty = dev->radio_tx_rds_dyn_pty->cur.val;
+		rds->ta = dev->radio_tx_rds_ta->cur.val;
+		rds->tp = dev->radio_tx_rds_tp->cur.val;
+		rds->ms = dev->radio_tx_rds_ms->cur.val;
+		strlcpy(rds->psname,
+			dev->radio_tx_rds_psname->p_cur.p_char,
+			sizeof(rds->psname));
+		strlcpy(rds->radiotext,
+			dev->radio_tx_rds_radiotext->p_cur.p_char + alt * 64,
+			sizeof(rds->radiotext));
+		v4l2_ctrl_unlock(dev->radio_tx_rds_pi);
+	} else {
+		vivid_rds_gen_fill(rds, dev->radio_rx_freq, alt);
+	}
+	if (dev->radio_rx_rds_controls) {
+		v4l2_ctrl_s_ctrl(dev->radio_rx_rds_pty, rds->pty);
+		v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ta, rds->ta);
+		v4l2_ctrl_s_ctrl(dev->radio_rx_rds_tp, rds->tp);
+		v4l2_ctrl_s_ctrl(dev->radio_rx_rds_ms, rds->ms);
+		v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_psname, rds->psname);
+		v4l2_ctrl_s_ctrl_string(dev->radio_rx_rds_radiotext, rds->radiotext);
+		if (!dev->radio_rds_loop)
+			dev->radio_rx_rds_use_alternates = !dev->radio_rx_rds_use_alternates;
+	}
+	vivid_rds_generate(rds);
+}
+
+/*
+ * Calculate the emulated signal quality taking into account the frequency
+ * the transmitter is using.
+ */
+static void vivid_radio_calc_sig_qual(struct vivid_dev *dev)
+{
+	int mod = 16000;
+	int delta = 800;
+	int sig_qual, sig_qual_tx = mod;
+
+	/*
+	 * For SW and FM there is a channel every 1000 kHz, for AM there is one
+	 * every 100 kHz.
+	 */
+	if (dev->radio_rx_freq <= AM_FREQ_RANGE_HIGH) {
+		mod /= 10;
+		delta /= 10;
+	}
+	sig_qual = (dev->radio_rx_freq + delta) % mod - delta;
+	if (dev->has_radio_tx)
+		sig_qual_tx = dev->radio_rx_freq - dev->radio_tx_freq;
+	if (abs(sig_qual_tx) <= abs(sig_qual)) {
+		sig_qual = sig_qual_tx;
+		/*
+		 * Zero the internal rds buffer if we are going to loop
+		 * rds blocks.
+		 */
+		if (!dev->radio_rds_loop && !dev->radio_tx_rds_controls)
+			memset(dev->rds_gen.data, 0,
+			       sizeof(dev->rds_gen.data));
+		dev->radio_rds_loop = dev->radio_rx_freq >= FM_FREQ_RANGE_LOW;
+	} else {
+		dev->radio_rds_loop = false;
+	}
+	if (dev->radio_rx_freq <= AM_FREQ_RANGE_HIGH)
+		sig_qual *= 10;
+	dev->radio_rx_sig_qual = sig_qual;
+}
+
+int vivid_radio_g_frequency(struct file *file, const unsigned *pfreq, struct v4l2_frequency *vf)
+{
+	if (vf->tuner != 0)
+		return -EINVAL;
+	vf->frequency = *pfreq;
+	return 0;
+}
+
+int vivid_radio_s_frequency(struct file *file, unsigned *pfreq, const struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	unsigned freq;
+	unsigned band;
+
+	if (vf->tuner != 0)
+		return -EINVAL;
+
+	if (vf->frequency >= (FM_FREQ_RANGE_LOW + SW_FREQ_RANGE_HIGH) / 2)
+		band = BAND_FM;
+	else if (vf->frequency <= (AM_FREQ_RANGE_HIGH + SW_FREQ_RANGE_LOW) / 2)
+		band = BAND_AM;
+	else
+		band = BAND_SW;
+
+	freq = clamp_t(u32, vf->frequency, vivid_radio_bands[band].rangelow,
+					   vivid_radio_bands[band].rangehigh);
+	*pfreq = freq;
+
+	/*
+	 * For both receiver and transmitter recalculate the signal quality
+	 * (since that depends on both frequencies) and re-init the rds
+	 * generator.
+	 */
+	vivid_radio_calc_sig_qual(dev);
+	vivid_radio_rds_init(dev);
+	return 0;
+}
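
As a quick way to see the signal quality emulation above in action: tuning exactly on a 1 MHz multiple inside the FM band makes sig_qual 0, so G_TUNER should report full signal strength. A minimal userspace sketch, illustrative only (the device node name is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_frequency vf;
	struct v4l2_tuner vt;
	int fd = open("/dev/radio0", O_RDWR);	/* node name assumed */

	if (fd < 0)
		return 1;

	memset(&vf, 0, sizeof(vf));
	vf.tuner = 0;
	vf.type = V4L2_TUNER_RADIO;
	vf.frequency = 95000 * 16;	/* 95 MHz in 62.5 Hz units */
	if (ioctl(fd, VIDIOC_S_FREQUENCY, &vf))
		perror("VIDIOC_S_FREQUENCY");

	memset(&vt, 0, sizeof(vt));
	vt.index = 0;
	if (ioctl(fd, VIDIOC_G_TUNER, &vt) == 0)
		printf("signal: %d afc: %d\n", vt.signal, vt.afc);

	close(fd);
	return 0;
}

Detuning from the 1 MHz grid lowers the reported signal, and beyond roughly 50 kHz (the 800-unit delta used above) it drops to zero.
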
diff --git a/drivers/media/platform/vivid/vivid-radio-common.h b/drivers/media/platform/vivid/vivid-radio-common.h
new file mode 100644
index 0000000..92fe589
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-common.h
@@ -0,0 +1,40 @@
+/*
+ * vivid-radio-common.h - common radio rx/tx support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_RADIO_COMMON_H_
+#define _VIVID_RADIO_COMMON_H_
+
+/* The supported radio frequency ranges in kHz */
+#define FM_FREQ_RANGE_LOW       (64000U * 16U)
+#define FM_FREQ_RANGE_HIGH      (108000U * 16U)
+#define AM_FREQ_RANGE_LOW       (520U * 16U)
+#define AM_FREQ_RANGE_HIGH      (1710U * 16U)
+#define SW_FREQ_RANGE_LOW       (2300U * 16U)
+#define SW_FREQ_RANGE_HIGH      (26100U * 16U)
+
+enum { BAND_FM, BAND_AM, BAND_SW, TOT_BANDS };
+
+extern const struct v4l2_frequency_band vivid_radio_bands[TOT_BANDS];
+
+int vivid_radio_g_frequency(struct file *file, const unsigned *freq, struct v4l2_frequency *vf);
+int vivid_radio_s_frequency(struct file *file, unsigned *freq, const struct v4l2_frequency *vf);
+
+void vivid_radio_rds_init(struct vivid_dev *dev);
+
+#endif
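
A note on the unit convention these macros assume: with V4L2_TUNER_CAP_LOW the V4L2 frequency unit is 62.5 Hz, i.e. 16 units per kHz, hence the '* 16U'. For example, FM_FREQ_RANGE_LOW is 64000 kHz * 16 = 1024000 units, which is 64 MHz.
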
diff --git a/drivers/media/platform/vivid/vivid-radio-rx.c b/drivers/media/platform/vivid/vivid-radio-rx.c
new file mode 100644
index 0000000..c7651a5
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-rx.c
@@ -0,0 +1,287 @@
+/*
+ * vivid-radio-rx.c - radio receiver support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-ctrls.h"
+#include "vivid-radio-common.h"
+#include "vivid-rds-gen.h"
+#include "vivid-radio-rx.h"
+
+ssize_t vivid_radio_rx_read(struct file *file, char __user *buf,
+			 size_t size, loff_t *offset)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct timespec ts;
+	struct v4l2_rds_data *data = dev->rds_gen.data;
+	bool use_alternates;
+	unsigned blk;
+	int perc;
+	int i;
+
+	if (dev->radio_rx_rds_controls)
+		return -EINVAL;
+	if (size < sizeof(*data))
+		return 0;
+	size = sizeof(*data) * (size / sizeof(*data));
+
+	if (mutex_lock_interruptible(&dev->mutex))
+		return -ERESTARTSYS;
+	if (dev->radio_rx_rds_owner &&
+	    file->private_data != dev->radio_rx_rds_owner) {
+		mutex_unlock(&dev->mutex);
+		return -EBUSY;
+	}
+	if (dev->radio_rx_rds_owner == NULL) {
+		vivid_radio_rds_init(dev);
+		dev->radio_rx_rds_owner = file->private_data;
+	}
+
+retry:
+	ktime_get_ts(&ts);
+	use_alternates = ts.tv_sec % 10 >= 5;
+	if (dev->radio_rx_rds_last_block == 0 ||
+	    dev->radio_rx_rds_use_alternates != use_alternates) {
+		dev->radio_rx_rds_use_alternates = use_alternates;
+		/* Re-init the RDS generator */
+		vivid_radio_rds_init(dev);
+	}
+	ts = timespec_sub(ts, dev->radio_rds_init_ts);
+	blk = ts.tv_sec * 100 + ts.tv_nsec / 10000000;
+	blk = (blk * VIVID_RDS_GEN_BLOCKS) / 500;
+	if (blk >= dev->radio_rx_rds_last_block + VIVID_RDS_GEN_BLOCKS)
+		dev->radio_rx_rds_last_block = blk - VIVID_RDS_GEN_BLOCKS + 1;
+
+	/*
+	 * No data is available if there hasn't been time to get new data,
+	 * or if the RDS receiver has been disabled, or if we use the data
+	 * from the RDS transmitter and that RDS transmitter has been disabled,
+	 * or if the signal quality is too weak.
+	 */
+	if (blk == dev->radio_rx_rds_last_block || !dev->radio_rx_rds_enabled ||
+	    (dev->radio_rds_loop && !(dev->radio_tx_subchans & V4L2_TUNER_SUB_RDS)) ||
+	    abs(dev->radio_rx_sig_qual) > 200) {
+		mutex_unlock(&dev->mutex);
+		if (file->f_flags & O_NONBLOCK)
+			return -EWOULDBLOCK;
+		if (msleep_interruptible(20) && signal_pending(current))
+			return -EINTR;
+		if (mutex_lock_interruptible(&dev->mutex))
+			return -ERESTARTSYS;
+		goto retry;
+	}
+
+	/* abs(dev->radio_rx_sig_qual) <= 200, map that to a 0-50% range */
+	perc = abs(dev->radio_rx_sig_qual) / 4;
+
+	for (i = 0; i < size && blk > dev->radio_rx_rds_last_block;
+			dev->radio_rx_rds_last_block++) {
+		unsigned data_blk = dev->radio_rx_rds_last_block % VIVID_RDS_GEN_BLOCKS;
+		struct v4l2_rds_data rds = data[data_blk];
+
+		if (data_blk == 0 && dev->radio_rds_loop)
+			vivid_radio_rds_init(dev);
+		if (perc && prandom_u32_max(100) < perc) {
+			switch (prandom_u32_max(4)) {
+			case 0:
+				rds.block |= V4L2_RDS_BLOCK_CORRECTED;
+				break;
+			case 1:
+				rds.block |= V4L2_RDS_BLOCK_INVALID;
+				break;
+			case 2:
+				rds.block |= V4L2_RDS_BLOCK_ERROR;
+				rds.lsb = prandom_u32_max(256);
+				rds.msb = prandom_u32_max(256);
+				break;
+			case 3: /* Skip block altogether */
+				if (i)
+					continue;
+				/*
+				 * Must make sure at least one block is
+				 * returned, otherwise the application
+				 * might think that end-of-file occurred.
+				 */
+				break;
+			}
+		}
+		if (copy_to_user(buf + i, &rds, sizeof(rds))) {
+			i = -EFAULT;
+			break;
+		}
+		i += sizeof(rds);
+	}
+	mutex_unlock(&dev->mutex);
+	return i;
+}
+
+unsigned int vivid_radio_rx_poll(struct file *file, struct poll_table_struct *wait)
+{
+	return POLLIN | POLLRDNORM | v4l2_ctrl_poll(file, wait);
+}
+
+int vivid_radio_rx_enum_freq_bands(struct file *file, void *fh, struct v4l2_frequency_band *band)
+{
+	if (band->tuner != 0)
+		return -EINVAL;
+
+	if (band->index >= TOT_BANDS)
+		return -EINVAL;
+
+	*band = vivid_radio_bands[band->index];
+	return 0;
+}
+
+int vivid_radio_rx_s_hw_freq_seek(struct file *file, void *fh, const struct v4l2_hw_freq_seek *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	unsigned low, high;
+	unsigned freq;
+	unsigned spacing;
+	unsigned band;
+
+	if (a->tuner)
+		return -EINVAL;
+	if (a->wrap_around && dev->radio_rx_hw_seek_mode == VIVID_HW_SEEK_BOUNDED)
+		return -EINVAL;
+
+	if (!a->wrap_around && dev->radio_rx_hw_seek_mode == VIVID_HW_SEEK_WRAP)
+		return -EINVAL;
+	if (!a->rangelow ^ !a->rangehigh)
+		return -EINVAL;
+
+	if (file->f_flags & O_NONBLOCK)
+		return -EWOULDBLOCK;
+
+	if (a->rangelow) {
+		for (band = 0; band < TOT_BANDS; band++)
+			if (a->rangelow >= vivid_radio_bands[band].rangelow &&
+			    a->rangehigh <= vivid_radio_bands[band].rangehigh)
+				break;
+		if (band == TOT_BANDS)
+			return -EINVAL;
+		if (!dev->radio_rx_hw_seek_prog_lim &&
+		    (a->rangelow != vivid_radio_bands[band].rangelow ||
+		     a->rangehigh != vivid_radio_bands[band].rangehigh))
+			return -EINVAL;
+		low = a->rangelow;
+		high = a->rangehigh;
+	} else {
+		for (band = 0; band < TOT_BANDS; band++)
+			if (dev->radio_rx_freq >= vivid_radio_bands[band].rangelow &&
+			    dev->radio_rx_freq <= vivid_radio_bands[band].rangehigh)
+				break;
+		low = vivid_radio_bands[band].rangelow;
+		high = vivid_radio_bands[band].rangehigh;
+	}
+	spacing = band == BAND_AM ? 1600 : 16000;
+	freq = clamp(dev->radio_rx_freq, low, high);
+
+	if (a->seek_upward) {
+		freq = spacing * (freq / spacing) + spacing;
+		if (freq > high) {
+			if (!a->wrap_around)
+				return -ENODATA;
+			freq = spacing * (low / spacing) + spacing;
+			if (freq >= dev->radio_rx_freq)
+				return -ENODATA;
+		}
+	} else {
+		freq = spacing * ((freq + spacing - 1) / spacing) - spacing;
+		if (freq < low) {
+			if (!a->wrap_around)
+				return -ENODATA;
+			freq = spacing * ((high + spacing - 1) / spacing) - spacing;
+			if (freq <= dev->radio_rx_freq)
+				return -ENODATA;
+		}
+	}
+	return 0;
+}
+
+int vivid_radio_rx_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	int delta = 800;
+	int sig_qual;
+
+	if (vt->index > 0)
+		return -EINVAL;
+
+	strlcpy(vt->name, "AM/FM/SW Receiver", sizeof(vt->name));
+	vt->capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_STEREO |
+			 V4L2_TUNER_CAP_FREQ_BANDS | V4L2_TUNER_CAP_RDS |
+			 (dev->radio_rx_rds_controls ?
+				V4L2_TUNER_CAP_RDS_CONTROLS :
+				V4L2_TUNER_CAP_RDS_BLOCK_IO) |
+			 (dev->radio_rx_hw_seek_prog_lim ?
+				V4L2_TUNER_CAP_HWSEEK_PROG_LIM : 0);
+	switch (dev->radio_rx_hw_seek_mode) {
+	case VIVID_HW_SEEK_BOUNDED:
+		vt->capability |= V4L2_TUNER_CAP_HWSEEK_BOUNDED;
+		break;
+	case VIVID_HW_SEEK_WRAP:
+		vt->capability |= V4L2_TUNER_CAP_HWSEEK_WRAP;
+		break;
+	case VIVID_HW_SEEK_BOTH:
+		vt->capability |= V4L2_TUNER_CAP_HWSEEK_WRAP |
+				  V4L2_TUNER_CAP_HWSEEK_BOUNDED;
+		break;
+	}
+	vt->rangelow = AM_FREQ_RANGE_LOW;
+	vt->rangehigh = FM_FREQ_RANGE_HIGH;
+	sig_qual = dev->radio_rx_sig_qual;
+	vt->signal = abs(sig_qual) > delta ? 0 :
+		     0xffff - (abs(sig_qual) * 0xffff) / delta;
+	vt->afc = sig_qual > delta ? 0 : sig_qual;
+	if (abs(sig_qual) > delta)
+		vt->rxsubchans = 0;
+	else if (dev->radio_rx_freq < FM_FREQ_RANGE_LOW || vt->signal < 0x8000)
+		vt->rxsubchans = V4L2_TUNER_SUB_MONO;
+	else if (dev->radio_rds_loop && !(dev->radio_tx_subchans & V4L2_TUNER_SUB_STEREO))
+		vt->rxsubchans = V4L2_TUNER_SUB_MONO;
+	else
+		vt->rxsubchans = V4L2_TUNER_SUB_STEREO;
+	if (dev->radio_rx_rds_enabled &&
+	    (!dev->radio_rds_loop || (dev->radio_tx_subchans & V4L2_TUNER_SUB_RDS)) &&
+	    dev->radio_rx_freq >= FM_FREQ_RANGE_LOW && vt->signal >= 0xc000)
+		vt->rxsubchans |= V4L2_TUNER_SUB_RDS;
+	if (dev->radio_rx_rds_controls)
+		vivid_radio_rds_init(dev);
+	vt->audmode = dev->radio_rx_audmode;
+	return 0;
+}
+
+int vivid_radio_rx_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (vt->index)
+		return -EINVAL;
+	dev->radio_rx_audmode = vt->audmode >= V4L2_TUNER_MODE_STEREO;
+	return 0;
+}
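
The RDS block I/O path above can be exercised from userspace with something like the sketch below (illustrative only; the node name is an assumption and the instance must be configured for RDS block I/O rather than RDS controls):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_rds_data blk[128];
	ssize_t n, i;
	int flagged = 0;
	int fd = open("/dev/radio0", O_RDONLY);	/* node name assumed */

	if (fd < 0)
		return 1;
	/* Blocks until the next RDS blocks are due */
	n = read(fd, blk, sizeof(blk));
	for (i = 0; i < n / (ssize_t)sizeof(blk[0]); i++)
		if (blk[i].block & (V4L2_RDS_BLOCK_CORRECTED | V4L2_RDS_BLOCK_ERROR))
			flagged++;
	printf("read %zd bytes, %d blocks flagged corrected/error\n", n, flagged);
	close(fd);
	return 0;
}

With a perfect (on-grid) signal no blocks should be flagged; detuning slightly makes the error injection above kick in.
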
diff --git a/drivers/media/platform/vivid/vivid-radio-rx.h b/drivers/media/platform/vivid/vivid-radio-rx.h
new file mode 100644
index 0000000..1077d8f
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-rx.h
@@ -0,0 +1,31 @@
+/*
+ * vivid-radio-rx.h - radio receiver support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_RADIO_RX_H_
+#define _VIVID_RADIO_RX_H_
+
+ssize_t vivid_radio_rx_read(struct file *, char __user *, size_t, loff_t *);
+unsigned int vivid_radio_rx_poll(struct file *file, struct poll_table_struct *wait);
+
+int vivid_radio_rx_enum_freq_bands(struct file *file, void *fh, struct v4l2_frequency_band *band);
+int vivid_radio_rx_s_hw_freq_seek(struct file *file, void *fh, const struct v4l2_hw_freq_seek *a);
+int vivid_radio_rx_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt);
+int vivid_radio_rx_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt);
+
+#endif
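
The hardware seek emulation in vivid-radio-rx.c steps in 100 kHz increments on AM and 1 MHz increments on SW/FM, and requires a blocking file handle. A rough invocation sketch (device node assumed; the hw-seek mode control must allow wrap-around for this particular call):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_hw_freq_seek seek;
	int fd = open("/dev/radio0", O_RDWR);	/* blocking open on purpose; node name assumed */

	if (fd < 0)
		return 1;
	memset(&seek, 0, sizeof(seek));
	seek.tuner = 0;
	seek.type = V4L2_TUNER_RADIO;
	seek.seek_upward = 1;
	seek.wrap_around = 1;
	if (ioctl(fd, VIDIOC_S_HW_FREQ_SEEK, &seek))
		perror("VIDIOC_S_HW_FREQ_SEEK");	/* ENODATA: no channel found */
	close(fd);
	return 0;
}
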
diff --git a/drivers/media/platform/vivid/vivid-radio-tx.c b/drivers/media/platform/vivid/vivid-radio-tx.c
new file mode 100644
index 0000000..8c59d4f
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-tx.c
@@ -0,0 +1,141 @@
+/*
+ * vivid-radio-tx.c - radio transmitter support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-ctrls.h"
+#include "vivid-radio-common.h"
+#include "vivid-radio-tx.h"
+
+ssize_t vivid_radio_tx_write(struct file *file, const char __user *buf,
+			  size_t size, loff_t *offset)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	struct v4l2_rds_data *data = dev->rds_gen.data;
+	struct timespec ts;
+	unsigned blk;
+	int i;
+
+	if (dev->radio_tx_rds_controls)
+		return -EINVAL;
+
+	if (size < sizeof(*data))
+		return -EINVAL;
+	size = sizeof(*data) * (size / sizeof(*data));
+
+	if (mutex_lock_interruptible(&dev->mutex))
+		return -ERESTARTSYS;
+	if (dev->radio_tx_rds_owner &&
+	    file->private_data != dev->radio_tx_rds_owner) {
+		mutex_unlock(&dev->mutex);
+		return -EBUSY;
+	}
+	dev->radio_tx_rds_owner = file->private_data;
+
+retry:
+	ktime_get_ts(&ts);
+	ts = timespec_sub(ts, dev->radio_rds_init_ts);
+	blk = ts.tv_sec * 100 + ts.tv_nsec / 10000000;
+	blk = (blk * VIVID_RDS_GEN_BLOCKS) / 500;
+	if (blk - VIVID_RDS_GEN_BLOCKS >= dev->radio_tx_rds_last_block)
+		dev->radio_tx_rds_last_block = blk - VIVID_RDS_GEN_BLOCKS + 1;
+
+	/*
+	 * The transmitter cannot accept new data if all blocks for the
+	 * current time slot have already been written, or if RDS
+	 * transmission has been disabled in the transmitter subchannels.
+	 */
+	if (blk == dev->radio_tx_rds_last_block ||
+	    !(dev->radio_tx_subchans & V4L2_TUNER_SUB_RDS)) {
+		mutex_unlock(&dev->mutex);
+		if (file->f_flags & O_NONBLOCK)
+			return -EWOULDBLOCK;
+		if (msleep_interruptible(20) && signal_pending(current))
+			return -EINTR;
+		if (mutex_lock_interruptible(&dev->mutex))
+			return -ERESTARTSYS;
+		goto retry;
+	}
+
+	for (i = 0; i < size && blk > dev->radio_tx_rds_last_block;
+			dev->radio_tx_rds_last_block++) {
+		unsigned data_blk = dev->radio_tx_rds_last_block % VIVID_RDS_GEN_BLOCKS;
+		struct v4l2_rds_data rds;
+
+		if (copy_from_user(&rds, buf + i, sizeof(rds))) {
+			i = -EFAULT;
+			break;
+		}
+		i += sizeof(rds);
+		if (!dev->radio_rds_loop)
+			continue;
+		if ((rds.block & V4L2_RDS_BLOCK_MSK) == V4L2_RDS_BLOCK_INVALID ||
+		    (rds.block & V4L2_RDS_BLOCK_ERROR))
+			continue;
+		rds.block &= V4L2_RDS_BLOCK_MSK;
+		data[data_blk] = rds;
+	}
+	mutex_unlock(&dev->mutex);
+	return i;
+}
+
+unsigned int vivid_radio_tx_poll(struct file *file, struct poll_table_struct *wait)
+{
+	return POLLOUT | POLLWRNORM | v4l2_ctrl_poll(file, wait);
+}
+
+int vidioc_g_modulator(struct file *file, void *fh, struct v4l2_modulator *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (a->index > 0)
+		return -EINVAL;
+
+	strlcpy(a->name, "AM/FM/SW Transmitter", sizeof(a->name));
+	a->capability = V4L2_TUNER_CAP_LOW | V4L2_TUNER_CAP_STEREO |
+			V4L2_TUNER_CAP_FREQ_BANDS | V4L2_TUNER_CAP_RDS |
+			(dev->radio_tx_rds_controls ?
+				V4L2_TUNER_CAP_RDS_CONTROLS :
+				V4L2_TUNER_CAP_RDS_BLOCK_IO);
+	a->rangelow = AM_FREQ_RANGE_LOW;
+	a->rangehigh = FM_FREQ_RANGE_HIGH;
+	a->txsubchans = dev->radio_tx_subchans;
+	return 0;
+}
+
+int vidioc_s_modulator(struct file *file, void *fh, const struct v4l2_modulator *a)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	if (a->index)
+		return -EINVAL;
+	if (a->txsubchans & ~0x13)
+		return -EINVAL;
+	dev->radio_tx_subchans = a->txsubchans;
+	return 0;
+}
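
The write() path above consumes raw RDS blocks, and when loopback is active the receiver replays them. A small illustrative writer (it assumes the transmitter ended up as /dev/radio1 and is configured for RDS block I/O; the RDS subchannel must be enabled or the write blocks):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_rds_data grp[4];
	int fd = open("/dev/radio1", O_WRONLY);	/* node name assumed */

	if (fd < 0)
		return 1;
	memset(grp, 0, sizeof(grp));
	grp[0].block = V4L2_RDS_BLOCK_A;
	grp[0].msb = 0x20;	/* PI code 0x2000, arbitrary example value */
	grp[0].lsb = 0x00;
	grp[1].block = V4L2_RDS_BLOCK_B;
	grp[2].block = V4L2_RDS_BLOCK_C;
	grp[3].block = V4L2_RDS_BLOCK_D;
	if (write(fd, grp, sizeof(grp)) < 0)
		perror("write");
	close(fd);
	return 0;
}
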
diff --git a/drivers/media/platform/vivid/vivid-radio-tx.h b/drivers/media/platform/vivid/vivid-radio-tx.h
new file mode 100644
index 0000000..7f8ff75
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-radio-tx.h
@@ -0,0 +1,29 @@
+/*
+ * vivid-radio-tx.h - radio transmitter support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_RADIO_TX_H_
+#define _VIVID_RADIO_TX_H_
+
+ssize_t vivid_radio_tx_write(struct file *, const char __user *, size_t, loff_t *);
+unsigned int vivid_radio_tx_poll(struct file *file, struct poll_table_struct *wait);
+
+int vidioc_g_modulator(struct file *file, void *fh, struct v4l2_modulator *a);
+int vidioc_s_modulator(struct file *file, void *fh, const struct v4l2_modulator *a);
+
+#endif
diff --git a/drivers/media/platform/vivid/vivid-rds-gen.c b/drivers/media/platform/vivid/vivid-rds-gen.c
new file mode 100644
index 0000000..dab5463
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-rds-gen.c
@@ -0,0 +1,165 @@
+/*
+ * vivid-rds-gen.c - rds (radio data system) generator support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/kernel.h>
+#include <linux/ktime.h>
+#include <linux/videodev2.h>
+
+#include "vivid-rds-gen.h"
+
+static u8 vivid_get_di(const struct vivid_rds_gen *rds, unsigned grp)
+{
+	switch (grp) {
+	case 0:
+		return (rds->dyn_pty << 2) | (grp & 3);
+	case 1:
+		return (rds->compressed << 2) | (grp & 3);
+	case 2:
+		return (rds->art_head << 2) | (grp & 3);
+	case 3:
+		return (rds->mono_stereo << 2) | (grp & 3);
+	}
+	return 0;
+}
+
+/*
+ * This RDS generator creates 57 RDS groups (one group == four RDS blocks).
+ * Groups 0-3, 22-25 and 44-47 (spaced 22 groups apart) are filled with a
+ * standard 0B group containing the PI code and PS name.
+ *
+ * Groups 4-19 and 26-41 use group 2A for the radio text.
+ *
+ * Group 56 contains the time (group 4A).
+ *
+ * All remaining groups use a 15B filler group that just repeats
+ * the PI and PTY codes.
+ */
+void vivid_rds_generate(struct vivid_rds_gen *rds)
+{
+	struct v4l2_rds_data *data = rds->data;
+	unsigned grp;
+	struct tm tm;
+	unsigned date;
+	unsigned time;
+	int l;
+
+	for (grp = 0; grp < VIVID_RDS_GEN_GROUPS; grp++, data += VIVID_RDS_GEN_BLKS_PER_GRP) {
+		data[0].lsb = rds->picode & 0xff;
+		data[0].msb = rds->picode >> 8;
+		data[0].block = V4L2_RDS_BLOCK_A | (V4L2_RDS_BLOCK_A << 3);
+		data[1].lsb = rds->pty << 5;
+		data[1].msb = (rds->pty >> 3) | (rds->tp << 2);
+		data[1].block = V4L2_RDS_BLOCK_B | (V4L2_RDS_BLOCK_B << 3);
+		data[3].block = V4L2_RDS_BLOCK_D | (V4L2_RDS_BLOCK_D << 3);
+
+		switch (grp) {
+		case 0 ... 3:
+		case 22 ... 25:
+		case 44 ... 47: /* Group 0B */
+			data[1].lsb |= (rds->ta << 4) | (rds->ms << 3);
+			data[1].lsb |= vivid_get_di(rds, grp % 22);
+			data[1].msb |= 1 << 3;
+			data[2].lsb = rds->picode & 0xff;
+			data[2].msb = rds->picode >> 8;
+			data[2].block = V4L2_RDS_BLOCK_C_ALT | (V4L2_RDS_BLOCK_C_ALT << 3);
+			data[3].lsb = rds->psname[2 * (grp % 22) + 1];
+			data[3].msb = rds->psname[2 * (grp % 22)];
+			break;
+		case 4 ... 19:
+		case 26 ... 41: /* Group 2A */
+			data[1].lsb |= (grp - 4) % 22;
+			data[1].msb |= 4 << 3;
+			data[2].msb = rds->radiotext[4 * ((grp - 4) % 22)];
+			data[2].lsb = rds->radiotext[4 * ((grp - 4) % 22) + 1];
+			data[2].block = V4L2_RDS_BLOCK_C | (V4L2_RDS_BLOCK_C << 3);
+			data[3].msb = rds->radiotext[4 * ((grp - 4) % 22) + 2];
+			data[3].lsb = rds->radiotext[4 * ((grp - 4) % 22) + 3];
+			break;
+		case 56:
+			/*
+			 * Group 4A
+			 *
+			 * Uses the algorithm from Annex G of the RDS standard
+			 * EN 50067:1998 to convert a UTC date to an RDS Modified
+			 * Julian Day.
+			 */
+			time_to_tm(get_seconds(), 0, &tm);
+			l = tm.tm_mon <= 1;
+			date = 14956 + tm.tm_mday + ((tm.tm_year - l) * 1461) / 4 +
+				((tm.tm_mon + 2 + l * 12) * 306001) / 10000;
+			time = (tm.tm_hour << 12) |
+			       (tm.tm_min << 6) |
+			       (sys_tz.tz_minuteswest >= 0 ? 0x20 : 0) |
+			       (abs(sys_tz.tz_minuteswest) / 30);
+			data[1].lsb &= ~3;
+			data[1].lsb |= date >> 15;
+			data[1].msb |= 8 << 3;
+			data[2].lsb = (date << 1) & 0xfe;
+			data[2].lsb |= (time >> 16) & 1;
+			data[2].msb = (date >> 7) & 0xff;
+			data[2].block = V4L2_RDS_BLOCK_C | (V4L2_RDS_BLOCK_C << 3);
+			data[3].lsb = time & 0xff;
+			data[3].msb = (time >> 8) & 0xff;
+			break;
+		default: /* Group 15B */
+			data[1].lsb |= (rds->ta << 4) | (rds->ms << 3);
+			data[1].lsb |= vivid_get_di(rds, grp % 22);
+			data[1].msb |= 0x1f << 3;
+			data[2].lsb = rds->picode & 0xff;
+			data[2].msb = rds->picode >> 8;
+			data[2].block = V4L2_RDS_BLOCK_C_ALT | (V4L2_RDS_BLOCK_C_ALT << 3);
+			data[3].lsb = rds->pty << 5;
+			data[3].lsb |= (rds->ta << 4) | (rds->ms << 3);
+			data[3].lsb |= vivid_get_di(rds, grp % 22);
+			data[3].msb |= rds->pty >> 3;
+			data[3].msb |= 0x1f << 3;
+			break;
+		}
+	}
+}
+
+void vivid_rds_gen_fill(struct vivid_rds_gen *rds, unsigned freq,
+			  bool alt)
+{
+	/* Alternate PTY between Info and Weather */
+	if (rds->use_rbds) {
+		rds->picode = 0x2e75; /* 'KLNX' call sign */
+		rds->pty = alt ? 29 : 2;
+	} else {
+		rds->picode = 0x8088;
+		rds->pty = alt ? 16 : 3;
+	}
+	rds->mono_stereo = true;
+	rds->art_head = false;
+	rds->compressed = false;
+	rds->dyn_pty = false;
+	rds->tp = true;
+	rds->ta = alt;
+	rds->ms = true;
+	snprintf(rds->psname, sizeof(rds->psname), "%6d.%1d",
+		 freq / 16, ((freq & 0xf) * 10) / 16);
+	if (alt)
+		strlcpy(rds->radiotext,
+			" The Radio Data System can switch between different Radio Texts ",
+			sizeof(rds->radiotext));
+	else
+		strlcpy(rds->radiotext,
+			"An example of Radio Text as transmitted by the Radio Data System",
+			sizeof(rds->radiotext));
+}
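
As a quick sanity check of the Annex G arithmetic in the group 4A case: for 25 August 2014 (tm_year = 114, tm_mon = 7, tm_mday = 25, so l = 0) the expression evaluates to 14956 + 25 + (114 * 1461) / 4 + ((7 + 2) * 306001) / 10000 = 14956 + 25 + 41638 + 275 = 56894, which is indeed the Modified Julian Day for that date.
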
diff --git a/drivers/media/platform/vivid/vivid-rds-gen.h b/drivers/media/platform/vivid/vivid-rds-gen.h
new file mode 100644
index 0000000..eff4bf5
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-rds-gen.h
@@ -0,0 +1,53 @@
+/*
+ * vivid-rds-gen.h - rds (radio data system) generator support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_RDS_GEN_H_
+#define _VIVID_RDS_GEN_H_
+
+/*
+ * It takes almost exactly 5 seconds to transmit 57 RDS groups.
+ * Each group has 4 blocks and each block has a payload of 16 bits + a
+ * block identification. The driver generates the contents of these
+ * 57 groups only when necessary; the result is played back continuously.
+ */
+#define VIVID_RDS_GEN_GROUPS 57
+#define VIVID_RDS_GEN_BLKS_PER_GRP 4
+#define VIVID_RDS_GEN_BLOCKS (VIVID_RDS_GEN_BLKS_PER_GRP * VIVID_RDS_GEN_GROUPS)
+
+struct vivid_rds_gen {
+	struct v4l2_rds_data	data[VIVID_RDS_GEN_BLOCKS];
+	bool			use_rbds;
+	u16			picode;
+	u8			pty;
+	bool			mono_stereo;
+	bool			art_head;
+	bool			compressed;
+	bool			dyn_pty;
+	bool			ta;
+	bool			tp;
+	bool			ms;
+	char			psname[8 + 1];
+	char			radiotext[64 + 1];
+};
+
+void vivid_rds_gen_fill(struct vivid_rds_gen *rds, unsigned freq,
+		    bool use_alternate);
+void vivid_rds_generate(struct vivid_rds_gen *rds);
+
+#endif
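
The 'almost exactly 5 seconds' mentioned above follows from the RDS bit rate: at 1187.5 bit/s and 26 bits per block (16 payload bits plus a 10-bit checkword), 57 groups * 4 blocks * 26 bits = 5928 bits, and 5928 / 1187.5 is roughly 4.99 seconds.
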
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 11/12] vivid: add support for software defined radio
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (9 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 10/12] vivid: add support for radio receivers and transmitters Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  2014-08-25 11:30 ` [PATCHv2 12/12] vivid: enable the vivid driver Hans Verkuil
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Antti Palosaari, Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

This adds support for an SDR capture device. It generates simple
sine/cosine waves. The code for that has been contributed by
Antti.

Signed-off-by: Antti Palosaari <crope@iki.fi>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/vivid/vivid-sdr-cap.c | 499 +++++++++++++++++++++++++++
 drivers/media/platform/vivid/vivid-sdr-cap.h |  34 ++
 2 files changed, 533 insertions(+)
 create mode 100644 drivers/media/platform/vivid/vivid-sdr-cap.c
 create mode 100644 drivers/media/platform/vivid/vivid-sdr-cap.h
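
To give an idea of how this is exercised from userspace, a rough capture sketch (the node name and read() I/O support are assumptions; regular MMAP streaming works as well):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_format fmt;
	unsigned char *buf;
	ssize_t n;
	int fd = open("/dev/swradio0", O_RDONLY);	/* node name assumed */

	if (fd < 0)
		return 1;
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_SDR_CAPTURE;
	if (ioctl(fd, VIDIOC_G_FMT, &fmt))
		return 1;
	buf = malloc(fmt.fmt.sdr.buffersize);
	if (buf == NULL)
		return 1;
	/* CU8 is interleaved unsigned 8-bit I/Q: I0 Q0 I1 Q1 ... */
	n = read(fd, buf, fmt.fmt.sdr.buffersize);
	if (n >= 2)
		printf("read %zd bytes, first sample I=%d Q=%d\n", n, buf[0], buf[1]);
	free(buf);
	close(fd);
	return 0;
}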

diff --git a/drivers/media/platform/vivid/vivid-sdr-cap.c b/drivers/media/platform/vivid/vivid-sdr-cap.c
new file mode 100644
index 0000000..8c5d661
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-sdr-cap.c
@@ -0,0 +1,499 @@
+/*
+ * vivid-sdr-cap.c - software defined radio support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/kthread.h>
+#include <linux/freezer.h>
+#include <linux/videodev2.h>
+#include <linux/v4l2-dv-timings.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/v4l2-dv-timings.h>
+
+#include "vivid-core.h"
+#include "vivid-ctrls.h"
+#include "vivid-sdr-cap.h"
+
+static const struct v4l2_frequency_band bands_adc[] = {
+	{
+		.tuner = 0,
+		.type = V4L2_TUNER_ADC,
+		.index = 0,
+		.capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   =  300000,
+		.rangehigh  =  300000,
+	},
+	{
+		.tuner = 0,
+		.type = V4L2_TUNER_ADC,
+		.index = 1,
+		.capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   =  900001,
+		.rangehigh  = 2800000,
+	},
+	{
+		.tuner = 0,
+		.type = V4L2_TUNER_ADC,
+		.index = 2,
+		.capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   = 3200000,
+		.rangehigh  = 3200000,
+	},
+};
+
+/* ADC band midpoints */
+#define BAND_ADC_0 ((bands_adc[0].rangehigh + bands_adc[1].rangelow) / 2)
+#define BAND_ADC_1 ((bands_adc[1].rangehigh + bands_adc[2].rangelow) / 2)
+
+static const struct v4l2_frequency_band bands_fm[] = {
+	{
+		.tuner = 1,
+		.type = V4L2_TUNER_RF,
+		.index = 0,
+		.capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS,
+		.rangelow   =    50000000,
+		.rangehigh  =  2000000000,
+	},
+};
+
+static void vivid_thread_sdr_cap_tick(struct vivid_dev *dev)
+{
+	struct vivid_buffer *sdr_cap_buf = NULL;
+
+	dprintk(dev, 1, "SDR Capture Thread Tick\n");
+
+	/* Drop a certain percentage of buffers. */
+	if (dev->perc_dropped_buffers &&
+	    prandom_u32_max(100) < dev->perc_dropped_buffers)
+		return;
+
+	spin_lock(&dev->slock);
+	if (!list_empty(&dev->sdr_cap_active)) {
+		sdr_cap_buf = list_entry(dev->sdr_cap_active.next,
+					 struct vivid_buffer, list);
+		list_del(&sdr_cap_buf->list);
+	}
+	spin_unlock(&dev->slock);
+
+	if (sdr_cap_buf) {
+		sdr_cap_buf->vb.v4l2_buf.sequence = dev->sdr_cap_seq_count;
+		vivid_sdr_cap_process(dev, sdr_cap_buf);
+		v4l2_get_timestamp(&sdr_cap_buf->vb.v4l2_buf.timestamp);
+		sdr_cap_buf->vb.v4l2_buf.timestamp.tv_sec += dev->time_wrap_offset;
+		vb2_buffer_done(&sdr_cap_buf->vb, dev->dqbuf_error ?
+				VB2_BUF_STATE_ERROR : VB2_BUF_STATE_DONE);
+		dev->dqbuf_error = false;
+	}
+}
+
+static int vivid_thread_sdr_cap(void *data)
+{
+	struct vivid_dev *dev = data;
+	u64 samples_since_start;
+	u64 buffers_since_start;
+	u64 next_jiffies_since_start;
+	unsigned long jiffies_since_start;
+	unsigned long cur_jiffies;
+	unsigned wait_jiffies;
+
+	dprintk(dev, 1, "SDR Capture Thread Start\n");
+
+	set_freezable();
+
+	/* Resets frame counters */
+	dev->sdr_cap_seq_offset = 0;
+	if (dev->seq_wrap)
+		dev->sdr_cap_seq_offset = 0xffffff80U;
+	dev->jiffies_sdr_cap = jiffies;
+	dev->sdr_cap_seq_resync = false;
+
+	for (;;) {
+		try_to_freeze();
+		if (kthread_should_stop())
+			break;
+
+		mutex_lock(&dev->mutex);
+		cur_jiffies = jiffies;
+		if (dev->sdr_cap_seq_resync) {
+			dev->jiffies_sdr_cap = cur_jiffies;
+			dev->sdr_cap_seq_offset = dev->sdr_cap_seq_count + 1;
+			dev->sdr_cap_seq_count = 0;
+			dev->sdr_cap_seq_resync = false;
+		}
+		/* Calculate the number of jiffies since we started streaming */
+		jiffies_since_start = cur_jiffies - dev->jiffies_sdr_cap;
+		/* Get the number of buffers streamed since the start */
+		buffers_since_start = (u64)jiffies_since_start * dev->sdr_adc_freq +
+				      (HZ * SDR_CAP_SAMPLES_PER_BUF) / 2;
+		do_div(buffers_since_start, HZ * SDR_CAP_SAMPLES_PER_BUF);
+
+		/*
+		 * After more than 0xf0000000 (rounded down to a multiple of
+		 * 'jiffies-per-day' to ease jiffies_to_msecs calculation)
+		 * jiffies have passed since we started streaming reset the
+		 * jiffies have passed since we started streaming, reset the
+		 */
+		if (jiffies_since_start > JIFFIES_RESYNC) {
+			dev->jiffies_sdr_cap = cur_jiffies;
+			dev->sdr_cap_seq_offset = buffers_since_start;
+			buffers_since_start = 0;
+		}
+		dev->sdr_cap_seq_count = buffers_since_start + dev->sdr_cap_seq_offset;
+
+		vivid_thread_sdr_cap_tick(dev);
+		mutex_unlock(&dev->mutex);
+
+		/*
+		 * Calculate the number of samples streamed since we started,
+		 * not including the current buffer.
+		 */
+		samples_since_start = buffers_since_start * SDR_CAP_SAMPLES_PER_BUF;
+
+		/* And the number of jiffies since we started */
+		jiffies_since_start = jiffies - dev->jiffies_sdr_cap;
+
+		/* Increase by the number of samples in one buffer */
+		samples_since_start += SDR_CAP_SAMPLES_PER_BUF;
+		/*
+		 * Calculate when that next buffer is supposed to start
+		 * in jiffies since we started streaming.
+		 */
+		next_jiffies_since_start = samples_since_start * HZ +
+					   dev->sdr_adc_freq / 2;
+		do_div(next_jiffies_since_start, dev->sdr_adc_freq);
+		/* If it is in the past, then just schedule asap */
+		if (next_jiffies_since_start < jiffies_since_start)
+			next_jiffies_since_start = jiffies_since_start;
+
+		wait_jiffies = next_jiffies_since_start - jiffies_since_start;
+		schedule_timeout_interruptible(wait_jiffies ? wait_jiffies : 1);
+	}
+	dprintk(dev, 1, "SDR Capture Thread End\n");
+	return 0;
+}
+
+static int sdr_cap_queue_setup(struct vb2_queue *vq, const struct v4l2_format *fmt,
+		       unsigned *nbuffers, unsigned *nplanes,
+		       unsigned sizes[], void *alloc_ctxs[])
+{
+	/* 2 = max 16-bit sample returned */
+	sizes[0] = SDR_CAP_SAMPLES_PER_BUF * 2;
+	*nplanes = 1;
+	return 0;
+}
+
+static int sdr_cap_buf_prepare(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	unsigned size = SDR_CAP_SAMPLES_PER_BUF * 2;
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	if (dev->buf_prepare_error) {
+		/*
+		 * Error injection: test what happens if buf_prepare() returns
+		 * an error.
+		 */
+		dev->buf_prepare_error = false;
+		return -EINVAL;
+	}
+	if (vb2_plane_size(vb, 0) < size) {
+		dprintk(dev, 1, "%s data will not fit into plane (%lu < %u)\n",
+				__func__, vb2_plane_size(vb, 0), size);
+		return -EINVAL;
+	}
+	vb2_set_plane_payload(vb, 0, size);
+
+	return 0;
+}
+
+static void sdr_cap_buf_queue(struct vb2_buffer *vb)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vb->vb2_queue);
+	struct vivid_buffer *buf = container_of(vb, struct vivid_buffer, vb);
+
+	dprintk(dev, 1, "%s\n", __func__);
+
+	spin_lock(&dev->slock);
+	list_add_tail(&buf->list, &dev->sdr_cap_active);
+	spin_unlock(&dev->slock);
+}
+
+static int sdr_cap_start_streaming(struct vb2_queue *vq, unsigned count)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+	int err = 0;
+
+	dprintk(dev, 1, "%s\n", __func__);
+	dev->sdr_cap_seq_count = 0;
+	if (dev->start_streaming_error) {
+		dev->start_streaming_error = false;
+		err = -EINVAL;
+	} else if (dev->kthread_sdr_cap == NULL) {
+		dev->kthread_sdr_cap = kthread_run(vivid_thread_sdr_cap, dev,
+				"%s-sdr-cap", dev->v4l2_dev.name);
+
+		if (IS_ERR(dev->kthread_sdr_cap)) {
+			v4l2_err(&dev->v4l2_dev, "kernel_thread() failed\n");
+			err = PTR_ERR(dev->kthread_sdr_cap);
+			dev->kthread_sdr_cap = NULL;
+		}
+	}
+	if (err) {
+		struct vivid_buffer *buf, *tmp;
+
+		list_for_each_entry_safe(buf, tmp, &dev->sdr_cap_active, list) {
+			list_del(&buf->list);
+			vb2_buffer_done(&buf->vb, VB2_BUF_STATE_QUEUED);
+		}
+	}
+	return err;
+}
+
+/* abort streaming and wait for last buffer */
+static void sdr_cap_stop_streaming(struct vb2_queue *vq)
+{
+	struct vivid_dev *dev = vb2_get_drv_priv(vq);
+
+	if (dev->kthread_sdr_cap == NULL)
+		return;
+
+	while (!list_empty(&dev->sdr_cap_active)) {
+		struct vivid_buffer *buf;
+
+		buf = list_entry(dev->sdr_cap_active.next, struct vivid_buffer, list);
+		list_del(&buf->list);
+		vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR);
+	}
+
+	/* shutdown control thread */
+	mutex_unlock(&dev->mutex);
+	kthread_stop(dev->kthread_sdr_cap);
+	dev->kthread_sdr_cap = NULL;
+	mutex_lock(&dev->mutex);
+}
+
+const struct vb2_ops vivid_sdr_cap_qops = {
+	.queue_setup		= sdr_cap_queue_setup,
+	.buf_prepare		= sdr_cap_buf_prepare,
+	.buf_queue		= sdr_cap_buf_queue,
+	.start_streaming	= sdr_cap_start_streaming,
+	.stop_streaming		= sdr_cap_stop_streaming,
+	.wait_prepare		= vivid_unlock,
+	.wait_finish		= vivid_lock,
+};
+
+int vivid_sdr_enum_freq_bands(struct file *file, void *fh, struct v4l2_frequency_band *band)
+{
+	switch (band->tuner) {
+	case 0:
+		if (band->index >= ARRAY_SIZE(bands_adc))
+			return -EINVAL;
+		*band = bands_adc[band->index];
+		return 0;
+	case 1:
+		if (band->index >= ARRAY_SIZE(bands_fm))
+			return -EINVAL;
+		*band = bands_fm[band->index];
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+int vivid_sdr_g_frequency(struct file *file, void *fh, struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+
+	switch (vf->tuner) {
+	case 0:
+		vf->frequency = dev->sdr_adc_freq;
+		vf->type = V4L2_TUNER_ADC;
+		return 0;
+	case 1:
+		vf->frequency = dev->sdr_fm_freq;
+		vf->type = V4L2_TUNER_RF;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+int vivid_sdr_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf)
+{
+	struct vivid_dev *dev = video_drvdata(file);
+	unsigned freq = vf->frequency;
+	unsigned band;
+
+	switch (vf->tuner) {
+	case 0:
+		if (vf->type != V4L2_TUNER_ADC)
+			return -EINVAL;
+		if (freq < BAND_ADC_0)
+			band = 0;
+		else if (freq < BAND_ADC_1)
+			band = 1;
+		else
+			band = 2;
+
+		freq = clamp_t(unsigned, freq,
+				bands_adc[band].rangelow,
+				bands_adc[band].rangehigh);
+
+		if (vb2_is_streaming(&dev->vb_sdr_cap_q) &&
+		    freq != dev->sdr_adc_freq) {
+			/* resync the thread's timings */
+			dev->sdr_cap_seq_resync = true;
+		}
+		dev->sdr_adc_freq = freq;
+		return 0;
+	case 1:
+		if (vf->type != V4L2_TUNER_RF)
+			return -EINVAL;
+		dev->sdr_fm_freq = clamp_t(unsigned, freq,
+				bands_fm[0].rangelow,
+				bands_fm[0].rangehigh);
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+int vivid_sdr_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt)
+{
+	switch (vt->index) {
+	case 0:
+		strlcpy(vt->name, "ADC", sizeof(vt->name));
+		vt->type = V4L2_TUNER_ADC;
+		vt->capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS;
+		vt->rangelow = bands_adc[0].rangelow;
+		vt->rangehigh = bands_adc[2].rangehigh;
+		return 0;
+	case 1:
+		strlcpy(vt->name, "RF", sizeof(vt->name));
+		vt->type = V4L2_TUNER_RF;
+		vt->capability = V4L2_TUNER_CAP_1HZ | V4L2_TUNER_CAP_FREQ_BANDS;
+		vt->rangelow = bands_fm[0].rangelow;
+		vt->rangehigh = bands_fm[0].rangehigh;
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+int vivid_sdr_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt)
+{
+	if (vt->index > 1)
+		return -EINVAL;
+	return 0;
+}
+
+int vidioc_enum_fmt_sdr_cap(struct file *file, void *fh, struct v4l2_fmtdesc *f)
+{
+	if (f->index)
+		return -EINVAL;
+	f->pixelformat = V4L2_SDR_FMT_CU8;
+	strlcpy(f->description, "IQ U8", sizeof(f->description));
+	return 0;
+}
+
+int vidioc_g_fmt_sdr_cap(struct file *file, void *fh, struct v4l2_format *f)
+{
+	f->fmt.sdr.pixelformat = V4L2_SDR_FMT_CU8;
+	f->fmt.sdr.buffersize = SDR_CAP_SAMPLES_PER_BUF * 2;
+	memset(f->fmt.sdr.reserved, 0, sizeof(f->fmt.sdr.reserved));
+	return 0;
+}
+
+#define FIXP_FRAC    (1 << 15)
+#define FIXP_PI      ((int)(FIXP_FRAC * 3.141592653589))
+
+/* cos() from cx88 driver: cx88-dsp.c */
+static s32 fixp_cos(unsigned int x)
+{
+	u32 t2, t4, t6, t8;
+	u16 period = x / FIXP_PI;
+
+	if (period % 2)
+		return -fixp_cos(x - FIXP_PI);
+	x = x % FIXP_PI;
+	if (x > FIXP_PI/2)
+		return -fixp_cos(FIXP_PI/2 - (x % (FIXP_PI/2)));
+	/* Now x is between 0 and FIXP_PI/2.
+	 * To calculate cos(x) we use its Taylor polynomial. */
+	t2 = x*x/FIXP_FRAC/2;
+	t4 = t2*x/FIXP_FRAC*x/FIXP_FRAC/3/4;
+	t6 = t4*x/FIXP_FRAC*x/FIXP_FRAC/5/6;
+	t8 = t6*x/FIXP_FRAC*x/FIXP_FRAC/7/8;
+	return FIXP_FRAC-t2+t4-t6+t8;
+}
+
+static inline s32 fixp_sin(unsigned int x)
+{
+	return -fixp_cos(x + (FIXP_PI / 2));
+}
+
+void vivid_sdr_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf)
+{
+	u8 *vbuf = vb2_plane_vaddr(&buf->vb, 0);
+	unsigned long i;
+	unsigned long plane_size = vb2_plane_size(&buf->vb, 0);
+	int fixp_src_phase_step, fixp_i, fixp_q;
+
+	/*
+	 * TODO: the generated beep tone becomes very crackly when the sample
+	 * rate is increased to ~1 Msps or more. That is caused by the large
+	 * phase-angle rounding error of the cosine implementation used here.
+	 */
+
+	/* calculate phase step */
+	#define BEEP_FREQ 1000 /* 1kHz beep */
+	fixp_src_phase_step = DIV_ROUND_CLOSEST(2 * FIXP_PI * BEEP_FREQ,
+			dev->sdr_adc_freq);
+
+	for (i = 0; i < plane_size; i += 2) {
+		dev->sdr_fixp_mod_phase += fixp_cos(dev->sdr_fixp_src_phase);
+		dev->sdr_fixp_src_phase += fixp_src_phase_step;
+
+		/*
+		 * Transfer phases to [0 / 2xPI] in order to avoid variable
+		 * overflow and make it suitable for cosine implementation
+		 * used, which does not support negative angles.
+		 */
+		while (dev->sdr_fixp_mod_phase < (0 * FIXP_PI))
+			dev->sdr_fixp_mod_phase += (2 * FIXP_PI);
+		while (dev->sdr_fixp_mod_phase > (2 * FIXP_PI))
+			dev->sdr_fixp_mod_phase -= (2 * FIXP_PI);
+
+		while (dev->sdr_fixp_src_phase > (2 * FIXP_PI))
+			dev->sdr_fixp_src_phase -= (2 * FIXP_PI);
+
+		fixp_i = fixp_cos(dev->sdr_fixp_mod_phase);
+		fixp_q = fixp_sin(dev->sdr_fixp_mod_phase);
+
+		/* convert 'fixp float' to u8 */
+		/* u8 = X * 127.5f + 127.5f; where X is float [-1.0 / +1.0] */
+		fixp_i = fixp_i * 1275 + FIXP_FRAC * 1275;
+		fixp_q = fixp_q * 1275 + FIXP_FRAC * 1275;
+		*vbuf++ = DIV_ROUND_CLOSEST(fixp_i, FIXP_FRAC * 10);
+		*vbuf++ = DIV_ROUND_CLOSEST(fixp_q, FIXP_FRAC * 10);
+	}
+}
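
A quick check of the CU8 mapping at the end of vivid_sdr_cap_process(): fixp_cos()/fixp_sin() return values in roughly [-FIXP_FRAC, +FIXP_FRAC], so after multiplying by 1275 and adding FIXP_FRAC * 1275, the DIV_ROUND_CLOSEST() by FIXP_FRAC * 10 maps -1.0 to 0, 0.0 to 128 and +1.0 to 255, i.e. the intended X * 127.5 + 127.5 conversion with rounding.
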
diff --git a/drivers/media/platform/vivid/vivid-sdr-cap.h b/drivers/media/platform/vivid/vivid-sdr-cap.h
new file mode 100644
index 0000000..79c1890
--- /dev/null
+++ b/drivers/media/platform/vivid/vivid-sdr-cap.h
@@ -0,0 +1,34 @@
+/*
+ * vivid-sdr-cap.h - software defined radio support functions.
+ *
+ * Copyright 2014 Cisco Systems, Inc. and/or its affiliates. All rights reserved.
+ *
+ * This program is free software; you may redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef _VIVID_SDR_CAP_H_
+#define _VIVID_SDR_CAP_H_
+
+int vivid_sdr_enum_freq_bands(struct file *file, void *fh, struct v4l2_frequency_band *band);
+int vivid_sdr_g_frequency(struct file *file, void *fh, struct v4l2_frequency *vf);
+int vivid_sdr_s_frequency(struct file *file, void *fh, const struct v4l2_frequency *vf);
+int vivid_sdr_g_tuner(struct file *file, void *fh, struct v4l2_tuner *vt);
+int vivid_sdr_s_tuner(struct file *file, void *fh, const struct v4l2_tuner *vt);
+int vidioc_enum_fmt_sdr_cap(struct file *file, void *fh, struct v4l2_fmtdesc *f);
+int vidioc_g_fmt_sdr_cap(struct file *file, void *fh, struct v4l2_format *f);
+void vivid_sdr_cap_process(struct vivid_dev *dev, struct vivid_buffer *buf);
+
+extern const struct vb2_ops vivid_sdr_cap_qops;
+
+#endif
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCHv2 12/12] vivid: enable the vivid driver
  2014-08-25 11:30 [PATCHv2 00/12] vivid: Virtual Video Test Driver Hans Verkuil
                   ` (10 preceding siblings ...)
  2014-08-25 11:30 ` [PATCHv2 11/12] vivid: add support for software defined radio Hans Verkuil
@ 2014-08-25 11:30 ` Hans Verkuil
  11 siblings, 0 replies; 13+ messages in thread
From: Hans Verkuil @ 2014-08-25 11:30 UTC (permalink / raw)
  To: linux-media; +Cc: Hans Verkuil

From: Hans Verkuil <hans.verkuil@cisco.com>

Update the Kconfig and Makefile files so this driver can be compiled.

Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
---
 drivers/media/platform/Kconfig        |  3 +++
 drivers/media/platform/Makefile       |  2 ++
 drivers/media/platform/vivid/Kconfig  | 19 +++++++++++++++++++
 drivers/media/platform/vivid/Makefile |  6 ++++++
 4 files changed, 30 insertions(+)
 create mode 100644 drivers/media/platform/vivid/Kconfig
 create mode 100644 drivers/media/platform/vivid/Makefile

diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index 6d86646..829a7d7 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -243,6 +243,9 @@ menuconfig V4L_TEST_DRIVERS
 	depends on MEDIA_CAMERA_SUPPORT
 
 if V4L_TEST_DRIVERS
+
+source "drivers/media/platform/vivid/Kconfig"
+
 config VIDEO_VIVI
 	tristate "Virtual Video Driver"
 	depends on VIDEO_DEV && VIDEO_V4L2 && !SPARC32 && !SPARC64
diff --git a/drivers/media/platform/Makefile b/drivers/media/platform/Makefile
index 4ac4c91..29aee16 100644
--- a/drivers/media/platform/Makefile
+++ b/drivers/media/platform/Makefile
@@ -15,8 +15,10 @@ obj-$(CONFIG_VIDEO_MMP_CAMERA) += marvell-ccic/
 obj-$(CONFIG_VIDEO_OMAP3)	+= omap3isp/
 
 obj-$(CONFIG_VIDEO_VIU) += fsl-viu.o
+
 obj-$(CONFIG_VIDEO_VIVI) += vivi.o
 
+obj-$(CONFIG_VIDEO_VIVID)		+= vivid/
 obj-$(CONFIG_VIDEO_MEM2MEM_TESTDEV) += mem2mem_testdev.o
 
 obj-$(CONFIG_VIDEO_TI_VPE)		+= ti-vpe/
diff --git a/drivers/media/platform/vivid/Kconfig b/drivers/media/platform/vivid/Kconfig
new file mode 100644
index 0000000..d71139a
--- /dev/null
+++ b/drivers/media/platform/vivid/Kconfig
@@ -0,0 +1,19 @@
+config VIDEO_VIVID
+	tristate "Virtual Video Test Driver"
+	depends on VIDEO_DEV && VIDEO_V4L2 && !SPARC32 && !SPARC64
+	select FONT_SUPPORT
+	select FONT_8x16
+	select VIDEOBUF2_VMALLOC
+	default n
+	---help---
+	  Enables a virtual video driver. This driver emulates webcam,
+	  TV, S-Video and HDMI capture hardware, including VBI support
+	  for the SDTV inputs. Video output, VBI output, radio receivers,
+	  radio transmitters and software defined radio capture are
+	  emulated as well.
+
+	  It is highly configurable and is ideal for testing applications.
+	  Error injection is supported to test rare errors that are hard
+	  to reproduce in real hardware.
+
+	  Say Y here if you want to test video apps or debug V4L devices.
+	  When in doubt, say N.
diff --git a/drivers/media/platform/vivid/Makefile b/drivers/media/platform/vivid/Makefile
new file mode 100644
index 0000000..756fc12
--- /dev/null
+++ b/drivers/media/platform/vivid/Makefile
@@ -0,0 +1,6 @@
+vivid-objs := vivid-core.o vivid-ctrls.o vivid-vid-common.o vivid-vbi-gen.o \
+		vivid-vid-cap.o vivid-vid-out.o vivid-kthread-cap.o vivid-kthread-out.o \
+		vivid-radio-rx.o vivid-radio-tx.o vivid-radio-common.o \
+		vivid-rds-gen.o vivid-sdr-cap.o vivid-vbi-cap.o vivid-vbi-out.o \
+		vivid-osd.o vivid-tpg.o vivid-tpg-colors.o
+obj-$(CONFIG_VIDEO_VIVID) += vivid.o
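
With CONFIG_VIDEO_VIVID=m all of these objects are linked into a single vivid.ko, so after building the driver can be loaded with a plain 'modprobe vivid'.
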
-- 
2.0.1


^ permalink raw reply related	[flat|nested] 13+ messages in thread
