* Using the V4L2 device kernel drivers - TC358743
@ 2016-01-12 20:38 Dave Stevenson
  2016-01-13  7:58 ` Hans Verkuil
  0 siblings, 1 reply; 5+ messages in thread
From: Dave Stevenson @ 2016-01-12 20:38 UTC (permalink / raw)
  To: linux-media

Hi All.

Apologies for what feels like such a newbie question, but I've failed to 
find useful information elsewhere.

I'm one of the ex-Broadcom developers who is still supporting Raspberry 
Pi, although I'm not employed by Pi Foundation or Trading.
My aim is to open up that platform by exposing the CSI2 receiver block 
(and eventually parts of the ISP) via V4L2. The first use case would be 
for the Toshiba TC358743 HDMI to CSI2 converter, but it should be 
applicable to any of the other device drivers too.
Sadly it probably won't be upstreamable as it will require the GPU to do 
most of the register poking to avoid potential IP issues (Broadcom not 
having released the docs for the relevant hardware blocks). In that 
regard it will be fairly similar to the existing V4L2 driver for the Pi 
camera.

There is now the driver for the TC358743 in mainline, but my stumbling 
block is finding a useful example of how to actually use it. The commit 
text by Mats Randgaard says it was "tested on our hardware and all the 
implemented features works as expected", but I don't know what that 
hardware was or how it was used.

The media controller API seems to be part of the answer, but it looks 
like a lot of overhead for an application to have to connect multiple 
sub-devices together when all it wants is the images coming out the back. 
Is there something that sets up default connections that I'm missing? 
Somewhere within the device tree?

I have looked at the OMAP4 ISS driver as a vaguely similar device, but 
that seemingly covers the image processing pipe only, not hooking in to 
the sensor drivers.

I've also got a slight challenge in that ideally I want the GPU to 
allocate the memory and the ARM to map that memory (we already have a 
service to do that), but I can't see how that would fit in with the 
existing videobuf modes. Any thoughts on how I might be able to support 
that? The existing V4L2 driver ends up doing a full copy of every buffer 
from GPU memory to the ARM, which isn't great for performance.
There may be an option to use contiguous memory and get the GPU to map 
that, but it's more involved as I don't believe the supporting code is 
on the Pi branch.

Any help much appreciated.

Thanks.
   Dave

PS If those involved in the TC358743 driver are reading, a couple of 
quick emails about the possibility of bringing the audio in over CSI2 
rather than I2S would be appreciated. I can split out the relevant CSI2 
ID stream, but have no idea how I would then feed that through the 
kernel to appear via ALSA.


* Re: Using the V4L2 device kernel drivers - TC358743
  2016-01-12 20:38 Using the V4L2 device kernel drivers - TC358743 Dave Stevenson
@ 2016-01-13  7:58 ` Hans Verkuil
  2016-01-13 11:38   ` Mauro Carvalho Chehab
  2016-01-13 20:43   ` Dave Stevenson
  0 siblings, 2 replies; 5+ messages in thread
From: Hans Verkuil @ 2016-01-13  7:58 UTC (permalink / raw)
  To: Dave Stevenson, linux-media; +Cc: Philipp Zabel

Hi Dave,

On 01/12/2016 09:38 PM, Dave Stevenson wrote:
> Hi All.
> 
> Apologies for what feels like such a newbie question, but I've failed to 
> find useful information elsewhere.
> 
> I'm one of the ex-Broadcom developers who is still supporting Raspberry 
> Pi, although I'm not employed by Pi Foundation or Trading.
> My aim is to open up that platform by exposing the CSI2 receiver block 
> (and eventually parts of the ISP) via V4L2. The first use case would be 
> for the Toshiba TC358743 HDMI to CSI2 converter, but it should be 
> applicable to any of the other device drivers too.
> Sadly it probably won't be upstreamable as it will require the GPU to do 
> most of the register poking to avoid potential IP issues (Broadcom not 
> having released the docs for the relevant hardware blocks). In that 
> regard it will be fairly similar to the existing V4L2 driver for the Pi 
> camera.
> 
> There is now the driver for the TC358743 in mainline, but my stumbling 
> block is finding a useful example of how to actually use it. The commit 
> text by Mats Randgaard says it was "tested on our hardware and all the 
> implemented features works as expected", but I don't know what that 
> hardware was or how it was used.

It's for Cisco video conferencing equipment, but it basically boils down to
capturing HDMI input over a CSI2 bus. I believe it's on an omap4.

I know Philipp Zabel also developed for the TC358743. Philipp, do you have a
git tree available that shows how it is used?

> The media controller API seems to be part of the answer, but that seems 
> to be a large overhead for an application to have to connect together 
> multiple sub-devices when it is only interested in images out the back. 

The MC is only needed if you have hardware that allows for complex and/or
dynamic internal video routing. For a standard linear video pipeline it
is not needed.

> Is there something that sets up default connections that I'm missing? 
> Somewhere within device tree?

Typically the bridge driver (i.e. the platform driver that sets up the
pipeline and creates the video devices) will use the device tree to find
the v4l2-subdevice(s) it has to load and hook them into the pipeline.

drivers/media/platform/am437x/am437x-vpfe.c looks to be a decent example
of that.
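
Very roughly, the pattern is something like the sketch below (simplified
and untested; the struct and function names are made up for illustration,
and the exact v4l2-async API details vary between kernel versions):

/*
 * Sketch: a bridge driver follows its DT of_graph endpoint to the remote
 * subdev node (here the tc358743) and registers it with v4l2-async, so
 * the subdev gets hooked up when its driver probes.
 */
#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_graph.h>
#include <media/v4l2-async.h>
#include <media/v4l2-device.h>

struct my_csi2_dev {			/* invented for this example */
	struct device *dev;
	struct v4l2_device v4l2_dev;
	struct v4l2_async_notifier notifier;
	struct v4l2_async_subdev asd;
	struct v4l2_async_subdev *asds[1];
};

static int my_csi2_subdev_bound(struct v4l2_async_notifier *notifier,
				struct v4l2_subdev *subdev,
				struct v4l2_async_subdev *asd)
{
	/* The tc358743 subdev has probed; stash it and create /dev/videoN. */
	return 0;
}

static int my_csi2_register_subdevs(struct my_csi2_dev *csi2)
{
	struct device_node *ep, *remote;

	/* Walk from our port/endpoint to the remote subdev's DT node. */
	ep = of_graph_get_next_endpoint(csi2->dev->of_node, NULL);
	if (!ep)
		return -EINVAL;
	remote = of_graph_get_remote_port_parent(ep);
	of_node_put(ep);
	if (!remote)
		return -EINVAL;

	csi2->asd.match_type = V4L2_ASYNC_MATCH_OF;
	csi2->asd.match.of.node = remote;
	csi2->asds[0] = &csi2->asd;

	csi2->notifier.subdevs = csi2->asds;
	csi2->notifier.num_subdevs = 1;
	csi2->notifier.bound = my_csi2_subdev_bound;

	return v4l2_async_notifier_register(&csi2->v4l2_dev, &csi2->notifier);
}

The device tree side then just needs the tc358743 node to have a
port/endpoint linked to the receiver's endpoint.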

Of course, if you have more complex pipelines, then you need to support
the MC.

> 
> I have looked at the OMAP4 ISS driver as a vaguely similar device, but 
> that seemingly covers the image processing pipe only, not hooking in to 
> the sensor drivers.

Sensor drivers are hooked in by the function iss_register_entities(); see
the section marked "/* Register external entities */".

> I've also got a slight challenge in that ideally I want the GPU to 
> allocate the memory, and ARM map that memory (we already have a service 
> to do that), but I can't see how that would fit in with the the existing 
> videobuf modes. Any thoughts on how I might be able to support that? The 
> existing V4L2 driver ends up doing a full copy of every buffer from GPU 
> memory to ARM, which isn't great for performance.
> There may be an option to use contiguous memory and get the GPU to map 
> that, but it's more involved as I don't believe the supporting code is 
> on the Pi branch.

The proper way to do this is for the GPU to export its buffers as DMABUF
file descriptors and then import them into V4L2 (V4L2_MEMORY_DMABUF). The
videobuf2 framework will handle all the details for you, so it is trivial
on the V4L2 side.
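
On the application side that ends up looking something like this (a rough,
untested sketch; the dmabuf fds are assumed to come from whatever exporter
your GPU allocator provides):

/* Sketch: queue GPU-exported dmabuf fds into a V4L2 capture device. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int queue_gpu_dmabufs(int vfd, const int *dmabuf_fds, unsigned int count)
{
	struct v4l2_requestbuffers req;
	int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	unsigned int i;

	memset(&req, 0, sizeof(req));
	req.count = count;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_DMABUF;
	if (ioctl(vfd, VIDIOC_REQBUFS, &req) < 0)
		return -1;

	for (i = 0; i < req.count; i++) {
		struct v4l2_buffer buf;

		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_DMABUF;
		buf.index = i;
		buf.m.fd = dmabuf_fds[i];	/* fd exported by the GPU side */
		if (ioctl(vfd, VIDIOC_QBUF, &buf) < 0)
			return -1;
	}

	return ioctl(vfd, VIDIOC_STREAMON, &type);
}

vb2 attaches and maps the dma-bufs for device DMA itself, so there is no
per-frame copy on the capture path.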

If the GPU doesn't support dmabuf, then I'm not sure if you can do this
without horrible hacks.

Regards,

	Hans

> 
> Any help much appreciated.
> 
> Thanks.
>    Dave
> 
> PS If those involved in the TC358743 driver are reading, a couple of 
> quick emails over the possibility of bringing the audio in over CSI2 
> rather than I2S would be appreciated. I can split out the relevant CSI2 
> ID stream, but have no idea how I would then feed that through the 
> kernel to appear via ALSA.



* Re: Using the V4L2 device kernel drivers - TC358743
  2016-01-13  7:58 ` Hans Verkuil
@ 2016-01-13 11:38   ` Mauro Carvalho Chehab
  2016-01-13 20:44     ` Dave Stevenson
  2016-01-13 20:43   ` Dave Stevenson
  1 sibling, 1 reply; 5+ messages in thread
From: Mauro Carvalho Chehab @ 2016-01-13 11:38 UTC (permalink / raw)
  To: Hans Verkuil; +Cc: Dave Stevenson, linux-media, Philipp Zabel

Em Wed, 13 Jan 2016 08:58:08 +0100
Hans Verkuil <hverkuil@xs4all.nl> escreveu:

> Hi Dave,
> 
> On 01/12/2016 09:38 PM, Dave Stevenson wrote:
> > Hi All.
> > 
> > Apologies for what feels like such a newbie question, but I've failed to 
> > find useful information elsewhere.
> > 
> > I'm one of the ex-Broadcom developers who is still supporting Raspberry 
> > Pi, although I'm not employed by Pi Foundation or Trading.
> > My aim is to open up that platform by exposing the CSI2 receiver block 
> > (and eventually parts of the ISP) via V4L2. The first use case would be 
> > for the Toshiba TC358743 HDMI to CSI2 converter, but it should be 
> > applicable to any of the other device drivers too.
> > Sadly it probably won't be upstreamable as it will require the GPU to do 
> > most of the register poking to avoid potential IP issues (Broadcom not 
> > having released the docs for the relevant hardware blocks). In that 
> > regard it will be fairly similar to the existing V4L2 driver for the Pi 
> > camera.

Hmm.... Broadcom wrote a new GPU driver (vc4) that was recently
upstreamed:
	https://wiki.freedesktop.org/dri/VC4/

Maybe that driver could be used or modified to work with a V4L2 driver.

> > 
> > There is now the driver for the TC358743 in mainline, but my stumbling 
> > block is finding a useful example of how to actually use it. The commit 
> > text by Mats Randgaard says it was "tested on our hardware and all the 
> > implemented features works as expected", but I don't know what that 
> > hardware was or how it was used.  
> 
> It's for Cisco video conferencing equipment, but it basically boils down to
> capturing HDMI input over a CSI2 bus. I believe it's on an omap4.
> 
> I know Philip Zabel also developed for the TC358743. Philip, do you have a
> git tree available that shows how it is used?
> 
> > The media controller API seems to be part of the answer, but that seems 
> > to be a large overhead for an application to have to connect together 
> > multiple sub-devices when it is only interested in images out the back.   
> 
> The MC is only needed if you have hardware that allows for complex and/or
> dynamic internal video routing. For a standard linear video pipeline it
> is not needed.
> 
> > Is there something that sets up default connections that I'm missing? 
> > Somewhere within device tree?  
> 
> Typically the bridge driver (i.e. the platform driver that sets up the
> pipeline and creates the video devices) will use the device tree to find
> the v4l2-subdevice(s) it has to load and hooks them into the pipeline.
> 
> drivers/media/platform/am437x/am437x-vpfe.c looks to be a decent example
> of that.
> 
> Of course, if you have more complex pipelines, then you need to support
> the MC.
> 
> > 
> > I have looked at the OMAP4 ISS driver as a vaguely similar device, but 
> > that seemingly covers the image processing pipe only, not hooking in to 
> > the sensor drivers.  
> 
> Sensor drivers are hooked in in function iss_register_entities(), see the
> section for "/* Register external entities */".
> 
> > I've also got a slight challenge in that ideally I want the GPU to 
> > allocate the memory, and ARM map that memory (we already have a service 
> > to do that), but I can't see how that would fit in with the the existing 
> > videobuf modes. Any thoughts on how I might be able to support that? The 
> > existing V4L2 driver ends up doing a full copy of every buffer from GPU 
> > memory to ARM, which isn't great for performance.
> > There may be an option to use contiguous memory and get the GPU to map 
> > that, but it's more involved as I don't believe the supporting code is 
> > on the Pi branch.  
> 
> The proper way to do this is that the GPU can export buffers as a DMABUF file
> descriptor, then import them in V4L2 (V4L2_MEMORY_DMABUF). The videobuf2
> v4l2 framework will handle all the details for you, so it is trivial on
> the v4l2 side.
> 
> If the GPU doesn't support dmabuf, then I'm not sure if you can do this
> without horrible hacks.
> 
> Regards,
> 
> 	Hans
> 
> > 
> > Any help much appreciated.
> > 
> > Thanks.
> >    Dave
> > 
> > PS If those involved in the TC358743 driver are reading, a couple of 
> > quick emails over the possibility of bringing the audio in over CSI2 
> > rather than I2S would be appreciated. I can split out the relevant CSI2 
> > ID stream, but have no idea how I would then feed that through the 
> > kernel to appear via ALSA.


* Re: Using the V4L2 device kernel drivers - TC358743
  2016-01-13  7:58 ` Hans Verkuil
  2016-01-13 11:38   ` Mauro Carvalho Chehab
@ 2016-01-13 20:43   ` Dave Stevenson
  1 sibling, 0 replies; 5+ messages in thread
From: Dave Stevenson @ 2016-01-13 20:43 UTC (permalink / raw)
  To: Hans Verkuil, linux-media; +Cc: Philipp Zabel

Hi Hans,

On 13/01/2016 07:58, Hans Verkuil wrote:
> Hi Dave,
>
> On 01/12/2016 09:38 PM, Dave Stevenson wrote:
>> Hi All.
>>
>> Apologies for what feels like such a newbie question, but I've failed to
>> find useful information elsewhere.
>>
>> I'm one of the ex-Broadcom developers who is still supporting Raspberry
>> Pi, although I'm not employed by Pi Foundation or Trading.
>> My aim is to open up that platform by exposing the CSI2 receiver block
>> (and eventually parts of the ISP) via V4L2. The first use case would be
>> for the Toshiba TC358743 HDMI to CSI2 converter, but it should be
>> applicable to any of the other device drivers too.
>> Sadly it probably won't be upstreamable as it will require the GPU to do
>> most of the register poking to avoid potential IP issues (Broadcom not
>> having released the docs for the relevant hardware blocks). In that
>> regard it will be fairly similar to the existing V4L2 driver for the Pi
>> camera.
>>
>> There is now the driver for the TC358743 in mainline, but my stumbling
>> block is finding a useful example of how to actually use it. The commit
>> text by Mats Randgaard says it was "tested on our hardware and all the
>> implemented features works as expected", but I don't know what that
>> hardware was or how it was used.
>
> It's for Cisco video conferencing equipment, but it basically boils down to
> capturing HDMI input over a CSI2 bus. I believe it's on an omap4.
>
> I know Philip Zabel also developed for the TC358743. Philip, do you have a
> git tree available that shows how it is used?

An example tree would be great :-)

>> The media controller API seems to be part of the answer, but that seems
>> to be a large overhead for an application to have to connect together
>> multiple sub-devices when it is only interested in images out the back.
>
> The MC is only needed if you have hardware that allows for complex and/or
> dynamic internal video routing. For a standard linear video pipeline it
> is not needed.

Things may become more complex if I can integrate the ISP or expose 
multiple CSI rx instances, but initially I can keep it simple.

>> Is there something that sets up default connections that I'm missing?
>> Somewhere within device tree?
>
> Typically the bridge driver (i.e. the platform driver that sets up the
> pipeline and creates the video devices) will use the device tree to find
> the v4l2-subdevice(s) it has to load and hooks them into the pipeline.
>
> drivers/media/platform/am437x/am437x-vpfe.c looks to be a decent example
> of that.

Perfect. Bedtime reading :-)

> Of course, if you have more complex pipelines, then you need to support
> the MC.
>
>>
>> I have looked at the OMAP4 ISS driver as a vaguely similar device, but
>> that seemingly covers the image processing pipe only, not hooking in to
>> the sensor drivers.
>
> Sensor drivers are hooked in in function iss_register_entities(), see the
> section for "/* Register external entities */".

OK, stuff squirreled away in iss_platform_data. I'll take a look in more 
detail later.

>> I've also got a slight challenge in that ideally I want the GPU to
>> allocate the memory, and ARM map that memory (we already have a service
>> to do that), but I can't see how that would fit in with the the existing
>> videobuf modes. Any thoughts on how I might be able to support that? The
>> existing V4L2 driver ends up doing a full copy of every buffer from GPU
>> memory to ARM, which isn't great for performance.
>> There may be an option to use contiguous memory and get the GPU to map
>> that, but it's more involved as I don't believe the supporting code is
>> on the Pi branch.
>
> The proper way to do this is that the GPU can export buffers as a DMABUF file
> descriptor, then import them in V4L2 (V4L2_MEMORY_DMABUF). The videobuf2
> v4l2 framework will handle all the details for you, so it is trivial on
> the v4l2 side.
>
> If the GPU doesn't support dmabuf, then I'm not sure if you can do this
> without horrible hacks.

It doesn't directly support dmabuf. :-( The GPU has full access to RAM, 
but it doesn't have an IOMMU, so buffers have to be contiguous.
I'll have a look at how complex it would be to import the code that maps 
contiguous ARM memory into GPU space. I suspect it can be made to work, 
but I remember it took a long time to get working cleanly at Broadcom, 
and that was on Android with ION buffers controlled from userspace. 
Going from ION to dmabuf shouldn't be too bad, but it will need a bit of 
work to sort it out from the kernel side.
I can always go for the painful copy to start with and improve the 
buffer handling later.
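
(For reference, my rough understanding is that the queue setup on the
receiver side would end up something like the sketch below - untested,
the names are placeholders, and the dma-contig allocator context plumbing
is left out.)

/*
 * Sketch: vb2 queue setup for the capture node, allowing both MMAP and
 * DMABUF so that GPU-exported contiguous buffers can be imported directly
 * (no IOMMU here, hence videobuf2-dma-contig).
 */
#include <linux/videodev2.h>
#include <media/videobuf2-v4l2.h>
#include <media/videobuf2-dma-contig.h>

static int my_init_vb2_queue(struct vb2_queue *q, void *drv_priv,
			     const struct vb2_ops *ops, struct mutex *lock)
{
	q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	q->io_modes = VB2_MMAP | VB2_DMABUF;
	q->drv_priv = drv_priv;
	q->buf_struct_size = sizeof(struct vb2_v4l2_buffer);
	q->ops = ops;
	q->mem_ops = &vb2_dma_contig_memops;
	q->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
	q->lock = lock;

	return vb2_queue_init(q);
}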

Thanks for the advice.
   Dave


> Regards,
>
> 	Hans
>
>>
>> Any help much appreciated.
>>
>> Thanks.
>>     Dave
>>
>> PS If those involved in the TC358743 driver are reading, a couple of
>> quick emails over the possibility of bringing the audio in over CSI2
>> rather than I2S would be appreciated. I can split out the relevant CSI2
>> ID stream, but have no idea how I would then feed that through the
>> kernel to appear via ALSA.



* Re: Using the V4L2 device kernel drivers - TC358743
  2016-01-13 11:38   ` Mauro Carvalho Chehab
@ 2016-01-13 20:44     ` Dave Stevenson
  0 siblings, 0 replies; 5+ messages in thread
From: Dave Stevenson @ 2016-01-13 20:44 UTC (permalink / raw)
  To: Mauro Carvalho Chehab, Hans Verkuil; +Cc: linux-media, Philipp Zabel

On 13/01/2016 11:38, Mauro Carvalho Chehab wrote:
> Em Wed, 13 Jan 2016 08:58:08 +0100
> Hans Verkuil <hverkuil@xs4all.nl> escreveu:
>
>> Hi Dave,
>>
>> On 01/12/2016 09:38 PM, Dave Stevenson wrote:
>>> Hi All.
>>>
>>> Apologies for what feels like such a newbie question, but I've failed to
>>> find useful information elsewhere.
>>>
>>> I'm one of the ex-Broadcom developers who is still supporting Raspberry
>>> Pi, although I'm not employed by Pi Foundation or Trading.
>>> My aim is to open up that platform by exposing the CSI2 receiver block
>>> (and eventually parts of the ISP) via V4L2. The first use case would be
>>> for the Toshiba TC358743 HDMI to CSI2 converter, but it should be
>>> applicable to any of the other device drivers too.
>>> Sadly it probably won't be upstreamable as it will require the GPU to do
>>> most of the register poking to avoid potential IP issues (Broadcom not
>>> having released the docs for the relevant hardware blocks). In that
>>> regard it will be fairly similar to the existing V4L2 driver for the Pi
>>> camera.
>
> Hmm.... Broadcom wrote a new GPU driver (vc4) that was recently
> upstreamed:
> 	https://wiki.freedesktop.org/dri/VC4/
>
> Maybe its driver could be used/modified to cope with a V4L2 driver.

Broadcom released the IP details for the 3D graphics side 
(http://www.broadcom.com/docs/support/videocore/VideoCoreIV-AG100-R.pdf). Nothing 
for the imaging pipe, codecs, or other VideoCore hardware blocks.
They did make a slight whoopsie in that the register names and offsets 
for all blocks were also released. Sadly the bit-usage and descriptions 
weren't :-(

I'm actually waiting for the complaints about the camera not working with 
that new stack - the AWB algorithm runs on the 3D processing units, as 
they support vector floating-point operations. Having both the VideoCore 
processor and the ARM programming those units is currently going to break 
things :-(

>>>
>>> There is now the driver for the TC358743 in mainline, but my stumbling
>>> block is finding a useful example of how to actually use it. The commit
>>> text by Mats Randgaard says it was "tested on our hardware and all the
>>> implemented features works as expected", but I don't know what that
>>> hardware was or how it was used.
>>
>> It's for Cisco video conferencing equipment, but it basically boils down to
>> capturing HDMI input over a CSI2 bus. I believe it's on an omap4.
>>
>> I know Philip Zabel also developed for the TC358743. Philip, do you have a
>> git tree available that shows how it is used?
>>
>>> The media controller API seems to be part of the answer, but that seems
>>> to be a large overhead for an application to have to connect together
>>> multiple sub-devices when it is only interested in images out the back.
>>
>> The MC is only needed if you have hardware that allows for complex and/or
>> dynamic internal video routing. For a standard linear video pipeline it
>> is not needed.
>>
>>> Is there something that sets up default connections that I'm missing?
>>> Somewhere within device tree?
>>
>> Typically the bridge driver (i.e. the platform driver that sets up the
>> pipeline and creates the video devices) will use the device tree to find
>> the v4l2-subdevice(s) it has to load and hooks them into the pipeline.
>>
>> drivers/media/platform/am437x/am437x-vpfe.c looks to be a decent example
>> of that.
>>
>> Of course, if you have more complex pipelines, then you need to support
>> the MC.
>>
>>>
>>> I have looked at the OMAP4 ISS driver as a vaguely similar device, but
>>> that seemingly covers the image processing pipe only, not hooking in to
>>> the sensor drivers.
>>
>> Sensor drivers are hooked in in function iss_register_entities(), see the
>> section for "/* Register external entities */".
>>
>>> I've also got a slight challenge in that ideally I want the GPU to
>>> allocate the memory, and ARM map that memory (we already have a service
>>> to do that), but I can't see how that would fit in with the the existing
>>> videobuf modes. Any thoughts on how I might be able to support that? The
>>> existing V4L2 driver ends up doing a full copy of every buffer from GPU
>>> memory to ARM, which isn't great for performance.
>>> There may be an option to use contiguous memory and get the GPU to map
>>> that, but it's more involved as I don't believe the supporting code is
>>> on the Pi branch.
>>
>> The proper way to do this is that the GPU can export buffers as a DMABUF file
>> descriptor, then import them in V4L2 (V4L2_MEMORY_DMABUF). The videobuf2
>> v4l2 framework will handle all the details for you, so it is trivial on
>> the v4l2 side.
>>
>> If the GPU doesn't support dmabuf, then I'm not sure if you can do this
>> without horrible hacks.
>>
>> Regards,
>>
>> 	Hans
>>
>>>
>>> Any help much appreciated.
>>>
>>> Thanks.
>>>     Dave
>>>
>>> PS If those involved in the TC358743 driver are reading, a couple of
>>> quick emails over the possibility of bringing the audio in over CSI2
>>> rather than I2S would be appreciated. I can split out the relevant CSI2
>>> ID stream, but have no idea how I would then feed that through the
>>> kernel to appear via ALSA.


