* RFC: bus configuration setup for sub-devices
@ 2009-08-29 14:31 Hans Verkuil
  2009-08-30 15:46 ` Hiremath, Vaibhav
                   ` (4 more replies)
  0 siblings, 5 replies; 18+ messages in thread
From: Hans Verkuil @ 2009-08-29 14:31 UTC (permalink / raw)
  To: linux-media
  Cc: Guennadi Liakhovetski, Sakari Ailus, Muralidharan Karicheri,
	Laurent Pinchart

Hi all,

This is an updated RFC on how to set up the bus for sub-devices.

It is based on work from Guennadi and Muralidharan.

The problem is that both sub-devices and bridge devices have to configure
their bus correctly so that they can talk to one another. We need a
standard way of configuring such busses.

The soc-camera driver did this by auto-negotiation. For several reasons (see
the threads on bus parameters in the past) I thought that that was not a good
idea. After talking this over last week with Guennadi we agreed that we would
configure busses directly rather than negotiate the bus configuration. It was
a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi Liakhovetski
and myself) and something that we should repeat. Face-to-face meetings make
it so much faster to come to a decision on difficult problems.

My last proposal merged subdev and bridge parameters into one struct, thus
completely describing the bus setup. I realized that there is a problem with
that if you have to define the bus for sub-devices that are in the middle of
a chain: e.g. a sensor sends its video to an image processing subdev and from
there it goes to the bridge. You have to be able to specify both the source and
sink part of each bus for that image processing subdev.

It's much easier to do that by keeping the source and sink bus config
separate.

Here is my new proposal:

/*
 * Some sub-devices are connected to the host/bridge device through a bus that
 * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 * carry the sync embedded in the data whereas others have separate lines
 * carrying the sync signals.
 */
struct v4l2_bus_config {
        /* embedded sync, set this when sync is embedded in the data stream */
        unsigned embedded_sync:1;
        /* master or slave */
        unsigned is_master:1;

        /* bus width */
        unsigned width:8;
        /* 0 - active low, 1 - active high */
        unsigned pol_vsync:1;
        /* 0 - active low, 1 - active high */
        unsigned pol_hsync:1;
        /* 0 - low to high, 1 - high to low */
        unsigned pol_field:1;
        /* 0 - sample at falling edge, 1 - sample at rising edge */
        unsigned edge_pclock:1;
        /* 0 - active low, 1 - active high */
        unsigned pol_data:1;
};

It's all bitfields, so it is a very compact representation.

In video_ops we add two new functions:

     int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
     int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);

Actually, if there are no subdevs that can act as sinks then we should omit
s_sink_bus_config for now.
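
To make this a bit more concrete, here is a rough sketch of how a bridge
driver might use the new op with a fixed, board-supplied configuration.
This is purely illustrative: except for v4l2_subdev_call() and the struct
proposed above, every name and value below is a made-up placeholder.

/* Fixed bus settings for this board (example values only) */
static const struct v4l2_bus_config board_cam_bus = {
        .embedded_sync  = 0,    /* separate hsync/vsync lines */
        .is_master      = 1,    /* this side is the bus master */
        .width          = 10,
        .pol_vsync      = 1,    /* active high */
        .pol_hsync      = 1,    /* active high */
        .edge_pclock    = 1,    /* sample at rising edge */
        .pol_data       = 1,    /* active high */
};

static int bridge_connect_sensor(struct v4l2_subdev *sensor)
{
        int err;

        /* Tell the sensor how to drive its (source) side of the bus. */
        err = v4l2_subdev_call(sensor, video, s_source_bus_config,
                               &board_cam_bus);
        if (err && err != -ENOIOCTLCMD)
                return err;

        /* ...and then program the bridge's own receiver with the same values. */
        return 0;
}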

In addition, I think we need to require that at the start of the s_*_bus_config
implementation in the host or subdev there should be a standard comment
block describing the possible combinations supported by the hardware:

/* This hardware supports the following bus settings:

   widths: 8/16
   syncs: embedded or separate
   bus master: slave
   vsync polarity: 0/1
   hsync polarity: 0/1
   field polarity: not applicable
   sampling edge pixelclock: 0/1
   data polarity: 1
 */

This should make it easy for implementers to pick a valid set of bus
parameters.
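
And, purely as a sketch of the subdev side (the foo_* names and the limits
below are invented, not taken from any real driver), the implementation could
then validate the requested config against that comment block:

static int foo_s_source_bus_config(struct v4l2_subdev *sd,
                                   const struct v4l2_bus_config *bus)
{
        /* Reject anything the hardware (as documented above) cannot do */
        if (bus->width != 8 && bus->width != 16)
                return -EINVAL;
        if (bus->is_master)     /* this device can only be a bus slave */
                return -EINVAL;
        if (!bus->pol_data)     /* only active high data is supported */
                return -EINVAL;

        /* program the sync/polarity/edge registers accordingly */
        return foo_write_bus_registers(sd, bus);
}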

Eagle-eyed observers will note that the bus type field has disappeared from
this proposal. The problem with that is that I have no clue what it is supposed
to do. Is there more than one bus that can be set up? In that case it is not a
bus type but a bus ID. Or does it determine the type of data that is being
transported over the bus? In that case it does not belong here, since that is
something for a s_fmt type API.

This particular API should just setup the physical bus. Nothing more, IMHO.

Please let me know if I am missing something here.

Guennadi, from our meeting I understood that you also want a way to provide
an offset in case the data is actually only on some pins (e.g. the lower
or upper X pins are always 0). I don't think it is hard to provide support
for that in this API by adding an offset field or something like that.

Can you give me a pointer to the actual place where that is needed? Possibly
also with references to datasheets? I'd like to understand this better.

Sakari, this proposal is specific to parallel busses. I understood that Nokia
also has to deal with serial busses. Can you take a look at this proposal with
a serial bus in mind?

This bus config stuff has been in limbo for too long, so I'd like to come
to a conclusion and implementation in 1-2 weeks.

Regards,

       Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* RE: bus configuration setup for sub-devices
  2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
@ 2009-08-30 15:46 ` Hiremath, Vaibhav
  2009-08-30 19:35   ` Hans Verkuil
  2009-08-30 22:19 ` RFC: " Guennadi Liakhovetski
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Hiremath, Vaibhav @ 2009-08-30 15:46 UTC (permalink / raw)
  To: Hans Verkuil, linux-media
  Cc: Guennadi Liakhovetski, Sakari Ailus, Karicheri, Muralidharan,
	Laurent Pinchart

> -----Original Message-----
> From: linux-media-owner@vger.kernel.org [mailto:linux-media-
> owner@vger.kernel.org] On Behalf Of Hans Verkuil
> Sent: Saturday, August 29, 2009 8:01 PM
> To: linux-media@vger.kernel.org
> Cc: Guennadi Liakhovetski; Sakari Ailus; Karicheri, Muralidharan;
> Laurent Pinchart
> Subject: RFC: bus configuration setup for sub-devices
> 
> Hi all,
> 
> This is an updated RFC on how to setup the bus for sub-devices.
> 
> It is based on work from Guennadi and Muralidharan.
> 
> The problem is that both sub-devices and bridge devices have to
> configure
> their bus correctly so that they can talk to one another. We need a
> standard way of configuring such busses.
> 
> The soc-camera driver did this by auto-negotiation. For several
> reasons (see
> the threads on bus parameters in the past) I thought that that was
> not a good
> idea. After talking this over last week with Guennadi we agreed that
> we would
> configure busses directly rather than negotiate the bus
> configuration. It was
> a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi
> Liakhovetski
> and myself) and something that we should repeat. Face-to-face
> meetings make
> it so much faster to come to a decision on difficult problems.
> 
> My last proposal merged subdev and bridge parameters into one
> struct, thus
> completely describing the bus setup. I realized that there is a
> problem with
> that if you have to define the bus for sub-devices that are in the
> middle of
> a chain: e.g. a sensor sends its video to a image processing subdev
> and from
> there it goes to the bridge. You have to be able to specify both the
> source and
> sink part of each bus for that image processing subdev.
> 
> It's much easier to do that by keeping the source and sink bus
> config
> separate.
> 
> Here is my new proposal:
> 
> /*
>  * Some sub-devices are connected to the host/bridge device through
> a bus that
>  * carries the clock, vsync, hsync and data. Some interfaces such as
> BT.656
>  * carries the sync embedded in the data whereas others have
> separate lines
>  * carrying the sync signals.
>  */
> struct v4l2_bus_config {
>         /* embedded sync, set this when sync is embedded in the data
> stream */
>         unsigned embedded_sync:1;
>         /* master or slave */
>         unsigned is_master:1;
> 
>         /* bus width */
>         unsigned width:8;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_vsync:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_hsync:1;
>         /* 0 - low to high, 1 - high to low */
>         unsigned pol_field:1;
>         /* 0 - sample at falling edge, 1 - sample at rising edge */
>         unsigned edge_pclock:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_data:1;
> };
> 
> It's all bitfields, so it is a very compact representation.
> 
[Hiremath, Vaibhav] If I understand correctly, all the above data will come to the host/bridge driver from the board file, right?

Thanks,
Vaibhav
> In video_ops we add two new functions:
> 
>      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct
> v4l2_bus_config *bus);
>      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct
> v4l2_bus_config *bus);
> 
> Actually, if there are no subdevs that can act as sinks then we
> should omit
> s_sink_bus_config for now.
> 
> In addition, I think we need to require that at the start of the
> s_*_bus_config
> implementation in the host or subdev there should be a standard
> comment
> block describing the possible combinations supported by the
> hardware:
> 
> /* This hardware supports the following bus settings:
> 
>    widths: 8/16
>    syncs: embedded or separate
>    bus master: slave
>    vsync polarity: 0/1
>    hsync polarity: 0/1
>    field polarity: not applicable
>    sampling edge pixelclock: 0/1
>    data polarity: 1
>  */
> 
> This should make it easy for implementers to pick a valid set of bus
> parameters.
> 
> Eagle-eyed observers will note that the bus type field has
> disappeared from
> this proposal. The problem with that is that I have no clue what it
> is supposed
> to do. Is there more than one bus that can be set up? In that case
> it is not a
> bus type but a bus ID. Or does it determine that type of data that
> is being
> transported over the bus? In that case it does not belong here,
> since that is
> something for a s_fmt type API.
> 
> This particular API should just setup the physical bus. Nothing
> more, IMHO.
> 
> Please let me know if I am missing something here.
> 
> Guennadi, from our meeting I understood that you also want a way
> provide
> an offset in case the data is actually only on some pins (e.g. the
> lower
> or upper X pins are always 0). I don't think it is hard to provide
> support
> for that in this API by adding an offset field or something like
> that.
> 
> Can you give me a pointer to the actual place where that is needed?
> Possibly
> also with references to datasheets? I'd like to understand this
> better.
> 
> Sakari, this proposal is specific to parallel busses. I understood
> that Nokia
> also has to deal with serial busses. Can you take a look at this
> proposal with
> a serial bus in mind?
> 
> This bus config stuff has been in limbo for too long, so I'd like to
> come
> to a conclusion and implementation in 1-2 weeks.
> 
> Regards,
> 
>        Hans
> 
> --
> Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* Re: bus configuration setup for sub-devices
  2009-08-30 15:46 ` Hiremath, Vaibhav
@ 2009-08-30 19:35   ` Hans Verkuil
  0 siblings, 0 replies; 18+ messages in thread
From: Hans Verkuil @ 2009-08-30 19:35 UTC (permalink / raw)
  To: Hiremath, Vaibhav
  Cc: linux-media, Guennadi Liakhovetski, Sakari Ailus, Karicheri,
	Muralidharan, Laurent Pinchart

On Sunday 30 August 2009 17:46:36 Hiremath, Vaibhav wrote:
> > -----Original Message-----

<snip>

> > 
> > /*
> >  * Some sub-devices are connected to the host/bridge device through
> > a bus that
> >  * carries the clock, vsync, hsync and data. Some interfaces such as
> > BT.656
> >  * carries the sync embedded in the data whereas others have
> > separate lines
> >  * carrying the sync signals.
> >  */
> > struct v4l2_bus_config {
> >         /* embedded sync, set this when sync is embedded in the data
> > stream */
> >         unsigned embedded_sync:1;
> >         /* master or slave */
> >         unsigned is_master:1;
> > 
> >         /* bus width */
> >         unsigned width:8;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_vsync:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_hsync:1;
> >         /* 0 - low to high, 1 - high to low */
> >         unsigned pol_field:1;
> >         /* 0 - sample at falling edge, 1 - sample at rising edge */
> >         unsigned edge_pclock:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_data:1;
> > };
> > 
> > It's all bitfields, so it is a very compact representation.
> > 
> [Hiremath, Vaibhav] If I understand correctly, all the above data will come to host/bridge driver from board file, right?

Yes, that's correct.

Regards,

	Hans


-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: RFC: bus configuration setup for sub-devices
  2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
  2009-08-30 15:46 ` Hiremath, Vaibhav
@ 2009-08-30 22:19 ` Guennadi Liakhovetski
  2009-08-31  6:23   ` Hans Verkuil
  2009-08-30 22:31 ` Laurent Pinchart
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 18+ messages in thread
From: Guennadi Liakhovetski @ 2009-08-30 22:19 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Linux Media Mailing List, Sakari Ailus, Muralidharan Karicheri,
	Laurent Pinchart

On Sat, 29 Aug 2009, Hans Verkuil wrote:

> Hi all,
> 
> This is an updated RFC on how to setup the bus for sub-devices.
> 
> It is based on work from Guennadi and Muralidharan.
> 
> The problem is that both sub-devices and bridge devices have to configure
> their bus correctly so that they can talk to one another. We need a
> standard way of configuring such busses.
> 
> The soc-camera driver did this by auto-negotiation. For several reasons (see
> the threads on bus parameters in the past) I thought that that was not a good
> idea. After talking this over last week with Guennadi we agreed that we would
> configure busses directly rather than negotiate the bus configuration. It was
> a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi Liakhovetski
> and myself) and something that we should repeat. Face-to-face meetings make
> it so much faster to come to a decision on difficult problems.
> 
> My last proposal merged subdev and bridge parameters into one struct, thus
> completely describing the bus setup. I realized that there is a problem with
> that if you have to define the bus for sub-devices that are in the middle of
> a chain: e.g. a sensor sends its video to a image processing subdev and from
> there it goes to the bridge. You have to be able to specify both the source and
> sink part of each bus for that image processing subdev.
> 
> It's much easier to do that by keeping the source and sink bus config
> separate.
> 
> Here is my new proposal:
> 
> /*
>  * Some sub-devices are connected to the host/bridge device through a bus that
>  * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
>  * carries the sync embedded in the data whereas others have separate lines
>  * carrying the sync signals.
>  */
> struct v4l2_bus_config {
>         /* embedded sync, set this when sync is embedded in the data stream */
>         unsigned embedded_sync:1;
>         /* master or slave */
>         unsigned is_master:1;

Up to now I have usually seen the master-slave relationship defined by 
whether the protocol is "master" or "slave," always from the PoV of the 
bridge. I.e., even in a camera datasheet a phrase like "supports 
master-parallel mode" means it supports a mode in which the bridge is the 
master and the camera is the slave. So, maybe it would be better to use a 
.master_mode flag instead of an .is_master flag?

Besides, aren't there any other bus synchronisation models apart from

data + master clock + pixel clock + hsync + vsync
and
data + master clock + pixel clock + embedded sync

? For example, we should be able to specify that the field signal is not 
connected?

> 
>         /* bus width */
>         unsigned width:8;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_vsync:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_hsync:1;
>         /* 0 - low to high, 1 - high to low */
>         unsigned pol_field:1;
>         /* 0 - sample at falling edge, 1 - sample at rising edge */
>         unsigned edge_pclock:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_data:1;
> };
> 
> It's all bitfields, so it is a very compact representation.
> 
> In video_ops we add two new functions:
> 
>      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
>      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> 
> Actually, if there are no subdevs that can act as sinks then we should omit
> s_sink_bus_config for now.
> 
> In addition, I think we need to require that at the start of the s_*_bus_config
> implementation in the host or subdev there should be a standard comment
> block describing the possible combinations supported by the hardware:
> 
> /* This hardware supports the following bus settings:
> 
>    widths: 8/16
>    syncs: embedded or separate
>    bus master: slave
>    vsync polarity: 0/1
>    hsync polarity: 0/1
>    field polarity: not applicable
>    sampling edge pixelclock: 0/1
>    data polarity: 1
>  */
> 
> This should make it easy for implementers to pick a valid set of bus
> parameters.
> 
> Eagle-eyed observers will note that the bus type field has disappeared from
> this proposal. The problem with that is that I have no clue what it is supposed
> to do. Is there more than one bus that can be set up? In that case it is not a
> bus type but a bus ID. Or does it determine that type of data that is being
> transported over the bus? In that case it does not belong here, since that is
> something for a s_fmt type API.
> 
> This particular API should just setup the physical bus. Nothing more, IMHO.
> 
> Please let me know if I am missing something here.
> 
> Guennadi, from our meeting I understood that you also want a way provide
> an offset in case the data is actually only on some pins (e.g. the lower
> or upper X pins are always 0). I don't think it is hard to provide support
> for that in this API by adding an offset field or something like that.
> 
> Can you give me a pointer to the actual place where that is needed? Possibly
> also with references to datasheets? I'd like to understand this better.

I'll just describe to you the cases where we need this (a rough sketch of 
the current approach follows the list):

1. The PXA270 SoC Quick Capture Interface has 10 data lines (D9...D0) and can 
sample 8, 9, or 10 bits of data in raw (parallel) mode. However, when 
configured to capture fewer than 10 bits of data it samples not the most 
significant lines (D9...D1 or D9...D2), but the least significant ones.

2. The i.MX31 SoC Camera Sensor Interface has 15 data lines (D14...D0) and can 
sample 8, 10, or 15 bits of data in raw (parallel) mode. In that case it 
samples the most significant lines.

3. The MT9M001 and MT9V022 sensors have 10 data lines and always deliver 10 
bits of data. When connected directly (D0-D0...D9-D9) to the PXA270 you can 
only sample 10 bits, not 8. When similarly connected to the i.MX31 
(D0-D6...D9-D15) you can seamlessly configure the SoC to capture either 8 or 
10 bits of data - both will work.

4. Creative hardware engineers have supplied both these cameras with an 
i2c switch that can route the 8 most significant data lines of the 
sensor to the least significant lines of the interface, to make them work 
with the PXA270.

5. The current soc-camera solution for this situation seems pretty clean: 
we provide an optional callback into platform code to query bus 
parameters. On boards using such a camera with an i2c switch, this 
function returns to the sensor "you can support 8 and 10 bit modes." On 
PXA270 boards using such a camera without a switch, this function is 
either not implemented at all, or it returns the default "you can only 
provide 10 bits." Then there is another callback to actually switch 
between low and high lines by operating the I2C switch, and a third 
callback to release the i2c switch resources. On i.MX31 you would implement 
the query callback as "yes, you can support 8 and 10 bits," and the switch 
callback as a dummy (or not implement it at all) because you don't have to 
switch anything on i.MX31.
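
Just to illustrate the shape of this (none of the names below are the real 
soc-camera API, they are invented for the example):

struct board_cam_link {
        /* which bus widths the board wiring allows, e.g. (1 << 8) | (1 << 10) */
        unsigned long (*query_widths)(struct board_cam_link *link);
        /* operate the external i2c switch; NULL if there is nothing to switch */
        int (*set_width)(struct board_cam_link *link, int width);
        /* release the i2c switch resources; may be NULL */
        void (*free_switch)(struct board_cam_link *link);
};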

> Sakari, this proposal is specific to parallel busses. I understood that Nokia
> also has to deal with serial busses. Can you take a look at this proposal with
> a serial bus in mind?
> 
> This bus config stuff has been in limbo for too long, so I'd like to come
> to a conclusion and implementation in 1-2 weeks.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: RFC: bus configuration setup for sub-devices
  2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
  2009-08-30 15:46 ` Hiremath, Vaibhav
  2009-08-30 22:19 ` RFC: " Guennadi Liakhovetski
@ 2009-08-30 22:31 ` Laurent Pinchart
  2009-08-31  6:33   ` Hans Verkuil
  2009-08-31 14:42 ` Karicheri, Muralidharan
  2009-08-31 15:54 ` RFC: " Sakari Ailus
  4 siblings, 1 reply; 18+ messages in thread
From: Laurent Pinchart @ 2009-08-30 22:31 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: linux-media, Guennadi Liakhovetski, Sakari Ailus, Muralidharan Karicheri

Hi Hans,

thanks for putting this all together.

On Saturday 29 August 2009 16:31:13 Hans Verkuil wrote:
> Hi all,
>
> This is an updated RFC on how to setup the bus for sub-devices.
>
> It is based on work from Guennadi and Muralidharan.
>
> The problem is that both sub-devices and bridge devices have to configure
> their bus correctly so that they can talk to one another. We need a
> standard way of configuring such busses.
>
> The soc-camera driver did this by auto-negotiation. For several reasons
> (see the threads on bus parameters in the past) I thought that that was not
> a good idea. After talking this over last week with Guennadi we agreed that
> we would configure busses directly rather than negotiate the bus
> configuration. It was a very pleasant meeting (Hans de Goede, Laurent
> Pinchart, Guennadi Liakhovetski and myself) and something that we should
> repeat. Face-to-face meetings make it so much faster to come to a decision
> on difficult problems.
>
> My last proposal merged subdev and bridge parameters into one struct, thus
> completely describing the bus setup. I realized that there is a problem
> with that if you have to define the bus for sub-devices that are in the
> middle of a chain: e.g. a sensor sends its video to a image processing
> subdev and from there it goes to the bridge. You have to be able to specify
> both the source and sink part of each bus for that image processing subdev.

Do we have such a device currently? If not, it might not be worth 
overdesigning the API. Otherwise we could also take into account sub-devices 
with multiple sources or sinks :-)

> It's much easier to do that by keeping the source and sink bus config
> separate.

Do you mean the platform data should define both source and sink bus config 
data separately? Wouldn't it be better to use a single structure per bus?

> Here is my new proposal:
>
> /*
>  * Some sub-devices are connected to the host/bridge device through a bus
> that * carries the clock, vsync, hsync and data. Some interfaces such as
> BT.656 * carries the sync embedded in the data whereas others have separate
> lines * carrying the sync signals.
>  */
> struct v4l2_bus_config {
>         /* embedded sync, set this when sync is embedded in the data stream
> */ unsigned embedded_sync:1;
>         /* master or slave */
>         unsigned is_master:1;
>
>         /* bus width */
>         unsigned width:8;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_vsync:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_hsync:1;
>         /* 0 - low to high, 1 - high to low */
>         unsigned pol_field:1;
>         /* 0 - sample at falling edge, 1 - sample at rising edge */
>         unsigned edge_pclock:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_data:1;
> };
>
> It's all bitfields, so it is a very compact representation.
>
> In video_ops we add two new functions:
>
>      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct
>      v4l2_bus_config *bus);
>      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct
>      v4l2_bus_config *bus);
>
> Actually, if there are no subdevs that can act as sinks then we should omit
> s_sink_bus_config for now.

What about defining pins instead and having a single s_pin_bus_config function 
that takes some kind of pin ID?

> In addition, I think we need to require that at the start of the
> s_*_bus_config implementation in the host or subdev there should be a
> standard comment block describing the possible combinations supported by
> the hardware:
>
> /* This hardware supports the following bus settings:
>
>    widths: 8/16
>    syncs: embedded or separate
>    bus master: slave
>    vsync polarity: 0/1
>    hsync polarity: 0/1
>    field polarity: not applicable
>    sampling edge pixelclock: 0/1
>    data polarity: 1
>  */

Agreed.

> This should make it easy for implementers to pick a valid set of bus
> parameters.
>
> Eagle-eyed observers will note that the bus type field has disappeared from
> this proposal. The problem with that is that I have no clue what it is
> supposed to do. Is there more than one bus that can be set up? In that case
> it is not a bus type but a bus ID. Or does it determine that type of data
> that is being transported over the bus? In that case it does not belong
> here, since that is something for a s_fmt type API.
>
> This particular API should just setup the physical bus. Nothing more, IMHO.

Could that change at runtime or is it fixed in stone?

> Please let me know if I am missing something here.
>
> Guennadi, from our meeting I understood that you also want a way provide
> an offset in case the data is actually only on some pins (e.g. the lower
> or upper X pins are always 0). I don't think it is hard to provide support
> for that in this API by adding an offset field or something like that.
>
> Can you give me a pointer to the actual place where that is needed?
> Possibly also with references to datasheets? I'd like to understand this
> better.

The OMAP3430 public datasheet can be found at

http://focus.ti.com/pdfs/wtbu/SWPU114U_FinalEPDF_08_17_2009.pdf

The OMAP3 has a lane shifter that can be used for such a purpose if I'm not 
mistaken, but I'll have to double check that.

> Sakari, this proposal is specific to parallel busses. I understood that
> Nokia also has to deal with serial busses. Can you take a look at this
> proposal with a serial bus in mind?
>
> This bus config stuff has been in limbo for too long, so I'd like to come
> to a conclusion and implementation in 1-2 weeks.

-- 
Regards,

Laurent Pinchart


* Re: RFC: bus configuration setup for sub-devices
  2009-08-30 22:19 ` RFC: " Guennadi Liakhovetski
@ 2009-08-31  6:23   ` Hans Verkuil
  2009-08-31 14:45     ` Karicheri, Muralidharan
  0 siblings, 1 reply; 18+ messages in thread
From: Hans Verkuil @ 2009-08-31  6:23 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Linux Media Mailing List, Sakari Ailus, Muralidharan Karicheri,
	Laurent Pinchart

On Monday 31 August 2009 00:19:38 Guennadi Liakhovetski wrote:
> On Sat, 29 Aug 2009, Hans Verkuil wrote:
> 
> > Hi all,
> > 
> > This is an updated RFC on how to setup the bus for sub-devices.
> > 
> > It is based on work from Guennadi and Muralidharan.
> > 
> > The problem is that both sub-devices and bridge devices have to configure
> > their bus correctly so that they can talk to one another. We need a
> > standard way of configuring such busses.
> > 
> > The soc-camera driver did this by auto-negotiation. For several reasons (see
> > the threads on bus parameters in the past) I thought that that was not a good
> > idea. After talking this over last week with Guennadi we agreed that we would
> > configure busses directly rather than negotiate the bus configuration. It was
> > a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi Liakhovetski
> > and myself) and something that we should repeat. Face-to-face meetings make
> > it so much faster to come to a decision on difficult problems.
> > 
> > My last proposal merged subdev and bridge parameters into one struct, thus
> > completely describing the bus setup. I realized that there is a problem with
> > that if you have to define the bus for sub-devices that are in the middle of
> > a chain: e.g. a sensor sends its video to a image processing subdev and from
> > there it goes to the bridge. You have to be able to specify both the source and
> > sink part of each bus for that image processing subdev.
> > 
> > It's much easier to do that by keeping the source and sink bus config
> > separate.
> > 
> > Here is my new proposal:
> > 
> > /*
> >  * Some sub-devices are connected to the host/bridge device through a bus that
> >  * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
> >  * carries the sync embedded in the data whereas others have separate lines
> >  * carrying the sync signals.
> >  */
> > struct v4l2_bus_config {
> >         /* embedded sync, set this when sync is embedded in the data stream */
> >         unsigned embedded_sync:1;
> >         /* master or slave */
> >         unsigned is_master:1;
> 
> Up to now I usually saw the master-slave relationship defined as per 
> whether the protocol is "master" or "slave," which always was used from 
> the PoV of the bridge. I.e., even in a camera datasheet a phrase like 
> "supports master-parallel mode" means supports a mode in which the bridge 
> is a master and the camera is a slave. So, maybe it is better instead of a 
> .is_master flag to use a .master_mode flag?

Sounds reasonable. I'll check a few datasheets myself to see what terminology
they use.

> Besides, aren't there any other bus synchronisation models apart from
> 
> data + master clock + pixel clock + hsync + vsync
> and
> data + master clock + pixel clock + embedded sync
> 
> ? For example, we should be able to specify, that field is not connected?

Ah yes, the field ID pin. Very good point, that pin should be added. I've
never seen other pins used, though. We'll tackle that as we encounter them.

> 
> > 
> >         /* bus width */
> >         unsigned width:8;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_vsync:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_hsync:1;
> >         /* 0 - low to high, 1 - high to low */
> >         unsigned pol_field:1;
> >         /* 0 - sample at falling edge, 1 - sample at rising edge */
> >         unsigned edge_pclock:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_data:1;
> > };
> > 
> > It's all bitfields, so it is a very compact representation.
> > 
> > In video_ops we add two new functions:
> > 
> >      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> >      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> > 
> > Actually, if there are no subdevs that can act as sinks then we should omit
> > s_sink_bus_config for now.
> > 
> > In addition, I think we need to require that at the start of the s_*_bus_config
> > implementation in the host or subdev there should be a standard comment
> > block describing the possible combinations supported by the hardware:
> > 
> > /* This hardware supports the following bus settings:
> > 
> >    widths: 8/16
> >    syncs: embedded or separate
> >    bus master: slave
> >    vsync polarity: 0/1
> >    hsync polarity: 0/1
> >    field polarity: not applicable
> >    sampling edge pixelclock: 0/1
> >    data polarity: 1
> >  */
> > 
> > This should make it easy for implementers to pick a valid set of bus
> > parameters.
> > 
> > Eagle-eyed observers will note that the bus type field has disappeared from
> > this proposal. The problem with that is that I have no clue what it is supposed
> > to do. Is there more than one bus that can be set up? In that case it is not a
> > bus type but a bus ID. Or does it determine that type of data that is being
> > transported over the bus? In that case it does not belong here, since that is
> > something for a s_fmt type API.
> > 
> > This particular API should just setup the physical bus. Nothing more, IMHO.
> > 
> > Please let me know if I am missing something here.
> > 
> > Guennadi, from our meeting I understood that you also want a way provide
> > an offset in case the data is actually only on some pins (e.g. the lower
> > or upper X pins are always 0). I don't think it is hard to provide support
> > for that in this API by adding an offset field or something like that.
> > 
> > Can you give me a pointer to the actual place where that is needed? Possibly
> > also with references to datasheets? I'd like to understand this better.
> 
> I'll just describe to you the cases, where we need this:
> 
> 1. PXA270 SoC Quick Capture Interface has 10 data lines (D9...D0) and can 
> sample in raw (parallel) mode 8, 9, or 10 bits of data. However, when 
> configured to capture fewer than 10 bits of data it sample not the most 
> significant lines (D9...D1 or D9...D2), but the least significant ones.
> 
> 2. i.MX31 SoC Camera Sensor Interface has 15 data lines (D14...D0) and can 
> sample in raw (parallel) mode 8, 10, or 15 lines of data. Thereby it 
> sample the most significant lines.
> 
> 3. MT9M001 and MT9V022 sensors have 10 lines of data and always deliver 10 
> bits of data. When directly connected D0-D0...D9-D9 to the PXA270 you can 
> only sample 10 bits and not 8 bits. When similarly connected to i.MX31 
> D0-D6...D9-D15 you can seamlessly configure the SoC to either capture 8 or 
> 10 bits of data - both will work.
> 
> 4. Creative hardware engineers have supplied both these cameras with an 
> i2c switch, that can switch the 8 most significant data lines of the 
> sensor to the least significant lines of the interface to work with 
> PXA270.
> 
> 5. The current soc-camera solution for this situation seems pretty clean: 
> we provide an optional callback into platform code to query bus 
> parameters. On boards, using such a camera with an i2c switch this 
> function returns to the sensor "you can support 8 and 10 bit modes." On 
> PXA270 boards, using such a camera without a switch this function is 
> either not implemented at all, or it returns the default "you can only 
> provide 10 bits." Then there is another callback to actually switch 
> between low and high lines by operating the I2C switch. And a third 
> callback to release i2c switch resources. On i.MX31 you would implement 
> the query callback as "yes, you can support 8 and 10 bits," and the switch 
> callback as a dummy (or not implement at all) because you don't have to 
> switch anything on i.MX31.

I understand why you would need this information when you are negotiating
the bus config, but in the new situation you would just set up the bus to
fixed values. Do you still need this information outside of the current
auto-negotiation code?

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* Re: RFC: bus configuration setup for sub-devices
  2009-08-30 22:31 ` Laurent Pinchart
@ 2009-08-31  6:33   ` Hans Verkuil
  0 siblings, 0 replies; 18+ messages in thread
From: Hans Verkuil @ 2009-08-31  6:33 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-media, Guennadi Liakhovetski, Sakari Ailus, Muralidharan Karicheri

On Monday 31 August 2009 00:31:01 Laurent Pinchart wrote:
> Hi Hans,
> 
> thanks for putting this all together.
> 
> On Saturday 29 August 2009 16:31:13 Hans Verkuil wrote:
> > Hi all,
> >
> > This is an updated RFC on how to setup the bus for sub-devices.
> >
> > It is based on work from Guennadi and Muralidharan.
> >
> > The problem is that both sub-devices and bridge devices have to configure
> > their bus correctly so that they can talk to one another. We need a
> > standard way of configuring such busses.
> >
> > The soc-camera driver did this by auto-negotiation. For several reasons
> > (see the threads on bus parameters in the past) I thought that that was not
> > a good idea. After talking this over last week with Guennadi we agreed that
> > we would configure busses directly rather than negotiate the bus
> > configuration. It was a very pleasant meeting (Hans de Goede, Laurent
> > Pinchart, Guennadi Liakhovetski and myself) and something that we should
> > repeat. Face-to-face meetings make it so much faster to come to a decision
> > on difficult problems.
> >
> > My last proposal merged subdev and bridge parameters into one struct, thus
> > completely describing the bus setup. I realized that there is a problem
> > with that if you have to define the bus for sub-devices that are in the
> > middle of a chain: e.g. a sensor sends its video to a image processing
> > subdev and from there it goes to the bridge. You have to be able to specify
> > both the source and sink part of each bus for that image processing subdev.
> 
> Do we have such a device currently ? If not, it might not be worth it 
> overdesigning the API. Otherwise we could also take into account sub-devices 
> with multiple sources or sinks :-)

I actually thought of that, but there are enough bits left to add a bus ID
field should that ever be needed. We do not currently have subdevices that
can act as both source and sink, but we do have sink subdevices (TV outputs).

So yes, I think we do have to take this into account.

> > It's much easier to do that by keeping the source and sink bus config
> > separate.
> 
> Do you mean the platform data should define both source and sink bus config 
> data separately ? Wouldn't it be better to use a single structure per bus ?

I thought so too, but I changed my mind for the reasons described above.

> 
> > Here is my new proposal:
> >
> > /*
> >  * Some sub-devices are connected to the host/bridge device through a bus
> > that * carries the clock, vsync, hsync and data. Some interfaces such as
> > BT.656 * carries the sync embedded in the data whereas others have separate
> > lines * carrying the sync signals.
> >  */
> > struct v4l2_bus_config {
> >         /* embedded sync, set this when sync is embedded in the data stream
> > */ unsigned embedded_sync:1;
> >         /* master or slave */
> >         unsigned is_master:1;
> >
> >         /* bus width */
> >         unsigned width:8;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_vsync:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_hsync:1;
> >         /* 0 - low to high, 1 - high to low */
> >         unsigned pol_field:1;
> >         /* 0 - sample at falling edge, 1 - sample at rising edge */
> >         unsigned edge_pclock:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_data:1;
> > };
> >
> > It's all bitfields, so it is a very compact representation.
> >
> > In video_ops we add two new functions:
> >
> >      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct
> >      v4l2_bus_config *bus);
> >      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct
> >      v4l2_bus_config *bus);
> >
> > Actually, if there are no subdevs that can act as sinks then we should omit
> > s_sink_bus_config for now.
> 
> What about defining pins instead and have a single s_pin_bus_config function 
> that takes some kind of pin ID ?

It could be done of course, but that would make the bus setup a lot more
complex. I see no benefit for that at the moment. Should we need different
bus configs for different bus types in the future, then we should probably
move to a structure like this:

struct bus_config {
	enum bus_type type;
	union {
		struct { } parallel;
		struct { } serial;
		struct { } foo;
	};
};
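
Filled in very roughly, just to show the direction (the parallel member simply
reuses the fields proposed in the RFC, the serial fields are invented
placeholders):

struct bus_config {
	enum bus_type type;	/* selects which union member is valid */
	union {
		struct {
			unsigned embedded_sync:1;
			unsigned is_master:1;
			unsigned width:8;
			unsigned pol_vsync:1;
			unsigned pol_hsync:1;
			unsigned pol_field:1;
			unsigned edge_pclock:1;
			unsigned pol_data:1;
		} parallel;
		struct {
			unsigned channel:3;
			unsigned crc:1;
			unsigned bits_per_pixel:4;
			u32 clock_hz;
		} serial;
	};
};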

I do think we should specify the bus config in one data structure. It may be
difficult for a subdev driver to set up a bus if it gets its bus config pin by
pin.

> 
> > In addition, I think we need to require that at the start of the
> > s_*_bus_config implementation in the host or subdev there should be a
> > standard comment block describing the possible combinations supported by
> > the hardware:
> >
> > /* This hardware supports the following bus settings:
> >
> >    widths: 8/16
> >    syncs: embedded or separate
> >    bus master: slave
> >    vsync polarity: 0/1
> >    hsync polarity: 0/1
> >    field polarity: not applicable
> >    sampling edge pixelclock: 0/1
> >    data polarity: 1
> >  */
> 
> Agreed.
> 
> > This should make it easy for implementers to pick a valid set of bus
> > parameters.
> >
> > Eagle-eyed observers will note that the bus type field has disappeared from
> > this proposal. The problem with that is that I have no clue what it is
> > supposed to do. Is there more than one bus that can be set up? In that case
> > it is not a bus type but a bus ID. Or does it determine that type of data
> > that is being transported over the bus? In that case it does not belong
> > here, since that is something for a s_fmt type API.
> >
> > This particular API should just setup the physical bus. Nothing more, IMHO.
> 
> Could that change at runtime or is it fixed in stone ?

Yes, this can change at runtime.
 
> > Please let me know if I am missing something here.
> >
> > Guennadi, from our meeting I understood that you also want a way provide
> > an offset in case the data is actually only on some pins (e.g. the lower
> > or upper X pins are always 0). I don't think it is hard to provide support
> > for that in this API by adding an offset field or something like that.
> >
> > Can you give me a pointer to the actual place where that is needed?
> > Possibly also with references to datasheets? I'd like to understand this
> > better.
> 
> The OMAP3430 public datasheet can be found at
> 
> http://focus.ti.com/pdfs/wtbu/SWPU114U_FinalEPDF_08_17_2009.pdf
> 
> The OMAP3 has a lane shifter than can be used for such purpose if I'm not 
> mistaken, but I'll have to double check that.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


* RE: bus configuration setup for sub-devices
  2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
                   ` (2 preceding siblings ...)
  2009-08-30 22:31 ` Laurent Pinchart
@ 2009-08-31 14:42 ` Karicheri, Muralidharan
  2009-08-31 17:23   ` Hans Verkuil
  2009-08-31 15:54 ` RFC: " Sakari Ailus
  4 siblings, 1 reply; 18+ messages in thread
From: Karicheri, Muralidharan @ 2009-08-31 14:42 UTC (permalink / raw)
  To: Hans Verkuil, linux-media
  Cc: Guennadi Liakhovetski, Sakari Ailus, Laurent Pinchart

Hans,
>
>My last proposal merged subdev and bridge parameters into one struct, thus
>completely describing the bus setup. I realized that there is a problem
>with
>that if you have to define the bus for sub-devices that are in the middle
>of
>a chain: e.g. a sensor sends its video to a image processing subdev and
>from
>there it goes to the bridge. You have to be able to specify both the source
>and
>sink part of each bus for that image processing subdev.

In the above, what do you mean by an image processing subdev? In the case of
DM6446/DM355/DM365, here is the connection:

Image sensor/video decoder -> ccdc -> ipipe/preview engine/resizer -> SDRAM.

In this scenario, ipipe, preview engine and resizer are image processing
hardware which are managed by the bridge device. So we have just a source
sub-device and a bridge device. Which hardware has such an image processing
sub-device?

>
>Eagle-eyed observers will note that the bus type field has disappeared from
>this proposal. The problem with that is that I have no clue what it is
>supposed
>to do. Is there more than one bus that can be set up? In that case it is
>not a
>bus type but a bus ID. Or does it determine that type of data that is being
>transported over the bus? In that case it does not belong here, since that
>is

At the bridge device, for configuring the CCDC, the hardware needs to know
whether the bus carries raw bayer data or YCbCr data. Similarly, if the bus is
YCbCr, the data bus can carry Y and CbCr muxed over 8 lines, or dedicated Y
lines with CbCr muxed over the other 8 lines. Also, Y and CbCr can be swapped.
How do we convey this information without a bus type? Maybe another field is
required in addition to the bus type to convey the swapped state.

For raw data processing, for example, the MT9T031 sensor outputs 10 bits and
the MT9P031 outputs 12 bits. DM365 IPIPE handles 12 bits and DM365 IPIPE 10
bits. So on the EVM the 10 bits (the upper 10 bits for MT9P031) are connected
to data lines 11-2, and bits 0-1 are pulled low. It is required to specify
this information in the bus structure. If an offset field is added for this,
it could be used to specify it. So in this case the offset is relative to the
LSB? For the above case it would then be 2, meaning real data starts at bit 2
and bits 0-1 are zeros. Please confirm.
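
To check my understanding, something like this is what I have in mind (the
data_offset field is just a placeholder name, it does not exist in the
proposal yet):

/* MT9P031: upper 10 bits wired to bridge data lines D11..D2, D1..D0 pulled low */
static const struct v4l2_bus_config mt9p031_bus = {
        .width          = 10,           /* 10 significant data bits on the bus */
        /* .data_offset = 2, */         /* hypothetical: real data starts at bit 2 */
};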

>something for a s_fmt type API.
>
>This particular API should just setup the physical bus. Nothing more, IMHO.
>
>Please let me know if I am missing something here.
>
>Guennadi, from our meeting I understood that you also want a way provide
>an offset in case the data is actually only on some pins (e.g. the lower
>or upper X pins are always 0). I don't think it is hard to provide support
>for that in this API by adding an offset field or something like that.
>
>Can you give me a pointer to the actual place where that is needed?
>Possibly
>also with references to datasheets? I'd like to understand this better.
>
>Sakari, this proposal is specific to parallel busses. I understood that
>Nokia
>also has to deal with serial busses. Can you take a look at this proposal
>with
>a serial bus in mind?
>
>This bus config stuff has been in limbo for too long, so I'd like to come
>to a conclusion and implementation in 1-2 weeks.
>
>Regards,
>
>       Hans
>
>--
>Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom



* RE: RFC: bus configuration setup for sub-devices
  2009-08-31  6:23   ` Hans Verkuil
@ 2009-08-31 14:45     ` Karicheri, Muralidharan
  2009-08-31 15:07       ` Hans Verkuil
  0 siblings, 1 reply; 18+ messages in thread
From: Karicheri, Muralidharan @ 2009-08-31 14:45 UTC (permalink / raw)
  To: Hans Verkuil, Guennadi Liakhovetski
  Cc: Linux Media Mailing List, Sakari Ailus, Laurent Pinchart

>>
>> Up to now I usually saw the master-slave relationship defined as per
>> whether the protocol is "master" or "slave," which always was used from
>> the PoV of the bridge. I.e., even in a camera datasheet a phrase like
>> "supports master-parallel mode" means supports a mode in which the bridge
>> is a master and the camera is a slave. So, maybe it is better instead of
>a
>> .is_master flag to use a .master_mode flag?
>
>Sounds reasonable. I'll check a few datasheets myself to see what
>terminology
>they use.

Master/Slave is always confusing to me. The VPFE can act as a master (when it
outputs the sync signals and pixel clock) or as a slave (when it gets the sync
signals from the sensor/decoder). We use the VPFE as a slave, and the
sensor/decoder provides the pixel clock and sync signals. Please confirm that
this is what the master_mode flag means.


* RE: RFC: bus configuration setup for sub-devices
  2009-08-31 14:45     ` Karicheri, Muralidharan
@ 2009-08-31 15:07       ` Hans Verkuil
  2009-08-31 15:12         ` Karicheri, Muralidharan
  2009-08-31 16:11         ` Guennadi Liakhovetski
  0 siblings, 2 replies; 18+ messages in thread
From: Hans Verkuil @ 2009-08-31 15:07 UTC (permalink / raw)
  To: Karicheri, Muralidharan
  Cc: Guennadi Liakhovetski, Linux Media Mailing List, Sakari Ailus,
	Laurent Pinchart


>>>
>>> Up to now I usually saw the master-slave relationship defined as per
>>> whether the protocol is "master" or "slave," which always was used from
>>> the PoV of the bridge. I.e., even in a camera datasheet a phrase like
>>> "supports master-parallel mode" means supports a mode in which the
>>> bridge
>>> is a master and the camera is a slave. So, maybe it is better instead
>>> of
>>a
>>> .is_master flag to use a .master_mode flag?
>>
>>Sounds reasonable. I'll check a few datasheets myself to see what
>>terminology
>>they use.
>
> Master/Slave is always confusing to me. In VPFE, it can act as master
> (when it output sync signal and pixel clock) and slave (when it get sync
> signal from sensor/decoder). We use VPFE as slave and sensor/decoder will
> provide the pixel clock and sync signal. Please confirm if this is what
> master_mode flag means.

That's correct: the master provides the pixel clock signal. I'm not sure
if it also means that the syncs are provided by the master. Do you know?

Regards,

         Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



* RE: RFC: bus configuration setup for sub-devices
  2009-08-31 15:07       ` Hans Verkuil
@ 2009-08-31 15:12         ` Karicheri, Muralidharan
  2009-08-31 16:11         ` Guennadi Liakhovetski
  1 sibling, 0 replies; 18+ messages in thread
From: Karicheri, Muralidharan @ 2009-08-31 15:12 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Guennadi Liakhovetski, Linux Media Mailing List, Sakari Ailus,
	Laurent Pinchart


>>
>> Master/Slave is always confusing to me. In VPFE, it can act as master
>> (when it output sync signal and pixel clock) and slave (when it get sync
>> signal from sensor/decoder). We use VPFE as slave and sensor/decoder will
>> provide the pixel clock and sync signal. Please confirm if this is what
>> master_mode flag means.
>
>That's correct: the master provides the pixel clock signal. I'm not sure
>if it also means that the syncs are provided by the master. Do you know?
Yes. Both hsync and vsync signals can be output from the VPFE, and so can the
field ID signal. But I don't know if any customer is using it that way.
>
>Regards,
>
>         Hans
>
>--
>Hans Verkuil - video4linux developer - sponsored by TANDBERG
>



* Re: RFC: bus configuration setup for sub-devices
  2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
                   ` (3 preceding siblings ...)
  2009-08-31 14:42 ` Karicheri, Muralidharan
@ 2009-08-31 15:54 ` Sakari Ailus
  2009-08-31 16:08   ` Guennadi Liakhovetski
                     ` (2 more replies)
  4 siblings, 3 replies; 18+ messages in thread
From: Sakari Ailus @ 2009-08-31 15:54 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: linux-media, Guennadi Liakhovetski, Muralidharan Karicheri,
	Laurent Pinchart

Hans Verkuil wrote:
> Hi all,
> 
> This is an updated RFC on how to setup the bus for sub-devices.
> 
> It is based on work from Guennadi and Muralidharan.
> 
> The problem is that both sub-devices and bridge devices have to configure
> their bus correctly so that they can talk to one another. We need a
> standard way of configuring such busses.
> 
> The soc-camera driver did this by auto-negotiation. For several reasons (see
> the threads on bus parameters in the past) I thought that that was not a good
> idea. After talking this over last week with Guennadi we agreed that we would
> configure busses directly rather than negotiate the bus configuration. It was
> a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi Liakhovetski
> and myself) and something that we should repeat. Face-to-face meetings make
> it so much faster to come to a decision on difficult problems.
> 
> My last proposal merged subdev and bridge parameters into one struct, thus
> completely describing the bus setup. I realized that there is a problem with
> that if you have to define the bus for sub-devices that are in the middle of
> a chain: e.g. a sensor sends its video to a image processing subdev and from
> there it goes to the bridge. You have to be able to specify both the source and
> sink part of each bus for that image processing subdev.
> 
> It's much easier to do that by keeping the source and sink bus config
> separate.
> 
> Here is my new proposal:
> 
> /*
>  * Some sub-devices are connected to the host/bridge device through a bus that
>  * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
>  * carries the sync embedded in the data whereas others have separate lines
>  * carrying the sync signals.
>  */
> struct v4l2_bus_config {
>         /* embedded sync, set this when sync is embedded in the data stream */
>         unsigned embedded_sync:1;
>         /* master or slave */
>         unsigned is_master:1;
> 
>         /* bus width */
>         unsigned width:8;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_vsync:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_hsync:1;
>         /* 0 - low to high, 1 - high to low */
>         unsigned pol_field:1;
>         /* 0 - sample at falling edge, 1 - sample at rising edge */
>         unsigned edge_pclock:1;
>         /* 0 - active low, 1 - active high */
>         unsigned pol_data:1;
> };

How about the clock of the parallel interface?

I have to admit I haven't had time to read the thread about bus 
parameter negotiation.

The above parameters are probably valid for a number of possible 
parallel buses. Serial buses like CSI1 (CCP2) and CSI2 have a different 
set of parameters; some of the above, like width, are meaningless in the 
CSI1 and CSI2 context. The bus specification itself might already define 
some of them, too.

I think there could be a union for the different bus types, and a field 
that tells which type the bus is.

CSI 1 is a subset of CCP 2, which includes at least this (a rough sketch 
follows the list):

- CRC enabled / disabled
- CSI 1 / CCP 2 mode
- channel
- data + clock or data + strobe signalling
- strobe clock inverted / not
- clock
- bits per pixel (bayer); 8, 10 or 12
- frame start / end and line start / end sync codes
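
Very roughly, and only to show what a serial variant might have to carry (none
of these field names are final or from any existing header):

struct v4l2_bus_config_serial {
	unsigned ccp2_mode:1;           /* 0 - CSI1, 1 - CCP2 */
	unsigned crc:1;                 /* CRC enabled */
	unsigned data_strobe:1;         /* 0 - data/clock, 1 - data/strobe */
	unsigned strobe_inverted:1;     /* strobe/clock signal inverted */
	unsigned channel:3;             /* logical channel */
	unsigned bits_per_pixel:4;      /* 8, 10 or 12 for raw bayer */
	u32 clock_hz;                   /* serial clock frequency */
	/* plus the frame/line start/end sync codes */
};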

There's more in the OMAP 3 ISP TRM here, as Laurent pointed out:

<URL:http://focus.ti.com/pdfs/wtbu/SWPU114U_FinalEPDF_08_17_2009.pdf>

The CSI 1 also defines image formats, line size and line start, see page 
1548.

I did try to define the bus parameters in the OMAP 2 camera, and there's 
some of that in include/media/v4l2-int-device.h, but the bus parameters 
are again different for the OMAP 3 ISP, so we then resorted to a 
configuration that is specific to the ISP (or the bridge, I guess).

The disadvantage is then that the sensor is not part of this 
configuration, so a generic way of expressing bus configuration really 
has an advantage.

> It's all bitfields, so it is a very compact representation.
> 
> In video_ops we add two new functions:
> 
>      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
>      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> 
> Actually, if there are no subdevs that can act as sinks then we should omit
> s_sink_bus_config for now.

I hope that could one day include the OMAP 3 ISP. :-)

> In addition, I think we need to require that at the start of the s_*_bus_config
> implementation in the host or subdev there should be a standard comment
> block describing the possible combinations supported by the hardware:
> 
> /* This hardware supports the following bus settings:
> 
>    widths: 8/16
>    syncs: embedded or separate
>    bus master: slave
>    vsync polarity: 0/1
>    hsync polarity: 0/1
>    field polarity: not applicable
>    sampling edge pixelclock: 0/1
>    data polarity: 1
>  */
> 
> This should make it easy for implementers to pick a valid set of bus
> parameters.
> 
> Eagle-eyed observers will note that the bus type field has disappeared from
> this proposal. The problem with that is that I have no clue what it is supposed
> to do. Is there more than one bus that can be set up? In that case it is not a
> bus type but a bus ID. Or does it determine that type of data that is being
> transported over the bus? In that case it does not belong here, since that is
> something for a s_fmt type API.

The bus type should definitely be included. If a bridge has several 
different receivers, say parallel, CSI1 and CSI2, which of them should 
be configured to receive data unless that's part of the bus configuration?
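
On the bridge/receiver side that could then look roughly like this (again a 
sketch only; the isp_*_configure() helpers are made up and assume a type 
field like the one sketched earlier in this mail):

static int isp_s_sink_bus_config(struct v4l2_subdev *sd,
                                 const struct v4l2_bus_config *bus)
{
        /* Program only the receiver that matches the requested bus type. */
        switch (bus->type) {
        case V4L2_SUBDEV_BUS_PARALLEL:
                return isp_parallel_configure(sd, &bus->u.parallel);
        case V4L2_SUBDEV_BUS_CCP2:
                return isp_ccp2_configure(sd, &bus->u.ccp2);
        case V4L2_SUBDEV_BUS_CSI2:
                return isp_csi2_configure(sd, &bus->u.csi2);
        default:
                return -EINVAL;
        }
}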

> This particular API should just setup the physical bus. Nothing more, IMHO.

How would the image format be defined then...? The ISP in this case can 
mangle the image format the way it wants so what's coming from the 
sensor can be quite different from what's coming out of the ISP. Say, 
sensor produces raw bayer and ISP writes YUYV, for example. Usually the 
format the sensor outputs is not defined by the input format the user 
wants from the device.

Regards,

-- 
Sakari Ailus
sakari.ailus@maxwell.research.nokia.com

* Re: RFC: bus configuration setup for sub-devices
  2009-08-31 15:54 ` RFC: " Sakari Ailus
@ 2009-08-31 16:08   ` Guennadi Liakhovetski
  2009-08-31 16:54   ` Karicheri, Muralidharan
  2009-08-31 17:42   ` Hans Verkuil
  2 siblings, 0 replies; 18+ messages in thread
From: Guennadi Liakhovetski @ 2009-08-31 16:08 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: Hans Verkuil, Linux Media Mailing List, Muralidharan Karicheri,
	Laurent Pinchart

On Mon, 31 Aug 2009, Sakari Ailus wrote:

> How would the image format be defined then...? The ISP in this case can mangle
> the image format the way it wants so what's coming from the sensor can be
> quite different from what's coming out of the ISP. Say, sensor produces raw
> bayer and ISP writes YUYV, for example. Usually the format the sensor outputs
> is not defined by the input format the user wants from the device.

It would be good if you could have a look at my recent RFC and the 
following discussion; this kind of format conversion is exactly what it 
handles.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

* RE: RFC: bus configuration setup for sub-devices
  2009-08-31 15:07       ` Hans Verkuil
  2009-08-31 15:12         ` Karicheri, Muralidharan
@ 2009-08-31 16:11         ` Guennadi Liakhovetski
  1 sibling, 0 replies; 18+ messages in thread
From: Guennadi Liakhovetski @ 2009-08-31 16:11 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Karicheri, Muralidharan, Linux Media Mailing List, Sakari Ailus,
	Laurent Pinchart

On Mon, 31 Aug 2009, Hans Verkuil wrote:

> That's correct: the master provides the pixel clock signal. I'm not sure
> if it also means that the syncs are provided by the master. Do you know?

In all the docs I've seen so far, "master mode" means "the sensor generates 
the pixel clock"; nothing changes for the other syncs. Also, so far I have 
only worked with setups where the master clock is generated by the bridge 
and hsync and vsync are generated by the sensor.
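
To make this concrete, with the struct from Hans' proposal the bridge would 
then typically hand the sensor something like the sketch below (the values 
are only examples; the point is is_master = 1, i.e. the sensor drives the 
pixel clock):

static int bridge_setup_sensor_bus(struct v4l2_subdev *sensor)
{
        struct v4l2_bus_config cfg = {
                .embedded_sync = 0,     /* separate hsync/vsync lines */
                .is_master     = 1,     /* sensor generates the pixel clock */
                .width         = 10,    /* example: 10-bit parallel bus */
                .pol_vsync     = 1,     /* active high */
                .pol_hsync     = 1,     /* active high */
                .pol_field     = 0,
                .edge_pclock   = 1,     /* sample on the rising edge */
                .pol_data      = 1,     /* active high */
        };

        /* s_source_bus_config is the new op proposed in this RFC */
        return v4l2_subdev_call(sensor, video, s_source_bus_config, &cfg);
}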

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

* RE: RFC: bus configuration setup for sub-devices
  2009-08-31 15:54 ` RFC: " Sakari Ailus
  2009-08-31 16:08   ` Guennadi Liakhovetski
@ 2009-08-31 16:54   ` Karicheri, Muralidharan
  2009-08-31 17:42   ` Hans Verkuil
  2 siblings, 0 replies; 18+ messages in thread
From: Karicheri, Muralidharan @ 2009-08-31 16:54 UTC (permalink / raw)
  To: Sakari Ailus, Hans Verkuil
  Cc: linux-media, Guennadi Liakhovetski, Laurent Pinchart

Sakari,

>
>The bus type should definitely be included. If a bridge has several
>different receivers, say parallel, CSI1 and CSI2, which of them should
>be configured to receive data unless that's part of the bus configuration?
I agree; I have already responded asking to have it included in the RFC.
>
>> This particular API should just setup the physical bus. Nothing more,
>IMHO.
>
>How would the image format be defined then...? The ISP in this case can
>mangle the image format the way it wants so what's coming from the
>sensor can be quite different from what's coming out of the ISP. Say,
>sensor produces raw bayer and ISP writes YUYV, for example. Usually the
>format the sensor outputs is not defined by the input format the user
>wants from the device.
It is a bit tricky here. In TI's OMAP and DMxxx SoCs the image processing 
block sits between the sensor output and SDRAM. I would consider the whole 
path, from the input of the VPFE or ISP to its output to SDRAM, to be managed 
by the bridge device. So S_FMT will handle all the formats that the VPFE or 
ISP can write to SDRAM, and how the format conversion is done internally is 
up to the bridge device.

So in these cases, the following scenario applies...

sensor/decoder -> bus -> vpfe/isp. The bus here could be CSI1, CSI2, raw 
Bayer or YCbCr. The bridge device will use the appropriate hardware in the 
pipeline to do the conversion.
>Regards,
>
>--
>Sakari Ailus
>sakari.ailus@maxwell.research.nokia.com


* Re: bus configuration setup for sub-devices
  2009-08-31 14:42 ` Karicheri, Muralidharan
@ 2009-08-31 17:23   ` Hans Verkuil
  0 siblings, 0 replies; 18+ messages in thread
From: Hans Verkuil @ 2009-08-31 17:23 UTC (permalink / raw)
  To: Karicheri, Muralidharan
  Cc: linux-media, Guennadi Liakhovetski, Sakari Ailus, Laurent Pinchart

On Monday 31 August 2009 16:42:28 Karicheri, Muralidharan wrote:
> Hans,
> >
> >My last proposal merged subdev and bridge parameters into one struct, thus
> >completely describing the bus setup. I realized that there is a problem
> >with
> >that if you have to define the bus for sub-devices that are in the middle
> >of
> >a chain: e.g. a sensor sends its video to a image processing subdev and
> >from
> >there it goes to the bridge. You have to be able to specify both the source
> >and
> >sink part of each bus for that image processing subdev.
> 
> In the above, what do you mean by an image processing subdev? In the case of
> DM6446/DM355/DM365, here is the connection:
> 
> Image sensor / video decoder -> ccdc -> ipipe/preview engine/resizer -> SDRAM.
> 
> In this scenario the ipipe, preview engine and resizer are image processing
> hardware blocks managed by the bridge device. So we have just a source subdev
> and the bridge device. Which hardware supports the image processing subdev?

We actually do have image processing subdevs: upd64031a.c and upd64083.c.
These are only used on TV capture cards, though. It is probably more likely
to see FPGAs between a sensor and a bridge, where the FPGA does the image
processing. So yes, they do exist and I want to at least be prepared for this.
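
For such an in-the-middle subdev both ops would simply be implemented, along 
these lines (a sketch only, the fpga_setup_*_port() helpers are made up):

static int fpga_s_sink_bus_config(struct v4l2_subdev *sd,
                                  const struct v4l2_bus_config *bus)
{
        /* program the input port facing the sensor */
        return fpga_setup_input_port(sd, bus);
}

static int fpga_s_source_bus_config(struct v4l2_subdev *sd,
                                    const struct v4l2_bus_config *bus)
{
        /* program the output port facing the bridge */
        return fpga_setup_output_port(sd, bus);
}

static const struct v4l2_subdev_video_ops fpga_video_ops = {
        .s_sink_bus_config   = fpga_s_sink_bus_config,
        .s_source_bus_config = fpga_s_source_bus_config,
};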

> 
> >
> >Eagle-eyed observers will note that the bus type field has disappeared from
> >this proposal. The problem with that is that I have no clue what it is
> >supposed
> >to do. Is there more than one bus that can be set up? In that case it is
> >not a
> >bus type but a bus ID. Or does it determine that type of data that is being
> >transported over the bus? In that case it does not belong here, since that
> >is
> 
> At the bridge device, for configuring the CCDC, the hardware needs to know
> whether the bus carries raw Bayer data or YCbCr data. Similarly, if the bus
> is YCbCr, the data bus can carry Y and CbCr muxed over 8 lines, or dedicated
> Y lines with CbCr muxed over another 8 lines. Also, Y and CbCr can be
> swapped. How do we convey this information without a bus type? Maybe another
> field is required in addition to the bus type to convey the swapped state.

That is part of the proposal that Guennadi is working on: setting up the
format of the data being transferred over the bus. This bus config proposal
deals with the bus itself, not what is being transported over the bus.

One very valid question is whether these two shouldn't be combined. I don't
know the answer to that question, though.

> 
> For raw data processing, for example, the MT9T031 sensor outputs 10 bits and
> the MT9P031 outputs 12 bits. The DM365 IPIPE handles 12 bits and the DM355
> IPIPE 10 bits. So on the EVM the 10 bits (the upper 10 bits for the MT9P031)
> are connected to data lines 11-2, and bits 0-1 are pulled low. It is required
> to specify this information in the bus structure. If an offset is suggested
> for this, it could be used to specify it. So in this case the offset is
> relative to the LSB? For the above case it would be 2, meaning real data
> starts at bit 2 and bits 0-1 are zeros. Please confirm.

That was the idea, but I see no point in specifying this. Why would the DM365
care about that? I have yet to see a use-case where you really need this info.

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

* Re: RFC: bus configuration setup for sub-devices
  2009-08-31 15:54 ` RFC: " Sakari Ailus
  2009-08-31 16:08   ` Guennadi Liakhovetski
  2009-08-31 16:54   ` Karicheri, Muralidharan
@ 2009-08-31 17:42   ` Hans Verkuil
  2009-08-31 17:53     ` Guennadi Liakhovetski
  2 siblings, 1 reply; 18+ messages in thread
From: Hans Verkuil @ 2009-08-31 17:42 UTC (permalink / raw)
  To: Sakari Ailus
  Cc: linux-media, Guennadi Liakhovetski, Muralidharan Karicheri,
	Laurent Pinchart

On Monday 31 August 2009 17:54:18 Sakari Ailus wrote:
> Hans Verkuil wrote:
> > Hi all,
> > 
> > This is an updated RFC on how to setup the bus for sub-devices.
> > 
> > It is based on work from Guennadi and Muralidharan.
> > 
> > The problem is that both sub-devices and bridge devices have to configure
> > their bus correctly so that they can talk to one another. We need a
> > standard way of configuring such busses.
> > 
> > The soc-camera driver did this by auto-negotiation. For several reasons (see
> > the threads on bus parameters in the past) I thought that that was not a good
> > idea. After talking this over last week with Guennadi we agreed that we would
> > configure busses directly rather than negotiate the bus configuration. It was
> > a very pleasant meeting (Hans de Goede, Laurent Pinchart, Guennadi Liakhovetski
> > and myself) and something that we should repeat. Face-to-face meetings make
> > it so much faster to come to a decision on difficult problems.
> > 
> > My last proposal merged subdev and bridge parameters into one struct, thus
> > completely describing the bus setup. I realized that there is a problem with
> > that if you have to define the bus for sub-devices that are in the middle of
> > a chain: e.g. a sensor sends its video to a image processing subdev and from
> > there it goes to the bridge. You have to be able to specify both the source and
> > sink part of each bus for that image processing subdev.
> > 
> > It's much easier to do that by keeping the source and sink bus config
> > separate.
> > 
> > Here is my new proposal:
> > 
> > /*
> >  * Some sub-devices are connected to the host/bridge device through a bus that
> >  * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
> >  * carries the sync embedded in the data whereas others have separate lines
> >  * carrying the sync signals.
> >  */
> > struct v4l2_bus_config {
> >         /* embedded sync, set this when sync is embedded in the data stream */
> >         unsigned embedded_sync:1;
> >         /* master or slave */
> >         unsigned is_master:1;
> > 
> >         /* bus width */
> >         unsigned width:8;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_vsync:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_hsync:1;
> >         /* 0 - low to high, 1 - high to low */
> >         unsigned pol_field:1;
> >         /* 0 - sample at falling edge, 1 - sample at rising edge */
> >         unsigned edge_pclock:1;
> >         /* 0 - active low, 1 - active high */
> >         unsigned pol_data:1;
> > };
> 
> How about the clock of the parallel interface?

I consider that part of the video format setup: e.g. if you want 720p, then
you can set up the timings (i.e. pixel clock, front/back porch, sync widths)
accordingly. This API just configures the bus.
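
For example, for 720p60 the pixel clock simply follows from the chosen 
timings (standard CEA-861 numbers below), so there is nothing left for the 
bus config to say about it:

/* 720p60: 1280x720 active, 1650x750 total */
unsigned int htotal = 1280 + 110 + 40 + 220;    /* active + hfp + hsync + hbp = 1650 */
unsigned int vtotal =  720 +   5 +  5 +  20;    /* active + vfp + vsync + vbp = 750 */
u64 pixelclock = (u64)htotal * vtotal * 60;     /* 74250000 Hz */

Those numbers belong to whatever call sets up the video timings, not to 
s_*_bus_config.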

> 
> I have to admit I haven't had time to read the thread about bus 
> parameter negotiation.
> 
> The above parameters are probably valid for a number of possible 
> parallel buses. Serial buses like CSI1 (CCP2) and CSI2 have a different 
> set of parameters, and some of the above, such as width, are meaningless 
> in a CSI1 or CSI2 context. The bus specifications might already define 
> some of them, too.
> 
> I think there could be a union for the different bus types, and a field 
> that tells which type the bus is.
> 
> CSI 1 is a subset of CCP 2, which includes at least the following:
> 
> - CRC enabled / disabled
> - CSI 1 / CCP 2 mode
> - channel
> - data + clock or data + strobe signalling
> - strobe clock inverted / not
> - clock
> - bits per pixel (bayer); 8, 10 or 12
> - frame start / end and line start / end sync codes

These last two items do not belong here: they are concerned with what goes
over the bus, not with the bus configuration itself. I'm not sure what you
mean by 'clock': the clock frequency? The pixel clock is usually part of the
video setup, as it depends on the chosen video format.

> 
> There's more in the OMAP 3 ISP TRM here, as Laurent pointed out:
> 
> <URL:http://focus.ti.com/pdfs/wtbu/SWPU114U_FinalEPDF_08_17_2009.pdf>
> 
> The CSI 1 also defines image formats, line size and line start, see page 
> 1548.
> 
> I did try to define the bus parameters for the OMAP 2 camera, and some 
> of that is in include/media/v4l2-int-device.h, but the bus parameters 
> are again different for the OMAP 3 ISP, so we then resorted to a 
> configuration that is specific to the ISP (or the bridge, I guess).
> 
> The disadvantage is then that the sensor is not part of this 
> configuration, so a generic way of expressing bus configuration really 
> has an advantage.
> 
> > It's all bitfields, so it is a very compact representation.
> > 
> > In video_ops we add two new functions:
> > 
> >      int (*s_source_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> >      int (*s_sink_bus_config)(struct v4l2_subdev *sd, const struct v4l2_bus_config *bus);
> > 
> > Actually, if there are no subdevs that can act as sinks then we should omit
> > s_sink_bus_config for now.
> 
> I hope that could one day include the OMAP 3 ISP. :-)
> 
> > In addition, I think we need to require that at the start of the s_*_bus_config
> > implementation in the host or subdev there should be a standard comment
> > block describing the possible combinations supported by the hardware:
> > 
> > /* This hardware supports the following bus settings:
> > 
> >    widths: 8/16
> >    syncs: embedded or separate
> >    bus master: slave
> >    vsync polarity: 0/1
> >    hsync polarity: 0/1
> >    field polarity: not applicable
> >    sampling edge pixelclock: 0/1
> >    data polarity: 1
> >  */
> > 
> > This should make it easy for implementers to pick a valid set of bus
> > parameters.
> > 
> > Eagle-eyed observers will note that the bus type field has disappeared from
> > this proposal. The problem with that is that I have no clue what it is supposed
> > to do. Is there more than one bus that can be set up? In that case it is not a
> > bus type but a bus ID. Or does it determine that type of data that is being
> > transported over the bus? In that case it does not belong here, since that is
> > something for a s_fmt type API.
> 
> The bus type should definitely be included. If a bridge has several 
> different receivers, say parallel, CSI1 and CSI2, which of them should 
> be configured to receive data unless that's part of the bus configuration?

Yeah, I see that the OMAP supports one parallel and two serial busses, so we
need something for that.

> 
> > This particular API should just setup the physical bus. Nothing more, IMHO.
> 
> How would the image format be defined then...? The ISP in this case can 
> mangle the image format the way it wants so what's coming from the 
> sensor can be quite different from what's coming out of the ISP. Say, 
> sensor produces raw bayer and ISP writes YUYV, for example. Usually the 
> format the sensor outputs is not defined by the input format the user 
> wants from the device.

The image format is something that should be set up by a separate API. Guennadi
has a proposal for that which is being discussed. Although I wonder whether
these two APIs should perhaps be combined into one. Don't know yet.

In particular I wonder whether the bus width shouldn't become part of the image
format API rather than the bus configuration.
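
One way to do that (purely hypothetical names, loosely following Guennadi's 
imgbus idea) would be to encode the width in the image bus codes themselves, 
so that a single code fixes both the data format and the number of lines it 
occupies:

/* hypothetical image-bus codes that encode the bus width */
enum v4l2_imgbus_code {
        V4L2_IMGBUS_FMT_SBGGR10_10,     /* 10-bit Bayer on 10 data lines */
        V4L2_IMGBUS_FMT_SBGGR10_8,      /* 10-bit Bayer transferred over 8 lines */
        V4L2_IMGBUS_FMT_YUYV8_16,       /* YUYV with Y and CbCr on separate 8-bit lines */
        V4L2_IMGBUS_FMT_YUYV8_8,        /* YUYV muxed on a single 8-bit bus */
};

The bus config would then only describe the electrical side (syncs, 
polarities, sampling edge) and the width would come for free with the format.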

Regards,

	Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom

* Re: RFC: bus configuration setup for sub-devices
  2009-08-31 17:42   ` Hans Verkuil
@ 2009-08-31 17:53     ` Guennadi Liakhovetski
  0 siblings, 0 replies; 18+ messages in thread
From: Guennadi Liakhovetski @ 2009-08-31 17:53 UTC (permalink / raw)
  To: Hans Verkuil
  Cc: Sakari Ailus, Linux Media Mailing List, Muralidharan Karicheri,
	Laurent Pinchart

On Mon, 31 Aug 2009, Hans Verkuil wrote:

> The image format is something that should be setup by a separate API. Guennadi
> has a proposal for that which is being discussed. Although I wonder whether
> these two APIs should perhaps be combined into one. Don't know yet.
> 
> In particular I wonder whether the bus width shouldn't become part of the image
> format API rather than the bus configuration.

I'd be happy to offer free unlimited accommodation to the bus 
configuration API in my v4l2-imagebus.c and share the imgbus namespace 
with it:-) Indeed, it also looks logical to me for the two to share the 
file, the namespace, and, probably, some code too.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

Thread overview: 18+ messages
2009-08-29 14:31 RFC: bus configuration setup for sub-devices Hans Verkuil
2009-08-30 15:46 ` Hiremath, Vaibhav
2009-08-30 19:35   ` Hans Verkuil
2009-08-30 22:19 ` RFC: " Guennadi Liakhovetski
2009-08-31  6:23   ` Hans Verkuil
2009-08-31 14:45     ` Karicheri, Muralidharan
2009-08-31 15:07       ` Hans Verkuil
2009-08-31 15:12         ` Karicheri, Muralidharan
2009-08-31 16:11         ` Guennadi Liakhovetski
2009-08-30 22:31 ` Laurent Pinchart
2009-08-31  6:33   ` Hans Verkuil
2009-08-31 14:42 ` Karicheri, Muralidharan
2009-08-31 17:23   ` Hans Verkuil
2009-08-31 15:54 ` RFC: " Sakari Ailus
2009-08-31 16:08   ` Guennadi Liakhovetski
2009-08-31 16:54   ` Karicheri, Muralidharan
2009-08-31 17:42   ` Hans Verkuil
2009-08-31 17:53     ` Guennadi Liakhovetski
