All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
@ 2011-01-07  2:49 Qing Xu
  2011-01-07 14:37 ` Guennadi Liakhovetski
  0 siblings, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-07  2:49 UTC (permalink / raw)
  To: g.liakhovetski; +Cc: linux-media, Qing Xu, Kassey Lee

pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
special handling in soc-camera core.

Signed-off-by: Kassey Lee <ygli@marvell.com>
Signed-off-by: Qing Xu <qingx@marvell.com>
---
 drivers/media/video/soc_camera.c |   11 +++++++++++
 1 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
index 052bd6d..11715fb 100644
--- a/drivers/media/video/soc_camera.c
+++ b/drivers/media/video/soc_camera.c
@@ -145,6 +145,16 @@ static int soc_camera_s_std(struct file *file, void *priv, v4l2_std_id *a)
 	return v4l2_subdev_call(sd, core, s_std, *a);
 }
 
+static int soc_camera_enum_framesizes(struct file *file, void *fh,
+					 struct v4l2_frmsizeenum *fsize)
+{
+	struct soc_camera_device *icd = file->private_data;
+	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
+
+	return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
+}
+
+
 static int soc_camera_reqbufs(struct file *file, void *priv,
 			      struct v4l2_requestbuffers *p)
 {
@@ -1302,6 +1312,7 @@ static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
 	.vidioc_g_input		 = soc_camera_g_input,
 	.vidioc_s_input		 = soc_camera_s_input,
 	.vidioc_s_std		 = soc_camera_s_std,
+	.vidioc_enum_framesizes  = soc_camera_enum_framesizes,
 	.vidioc_reqbufs		 = soc_camera_reqbufs,
 	.vidioc_try_fmt_vid_cap  = soc_camera_try_fmt_vid_cap,
 	.vidioc_querybuf	 = soc_camera_querybuf,
-- 
1.6.3.3


^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-07  2:49 [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl Qing Xu
@ 2011-01-07 14:37 ` Guennadi Liakhovetski
  2011-01-07 14:50   ` Laurent Pinchart
  2011-01-09 19:24   ` Guennadi Liakhovetski
  0 siblings, 2 replies; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-07 14:37 UTC (permalink / raw)
  To: Qing Xu
  Cc: Linux Media Mailing List, Kassey Lee, Hans Verkuil, Laurent Pinchart

On Fri, 7 Jan 2011, Qing Xu wrote:

> pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> special handling in soc-camera core.

Hm, no, guess what? I don't think this can work. The parameter, that this 
routine gets from the user struct v4l2_frmsizeenum contains a member 
pixel_format, which is a fourcc code. Whereas subdevices don't deal with 
them, they deal with mediabus codes. It is the same problem as with old 
s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we 
either have to add our own .enum_mbus_framesizes video subdevice 
operation, or we abuse this one, but interpret the pixel_format field as a 
media-bus code.

Currently I only see one subdev driver, that implements this API: 
ov7670.c, and it just happily ignores the pixel_format altogether...

Hans, Laurent, what do you think?

In the soc-camera case you will have to use soc_camera_xlate_by_fourcc() 
to convert to media-bus code from fourcc. A nit-pick: please, follow the 
style of the file, that you patch and don't add double empty lines between 
functions.

A side question: why do you need this format at all? Is it for some custom 
application? What is your use-case, that makes try_fmt / s_fmt 
insufficient for you and how does enum_framesizes help you?

Thanks
Guennadi

> 
> Signed-off-by: Kassey Lee <ygli@marvell.com>
> Signed-off-by: Qing Xu <qingx@marvell.com>
> ---
>  drivers/media/video/soc_camera.c |   11 +++++++++++
>  1 files changed, 11 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
> index 052bd6d..11715fb 100644
> --- a/drivers/media/video/soc_camera.c
> +++ b/drivers/media/video/soc_camera.c
> @@ -145,6 +145,16 @@ static int soc_camera_s_std(struct file *file, void *priv, v4l2_std_id *a)
>  	return v4l2_subdev_call(sd, core, s_std, *a);
>  }
>  
> +static int soc_camera_enum_framesizes(struct file *file, void *fh,
> +					 struct v4l2_frmsizeenum *fsize)
> +{
> +	struct soc_camera_device *icd = file->private_data;
> +	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
> +
> +	return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
> +}
> +
> +
>  static int soc_camera_reqbufs(struct file *file, void *priv,
>  			      struct v4l2_requestbuffers *p)
>  {
> @@ -1302,6 +1312,7 @@ static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
>  	.vidioc_g_input		 = soc_camera_g_input,
>  	.vidioc_s_input		 = soc_camera_s_input,
>  	.vidioc_s_std		 = soc_camera_s_std,
> +	.vidioc_enum_framesizes  = soc_camera_enum_framesizes,
>  	.vidioc_reqbufs		 = soc_camera_reqbufs,
>  	.vidioc_try_fmt_vid_cap  = soc_camera_try_fmt_vid_cap,
>  	.vidioc_querybuf	 = soc_camera_querybuf,
> -- 
> 1.6.3.3
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-07 14:37 ` Guennadi Liakhovetski
@ 2011-01-07 14:50   ` Laurent Pinchart
  2011-01-07 15:11     ` Guennadi Liakhovetski
  2011-01-09 19:24   ` Guennadi Liakhovetski
  1 sibling, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2011-01-07 14:50 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Qing Xu, Linux Media Mailing List, Kassey Lee, Hans Verkuil

Hi Guennadi,

On Friday 07 January 2011 15:37:35 Guennadi Liakhovetski wrote:
> On Fri, 7 Jan 2011, Qing Xu wrote:
> > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > special handling in soc-camera core.
> 
> Hm, no, guess what? I don't think this can work. The parameter, that this
> routine gets from the user struct v4l2_frmsizeenum contains a member
> pixel_format, which is a fourcc code. Whereas subdevices don't deal with
> them, they deal with mediabus codes. It is the same problem as with old
> s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we
> either have to add our own .enum_mbus_framesizes video subdevice
> operation, or we abuse this one, but interpret the pixel_format field as a
> media-bus code.
> 
> Currently I only see one subdev driver, that implements this API:
> ov7670.c, and it just happily ignores the pixel_format altogether...
> 
> Hans, Laurent, what do you think?

Use the pad-level subdev operations, they take a media bus code as argument 
;-)

> In the soc-camera case you will have to use soc_camera_xlate_by_fourcc()
> to convert to media-bus code from fourcc. A nit-pick: please, follow the
> style of the file, that you patch and don't add double empty lines between
> functions.
> 
> A side question: why do you need this format at all? Is it for some custom
> application? What is your use-case, that makes try_fmt / s_fmt
> insufficient for you and how does enum_framesizes help you?

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-07 14:50   ` Laurent Pinchart
@ 2011-01-07 15:11     ` Guennadi Liakhovetski
  0 siblings, 0 replies; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-07 15:11 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Qing Xu, Linux Media Mailing List, Kassey Lee, Hans Verkuil

On Fri, 7 Jan 2011, Laurent Pinchart wrote:

> Hi Guennadi,
> 
> On Friday 07 January 2011 15:37:35 Guennadi Liakhovetski wrote:
> > On Fri, 7 Jan 2011, Qing Xu wrote:
> > > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > > special handling in soc-camera core.
> > 
> > Hm, no, guess what? I don't think this can work. The parameter, that this
> > routine gets from the user struct v4l2_frmsizeenum contains a member
> > pixel_format, which is a fourcc code. Whereas subdevices don't deal with
> > them, they deal with mediabus codes. It is the same problem as with old
> > s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we
> > either have to add our own .enum_mbus_framesizes video subdevice
> > operation, or we abuse this one, but interpret the pixel_format field as a
> > media-bus code.
> > 
> > Currently I only see one subdev driver, that implements this API:
> > ov7670.c, and it just happily ignores the pixel_format altogether...
> > 
> > Hans, Laurent, what do you think?
> 
> Use the pad-level subdev operations, they take a media bus code as argument 
> ;-)

Sure, but as you will appreciate, a currently mainlinable solution would 
be preferred. Even if you get MC in next soon enough for 2.6.39, we still 
will need a while to convert soc-camera to it first.

Thanks
Guennadi

> 
> > In the soc-camera case you will have to use soc_camera_xlate_by_fourcc()
> > to convert to media-bus code from fourcc. A nit-pick: please, follow the
> > style of the file, that you patch and don't add double empty lines between
> > functions.
> > 
> > A side question: why do you need this format at all? Is it for some custom
> > application? What is your use-case, that makes try_fmt / s_fmt
> > insufficient for you and how does enum_framesizes help you?
> 
> -- 
> Regards,
> 
> Laurent Pinchart
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-07 14:37 ` Guennadi Liakhovetski
  2011-01-07 14:50   ` Laurent Pinchart
@ 2011-01-09 19:24   ` Guennadi Liakhovetski
  2011-01-10  2:45     ` Qing Xu
  2011-01-10  2:47     ` Qing Xu
  1 sibling, 2 replies; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-09 19:24 UTC (permalink / raw)
  To: Qing Xu
  Cc: Linux Media Mailing List, Kassey Lee, Hans Verkuil, Laurent Pinchart

On Fri, 7 Jan 2011, Guennadi Liakhovetski wrote:

> On Fri, 7 Jan 2011, Qing Xu wrote:
> 
> > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > special handling in soc-camera core.
> 
> Hm, no, guess what? I don't think this can work. The parameter, that this 
> routine gets from the user struct v4l2_frmsizeenum contains a member 
> pixel_format, which is a fourcc code. Whereas subdevices don't deal with 
> them, they deal with mediabus codes. It is the same problem as with old 
> s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we 
> either have to add our own .enum_mbus_framesizes video subdevice 
> operation, or we abuse this one, but interpret the pixel_format field as a 
> media-bus code.
> 
> Currently I only see one subdev driver, that implements this API: 
> ov7670.c, and it just happily ignores the pixel_format altogether...
> 
> Hans, Laurent, what do you think?
> 
> In the soc-camera case you will have to use soc_camera_xlate_by_fourcc() 
> to convert to media-bus code from fourcc. A nit-pick: please, follow the 
> style of the file, that you patch and don't add double empty lines between 
> functions.
> 
> A side question: why do you need this format at all? Is it for some custom 

Sorry, I meant to ask - what do you need this operation / ioctl() for?

Thanks
Guennadi

> application? What is your use-case, that makes try_fmt / s_fmt 
> insufficient for you and how does enum_framesizes help you?
> 
> Thanks
> Guennadi
> 
> > 
> > Signed-off-by: Kassey Lee <ygli@marvell.com>
> > Signed-off-by: Qing Xu <qingx@marvell.com>
> > ---
> >  drivers/media/video/soc_camera.c |   11 +++++++++++
> >  1 files changed, 11 insertions(+), 0 deletions(-)
> > 
> > diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
> > index 052bd6d..11715fb 100644
> > --- a/drivers/media/video/soc_camera.c
> > +++ b/drivers/media/video/soc_camera.c
> > @@ -145,6 +145,16 @@ static int soc_camera_s_std(struct file *file, void *priv, v4l2_std_id *a)
> >  	return v4l2_subdev_call(sd, core, s_std, *a);
> >  }
> >  
> > +static int soc_camera_enum_framesizes(struct file *file, void *fh,
> > +					 struct v4l2_frmsizeenum *fsize)
> > +{
> > +	struct soc_camera_device *icd = file->private_data;
> > +	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
> > +
> > +	return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
> > +}
> > +
> > +
> >  static int soc_camera_reqbufs(struct file *file, void *priv,
> >  			      struct v4l2_requestbuffers *p)
> >  {
> > @@ -1302,6 +1312,7 @@ static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
> >  	.vidioc_g_input		 = soc_camera_g_input,
> >  	.vidioc_s_input		 = soc_camera_s_input,
> >  	.vidioc_s_std		 = soc_camera_s_std,
> > +	.vidioc_enum_framesizes  = soc_camera_enum_framesizes,
> >  	.vidioc_reqbufs		 = soc_camera_reqbufs,
> >  	.vidioc_try_fmt_vid_cap  = soc_camera_try_fmt_vid_cap,
> >  	.vidioc_querybuf	 = soc_camera_querybuf,
> > -- 
> > 1.6.3.3
> > 
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-09 19:24   ` Guennadi Liakhovetski
@ 2011-01-10  2:45     ` Qing Xu
  2011-01-10  2:47     ` Qing Xu
  1 sibling, 0 replies; 21+ messages in thread
From: Qing Xu @ 2011-01-10  2:45 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Linux Media Mailing List, Kassey Li, Hans Verkuil, Laurent Pinchart

On Mon, 10 Jan 2011, Qing Xu wrote:

> On Fri, 7 Jan 2011, Guennadi Liakhovetski wrote:

> On Fri, 7 Jan 2011, Qing Xu wrote:
>
> > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > special handling in soc-camera core.
>
> Hm, no, guess what? I don't think this can work. The parameter, that this
> routine gets from the user struct v4l2_frmsizeenum contains a member
> pixel_format, which is a fourcc code. Whereas subdevices don't deal with
> them, they deal with mediabus codes. It is the same problem as with old
> s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we
> either have to add our own .enum_mbus_framesizes video subdevice
> operation, or we abuse this one, but interpret the pixel_format field as a
> media-bus code.
>
> Currently I only see one subdev driver, that implements this API:
> ov7670.c, and it just happily ignores the pixel_format altogether...
>
> Hans, Laurent, what do you think?
>
> In the soc-camera case you will have to use soc_camera_xlate_by_fourcc()
> to convert to media-bus code from fourcc. A nit-pick: please, follow the
> style of the file, that you patch and don't add double empty lines between
> functions.
>
> A side question: why do you need this format at all? Is it for some custom

> Sorry, I meant to ask - what do you need this operation / ioctl() for?

Hi Guennadi,

Before we launch camera application, we will use enum-frame-size ioctl to get all frame size that the sensor supports, and show all options in UI menu, then the customers could choose a size, and tell camera driver.

If use mbus structure pass to sensor, we need to modify the second parameter definition, it will contain both mbus code information and width/height ingotmation:
int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum *fsize);

or use g_mbus_fmt instead:
int (*g_mbus_fmt)(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *fmt);
soc_camera_enum_framesizes()
{
        ...
        return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
}

What do you think?

Thanks!
Qing

> application? What is your use-case, that makes try_fmt / s_fmt
> insufficient for you and how does enum_framesizes help you?
>
> Thanks
> Guennadi
>
> >
> > Signed-off-by: Kassey Lee <ygli@marvell.com>
> > Signed-off-by: Qing Xu <qingx@marvell.com>
> > ---
> >  drivers/media/video/soc_camera.c |   11 +++++++++++
> >  1 files changed, 11 insertions(+), 0 deletions(-)
> >
> > diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
> > index 052bd6d..11715fb 100644
> > --- a/drivers/media/video/soc_camera.c
> > +++ b/drivers/media/video/soc_camera.c
> > @@ -145,6 +145,16 @@ static int soc_camera_s_std(struct file *file, void *priv, v4l2_std_id *a)
> >     return v4l2_subdev_call(sd, core, s_std, *a);
> >  }
> >
> > +static int soc_camera_enum_framesizes(struct file *file, void *fh,
> > +                                    struct v4l2_frmsizeenum *fsize)
> > +{
> > +   struct soc_camera_device *icd = file->private_data;
> > +   struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
> > +
> > +   return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
> > +}
> > +
> > +
> >  static int soc_camera_reqbufs(struct file *file, void *priv,
> >                           struct v4l2_requestbuffers *p)
> >  {
> > @@ -1302,6 +1312,7 @@ static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
> >     .vidioc_g_input          = soc_camera_g_input,
> >     .vidioc_s_input          = soc_camera_s_input,
> >     .vidioc_s_std            = soc_camera_s_std,
> > +   .vidioc_enum_framesizes  = soc_camera_enum_framesizes,
> >     .vidioc_reqbufs          = soc_camera_reqbufs,
> >     .vidioc_try_fmt_vid_cap  = soc_camera_try_fmt_vid_cap,
> >     .vidioc_querybuf         = soc_camera_querybuf,
> > --
> > 1.6.3.3
> >
>
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
  2011-01-09 19:24   ` Guennadi Liakhovetski
  2011-01-10  2:45     ` Qing Xu
@ 2011-01-10  2:47     ` Qing Xu
       [not found]       ` <Pine.LNX.4.64.1101100853490.24479@axis700.grange>
  1 sibling, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-10  2:47 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Linux Media Mailing List, Kassey Li, Hans Verkuil, Laurent Pinchart

On Mon, 10 Jan 2011, Qing Xu wrote:

> On Fri, 7 Jan 2011, Guennadi Liakhovetski wrote:

> On Fri, 7 Jan 2011, Qing Xu wrote:
>
> > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > special handling in soc-camera core.
>
> Hm, no, guess what? I don't think this can work. The parameter, that this
> routine gets from the user struct v4l2_frmsizeenum contains a member
> pixel_format, which is a fourcc code. Whereas subdevices don't deal with
> them, they deal with mediabus codes. It is the same problem as with old
> s/g/try/enum_fmt, which we replaced with respective mbus variants. So, we
> either have to add our own .enum_mbus_framesizes video subdevice
> operation, or we abuse this one, but interpret the pixel_format field as a
> media-bus code.
>
> Currently I only see one subdev driver, that implements this API:
> ov7670.c, and it just happily ignores the pixel_format altogether...
>
> Hans, Laurent, what do you think?
>
> In the soc-camera case you will have to use soc_camera_xlate_by_fourcc()
> to convert to media-bus code from fourcc. A nit-pick: please, follow the
> style of the file, that you patch and don't add double empty lines between
> functions.
>
> A side question: why do you need this format at all? Is it for some custom

> Sorry, I meant to ask - what do you need this operation / ioctl() for?

Hi Guennadi,

Before we launch camera application, we will use enum-frame-size ioctl to get all frame size that the sensor supports, and show all options in UI menu, then the customers could choose a size, and tell camera driver.

If use mbus structure pass to sensor, we need to modify the second parameter definition, it will contain both mbus code information and width/height ingotmation:
int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum *fsize);

or use g_mbus_fmt instead:
int (*g_mbus_fmt)(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt *fmt);
soc_camera_enum_framesizes()
{
        ...
        return v4l2_subdev_call(sd, video, g_mbus_fmt, fsize);//typo, I mean "g_mbus_fmt"
}

What do you think?

Thanks!
Qing

> application? What is your use-case, that makes try_fmt / s_fmt
> insufficient for you and how does enum_framesizes help you?
>
> Thanks
> Guennadi
>
> >
> > Signed-off-by: Kassey Lee <ygli@marvell.com>
> > Signed-off-by: Qing Xu <qingx@marvell.com>
> > ---
> >  drivers/media/video/soc_camera.c |   11 +++++++++++
> >  1 files changed, 11 insertions(+), 0 deletions(-)
> >
> > diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
> > index 052bd6d..11715fb 100644
> > --- a/drivers/media/video/soc_camera.c
> > +++ b/drivers/media/video/soc_camera.c
> > @@ -145,6 +145,16 @@ static int soc_camera_s_std(struct file *file, void *priv, v4l2_std_id *a)
> >     return v4l2_subdev_call(sd, core, s_std, *a);
> >  }
> >
> > +static int soc_camera_enum_framesizes(struct file *file, void *fh,
> > +                                    struct v4l2_frmsizeenum *fsize)
> > +{
> > +   struct soc_camera_device *icd = file->private_data;
> > +   struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
> > +
> > +   return v4l2_subdev_call(sd, video, enum_framesizes, fsize);
> > +}
> > +
> > +
> >  static int soc_camera_reqbufs(struct file *file, void *priv,
> >                           struct v4l2_requestbuffers *p)
> >  {
> > @@ -1302,6 +1312,7 @@ static const struct v4l2_ioctl_ops soc_camera_ioctl_ops = {
> >     .vidioc_g_input          = soc_camera_g_input,
> >     .vidioc_s_input          = soc_camera_s_input,
> >     .vidioc_s_std            = soc_camera_s_std,
> > +   .vidioc_enum_framesizes  = soc_camera_enum_framesizes,
> >     .vidioc_reqbufs          = soc_camera_reqbufs,
> >     .vidioc_try_fmt_vid_cap  = soc_camera_try_fmt_vid_cap,
> >     .vidioc_querybuf         = soc_camera_querybuf,
> > --
> > 1.6.3.3
> >
>
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl
       [not found]       ` <Pine.LNX.4.64.1101100853490.24479@axis700.grange>
@ 2011-01-10 10:33         ` Laurent Pinchart
  2011-01-17  9:53           ` soc-camera jpeg support? Qing Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Laurent Pinchart @ 2011-01-10 10:33 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Qing Xu, Linux Media Mailing List, Kassey Li, Hans Verkuil

Hi Guennadi,

On Monday 10 January 2011 09:20:05 Guennadi Liakhovetski wrote:
> On Sun, 9 Jan 2011, Qing Xu wrote:
> > On Fri, 7 Jan 2011, Guennadi Liakhovetski wrote:
> > > On Fri, 7 Jan 2011, Qing Xu wrote:
> > > > pass VIDIOC_ENUM_FRAMESIZES down to sub device drivers. So far no
> > > > special handling in soc-camera core.
> > > 
> > > Hm, no, guess what? I don't think this can work. The parameter, that
> > > this routine gets from the user struct v4l2_frmsizeenum contains a
> > > member pixel_format, which is a fourcc code. Whereas subdevices don't
> > > deal with them, they deal with mediabus codes. It is the same problem
> > > as with old s/g/try/enum_fmt, which we replaced with respective mbus
> > > variants. So, we either have to add our own .enum_mbus_framesizes
> > > video subdevice operation, or we abuse this one, but interpret the
> > > pixel_format field as a media-bus code.
> > > 
> > > Currently I only see one subdev driver, that implements this API:
> > > ov7670.c, and it just happily ignores the pixel_format altogether...
> > > 
> > > Hans, Laurent, what do you think?
> > > 
> > > In the soc-camera case you will have to use
> > > soc_camera_xlate_by_fourcc() to convert to media-bus code from fourcc.
> > > A nit-pick: please, follow the style of the file, that you patch and
> > > don't add double empty lines between functions.
> > > 
> > > A side question: why do you need this format at all? Is it for some
> > > custom
> > > 
> > > Sorry, I meant to ask - what do you need this operation / ioctl() for?
> > 
> > Before we launch camera application, we will use enum-frame-size ioctl
> > to get all frame size that the sensor supports, and show all options in
> > UI menu, then the customers could choose a size, and tell camera driver.
> 
> And if the camera supports ranges of sizes? Or doesn't implement this
> ioctl() at all? Remember, that this is an optional ioctl(). Would your
> application just fail? Or you could provide a slider to let the user
> select a size from a range, then just issue an s_fmt and use whatever it
> returns... This is something you'd do for a generic application
> 
> > If use mbus structure pass to sensor, we need to modify the second
> > parameter definition, it will contain both mbus code information and
> > width/height ingotmation:
> > int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum
> > *fsize);
> > 
> > or use g_mbus_fmt instead:
> > int (*g_mbus_fmt)(struct v4l2_subdev *sd, struct v4l2_mbus_framefmt
> > *fmt); soc_camera_enum_framesizes()
> > {
> > 
> >         ...
> >         return v4l2_subdev_call(sd, video, g_mbus_fmt, fsize);//typo, I
> >         mean "g_mbus_fmt"
> > 
> > }
> > 
> > What do you think?
> 
> In any case therer needs to be a possibility for host drivers to override
> this routine, so, please, do something similar, to what default_g_crop() /
> default_s_crop() / default_cropcap() / default_g_parm() / default_s_parm()
> do: add a host operation and provide a default implementation for it. And
> since noone seems to care enough, I think, we can just abuse struct
> v4l2_frmsizeenum for now, just make sure to rewrite pixel_format to a
> media-bus code, and restore it before returning to the caller.

I like the .enum_mbus_framesizes better, but I could live with a hack until if 
you convert soc_camera to use subdev pad-level operations when the MC will be 
available.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 21+ messages in thread

* soc-camera jpeg support?
  2011-01-10 10:33         ` Laurent Pinchart
@ 2011-01-17  9:53           ` Qing Xu
  2011-01-17 17:38             ` Guennadi Liakhovetski
  0 siblings, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-17  9:53 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Laurent Pinchart; +Cc: Linux Media Mailing List

Hi,

Many of our sensors support directly outputting JPEG data to camera controller, do you feel it's reasonable to add jpeg support into soc-camera? As it seems that there is no define in v4l2-mediabus.h which is suitable for our case.

Such as:
--- a/drivers/media/video/soc_mediabus.c
+++ b/drivers/media/video/soc_mediabus.c
@@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
                .packing                = SOC_MBUS_PACKING_2X8_PADLO,
                .order                  = SOC_MBUS_ORDER_BE,
        },
+       [MBUS_IDX(JPEG_1X8)] = {
+               .fourcc                 = V4L2_PIX_FMT_JPEG,
+               .name                   = "JPEG",
+               .bits_per_sample        = 8,
+               .packing                = SOC_MBUS_PACKING_NONE,
+               .order                  = SOC_MBUS_ORDER_LE,
+       },
 };

--- a/include/media/v4l2-mediabus.h
+++ b/include/media/v4l2-mediabus.h
@@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
        V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
        V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
        V4L2_MBUS_FMT_SGRBG8_1X8,
+       V4L2_MBUS_FMT_JPEG_1X8,
 };

Any ideas will be appreciated!
Thanks!
Qing Xu

Email: qingx@marvell.com
Application Processor Systems Engineering,
Marvell Technology Group Ltd.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: soc-camera jpeg support?
  2011-01-17  9:53           ` soc-camera jpeg support? Qing Xu
@ 2011-01-17 17:38             ` Guennadi Liakhovetski
  2011-01-18  6:06               ` Qing Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-17 17:38 UTC (permalink / raw)
  To: Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

On Mon, 17 Jan 2011, Qing Xu wrote:

> Hi,
> 
> Many of our sensors support directly outputting JPEG data to camera 
> controller, do you feel it's reasonable to add jpeg support into 
> soc-camera? As it seems that there is no define in v4l2-mediabus.h which 
> is suitable for our case.

In principle I have nothing against this, but, I'm afraid, it is not quite 
that simple. I haven't worked with such sensors myself, but, AFAIU, the 
JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame 
values. If you add it like you propose, this would mean, that it just 
delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line() 
would just return -EINVAL for your JPEG format, so, unsupporting drivers 
would just error out, which is not all that bad. When the first driver 
decides to support JPEG, they will have to handle that error. But maybe 
we'll want to return a special error code for it.

But, in fact, that is in a way my problem with your patches: you propose 
extensions to generic code without showing how this is going to be used in 
specific drivers. Could you show us your host driver, please? I don't 
think this is still the pxa27x driver, is it?

Thanks
Guennadi

> 
> Such as:
> --- a/drivers/media/video/soc_mediabus.c
> +++ b/drivers/media/video/soc_mediabus.c
> @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
>                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
>                 .order                  = SOC_MBUS_ORDER_BE,
>         },
> +       [MBUS_IDX(JPEG_1X8)] = {
> +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> +               .name                   = "JPEG",
> +               .bits_per_sample        = 8,
> +               .packing                = SOC_MBUS_PACKING_NONE,
> +               .order                  = SOC_MBUS_ORDER_LE,
> +       },
>  };
> 
> --- a/include/media/v4l2-mediabus.h
> +++ b/include/media/v4l2-mediabus.h
> @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
>         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
>         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
>         V4L2_MBUS_FMT_SGRBG8_1X8,
> +       V4L2_MBUS_FMT_JPEG_1X8,
>  };
> 
> Any ideas will be appreciated!
> Thanks!
> Qing Xu
> 
> Email: qingx@marvell.com
> Application Processor Systems Engineering,
> Marvell Technology Group Ltd.
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: soc-camera jpeg support?
  2011-01-17 17:38             ` Guennadi Liakhovetski
@ 2011-01-18  6:06               ` Qing Xu
  2011-01-18 17:30                 ` Guennadi Liakhovetski
  0 siblings, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-18  6:06 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Laurent Pinchart, Linux Media Mailing List

[-- Attachment #1: Type: text/plain, Size: 4499 bytes --]

Hi, Guennadi,

Oh, yes, I agree with you, you are right, it is really not that simple, JPEG is always a headache,:(, as it is quite different from original yuv/rgb format, it has neither fixed bits-per-sample, nor fixed packing/bytes-per-line/per-frame, so when coding, I just hack value of .bits_per_sample and .packing, in fact, you will see in our host driver, if we find it is JPEG, will ignore bytes-per-line value, for example, in pxa955_videobuf_prepare(), for jpeg, we always allocate fixed buffer size for it (or, a better method is to allocate buffer size = width*height/jpeg-compress-ratio).

I have 2 ideas:
1) Do you think it is reasonable to add a not-fixed define into soc_mbus_packing:
enum soc_mbus_packing {
        SOC_MBUS_PACKING_NOT_FIXED,
        ...
};
And then, .bits_per_sample      = 0, /* indicate that sample bits is not-fixed*/
And, in function:
s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
{
        switch (mf->packing) {
                case SOC_MBUS_PACKING_NOT_FIXED:
                        return 0;
                case SOC_MBUS_PACKING_NONE:
                        return width * mf->bits_per_sample / 8;
                ...
        }
        return -EINVAL;
}

2) Or, an workaround in sensor(ov5642.c), is to implement:
int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt);
use the member (fmt->pix->pixelformat == V4L2_PIX_FMT_JPEG) to find out if it is jpeg. And in host driver, it calls try_fmt() into sensor. In this way, no need to change soc-camera common code, but I feel that it goes against with your xxx_mbus_xxx design purpose, right?

What do you think?

Please check the attachment, it is our host camera controller driver and sensor.

Thanks a lot!
-Qing

-----Original Message-----
From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
Sent: 2011年1月18日 1:39
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: Re: soc-camera jpeg support?

On Mon, 17 Jan 2011, Qing Xu wrote:

> Hi,
>
> Many of our sensors support directly outputting JPEG data to camera
> controller, do you feel it's reasonable to add jpeg support into
> soc-camera? As it seems that there is no define in v4l2-mediabus.h which
> is suitable for our case.

In principle I have nothing against this, but, I'm afraid, it is not quite
that simple. I haven't worked with such sensors myself, but, AFAIU, the
JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame
values. If you add it like you propose, this would mean, that it just
delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line()
would just return -EINVAL for your JPEG format, so, unsupporting drivers
would just error out, which is not all that bad. When the first driver
decides to support JPEG, they will have to handle that error. But maybe
we'll want to return a special error code for it.

But, in fact, that is in a way my problem with your patches: you propose
extensions to generic code without showing how this is going to be used in
specific drivers. Could you show us your host driver, please? I don't
think this is still the pxa27x driver, is it?

Thanks
Guennadi

>
> Such as:
> --- a/drivers/media/video/soc_mediabus.c
> +++ b/drivers/media/video/soc_mediabus.c
> @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
>                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
>                 .order                  = SOC_MBUS_ORDER_BE,
>         },
> +       [MBUS_IDX(JPEG_1X8)] = {
> +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> +               .name                   = "JPEG",
> +               .bits_per_sample        = 8,
> +               .packing                = SOC_MBUS_PACKING_NONE,
> +               .order                  = SOC_MBUS_ORDER_LE,
> +       },
>  };
>
> --- a/include/media/v4l2-mediabus.h
> +++ b/include/media/v4l2-mediabus.h
> @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
>         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
>         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
>         V4L2_MBUS_FMT_SGRBG8_1X8,
> +       V4L2_MBUS_FMT_JPEG_1X8,
>  };
>
> Any ideas will be appreciated!
> Thanks!
> Qing Xu
>
> Email: qingx@marvell.com
> Application Processor Systems Engineering,
> Marvell Technology Group Ltd.
>

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

[-- Attachment #2: pxa955_cam.c --]
[-- Type: text/plain, Size: 41134 bytes --]

/*----------------------------------------------------------

* V4L2 Driver for PXA95x camera host
*
* Based on linux/drivers/media/video/pxa_camera.c
*
* Copyright (C) 2010, Marvell International Ltd.
*              Qing Xu <qingx@marvell.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.

----------------------------------------------------------*/

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/device.h>
#include <linux/wait.h>
#include <linux/list.h>
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/version.h>
#include <linux/clk.h>

#include <mach/dma.h>

#include <media/v4l2-common.h>
#include <media/v4l2-dev.h>
#include <media/videobuf-dma-contig.h>
#include <media/soc_camera.h>
#include <media/soc_mediabus.h>
#include <media/v4l2-chip-ident.h>

#include "pxa955_csi.h"

MODULE_AUTHOR("Qing Xu <qingx@marvell.com>");
MODULE_DESCRIPTION("Marvell PXA955 Simple Capture Controller Driver");
MODULE_LICENSE("GPL");
MODULE_SUPPORTED_DEVICE("Video");

/* sci register, sci0 base: 0x50000000*/
#define REG_SCICR0	0x0000
#define REG_SCICR1	0x0004
#define REG_SCISR	0x0008
#define REG_SCIMASK	0x000C
#define REG_SCIFIFO	0x00F8
#define REG_SCIFIFOSR	0x00FC
#define REG_SCIDADDR0	0x0200
#define REG_SCISADDR0	0x0204
#define REG_SCITADDR0	0x0208
#define REG_SCIDCMD0	0x020C
#define REG_SCIDADDR1	0x0210
#define REG_SCISADDR1	0x0214
#define REG_SCITADDR1	0x0218
#define REG_SCIDCMD1	0x021C
#define REG_SCIDADDR2	0x0220
#define REG_SCISADDR2	0x0224
#define REG_SCITADDR2	0x0228
#define REG_SCIDCMD2	0x022C
#define REG_SCIDBR0	0x0300
#define REG_SCIDCSR0	0x0304
#define REG_SCIDBR1	0x0310
#define REG_SCIDCSR1	0x0314
#define REG_SCIDBR2	0x0320
#define REG_SCIDCSR2	0x0324

#define	IRQ_EOFX	0x00000001
#define	IRQ_EOF		0x00000002
#define	IRQ_DIS		0x00000004
#define	IRQ_OFO		0x00000008
#define	IRQ_DBS		0x00000200
#define	IRQ_SOFX	0x00000400
#define	IRQ_SOF		0x00000800

/* FMT_YUV420 & FMT_RGB666 only for input format without output format */
#define   FMT_RAW8	  0x0000
#define   FMT_RAW10	  0x0002
#define   FMT_YUV422	  0x0003
#define   FMT_YUV420	  0x0004
#define   FMT_RGB565	  0x0005
#define   FMT_RGB666	  0x0006
#define   FMT_RGB888	  0x0007
#define   FMT_JPEG	  0x0008
#define   FMT_YUV422PACKET	0x0009

#define SCICR1_FMT_OUT(n)       ((n) <<12)	/* Output Pixel Format (4 bits) */
#define SCICR1_FMT_IN(n)        ((n) <<28)	/* Input Pixel Format (4 bits) */

/* REG_SCIDCSR0 */
#define SCIDCSR_DMA_RUN        0x80000000  	/* DMA Channel Enable */

/* REG_SCICR0 */
#define SCICR0_CAP_EN          0x40000000 	/* Capture Enable */
#define SCICR0_CI_EN           0x80000000 	/* Camera Interface Enable */

/* REG_SCIFIFO */
#define SCIFIFO_Fx_EN        0x0010	/* FIFO 0 Enable */

#define SCIFIFO_F0_EN        0x0010	/* FIFO 0 Enable */
#define SCIFIFO_F1_EN        0x0020	/* FIFO 1 Enable */
#define SCIFIFO_F2_EN        0x0040	/* FIFO 2 Enable */
#define SCIFIFO_FX_EN(n)  ((n) <<4)		/* Input Pixel Format (4 bits) */

/* REG_SCIDBRx*/
#define SCIDBR_EN 		(0x1u<<1) /*DMA Branch Enable*/

#define VGA_WIDTH       640
#define VGA_HEIGHT      480

#define SENSOR_MAX 2

#define PXA955_CAM_VERSION_CODE KERNEL_VERSION(0, 0, 5)
#define PXA955_CAM_DRV_NAME "pxa95x-camera"

#define JPEG_COMPRESS_RATIO_HIGH 10
#define CHANNEL_NUM 3 		/*YUV*/
#define MAX_DMA_BUFS 4
#define MIN_DMA_BUFS 2
#define MAX_DMA_SIZE (1920*1080*3/2)
#define JPEG_BUF_SIZE (1024*1024)

static unsigned int vid_mem_limit = 16;	/* Video memory limit, in Mb */

/* descriptor needed for the camera controller DMA engine */
struct pxa_cam_dma {
	dma_addr_t	sg_dma;
	struct pxa_dma_desc	*sg_cpu;
	size_t	sg_size;
	int	sglen;
};

struct pxa_buf_node {
	/* common v4l buffer stuff -- must be first */
	struct videobuf_buffer vb;
	enum v4l2_mbus_pixelcode	code;
	struct page *page;
	struct pxa_cam_dma dma_desc[CHANNEL_NUM];
};

struct pxa955_cam_dev {
	struct soc_camera_host	soc_host;
	struct soc_camera_device *icd;
	struct list_head dev_list;	/* link to other devices */

	struct clk *sci1_clk;
	struct clk *sci2_clk;

	struct pxa955_csi_dev		*csidev;
	unsigned int		irq;
	void __iomem		*regs;
	wait_queue_head_t iowait;	/* waiting on frame data */
	bool streamon;
	bool streamoff;

	struct resource		*res;
	unsigned long		platform_flags;

	/* DMA buffers */
	struct list_head dma_buf_list;	/*dma buffer list, the list member is buf_node*/
	unsigned int dma_buf_size;		/* allocated size */
	unsigned int channels;
	unsigned int channel_size[CHANNEL_NUM];

	/* streaming buffers */
	spinlock_t spin_lock;  /* Access to device */

	struct pxa955_camera_pdata *pdata;
};

/* refer to videobuf-dma-contig.c*/
struct videobuf_dma_contig_memory {
	u32 magic;
	void *vaddr;
	dma_addr_t dma_handle;
	unsigned long size;
	int is_userptr;
};

#define MAGIC_DC_MEM 0x0733ac61
#define MAGIC_CHECK(is, should)					    \
	if (unlikely((is) != (should)))	{				    \
		pr_err("magic mismatch: %x expected %x\n", (is), (should)); \
		BUG();							    \
	}

/* Len should 8 bytes align, bit[2:0] should be 0 */
#define SINGLE_DESC_TRANS_MAX   (1 << 24)

extern struct pxa955_csi_dev *csi_device;
static int csi_attach(struct pxa955_cam_dev *pcdev)
{
	if (csi_device != NULL) {
		pcdev->csidev = csi_device;
		return 0;
	}

	printk(KERN_ERR "cam: failed to find CSI device!\n");
	return -ENODEV;
}

static inline void sci_reg_write(struct pxa955_cam_dev *pcdev, unsigned int reg,
		unsigned int val)
{
	__raw_writel(val, pcdev->regs + reg);
}

static inline unsigned int sci_reg_read(struct pxa955_cam_dev *pcdev,
		unsigned int reg)
{
	return __raw_readl(pcdev->regs + reg);
}


static inline void sci_reg_write_mask(struct pxa955_cam_dev *pcdev, unsigned int reg,
		unsigned int val, unsigned int mask)
{
	unsigned int v = sci_reg_read(pcdev, reg);

	v = (v & ~mask) | (val & mask);
	sci_reg_write(pcdev, reg, v);
}

static inline void sci_reg_clear_bit(struct pxa955_cam_dev *pcdev,
		unsigned int reg, unsigned int val)
{
	sci_reg_write_mask(pcdev, reg, 0, val);
}

static inline void sci_reg_set_bit(struct pxa955_cam_dev *pcdev,
		unsigned int reg, unsigned int val)
{
	sci_reg_write_mask(pcdev, reg, val, val);
}

#ifdef DEBUG
static void sci_dump_registers(struct pxa955_cam_dev *pcdev)
{
	printk(KERN_ERR "SCICR0 0x%x\n", sci_reg_read(pcdev, REG_SCICR0));
	printk(KERN_ERR "SCICR1 0x%x\n", sci_reg_read(pcdev, REG_SCICR1));
	printk(KERN_ERR "SCISR 0x%x\n", sci_reg_read(pcdev, REG_SCISR));
	printk(KERN_ERR "SCIMASK 0x%x\n", sci_reg_read(pcdev, REG_SCIMASK));
	printk(KERN_ERR "SCIFIFO 0x%x\n", sci_reg_read(pcdev, REG_SCIFIFO));
	printk(KERN_ERR "SCIFIFOSR 0x%x\n", sci_reg_read(pcdev, REG_SCIFIFOSR));

	printk(KERN_ERR "SCIDADDR0 0x%x\n", sci_reg_read(pcdev, REG_SCIDADDR0));
	printk(KERN_ERR "SCISADDR0 0x%x\n", sci_reg_read(pcdev, REG_SCISADDR0));
	printk(KERN_ERR "SCITADDR0 0x%x\n", sci_reg_read(pcdev, REG_SCITADDR0));
	printk(KERN_ERR "SCIDCMD0 0x%x\n", sci_reg_read(pcdev, REG_SCIDCMD0));

	printk(KERN_ERR "SCIDADDR1 0x%x\n", sci_reg_read(pcdev, REG_SCIDADDR1));
	printk(KERN_ERR "SCISADDR1 0x%x\n", sci_reg_read(pcdev, REG_SCISADDR1));
	printk(KERN_ERR "SCITADDR1 0x%x\n", sci_reg_read(pcdev, REG_SCITADDR1));
	printk(KERN_ERR "SCIDCMD1 0x%x\n", sci_reg_read(pcdev, REG_SCIDCMD1));

	printk(KERN_ERR "SCIDADDR2 0x%x\n", sci_reg_read(pcdev, REG_SCIDADDR2));
	printk(KERN_ERR "SCISADDR2 0x%x\n", sci_reg_read(pcdev, REG_SCISADDR2));
	printk(KERN_ERR "SCITADDR2 0x%x\n", sci_reg_read(pcdev, REG_SCITADDR2));
	printk(KERN_ERR "SCIDCMD2 0x%x\n", sci_reg_read(pcdev, REG_SCIDCMD2));

	printk(KERN_ERR "SCIDBR0 0x%x\n", sci_reg_read(pcdev, REG_SCIDBR0));
	printk(KERN_ERR "SCIDCSR0 0x%x\n", sci_reg_read(pcdev, REG_SCIDCSR0));

	printk(KERN_ERR "SCIDBR1 0x%x\n", sci_reg_read(pcdev, REG_SCIDBR1));
	printk(KERN_ERR "SCIDCSR1 0x%x\n", sci_reg_read(pcdev, REG_SCIDCSR1));

	printk(KERN_ERR "SCIDBR2 0x%x\n", sci_reg_read(pcdev, REG_SCIDBR2));
	printk(KERN_ERR "SCIDCSR2 0x%x\n", sci_reg_read(pcdev, REG_SCIDCSR2));
}

static void dma_dump_desc(struct pxa955_cam_dev *pcdev)
{
	int i, k;
	struct pxa_buf_node *buf_node;
	printk(KERN_ERR "cam: dump_dma_desc ************+\n");

	printk(KERN_ERR "dma_buf_list head 0x%x\n",(unsigned int)&pcdev->dma_buf_list);
	list_for_each_entry(buf_node, &pcdev->dma_buf_list, vb.queue)  {
			printk(KERN_ERR "buf_node 0x%x\n",(unsigned int)buf_node);

		for (i = 0; i < pcdev->channels; i++){
			printk(KERN_ERR "chnnl %d, chnnl_size %d, sglen %d, sgdma 0x%x, sgsze %d\n",
				i,pcdev->channel_size[i],buf_node->dma_desc[i].sglen,buf_node->dma_desc[i].sg_dma,buf_node->dma_desc[i].sg_size);
			for (k = 0; k < buf_node->dma_desc[i].sglen; k++) {
				printk(KERN_ERR "dcmd [%d]  0x%x\n",k,buf_node->dma_desc[i].sg_cpu[k].dcmd);
				printk(KERN_ERR "ddadr[%d]  0x%x\n",k,buf_node->dma_desc[i].sg_cpu[k].ddadr);
				printk(KERN_ERR "dsadr[%d]  0x%x\n",k,buf_node->dma_desc[i].sg_cpu[k].dsadr);
				printk(KERN_ERR "dtadr[%d]  0x%x\n\n",k,buf_node->dma_desc[i].sg_cpu[k].dtadr);
			}
		}
	}

	printk(KERN_ERR "cam: dump_dma_desc ************-\n\n");
}

static void dma_dump_buf_list(struct pxa955_cam_dev *pcdev)
{
	struct pxa_buf_node *buf_node;
	dma_addr_t dma_handles;

	printk(KERN_ERR "cam: dump_dma_buf_list ************+\n");
	list_for_each_entry(buf_node, &pcdev->dma_buf_list, vb.queue) {
		dma_handles = videobuf_to_dma_contig(&buf_node->vb);
		printk(KERN_ERR "cam: buf_node 0x%x, pa 0x%x\n",
			(unsigned int)buf_node, dma_handles);
	}
	printk(KERN_ERR "cam: dump_dma_buf_list ************-\n\n");
}
#endif

/* only handle in irq context*/
static void dma_fetch_frame(struct pxa955_cam_dev *pcdev)
{
	struct pxa_buf_node *buf_node = NULL;
	int node_num = 0;
	dma_addr_t dma_handles;
	struct device *dev = pcdev->soc_host.v4l2_dev.dev;

	list_for_each_entry(buf_node, &pcdev->dma_buf_list, vb.queue) {
		if (buf_node->vb.state == VIDEOBUF_QUEUED)
			node_num++;
	}
	if (node_num > 1) {
		/*
		* get the first node of dma_list, it must have been filled by dma, and
		* remove it from dma-buf-list.
		*/
		buf_node = list_entry(pcdev->dma_buf_list.next,
			struct pxa_buf_node, vb.queue);
		dma_handles = videobuf_to_dma_contig(&buf_node->vb);

		/* invalid cache, to make sure user could get the real data from DDR*/
		dma_map_page(dev,
				buf_node->page,
				0,
				buf_node->vb.bsize,
				DMA_FROM_DEVICE);

		/* TODO: for JPEG

		if (pcdev->pix_format.pixelformat == V4L2_PIX_FMT_JPEG) {
			dma_sync_single_for_device(&pcdev->pdev->dev,
							sbuf->dma_handles,
							sbuf->v4lbuf.length,
							DMA_FROM_DEVICE);
			if ((((char *)buf_node->dma_bufs)[0] != 0xff)
			|| (((char *)buf_node->dma_bufs)[1] != 0xd8))
				printk(KERN_ERR "cam: JPEG ERROR !!! dropped this frame.\n");
		}
		*/
		buf_node->vb.state = VIDEOBUF_DONE;
		wake_up(&buf_node->vb.done);
		list_del_init(&buf_node->vb.queue);
	} else {
		/*if there is only one left in dma_list, drop it!*/
		printk(KERN_DEBUG "cam: drop a frame!\n");
	}

}

static void dma_append_desc(struct pxa955_cam_dev* pcdev,
								struct pxa_buf_node* pre,
								struct pxa_buf_node* next)
{
	int i = 0;
	struct pxa_cam_dma *pre_dma = NULL, *next_dma = NULL;

	for (i = 0; i < pcdev->channels; i++) {
		pre_dma = &pre->dma_desc[i];
		next_dma = &next->dma_desc[i];
		pre_dma->sg_cpu[pre_dma->sglen-1].ddadr = (u32)next_dma->sg_dma;
	}
	printk(KERN_DEBUG "cam: append new dma 0x%x to 0x%x\n",
		next_dma->sg_dma, pre_dma->sg_dma);
}

static void dma_attach_bufs(struct pxa955_cam_dev *pcdev)
{
	struct pxa_buf_node *buf_node = NULL;
	struct pxa_buf_node *tail_node = NULL;
	unsigned int regval;
	bool dma_branched = false;

	list_for_each_entry(buf_node, &pcdev->dma_buf_list, vb.queue) {

		if (buf_node->vb.state == VIDEOBUF_QUEUED)
			continue;

		/*
		* we got a new en-queue buf which is not in HW dma chain, append it to
		* the tail of HW dma chain, and loop the tail to itself.
		*/
		if (buf_node->vb.state == VIDEOBUF_ACTIVE) {

			tail_node = list_entry(buf_node->vb.queue.prev,
						struct pxa_buf_node, vb.queue);

			/*
			* NOTE!!! only one desc for one frame buffer, if not in this way,
			* need change here, as phy addr might not located between the
			* begin and end, phy addr might not continuous between different
			* desc of one frame buffer.
			*/
			regval = sci_reg_read(pcdev, REG_SCITADDR0);
			if (((regval >= videobuf_to_dma_contig(&tail_node->vb))
				&& (regval < (videobuf_to_dma_contig(&tail_node->vb)
				+ pcdev->channel_size[0])))
				&& (dma_branched == false)) {
				/*
				* if we find DMA is looping in the last buf, and there is new
				* coming buf, (DMA target address shows that DMA is working in
				* the tail buffer) we SHOULD set DMA branch reg, force DMA move
				* to the new buf descriptor in the next frame, so we can pick up
				* this buf when next irq comes.
				*/
				dma_branched = true;
				sci_reg_write(pcdev, REG_SCIDBR0, (buf_node->dma_desc[0].sg_dma | SCIDBR_EN));
				if(pcdev->channels == 3) {
					sci_reg_write(pcdev, REG_SCIDBR1, (buf_node->dma_desc[1].sg_dma | SCIDBR_EN));
					sci_reg_write(pcdev, REG_SCIDBR2, (buf_node->dma_desc[2].sg_dma | SCIDBR_EN));
				}
			} else {
				/*
				* if it is not the last buf which DMA looping in, just append
				* desc to the tail of the DMA chain.
				*/
				dma_append_desc(pcdev, tail_node, buf_node);
			}
			dma_append_desc(pcdev, buf_node, buf_node);
			buf_node->vb.state = VIDEOBUF_QUEUED;
		}
	}

}

static int dma_alloc_desc(struct pxa_buf_node *buf_node,
					struct pxa955_cam_dev *pcdev)
{
	int i;
	unsigned int 	len = 0, len_tmp = 0;
	pxa_dma_desc 	*dma_desc_tmp;
	unsigned long 	dma_desc_phy_tmp;
	unsigned long 	srcphyaddr, dstphyaddr;
	struct pxa_cam_dma *desc;
	struct videobuf_buffer *vb = &buf_node->vb;
	struct device *dev = pcdev->soc_host.v4l2_dev.dev;
	srcphyaddr = 0;	/* TBD */

	dstphyaddr = videobuf_to_dma_contig(vb);

	for (i = 0; i < pcdev->channels; i++) {
		printk(KERN_DEBUG "cam: index %d, channels %d\n",vb->i, i);
		desc = &buf_node->dma_desc[i];
		len = pcdev->channel_size[i];

		desc->sglen = (len + SINGLE_DESC_TRANS_MAX - 1) / SINGLE_DESC_TRANS_MAX;
		desc->sg_size = (desc->sglen) * sizeof(struct pxa_dma_desc);

		if (desc->sg_cpu == NULL){
			desc->sg_cpu = dma_alloc_coherent(dev, desc->sg_size,
					     &desc->sg_dma, GFP_KERNEL);
		}
		printk(KERN_DEBUG "cam: sglen %d, size %d, sg_cpu 0x%x\n",
			desc->sglen, desc->sg_size, (unsigned int)desc->sg_cpu);
		if (!desc->sg_cpu){
			printk(KERN_ERR "cam: dma_alloc_coherent failed at chnnl %d!\n", i);
			goto err;
		}

		dma_desc_tmp = desc->sg_cpu;
		dma_desc_phy_tmp = desc->sg_dma;

		while (len) {
			len_tmp = len > SINGLE_DESC_TRANS_MAX ?
				SINGLE_DESC_TRANS_MAX : len;

			if ((dstphyaddr & 0xf) != 0) {
				printk(KERN_ERR "cam: error: at least we need 16bytes align for DMA!\n");
				goto err;
			}
			dma_desc_tmp->ddadr = dma_desc_phy_tmp + sizeof(pxa_dma_desc);
			dma_desc_tmp->dsadr = srcphyaddr; /* TBD */
			dma_desc_tmp->dtadr = dstphyaddr;
			dma_desc_tmp->dcmd = len_tmp;

			len -= len_tmp;
			dma_desc_tmp++;
			dma_desc_phy_tmp += sizeof(pxa_dma_desc);
			dstphyaddr += len_tmp;

		}
	}
	return 0;

err:
	for (i = 0; i < pcdev->channels; i++) {
		desc = &buf_node->dma_desc[i];
		if (desc->sg_cpu) {
			dma_free_coherent(dev, desc->sg_size,
				    desc->sg_cpu, desc->sg_dma);
			desc->sg_cpu = 0;
		}
	}
	return -ENOMEM;
}

static void dma_free_desc(struct pxa_buf_node *buf_node,
							struct pxa955_cam_dev *pcdev)
{
	int i;
	struct pxa_cam_dma *desc;
	struct device *dev = pcdev->soc_host.v4l2_dev.dev;

	for (i = 0; i < pcdev->channels; i++) {
		desc = &buf_node->dma_desc[i];
		if (desc->sg_cpu){
			dma_free_coherent(dev, desc->sg_size,
				    desc->sg_cpu, desc->sg_dma);
			desc->sg_cpu = 0;
		}
	}
}

static void dma_free_bufs(struct videobuf_queue *vq,
				struct pxa_buf_node *buf,
				struct pxa955_cam_dev *pcdev)
{
	struct videobuf_buffer *vb = &buf->vb;
	struct videobuf_dma_contig_memory *mem = vb->priv;
	/*
	* TODO:
	* This waits until this buffer is out of danger, i.e., until it is no
	* longer in STATE_QUEUED or STATE_ACTIVE
	*/
	/*videobuf_waiton(vb, 0, 0);*/

	dma_free_desc(buf, pcdev);

	/*
	* TODO: as user will call stream-off when he hasn't en-queue all
	* request-buffers, if directlty call videobuf_dma_contig_free here,
	* it will release non-initizlized buffers, it causes kernel panic. So,
	* we implement our own free operation as follow to avoid kernel panic.
	* (refer to videobuf-dma-contig.c)
	*/
	/*videobuf_dma_contig_free(vq, &buf->vb);*/

	/*
	* mmapped memory can't be freed here, otherwise mmapped region
	* would be released, while still needed. In this case, the memory
	* release should happen inside videobuf_vm_close().
	* So, it should free memory only if the memory were allocated for
	* read() operation.
	*/
	if (vb->memory != V4L2_MEMORY_USERPTR)
		return;

	if (!mem)
		return;

	MAGIC_CHECK(mem->magic, MAGIC_DC_MEM);

	/* handle user space pointer case */
	if (vb->baddr) {
		/*videobuf_dma_contig_user_put(mem);*/
		mem->is_userptr = 0;
		mem->dma_handle = 0;
		mem->size = 0;
	}

	buf->vb.state = VIDEOBUF_NEEDS_INIT;
}

static void dma_chain_init(struct pxa955_cam_dev *pcdev)
{
	int i = 0;
	struct pxa_cam_dma *dma_cur = NULL, *dma_pre = NULL;
	struct pxa_buf_node *buf_cur = NULL, *buf_pre = NULL;
	struct pxa_buf_node *buf_node;

	/*chain the buffers in the dma_buf_list list*/
	list_for_each_entry(buf_node, &pcdev->dma_buf_list, vb.queue) {
		if (buf_node->vb.state == VIDEOBUF_ACTIVE) {
			buf_cur = buf_node;

			/*head of the dma_buf_list, this is the first dma desc*/
			if (buf_node->vb.queue.prev == &pcdev->dma_buf_list) {
				for (i = 0; i < pcdev->channels; i++) {
					dma_cur = &buf_cur->dma_desc[i];
					if (buf_pre)
						dma_pre = &buf_pre->dma_desc[i];
					sci_reg_write(pcdev, REG_SCIDADDR0 + i*0x10, dma_cur->sg_dma);
				}
			} else {
				/* link to prev descriptor */
				dma_append_desc(pcdev, buf_pre, buf_cur);
			}

			/* find the tail, loop back to the tail itself */
			if (&pcdev->dma_buf_list == buf_node->vb.queue.next) {
				dma_append_desc(pcdev, buf_cur, buf_cur);
			}
			buf_pre = buf_cur;
			buf_node->vb.state = VIDEOBUF_QUEUED;
		}
	}

}

static int sci_cken(struct pxa955_cam_dev *pcdev, int flag)
{
	if (flag) {
		/* The order of enabling matters! AXI must be the 1st one */
		clk_enable(pcdev->sci1_clk);
		clk_enable(pcdev->sci2_clk);
	} else {
		clk_disable(pcdev->sci2_clk);
		clk_disable(pcdev->sci1_clk);
	};

	return 0;
}

#define pixfmtstr(x) (x) & 0xff, ((x) >> 8) & 0xff, ((x) >> 16) & 0xff, \
	((x) >> 24) & 0xff
static void sci_s_fmt(struct pxa955_cam_dev *pcdev,
				struct v4l2_pix_format *fmt)
{
	unsigned int size = fmt->width*fmt->height;
	printk(KERN_NOTICE "cam: set fmt as %c%c%c%c, %ux%u\n",
		pixfmtstr(fmt->pixelformat), fmt->width, fmt->height);

	switch (fmt->pixelformat) {

	case V4L2_PIX_FMT_RGB565:
		pcdev->channels = 1;
		pcdev->channel_size[0] = size*2;
		sci_reg_write(pcdev, REG_SCICR1, SCICR1_FMT_IN(FMT_RGB565) | SCICR1_FMT_OUT(FMT_RGB565));
	    break;

	case V4L2_PIX_FMT_JPEG:
		pcdev->channels = 1;
		/* use size get from sensor */
		/* pcdev->channel_size[0] = fmt->sizeimage;*/

		pcdev->channel_size[0] = JPEG_BUF_SIZE;
		sci_reg_write(pcdev, REG_SCICR1, SCICR1_FMT_IN(FMT_JPEG) | SCICR1_FMT_OUT(FMT_JPEG));
	    break;

	case V4L2_PIX_FMT_YUV422P:
		pcdev->channels = 3;
		pcdev->channel_size[0] = size;
		pcdev->channel_size[1] = pcdev->channel_size[2] = size/2;
		sci_reg_write(pcdev, REG_SCICR1, SCICR1_FMT_IN(FMT_YUV422) | SCICR1_FMT_OUT(FMT_YUV422));
		break;

	case V4L2_PIX_FMT_YVYU:
	case V4L2_PIX_FMT_YUYV:
	case V4L2_PIX_FMT_UYVY:
	case V4L2_PIX_FMT_VYUY:
		pcdev->channels = 1;
		pcdev->channel_size[0] = size*2;
		sci_reg_write(pcdev, REG_SCICR1, SCICR1_FMT_IN(FMT_YUV422) | SCICR1_FMT_OUT(FMT_YUV422PACKET));
		break;

	case V4L2_PIX_FMT_YUV420:
		/* only accept YUV422 as input */
		pcdev->channels = 3;
		pcdev->channel_size[0] = size;
		pcdev->channel_size[1] = pcdev->channel_size[2] = size/4;
		sci_reg_write(pcdev, REG_SCICR1, SCICR1_FMT_IN(FMT_YUV422) | SCICR1_FMT_OUT(FMT_YUV420));
		break;

	default:
		printk(KERN_ERR "cam: error can not support fmt!\n");
		break;
	}

}

static void sci_irq_enable(struct pxa955_cam_dev *pcdev, unsigned int val)
{
	sci_reg_write(pcdev, REG_SCISR, sci_reg_read(pcdev, REG_SCISR));
	sci_reg_clear_bit(pcdev, REG_SCIMASK, val);
}

static void sci_irq_disable(struct pxa955_cam_dev *pcdev, unsigned int val)
{
	sci_reg_set_bit(pcdev, REG_SCIMASK, val);
}

/*
 * Make the controller start grabbing images.  Everything must
 * be set up before doing this.
 */
static void sci_enable(struct pxa955_cam_dev *pcdev)
{
	int i = 0;
	unsigned int val = 0;

	/* start_fifo */
	for (i = 0; i < pcdev->channels; i++) {
		val = SCIFIFO_F0_EN << i;
		sci_reg_set_bit(pcdev, REG_SCIFIFO, val);
	}

	/* start_dma */
	for (i = 0; i < pcdev->channels; i++)
		sci_reg_set_bit(pcdev, REG_SCIDCSR0 + i*0x10, SCIDCSR_DMA_RUN);

	/* start sci */
	sci_reg_set_bit(pcdev, REG_SCICR0, SCICR0_CAP_EN | SCICR0_CI_EN);
}

static void sci_disable(struct pxa955_cam_dev *pcdev)
{
	int i = 0;
	unsigned int val = 0;

	/* stop_fifo */
	for (i = 0; i < pcdev->channels; i++) {
		val = SCIFIFO_F0_EN << i;
		sci_reg_clear_bit(pcdev, REG_SCIFIFO, val);
	}

	/* stop_dma */
	for (i = 0; i < pcdev->channels; i++) {
		sci_reg_clear_bit(pcdev, REG_SCIDCSR0 + i*0x10, SCIDCSR_DMA_RUN);
		sci_reg_clear_bit(pcdev, REG_SCIDBR0 + i*0x10, SCIDBR_EN);
	}
	/* stop sci */
	sci_reg_clear_bit(pcdev, REG_SCICR0, SCICR0_CAP_EN | SCICR0_CI_EN);
}

void sci_init(struct pxa955_cam_dev *pcdev)
{
	/*
	* Turn off the enable bit.  It sure should be off anyway,
	* but it's good to be sure.
	*/
	sci_reg_clear_bit(pcdev, REG_SCICR0, SCICR0_CI_EN);

	/* Mask all interrupts.*/
	sci_reg_write(pcdev, REG_SCIMASK, ~0);
}

static unsigned long uva_to_pa(unsigned long addr, struct page **page)
{
	unsigned long ret = 0UL;
	pgd_t *pgd;
	pud_t *pud;
	pmd_t *pmd;
	pte_t *pte;

	pgd = pgd_offset(current->mm, addr);
	if (!pgd_none(*pgd)) {
		pud = pud_offset(pgd, addr);
		if (!pud_none(*pud)) {
			pmd = pmd_offset(pud, addr);
			if (!pmd_none(*pmd)) {
				pte = pte_offset_map(pmd, addr);
				if (!pte_none(*pte) && pte_present(*pte)) {
					(*page) = pte_page(*pte);
					ret = page_to_phys(*page);
					ret |= (addr & (PAGE_SIZE-1));
				}
			}
		}
	}
	return ret;
}

struct page* va_to_page(unsigned long user_addr)
{
	struct page *page = NULL;
	unsigned int vaddr = PAGE_ALIGN(user_addr);

	if (uva_to_pa(vaddr, &page) != 0)
		return page;

	return 0;
}
unsigned long va_to_pa(unsigned long user_addr, unsigned int size)
{
	unsigned long  paddr, paddr_tmp;
	unsigned long  size_tmp = 0;
	struct page *page = NULL;
	int page_num = PAGE_ALIGN(size) / PAGE_SIZE;
	unsigned int vaddr = PAGE_ALIGN(user_addr);
	int i = 0;

	if(vaddr == 0)
		return 0;

	paddr = uva_to_pa(vaddr, &page);

	for (i = 0; i < page_num; i++) {
		paddr_tmp = uva_to_pa(vaddr, &page);
		if ((paddr_tmp - paddr) != size_tmp)
			return 0;
		vaddr += PAGE_SIZE;
		size_tmp += PAGE_SIZE;
	}
	return paddr;
}

static int pxa955_videobuf_setup(struct videobuf_queue *vq, unsigned int *count,
			      unsigned int *size)
{
	struct soc_camera_device *icd = vq->priv_data;
	int bytes_per_line = soc_mbus_bytes_per_line(icd->user_width,
						icd->current_fmt->host_fmt);

	if (bytes_per_line < 0)
		return bytes_per_line;

	if (icd->current_fmt->host_fmt->fourcc != V4L2_PIX_FMT_JPEG) {
		dev_dbg(icd->dev.parent, "count=%d, size=%d\n", *count, *size);

		*size = bytes_per_line * icd->user_height;
		if (0 == *count)
			*count = 32;
		if (*size * *count > vid_mem_limit * 1024 * 1024)
			*count = (vid_mem_limit * 1024 * 1024) / *size;
	} else {
		*size = JPEG_BUF_SIZE;
		if (0 == *count)
			*count = 32;
		if (*size * *count > vid_mem_limit * 1024 * 1024)
			*count = (vid_mem_limit * 1024 * 1024) / *size;
	}
	return 0;
}

static int pxa955_videobuf_prepare(struct videobuf_queue *vq,
		struct videobuf_buffer *vb, enum v4l2_field field)
{
	struct soc_camera_device *icd = vq->priv_data;
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;
	struct pxa_buf_node *buf =
		container_of(vb, struct pxa_buf_node, vb);
	struct pxa_cam_dma *desc;
	unsigned int vaddr;
	int ret;
	size_t new_size;
	int bytes_per_line = soc_mbus_bytes_per_line(icd->user_width,
						icd->current_fmt->host_fmt);
	if (vb->memory == V4L2_MEMORY_USERPTR) {

		vaddr = PAGE_ALIGN(vb->baddr);
		if (vaddr != vb->baddr) {
			printk(KERN_ERR "cam: the memory is not page align!\n");
			return -EPERM;
		}

		buf->page = va_to_page(vaddr);
		if (!buf->page) {
			printk(KERN_ERR "cam: fail to get page info!\n");
			return -EFAULT;
		}
	}

	if (bytes_per_line < 0)
		return bytes_per_line;
	new_size = bytes_per_line * icd->user_height;
	if (icd->current_fmt->host_fmt->fourcc == V4L2_PIX_FMT_JPEG) {
		new_size = JPEG_BUF_SIZE;
	}

	if (buf->code	!= icd->current_fmt->code ||
	    vb->width	!= icd->user_width ||
	    vb->height	!= icd->user_height ||
	    vb->field	!= field) {
		buf->code	= icd->current_fmt->code;
		vb->width	= icd->user_width;
		vb->height	= icd->user_height;
		vb->field	= field;
		if (vb->state != VIDEOBUF_NEEDS_INIT) {
			dma_free_bufs(vq, buf, pcdev);
		}
	}

	if (vb->baddr && (vb->bsize < new_size)) {
		/* User provided buffer, but it is too small */
		printk(KERN_ERR
			"cam: buf in use %d is smaller than required size %d!\n",
			vb->bsize, new_size);
		return -ENOMEM;
	}

	if (vb->state == VIDEOBUF_NEEDS_INIT) {
		/*
		* The total size of video-buffers that will be allocated / mapped.
		* size that we calculated in videobuf_setup gets assigned to
		* vb->bsize, and now we use the same calculation to get vb->size.
		*/
		vb->size = new_size;

		/* This actually (allocates and) maps buffers */
		ret = videobuf_iolock(vq, vb, NULL);

		desc = &buf->dma_desc[0];
		if (desc->sg_cpu == NULL) {
			dma_alloc_desc(buf, pcdev);
		}

		vb->state = VIDEOBUF_PREPARED;
	}

	return 0;
}

static void pxa955_videobuf_queue(struct videobuf_queue *vq,
			       struct videobuf_buffer *vb)
{
	struct soc_camera_device *icd = vq->priv_data;
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;
	struct csi_phy_config timing;

	dev_dbg(icd->dev.parent, "%s (vb=0x%p) 0x%08lx %d\n",
		__func__, vb, vb->baddr, vb->bsize);

	list_add_tail(&vb->queue, &pcdev->dma_buf_list);
	vb->state = VIDEOBUF_ACTIVE;

	if (vq->streaming == 1) {

		/* at first time, streamon set to true, means stream-on*/
		if (!pcdev->streamon) {

			if (list_empty(&pcdev->dma_buf_list)) {
				printk(KERN_ERR "cam: internal buffers are not ready!\n");
				return;
			}
			pcdev->streamon = true;
			pcdev->streamoff = false;

			dma_chain_init(pcdev);

			sci_irq_enable(pcdev, IRQ_EOFX|IRQ_OFO);
			pxa955_csi_dphy(pcdev->csidev, timing);
			/* configure enable which camera interface controller*/
			pxa955_csi_enable(pcdev->csidev, 0);
			sci_enable(pcdev);
		}
	}
}

static void pxa955_videobuf_release(struct videobuf_queue *vq,
				 struct videobuf_buffer *vb)
{
	struct soc_camera_device *icd = vq->priv_data;
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;
	struct pxa_buf_node *buf = container_of(vb, struct pxa_buf_node, vb);

#ifdef DEBUG
	struct device *dev = icd->dev.parent;

	dev_err(dev, "%s (vb=0x%p) 0x%08lx %d\n", __func__,
		vb, vb->baddr, vb->bsize);

	switch (vb->state) {
	case VIDEOBUF_ACTIVE:
		dev_dbg(dev, "%s (active)\n", __func__);
		break;
	case VIDEOBUF_QUEUED:
		dev_dbg(dev, "%s (queued)\n", __func__);
		break;
	case VIDEOBUF_PREPARED:
		dev_dbg(dev, "%s (prepared)\n", __func__);
		break;
	default:
		dev_dbg(dev, "%s (unknown)\n", __func__);
		break;
	}
#endif

	if ((vb->state == VIDEOBUF_ACTIVE || vb->state == VIDEOBUF_QUEUED) &&
	    !list_empty(&vb->queue)) {
		vb->state = VIDEOBUF_ERROR;

		list_del_init(&vb->queue);
	}

	if (vq->streaming == 0) {
		if (!pcdev->streamoff) {
			INIT_LIST_HEAD(&pcdev->dma_buf_list);
			pcdev->streamon = false;
			pcdev->streamoff = true;

			pxa955_csi_disable(pcdev->csidev);
			sci_irq_disable(pcdev, IRQ_EOFX|IRQ_OFO);
			sci_disable(pcdev);
		}
	}
	dma_free_bufs(vq, buf, pcdev);

}

static struct videobuf_queue_ops pxa955_videobuf_ops = {
	.buf_setup      = pxa955_videobuf_setup,
	.buf_prepare    = pxa955_videobuf_prepare,
	.buf_queue      = pxa955_videobuf_queue,
	.buf_release    = pxa955_videobuf_release,
};

static int pxa955_cam_add_device(struct soc_camera_device *icd)
{
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;

	if (pcdev->icd)
		return -EBUSY;

	pcdev->icd = icd;

	pxa955_csi_cken(pcdev->csidev, 1);
	sci_cken(pcdev, 1);
	sci_init(pcdev);
	pxa955_csi_clkdiv(pcdev->csidev);

	dev_info(icd->dev.parent, "pxa955 camera driver attached to camera %d\n",
		 icd->devnum);

	return 0;
}

static void pxa955_cam_remove_device(struct soc_camera_device *icd)
{
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;

	BUG_ON(icd != pcdev->icd);
	sci_cken(pcdev, 0);
	pxa955_csi_cken(pcdev->csidev, 0);

	pcdev->icd = NULL;

	dev_info(icd->dev.parent, "pxa955 Camera driver detached from camera %d\n",
		 icd->devnum);
}

static const struct soc_mbus_pixelfmt pxa955_camera_formats[] = {
	{
			.fourcc 	= V4L2_PIX_FMT_YUV422P,
			.name		= "YUV422P",
			.bits_per_sample	= 8,
			.packing		= SOC_MBUS_PACKING_2X8_PADLO,
			.order			= SOC_MBUS_ORDER_LE,
	},{
		.fourcc			= V4L2_PIX_FMT_YUV420,
		.name			= "YUV420P",
		.bits_per_sample	= 8,
		.packing		= SOC_MBUS_PACKING_2X8_PADLO,
		.order			= SOC_MBUS_ORDER_LE,
	},
};

/* pxa955_cam_get_formats provide all fmts that camera controller support*/
static int pxa955_cam_get_formats(struct soc_camera_device *icd, unsigned int idx,
				  struct soc_camera_format_xlate *xlate)
{
	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
	struct device *dev = icd->dev.parent;
	int formats = 0, ret, i;
	enum v4l2_mbus_pixelcode code;
	const struct soc_mbus_pixelfmt *fmt;

	ret = v4l2_subdev_call(sd, video, enum_mbus_fmt, idx, &code);
	if (ret < 0)
		/* No more formats */
		return 0;

	fmt = soc_mbus_get_fmtdesc(code);
	if (!fmt) {
		dev_err(dev, "Invalid format code #%u: %d\n", idx, code);
		return 0;
	}

	switch (code) {
	/* refer to mbus_fmt struct*/
	case V4L2_MBUS_FMT_YUYV8_2X8_BE:
		formats = ARRAY_SIZE(pxa955_camera_formats);

		if (xlate) {
			for (i = 0; i < ARRAY_SIZE(pxa955_camera_formats); i++) {
				xlate->host_fmt = &pxa955_camera_formats[i];
				xlate->code	= code;
				xlate++;
				dev_err(dev, "Providing format %s\n",
					pxa955_camera_formats[i].name);
			}
			dev_err(dev, "Providing format %s\n", fmt->name);
		}
		break;

	case V4L2_MBUS_FMT_YVYU8_2X8_BE:
	case V4L2_MBUS_FMT_YVYU8_2X8_LE:
	case V4L2_MBUS_FMT_YUYV8_2X8_LE:
	case V4L2_MBUS_FMT_RGB565_2X8_LE:
	case V4L2_MBUS_FMT_RGB565_2X8_BE:
	case V4L2_MBUS_FMT_JPEG_1X8:
		if (xlate)
			dev_err(dev, "Providing format %s\n", fmt->name);
		break;
	default:
		/* camera controller can not support this format, which might supported by the sensor*/
		dev_err(dev, "Not support fmt %s\n", fmt->name);
		return 0;
	}

	/* Generic pass-through */
	formats++;
	if (xlate) {
		xlate->host_fmt	= fmt;
		xlate->code	= code;
		xlate++;
	}

	return formats;
}

static void pxa955_cam_put_formats(struct soc_camera_device *icd)
{
	kfree(icd->host_priv);
	icd->host_priv = NULL;
}

static int pxa955_cam_try_fmt(struct soc_camera_device *icd,
			      struct v4l2_format *f)
{
	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
	const struct soc_camera_format_xlate *xlate;
	struct v4l2_pix_format *pix = &f->fmt.pix;
	struct v4l2_mbus_framefmt mf;
	__u32 pixfmt = pix->pixelformat;
	int ret;

	xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
	if (!xlate) {
		dev_warn(icd->dev.parent, "Format %x not found\n", pixfmt);
		return -EINVAL;
	}

	pix->bytesperline = soc_mbus_bytes_per_line(pix->width,
							xlate->host_fmt);
	if (pix->bytesperline < 0)
		return pix->bytesperline;
	pix->sizeimage = pix->height * pix->bytesperline;

	/* limit to sensor capabilities */
	mf.width	= pix->width;
	mf.height	= pix->height;
	mf.field	= pix->field;
	mf.colorspace	= pix->colorspace;
	mf.code 	= xlate->code;

	ret = v4l2_subdev_call(sd, video, try_mbus_fmt, &mf);
	if (ret < 0)
		return ret;

	pix->width	= mf.width;
	pix->height = mf.height;
	pix->colorspace = mf.colorspace;

	switch (mf.field) {
	case V4L2_FIELD_ANY:
	case V4L2_FIELD_NONE:
		pix->field	= V4L2_FIELD_NONE;
		break;
	default:
		dev_err(icd->dev.parent, "Field type %d unsupported.\n",
			mf.field);
		return -EINVAL;
	}

	return ret;
}

static int pxa955_cam_set_fmt(struct soc_camera_device *icd,
			      struct v4l2_format *f)
{
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;
	struct device *dev = icd->dev.parent;
	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
	const struct soc_camera_format_xlate *xlate = NULL;
	struct v4l2_pix_format *pix = &f->fmt.pix;
	__u32 pixfmt = pix->pixelformat;
	struct v4l2_mbus_framefmt mf;
	int ret;

	xlate = soc_camera_xlate_by_fourcc(icd, pixfmt);
	if (!xlate) {
		dev_warn(dev, "Format %x not found\n", pixfmt);
		return -EINVAL;
	}

	mf.width	= pix->width;
	mf.height	= pix->height;
	mf.field	= pix->field;
	mf.colorspace	= pix->colorspace;
	mf.code		= xlate->code;

	ret = v4l2_subdev_call(sd, video, s_mbus_fmt, &mf);

	if (mf.code != xlate->code)
		return -EINVAL;

	icd->sense = NULL;
	pix->width		= mf.width;
	pix->height		= mf.height;
	pix->field		= mf.field;
	pix->colorspace		= mf.colorspace;
	icd->current_fmt	= xlate;

	sci_disable(pcdev);
	sci_s_fmt(pcdev, pix);

	return ret;
}

static void pxa955_cam_init_videobuf(struct videobuf_queue *q,
			      struct soc_camera_device *icd)
{
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;

	/*
	* We must pass NULL as dev pointer, then all pci_* dma operations
	* transform to normal dma_* ones.
	*/
	videobuf_queue_dma_contig_init(q, &pxa955_videobuf_ops, NULL, &pcdev->spin_lock,
				V4L2_BUF_TYPE_VIDEO_CAPTURE, V4L2_FIELD_NONE,
				sizeof(struct pxa_buf_node), icd);
}

static int pxa955_cam_reqbufs(struct soc_camera_file *icf,
			      struct v4l2_requestbuffers *p)
{
	int i;
	if (p->count < MIN_DMA_BUFS) {
		printk(KERN_ERR "cam: need %d buffers at least!\n", MIN_DMA_BUFS);
		return -EINVAL;
	}

	/*
	*This is for locking debugging only. I removed spinlocks and now I
	* check whether .prepare is ever called on a linked buffer, or whether
	* a dma IRQ can occur for an in-work or unlinked buffer. Until now
	* it hadn't triggered
	*/
	for (i = 0; i < p->count; i++) {
		struct pxa_buf_node *buf = container_of(icf->vb_vidq.bufs[i],
						      struct pxa_buf_node, vb);
		INIT_LIST_HEAD(&buf->vb.queue);
	}
	return 0;
}

static unsigned int pxa955_cam_poll(struct file *file, poll_table *pt)
{
	struct soc_camera_file *icf = file->private_data;
	return videobuf_poll_stream(file, &icf->vb_vidq, pt);
}

static int pxa955_cam_querycap(struct soc_camera_host *ici,
			       struct v4l2_capability *cap)
{
	struct v4l2_dbg_chip_ident id;
	struct pxa955_cam_dev *pcdev = ici->priv;
	struct soc_camera_device *icd = pcdev->icd;
	struct v4l2_subdev *sd = soc_camera_to_subdev(icd);
	int ret = 0;

	cap->version = PXA955_CAM_VERSION_CODE;
	cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING;
	strcpy(cap->card, "MG1");
	strcpy(cap->driver, "N/A");

	ret = v4l2_subdev_call(sd, core, g_chip_ident, &id);
	if (ret < 0) {
		printk(KERN_ERR "cam: failed to get sensor's name!\n");
		return ret;
	}
	switch (id.ident) {
		case V4L2_IDENT_OV5642:
			strcpy(cap->driver, "ov5642");
			break;
		case V4L2_IDENT_OV7690:
			strcpy(cap->driver, "ov7690");
			break;
	}
	return 0;
}

static int pxa955_cam_set_bus_param(struct soc_camera_device *icd, __u32 pixfmt)
{
	struct soc_camera_host *ici = to_soc_camera_host(icd->dev.parent);
	struct pxa955_cam_dev *pcdev = ici->priv;
	unsigned long ctrller_flags, sensor_flags, common_flags;
	int ret;
	int lane = 1;

	ctrller_flags = SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE;
	sensor_flags = icd->ops->query_bus_param(icd);

	common_flags = soc_camera_bus_param_compatible(sensor_flags, ctrller_flags);
	if (!common_flags) {
		return -EINVAL;
	}

	ret = icd->ops->set_bus_param(icd, common_flags);
	if (ret < 0)
		return ret;

	if (common_flags & SOCAM_MIPI_1LANE) {
		lane = 1;
	} else if (common_flags & SOCAM_MIPI_2LANE) {
		lane = 2;
	}
	pxa955_csi_lane(pcdev->csidev, lane);
	return 0;
}

static int pxa955_cam_set_param(struct soc_camera_device *icd,
				  struct v4l2_streamparm *parm)
{
	return 0;
}


static struct soc_camera_host_ops pxa955_soc_cam_host_ops = {
	.owner		= THIS_MODULE,
	.add		= pxa955_cam_add_device,
	.remove		= pxa955_cam_remove_device,
	.get_formats	= pxa955_cam_get_formats,
	.put_formats	= pxa955_cam_put_formats,
	.set_fmt	= pxa955_cam_set_fmt,
	.try_fmt	= pxa955_cam_try_fmt,
	.init_videobuf	= pxa955_cam_init_videobuf,
	.reqbufs	= pxa955_cam_reqbufs,
	.poll		= pxa955_cam_poll,
	.querycap	= pxa955_cam_querycap,
	.set_bus_param	= pxa955_cam_set_bus_param,
	.set_parm = pxa955_cam_set_param,
};

static irqreturn_t pxa955_cam_irq(int irq, void *data)
{
	struct pxa955_cam_dev *pcdev = data;
	unsigned int irqs = 0;

	spin_lock(&pcdev->spin_lock);
	irqs = sci_reg_read(pcdev, REG_SCISR);
	sci_reg_write(pcdev, REG_SCISR, irqs);		/*clear irqs here*/
	if (irqs & IRQ_OFO) {

		printk(KERN_ERR "cam: ccic over flow error!\n");
		pxa955_csi_disable(pcdev->csidev);
		sci_disable(pcdev);
		pxa955_csi_enable(pcdev->csidev, 0);
		sci_enable(pcdev);
		spin_unlock(&pcdev->spin_lock);
		return IRQ_HANDLED;
	}

	if (irqs & IRQ_EOFX) {
		dma_fetch_frame(pcdev);
		dma_attach_bufs(pcdev);
	}

	spin_unlock(&pcdev->spin_lock);
	return IRQ_HANDLED;
}

static int pxa955_camera_probe(struct platform_device *pdev)
{
	struct resource *res;
	int err = -ENOMEM;
	struct pxa955_cam_dev *pcdev;

	pcdev = kzalloc(sizeof(struct pxa955_cam_dev), GFP_KERNEL);
	if (pcdev == NULL)
		goto exit;
	memset(pcdev, 0, sizeof(struct pxa955_cam_dev));

	spin_lock_init(&pcdev->spin_lock);
	init_waitqueue_head(&pcdev->iowait);
	INIT_LIST_HEAD(&pcdev->dev_list);
	INIT_LIST_HEAD(&pcdev->dma_buf_list);

	pcdev->pdata = pdev->dev.platform_data;
	if (pcdev->pdata == NULL) {
		printk(KERN_ERR "cam: camera no platform data defined\n");
		return -ENODEV;
	}

	pcdev->irq = platform_get_irq(pdev, 0);
	if (pcdev->irq < 0) {
		printk(KERN_ERR "cam: camera no irq\n");
		return -ENXIO;
	}
	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (res == NULL) {
		printk(KERN_ERR "cam: no IO memory resource defined\n");
		return -ENODEV;
	}

	err = -EIO;
	pcdev->regs = ioremap(res->start, SZ_4K);
	if (!pcdev->regs) {
		printk(KERN_ERR "cam: unable to ioremap pxa95x-camera regs\n");
		goto exit_free;
	}
	err = request_irq(pcdev->irq, pxa955_cam_irq,
		0, PXA955_CAM_DRV_NAME, pcdev);
	if (err) {
		printk(KERN_ERR "cam: unable to create ist\n");
		goto exit_iounmap;
	}

	pcdev->sci1_clk = clk_get(NULL, "SCI1CLK");
	if (!pcdev->sci1_clk) {
		printk(KERN_ERR "cam: unable to get sci clk\n");
		goto exit_free_irq;
	}
	pcdev->sci2_clk = clk_get(NULL, "SCI2CLK");
	if (!pcdev->sci2_clk) {
		printk(KERN_ERR "cam: unable to get sci clk\n");
		goto exit_free_irq;
	}

	csi_attach(pcdev);

	pcdev->soc_host.drv_name	= PXA955_CAM_DRV_NAME;
	pcdev->soc_host.ops		= &pxa955_soc_cam_host_ops;
	pcdev->soc_host.priv		= pcdev;
	pcdev->soc_host.v4l2_dev.dev	= &pdev->dev;
	pcdev->soc_host.nr		= pdev->id;

	err = soc_camera_host_register(&pcdev->soc_host);
	if (err)
		goto exit_free_irq;

	return 0;

exit_free_irq:
	free_irq(pcdev->irq, pcdev);
exit_iounmap:
	iounmap(pcdev->regs);
	clk_put(pcdev->sci1_clk);
	clk_put(pcdev->sci2_clk);
exit_free:
	kfree(pcdev);
exit:
	return err;
}


static int pxa955_camera_remove(struct platform_device *pdev)
{
	struct soc_camera_host *soc_host = to_soc_camera_host(&pdev->dev);
	struct pxa955_cam_dev *pcdev = container_of(soc_host,
					struct pxa955_cam_dev, soc_host);

	if (pcdev == NULL) {
		printk(KERN_WARNING "cam: remove on unknown pdev %p\n", pdev);
		return -EIO;
	}

	pxa955_csi_disable(pcdev->csidev);

	sci_disable(pcdev);
	sci_irq_disable(pcdev, IRQ_EOFX|IRQ_OFO);
	free_irq(pcdev->irq, pcdev);
	return 0;
}

static struct platform_driver pxa955_camera_driver = {
	.driver = {
		.name = PXA955_CAM_DRV_NAME
	},
	.probe 		= pxa955_camera_probe,
	.remove 	= pxa955_camera_remove,

};

static int __devinit pxa955_camera_init(void)
{
	int ret = 0;

	ret = platform_driver_register(&pxa955_camera_driver);
	return ret;
}

static void __exit pxa955_camera_exit(void)
{
	platform_driver_unregister(&pxa955_camera_driver);
}

module_init(pxa955_camera_init);
module_exit(pxa955_camera_exit);

[-- Attachment #3: ov5642.c --]
[-- Type: text/plain, Size: 24318 bytes --]

/*-------------------------------------------------------

* Driver for OmniVision CMOS Image Sensor
*
* Copyright (C) 2010, Marvell International Ltd.
*				Qing Xu <qingx@marvell.com>
*
* Based on linux/drivers/media/video/mt9m001.c
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.

-------------------------------------------------------*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/videodev2.h>
#include <linux/i2c.h>

#include "ov5642.h"

#include <media/v4l2-common.h>
#include <media/v4l2-chip-ident.h>
#include <media/soc_camera.h>

/*
* flash mode:
* 00 Continous flash, flash on when on req flash off when off req.
* 01 One flash, the flash strobe is opened for a period of time specified
* by high 3 bytes.
* 10 periodic flash, the flash is on periodiclly the period is specified
* by high 3 bytes.
* 11 pulse flash, flash one time for a very short pulse, width of the
* pulse is specified by bit
*/
struct flash_struct {
	unsigned char	mode;
	unsigned char	brightness;
	unsigned int 	duration;
};

/* ov5642 has only one fixed colorspace per pixelcode */
struct ov5642_datafmt {
	enum v4l2_mbus_pixelcode	code;
	enum v4l2_colorspace		colorspace;
};

struct ov5642 {
	struct v4l2_subdev subdev;
	int model;	/* V4L2_IDENT_OV5642* codes from v4l2-chip-ident.h */
	struct v4l2_rect rect;
	u32 pixfmt;
	const struct ov5642_datafmt *curfmt;
	const struct ov5642_datafmt *fmts;
	int num_fmts;

	struct flash_struct flash;	/* Flash strobe related info */
	struct regval_list *regs_fmt;
	struct regval_list *regs_size;
};

static const struct ov5642_datafmt ov5642_colour_fmts[] = {
	{V4L2_MBUS_FMT_YUYV8_2X8_BE, V4L2_COLORSPACE_JPEG},
	{V4L2_MBUS_FMT_JPEG_1X8, V4L2_COLORSPACE_JPEG},
	{V4L2_MBUS_FMT_RGB565_2X8_LE, V4L2_COLORSPACE_SRGB}
};

/*
* Store information about the video data format.  The color matrix
* is deeply tied into the format, so keep the relevant values here.
* The magic matrix nubmers come from OmniVision.
*/
static struct ov5642_format_struct {
	enum v4l2_mbus_pixelcode	code;
	struct regval_list	*regs;
} ov5642_fmts[] = {
	{
		.code = V4L2_MBUS_FMT_YUYV8_2X8_LE,
		.regs = ov5642_fmt_yuv422,
	},{
		.code = V4L2_MBUS_FMT_YVYU8_2X8_LE,
		.regs = ov5642_fmt_yuv422,
	},{
		.code = V4L2_MBUS_FMT_YUYV8_2X8_BE,
		.regs = ov5642_fmt_yuv422,
	},{
		.code = V4L2_MBUS_FMT_YVYU8_2X8_BE,
		.regs = ov5642_fmt_yuv422,
	},{
		.code = V4L2_MBUS_FMT_JPEG_1X8,
		.regs = ov5642_fmt_jpg,
	},{
		.code = V4L2_MBUS_FMT_RGB565_2X8_LE,
		.regs = ov5642_fmt_rgb565,
	},
};

static struct ov5642_win_size {
	int	width;
	int	height;
	struct regval_list *regs;
} ov5642_sizes[] = {
	{
		.width = 320,
		.height = 240,
		.regs = ov5642_res_qvga,/* QVGA */
	}, {
		.width = 480,
		.height = 320,
		.regs = ov5642_res_half_vga,/* half VGA */
	}, {
		.width = 640,
		.height = 480,
		.regs = ov5642_res_vga,/* VGA */
	}, {
		.width = 800,
		.height = 480,
		.regs = ov5642_res_wvga,/* WVGA */
	}, {
		.width = 2592,
		.height = 1944,
		.regs = ov5642_res_5M,/*5M 2592x1944*/
	}, {
		.width = 1920,
		.height = 1080,
		.regs = ov5642_res_1080P,/*1080p 1920x1080 */
	}, {
		.width = 1280,
		.height = 720,
		.regs = ov5642_res_720P,/* 1280x720 */
	},
};

static struct ov5642_win_size ov5642_sizes_jpeg[] = {
	{
		.width = 320,
		.height = 240,
		.regs = ov5642_res_qvga,/* QVGA */
	}, {
		.width = 640,
		.height = 480,
		.regs = ov5642_res_vga,	/* VGA */
	}, {
		.width = 2592,
		.height = 1944,
		.regs = ov5642_res_5M,/*5M 2592x1944*/
	},
};

#define N_OV5642_FMTS ARRAY_SIZE(ov5642_fmts)
#define N_OV5642_SIZES ARRAY_SIZE(ov5642_sizes)
#define N_OV5642_SIZES_JPEG ARRAY_SIZE(ov5642_sizes_jpeg)
#define JPEG_BUF_SIZE (1024*1024)

enum flash_arg_format {
	FLASH_ARG_TIME_OFF	= 0,
	FLASH_ARG_MODE_0	= 0,
	FLASH_ARG_MODE_1,
	FLASH_ARG_MODE_2,
	FLASH_ARG_MODE_3,
	FLASH_ARG_MODE_EVER	= FLASH_ARG_MODE_0, /* Always on till send off  */
	FLASH_ARG_MODE_TIMED	= FLASH_ARG_MODE_1, /* On for some time */
	FLASH_ARG_MODE_REPEAT	= FLASH_ARG_MODE_2, /* Repeatedlly on */
	FLASH_ARG_MODE_BLINK	= FLASH_ARG_MODE_3, /* On for some jeffies defined by H/W */
	FLASH_ARG_MODE_MAX,	/* Equal or greater mode considered invalid */
	FLASH_ARG_LUMI_OFF	= 0, /* Flash off */
	FLASH_ARG_LUMI_FULL,
	FLASH_ARG_LUMI_DIM1,	/* One fifth luminance*/
	FLASH_ARG_LUMI_DIM2,	/* One third luminance, not supported in H/W configuration */
	FLASH_ARG_LUMI_MAX,	/* Equal or greater luminance considered invalid */
};

static const struct v4l2_queryctrl ov5642_controls[] = {
	{
		.id = V4L2_CID_FOCUS_AUTO,
		.type = V4L2_CTRL_TYPE_BOOLEAN,
		.name = "auto focus",
		.minimum = 0,
		.maximum = 1,
		.step = 1,
		.default_value = 0,
	},{
		.id = V4L2_CID_AUTO_WHITE_BALANCE,
		.type = V4L2_CTRL_TYPE_BOOLEAN,
		.name = "auto white balance",
		.minimum = 0,
		.maximum = 1,
		.step = 1,
		.default_value = 0,
	},{
		.id = V4L2_CID_EXPOSURE_AUTO,
		.type = V4L2_CTRL_TYPE_BOOLEAN,
		.name = "auto exposure",
		.minimum = 0,
		.maximum = 1,
		.step = 1,
		.default_value = 0,
	},{
		.id = V4L2_CID_FLASH_DURATION,
		.type = V4L2_CTRL_TYPE_INTEGER,
		.name = "flash duration",
		.minimum = 0,
		.maximum = 60000,
		.step = 1,
		.default_value = 1000,
	},{
		.id = V4L2_CID_FLASH_MODE,
		.type = V4L2_CTRL_TYPE_INTEGER,
		.name = "flash mode",
		.minimum = FLASH_ARG_MODE_0,
		.maximum = FLASH_ARG_MODE_MAX - 1,
		.step = 1,
		.default_value = 0,
	},{
		.id = V4L2_CID_FLASH_LUMINANCE,
		.type = V4L2_CTRL_TYPE_INTEGER,
		.name = "flash brightness",
		.minimum = FLASH_ARG_LUMI_OFF,
		.maximum = FLASH_ARG_LUMI_MAX - 1,
		.step = 1,
		.default_value = 0,
	}
};

#define setbit(reg, mask) \
do{ \
	unsigned char val; \
	ov5642_read(client, reg, &val); \
	val |= (mask); \
	ov5642_write(client, reg, val); \
}while (0)

#define clrbit(reg, mask) \
do{ \
	unsigned char val; \
	ov5642_read(client, reg, &val);	\
	val &= (~(mask));	\
	ov5642_write(client, reg, val);	\
}while (0)

static struct ov5642 *to_ov5642(const struct i2c_client *client)
{
	return container_of(i2c_get_clientdata(client), struct ov5642, subdev);
}


/*issue that OV sensor must write then read. 5642 register is 16bit!!!*/
static int ov5642_read(struct i2c_client *c, u16 reg,
		unsigned char *value)
{
	u8 data;
	u8 address[2];
	int ret = 0;
	address[0] = reg>>8;
	address[1] = reg;

	ret = (int)i2c_smbus_write_byte_data(c,address[0],address[1]);
	if (ret)
		goto exit;
	data = i2c_smbus_read_byte(c);
	*value = data;
exit:
	return ret;
}

static int ov5642_write(struct i2c_client *c, u16 reg,
		unsigned char value)
{
	u8 data[3];
	data[0] = reg>>8;
	data[1] = reg;
	data[2]=  value;
	i2c_master_send(c, data, 3);
	if (reg == REG_SYS && (value & SYS_RESET))
		msleep(2);  /* Wait for reset to run */
	return 0;
}

/* Write a list of register settings; ff/ff stops the process.*/
static int ov5642_write_array(struct i2c_client *c, struct regval_list *vals)
{
	while (vals->reg_num != 0xffff || vals->value != 0xff) {
		int ret = ov5642_write(c, vals->reg_num, vals->value);
		if (ret < 0)
			return ret;
		vals++;
	}
	return 0;
}

static int ov5642_detect(struct i2c_client *client)
{
	unsigned char v = 0;
	int ret = 0;

	ret = ov5642_read(client, REG_PIDH, &v);

	printk(KERN_NOTICE "cam: ov5642_detect 0x%x\n",v);
	if (ret < 0)
		return ret;
	if (v != 0x56)
		return -ENODEV;

	ret = ov5642_read(client, REG_PIDL, &v);

	printk(KERN_NOTICE "cam: ov5642_detect 0x%x\n",v);
	if (ret < 0)
		return ret;
	if (v != 0x42)
		return -ENODEV;

	return 0;
}


static int ov5642_get_awb(struct i2c_client *client, int *value)
{
	int ret;
	unsigned char v = 0;

	ret = ov5642_read(client, REG_AWB_EN, &v);
	*value = ((v & AWB_DIS) != AWB_DIS);
	return ret;
}

static int ov5642_set_awb(struct i2c_client *client, int value)
{
	unsigned char v = 0;
	int ret;

	ret = ov5642_read(client, REG_AWB_EN, &v);
	if (!value)
		v = AWB_DIS;
	else
		v = 0x00;
	ret = ov5642_write(client, REG_AWB_EN, v);
	return ret;
}

static int ov5642_get_ae(struct i2c_client *client, int *value)
{
	int ret;
	unsigned char v = 0;

	ret = ov5642_read(client, REG_AE_EN, &v);
	*value = ((v & AE_EN) == AE_EN);
	return ret;
}

static int ov5642_set_ae(struct i2c_client *client, int value)
{
	unsigned char v = 0;
	int ret;

	ret = ov5642_read(client, REG_AE_EN, &v);
	if (value)
		v |= AE_EN;
	else
		v &= ~AE_EN;
	ret += ov5642_write(client, REG_AE_EN, v);
	return ret;
}

static int ov5642_get_af(struct i2c_client *client, int *value)
{
	int ret;
	unsigned char v = 0;

	ret = ov5642_read(client, REG_AF_EN, &v);
	*value = (v & AF_EN) == AF_EN;
	return ret;
}

static int ov5642_set_af(struct i2c_client *client, int value)
{
	unsigned char v = 0;
	int ret;

	ret = ov5642_read(client, REG_AF_EN, &v);
	if (value)
		v |= AF_EN;
	else
		v &= ~AF_EN;
	ret += ov5642_write(client, REG_AF_EN, v);
	return ret;
}

static int ov5642_get_flash_lum(struct i2c_client *client, int *lum)
{
	struct ov5642 *ov5642 = to_ov5642(client);
	*lum = ov5642->flash.brightness;
	return *lum;
}

static int ov5642_set_flash_lum(struct i2c_client *client, int lum)
{
	struct ov5642 *ov5642 = to_ov5642(client);
	int ret = 0;

	if ((lum < 0) || (lum > FLASH_ARG_LUMI_MAX))
		return -EINVAL;

	if (lum == 0) {
		ov5642->flash.brightness = FLASH_ARG_LUMI_OFF;
		/* Request turn OFF the flash, so ignore flash mode */
		clrbit(REG_SYS_RESET, 0x08);	/* clear reset*/
		setbit(REG_CLK_ENABLE, 0x08);	/* enable clock*/
		setbit(REG_PAD_OUTPUT, 0x20);	/* pad output enable*/
		setbit(REG_FREX_MODE, 0x02);	/* FREX mode*/
		/* Strobe control */
		ret = ov5642_write(client, REG_STRB_CTRL, 0x03);
		/*
		* Write again to make sure strobe request end is received
		* after other bits are saved in the reg, use LED3 mode off
		*/
		ret |= ov5642_write(client, REG_STRB_CTRL, 0x03);
		return ret;
	}

	ov5642->flash.brightness = lum;
	clrbit(REG_SYS_RESET, 0x08);	/* clear reset*/
	setbit(REG_CLK_ENABLE, 0x08);	/* enable clock*/
	setbit(REG_PAD_OUTPUT, 0x02);	/* pad output enable*/
	setbit(REG_FREX_MODE, 0x02);	/* FREX mode*/

	switch (ov5642->flash.mode)
	{
		case FLASH_ARG_MODE_TIMED:
			if (ov5642->flash.duration) {
				/* Strobe control */
				ret = ov5642_write(client, REG_STRB_CTRL, 0x03);
				/* Send strobe request */
				ret |= ov5642_write(client, REG_STRB_CTRL, 0x83);
				msleep(ov5642->flash.duration);
				ret |= ov5642_write(client, REG_STRB_CTRL, 0x03);
				ov5642->flash.brightness = FLASH_ARG_LUMI_OFF;
				return ret;
			}
			/* NO break here, if duration is 0, leave it ON */
		case FLASH_ARG_MODE_EVER:
			/* Strobe control */
			ret = ov5642_write(client, REG_STRB_CTRL, 0x03);
			/* Send strobe request */
			ret |= ov5642_write(client, REG_STRB_CTRL, 0x83);
			return ret;
		case FLASH_ARG_MODE_REPEAT:
			/* Strobe control */
			ret = ov5642_write(client, REG_STRB_CTRL, 0x02);
			/* Send strobe request */
			ret |= ov5642_write(client, REG_STRB_CTRL, 0x82);
			return ret;
		case FLASH_ARG_MODE_BLINK:
			/* Use ov5642 xenon flash mode */
			return ret;
	}
	return 0;
}

static int ov5642_get_flash_time(struct i2c_client *client, int *time)
{
	struct ov5642 *ov5642 = to_ov5642(client);
	*time = ov5642->flash.duration;
	return *time;
}


static int ov5642_set_flash_time(struct i2c_client *client, int time)
{
	int ret = 0;
	struct ov5642 *ov5642 = to_ov5642(client);
	if (time < 0) {
		printk(KERN_ERR "cam: Invalid flash duration: %d\n", time);
		return -EINVAL;
	}
	ov5642->flash.duration = time;
	if (time == 0)
		ov5642->flash.mode = FLASH_ARG_MODE_EVER;
	else
		ov5642->flash.mode = FLASH_ARG_MODE_TIMED;
	/*
	* If the flash is on, the duration change will affect the
	* behavoir of flash immediatelly
	*/
	if (ov5642->flash.brightness != FLASH_ARG_LUMI_OFF)
		ret = ov5642_set_flash_lum(client, ov5642->flash.brightness);
	return ret;
}

static int ov5642_get_flash_mode(struct i2c_client *client, int *mode)
{
	struct ov5642 *ov5642 = to_ov5642(client);
	*mode = ov5642->flash.mode;
	return *mode;
}

static int ov5642_set_flash_mode(struct i2c_client *client, int mode)
{
	struct ov5642 *ov5642 = to_ov5642(client);

	if (mode >= FLASH_ARG_MODE_MAX)
		return -EINVAL;
	ov5642->flash.mode = mode;
	return 0;
}

static int ov5642_g_chip_ident(struct v4l2_subdev *sd,
				struct v4l2_dbg_chip_ident *id)
{
	struct i2c_client *client = sd->priv;
	struct ov5642 *ov5642 = to_ov5642(client);
	id->ident	= ov5642->model;
	id->revision	= 0;

	return 0;
}

static int ov5642_g_register(struct v4l2_subdev *sd,
			      struct v4l2_dbg_register *reg)
{
	struct i2c_client *client = sd->priv;
	return ov5642_read(client, (u16)reg->reg, (unsigned char *)&(reg->val));
}

static int ov5642_s_register(struct v4l2_subdev *sd,
			      struct v4l2_dbg_register *reg)
{
	struct i2c_client *client = sd->priv;
	return ov5642_write(client, (u16)reg->reg, (unsigned char)reg->val);
}

static int ov5642_s_stream(struct v4l2_subdev *sd, int enable)
{
	unsigned char val;
	struct i2c_client *client = sd->priv;
	if (enable) {
		ov5642_read(client, 0x3008, &val);
		val &= ~0x40;
		ov5642_write(client, 0x3008, val);
	} else {
		/* stop after one frame */
		ov5642_read(client, 0x4201, &val);
		val |= 0x01;
		ov5642_write(client, 0x4201, val);
	}
	return 0;
}
static int ov5642_enum_fmt(struct v4l2_subdev *sd, unsigned int index,
			    enum v4l2_mbus_pixelcode *code)
{
	struct i2c_client *client = sd->priv;
	struct ov5642 *ov5642 = to_ov5642(client);

	if (index >= ov5642->num_fmts)
		return -EINVAL;

	*code = ov5642->fmts[index].code;
	return 0;
}

static int ov5642_try_fmt(struct v4l2_subdev *sd,
			   struct v4l2_mbus_framefmt *mf)
{
	int i;
	struct i2c_client *client = sd->priv;
	struct ov5642 *ov5642 = to_ov5642(client);

	/* enum the supported formats*/
	for (i = 0; i < N_OV5642_FMTS; i++) {
		if (ov5642_fmts[i].code == mf->code){
			ov5642->regs_fmt = ov5642_fmts[i].regs;
			break;
		}
	}
	if (i >= N_OV5642_FMTS){
		printk(KERN_ERR "cam: ov5642 unsupported color format!\n");
		return -EINVAL;
	}


	if(mf->code == V4L2_MBUS_FMT_JPEG_1X8){
		/* enum the supported sizes for JPEG*/
		for (i = 0; i < N_OV5642_SIZES_JPEG; i++) {
			if (mf->width == ov5642_sizes_jpeg[i].width
				&& mf->height == ov5642_sizes_jpeg[i].height) {
				ov5642->regs_size = ov5642_sizes_jpeg[i].regs;
				break;
			}
		}
		if (i >= N_OV5642_SIZES_JPEG){
				printk(KERN_ERR"cam: ov5642 unsupported jpeg size!\n");
				return -EINVAL;
		}
	} else {
		/* enum the supported sizes*/
		for (i = 0; i < N_OV5642_SIZES; i++) {
			if (mf->width == ov5642_sizes[i].width
				&& mf->height == ov5642_sizes[i].height) {
				ov5642->regs_size = ov5642_sizes[i].regs;
				break;
			}
		}
		if (i >= N_OV5642_SIZES){
			printk(KERN_ERR "cam: ov5642 unsupported window size, w%d, h%d!\n",
					mf->width, mf->height);
			return -EINVAL;
		}
	}

	mf->field = V4L2_FIELD_NONE;

	switch (mf->code) {
	case V4L2_MBUS_FMT_RGB565_2X8_LE:
	case V4L2_MBUS_FMT_YUYV8_2X8_LE:
	case V4L2_MBUS_FMT_YVYU8_2X8_LE:
	case V4L2_MBUS_FMT_YUYV8_2X8_BE:
	case V4L2_MBUS_FMT_YVYU8_2X8_BE:
		/* enum the supported sizes*/
		for (i = 0; i < N_OV5642_SIZES; i++) {
			if (mf->width == ov5642_sizes[i].width
				&& mf->height == ov5642_sizes[i].height) {
				ov5642->regs_size = ov5642_sizes[i].regs;
				break;
			}
		}
		if (i >= N_OV5642_SIZES){
			printk(KERN_ERR "cam: ov5642 unsupported window size, w%d, h%d!\n",
					mf->width, mf->height);
			return -EINVAL;
		}

		if (mf->code == V4L2_MBUS_FMT_RGB565_2X8_LE)
			mf->colorspace = V4L2_COLORSPACE_SRGB;
		else
			mf->colorspace = V4L2_COLORSPACE_JPEG;
		break;

	case V4L2_MBUS_FMT_JPEG_1X8:
		/* enum the supported sizes for JPEG*/
		for (i = 0; i < N_OV5642_SIZES_JPEG; i++) {
			if (mf->width == ov5642_sizes_jpeg[i].width
				&& mf->height == ov5642_sizes_jpeg[i].height) {
				ov5642->regs_size = ov5642_sizes_jpeg[i].regs;
				break;
			}
		}
		if (i >= N_OV5642_SIZES_JPEG){
				printk(KERN_ERR"cam: ov5642 unsupported jpeg size!\n");
				return -EINVAL;
		}
		mf->colorspace = V4L2_COLORSPACE_JPEG;
		break;
	default:
		dev_err(&client->dev, "ov5642 doesn't support code %d\n", mf->code);
		break;
	}

	return 0;
}

static int ov5642_s_fmt(struct v4l2_subdev *sd,
			 struct v4l2_mbus_framefmt *mf)
{
	int ret = 0;
	struct i2c_client *client = sd->priv;
	struct ov5642 *ov5642 = to_ov5642(client);

	switch (mf->code) {
		case V4L2_MBUS_FMT_JPEG_1X8:
			ov5642_write_array(client, ov5642_jpg_default);
			ov5642_write_array(client, ov5642->regs_size);
			break;

		case V4L2_MBUS_FMT_YUYV8_2X8_LE:
		case V4L2_MBUS_FMT_YVYU8_2X8_LE:
		case V4L2_MBUS_FMT_YUYV8_2X8_BE:
		case V4L2_MBUS_FMT_YVYU8_2X8_BE:
		case V4L2_MBUS_FMT_RGB565_2X8_LE:
		default:
			if (mf->width == 1920 && mf->height == 1080) {
				ov5642_write_array(client, ov5642_yuv_1080p);
			} else if (mf->width == 1280 && mf->height == 720) {
				ov5642_write_array(client, ov5642_yuv_720p);
			} else {
				ov5642_write_array(client, ov5642_yuv_default);
				ov5642_write_array(client, ov5642->regs_fmt);
				ov5642_write_array(client, ov5642->regs_size);
				ov5642_write_array(client, ov5642_mipi);
			}
			break;
	}

	return ret;
}

static int ov5642_g_fmt(struct v4l2_subdev *sd,
			 struct v4l2_mbus_framefmt *mf)
{
	struct i2c_client *client = sd->priv;
	struct ov5642 *ov5642 = to_ov5642(client);

	mf->width		= ov5642->rect.width;
	mf->height		= ov5642->rect.height;
	mf->code	= V4L2_MBUS_FMT_YUYV8_2X8_BE;
	mf->field		= V4L2_FIELD_NONE;
	mf->colorspace		= V4L2_COLORSPACE_JPEG;
	return 0;
}

static int ov5642_enum_framesizes(struct v4l2_subdev *sd,
			struct v4l2_frmsizeenum *fsize)
{
	struct i2c_client *client = sd->priv;

	fsize->type = V4L2_FRMSIZE_TYPE_DISCRETE;
	switch (fsize->pixel_format) {
		case V4L2_PIX_FMT_UYVY:
		case V4L2_PIX_FMT_YUV422P:
		case V4L2_PIX_FMT_YUV420:
		case V4L2_PIX_FMT_RGB565:
			if (fsize->index >= N_OV5642_SIZES) {
				dev_warn(&client->dev, "ov5642 unsupported size!\n");
				return -EINVAL;
			}
			fsize->discrete.height = ov5642_sizes[fsize->index].height;
			fsize->discrete.width = ov5642_sizes[fsize->index].width;
			break;
		case V4L2_PIX_FMT_JPEG:
			if (fsize->index >= N_OV5642_SIZES_JPEG) {
				dev_warn(&client->dev, "ov5642 unsupported jpeg size!\n");
				return -EINVAL;
			}
			fsize->discrete.height = ov5642_sizes_jpeg[fsize->index].height;
			fsize->discrete.width = ov5642_sizes_jpeg[fsize->index].width;
			break;
		default:
			dev_err(&client->dev, "ov5642 unsupported format!\n");
			return -EINVAL;
	}

	return 0;
}

static unsigned long ov5642_query_bus_param(struct soc_camera_device *icd)
{
	struct soc_camera_link *icl = to_soc_camera_link(icd);
	unsigned long flags = SOCAM_MIPI | SOCAM_MIPI_1LANE;

	return soc_camera_apply_sensor_flags(icl, flags);
}

static int ov5642_set_bus_param(struct soc_camera_device *icd, unsigned long f)
{
#if 0/*TODO: add mipi and parallel different setting*/
	if (f & SOCAM_MIPI) /* mipi setting*/
		ov5642_write_array(client, ov5642_mipi);
	else /* parallel setting*/
		ov5642_write_array(client, ov5642_mipi);
#endif
	return 0;
}

static struct soc_camera_ops ov5642_ops = {
	.query_bus_param	= ov5642_query_bus_param,
	.set_bus_param		= ov5642_set_bus_param,
	.controls			= ov5642_controls,
	.num_controls		= ARRAY_SIZE(ov5642_controls),
};

static int ov5642_s_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
{
	struct i2c_client *client = sd->priv;
	const struct v4l2_queryctrl *qctrl;
	int ret;

	qctrl = soc_camera_find_qctrl(&ov5642_ops, ctrl->id);
	if (!qctrl)
		return -EINVAL;

	switch (ctrl->id) {
	case V4L2_CID_AUTO_WHITE_BALANCE:
		ret = ov5642_set_awb(client, ctrl->value);
		break;
	case V4L2_CID_FOCUS_AUTO:
		ret = ov5642_set_af(client, ctrl->value);
		break;
	case V4L2_CID_EXPOSURE_AUTO:
		ret = ov5642_set_ae(client, ctrl->value);
		break;
	case V4L2_CID_FLASH_DURATION:
		ret =  ov5642_set_flash_time(client, ctrl->value);
		break;
	case V4L2_CID_FLASH_MODE:
		ret =  ov5642_set_flash_mode(client, ctrl->value);
		break;
	case V4L2_CID_FLASH_LUMINANCE:
		ret =  ov5642_set_flash_lum(client, ctrl->value);
		break;
	default:
		ret = -EINVAL;
	}

	return ret;
}

static int ov5642_g_ctrl(struct v4l2_subdev *sd, struct v4l2_control *ctrl)
{
	struct i2c_client *client = sd->priv;
	int ret;

	switch (ctrl->id) {
	case V4L2_CID_AUTO_WHITE_BALANCE:
		ret = ov5642_get_awb(client, &ctrl->value);
		break;
	case V4L2_CID_FOCUS_AUTO:
		ret = ov5642_get_af(client, &ctrl->value);
		break;
	case V4L2_CID_EXPOSURE_AUTO:
		ret = ov5642_get_ae(client, &ctrl->value);
		break;
	case V4L2_CID_FLASH_DURATION:
		ret =  ov5642_get_flash_time(client, &ctrl->value);
		break;
	case V4L2_CID_FLASH_MODE:
		ret =  ov5642_get_flash_mode(client, &ctrl->value);
		break;
	case V4L2_CID_FLASH_LUMINANCE:
		ret =  ov5642_get_flash_lum(client, &ctrl->value);
		break;
	default:
		ret = -EINVAL;
	}

	return 0;
}

static struct v4l2_subdev_core_ops ov5642_subdev_core_ops = {
	.g_ctrl		= ov5642_g_ctrl,
	.s_ctrl		= ov5642_s_ctrl,
	.g_chip_ident	= ov5642_g_chip_ident,
#ifdef CONFIG_VIDEO_ADV_DEBUG
	.g_register	= ov5642_g_register,
	.s_register	= ov5642_s_register,
#endif
};

static struct v4l2_subdev_video_ops ov5642_subdev_video_ops = {
	.s_stream	= ov5642_s_stream,
	.s_mbus_fmt	= ov5642_s_fmt,
	.g_mbus_fmt	= ov5642_g_fmt,
	.try_mbus_fmt	= ov5642_try_fmt,
	.enum_mbus_fmt	= ov5642_enum_fmt,
	.enum_framesizes = ov5642_enum_framesizes,
};

static struct v4l2_subdev_ops ov5642_subdev_ops = {
	.core	= &ov5642_subdev_core_ops,
	.video	= &ov5642_subdev_video_ops,
};

static int ov5642_probe(struct i2c_client *client,
			 const struct i2c_device_id *did)
{
	struct ov5642 *ov5642;
	struct soc_camera_device *icd = client->dev.platform_data;
	struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
	struct soc_camera_link *icl;
	int ret;
	int i;

	if (!icd) {
		dev_err(&client->dev, "ov5642 missing soc-camera data!\n");
		return -EINVAL;
	}
	if (!icd->dev.parent ||
	    to_soc_camera_host(icd->dev.parent)->nr != icd->iface)
		return -ENODEV;

	icl = to_soc_camera_link(icd);
	if (!icl) {
		dev_err(&client->dev, "ov5642 driver needs platform data\n");
		return -EINVAL;
	}

	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_WORD_DATA)) {
		dev_warn(&adapter->dev,
			 "I2C-Adapter doesn't support I2C_FUNC_SMBUS_WORD\n");
		return -EIO;
	}

	ov5642 = kzalloc(sizeof(struct ov5642), GFP_KERNEL);
	if (!ov5642) {
		dev_err(&client->dev, "ov5642 failed to alloc struct!\n");
		return -ENOMEM;
	}

	ov5642->rect.left = 0;
	ov5642->rect.top = 0;
	ov5642->rect.width = 640;
	ov5642->rect.height = 480;
	ov5642->pixfmt = V4L2_PIX_FMT_UYVY;

	icd->ops = &ov5642_ops;

	ov5642->model = V4L2_IDENT_OV5642;
	ov5642->fmts = ov5642_colour_fmts;
	ov5642->num_fmts = ARRAY_SIZE(ov5642_colour_fmts);

	ov5642->flash.brightness = FLASH_ARG_LUMI_OFF;
	ov5642->flash.duration = FLASH_ARG_TIME_OFF;
	ov5642->flash.mode = FLASH_ARG_MODE_EVER;
	v4l2_i2c_subdev_init(&ov5642->subdev, client, &ov5642_subdev_ops);

	for (i = MAX_DETECT_NUM; i > 0; --i) {
		ret = ov5642_detect(client);
		if (!ret) {
			printk(KERN_NOTICE "cam: OmniVision ov5642 sensor detected!\n");
			return 0;
		}
	}
	printk(KERN_ERR "cam: abort retry, failed to detect OmniVision ov5642!\n");

	icd->ops = NULL;
	i2c_set_clientdata(client, NULL);
	if (ov5642)
		kfree(ov5642);

	ret = -ENODEV;
	return ret;
}

static int ov5642_remove(struct i2c_client *client)
{
	struct ov5642 *ov5642 = to_ov5642(client);
	struct soc_camera_device *icd = client->dev.platform_data;
	struct soc_camera_link *icl = to_soc_camera_link(icd);

	icd->ops = NULL;
	if (icl->free_bus)
		icl->free_bus(icl);
	icl->power(icd->pdev, 0);

	i2c_set_clientdata(client, NULL);
	client->driver = NULL;
	kfree(ov5642);
	return 0;
}

static struct i2c_device_id ov5642_idtable[] = {
	{ "ov5642", 0 },
	{ }
};

MODULE_DEVICE_TABLE(i2c, ov5642_idtable);

static struct i2c_driver ov5642_driver = {
	.driver = {
		.name	= "ov5642",
	},
	.id_table	= ov5642_idtable,
	.probe		= ov5642_probe,
	.remove		= ov5642_remove,
};

static int __init ov5642_mod_init(void)
{
	int ret = 0;
	ret = i2c_add_driver(&ov5642_driver);
	return ret;
}

static void __exit ov5642_mod_exit(void)
{
	i2c_del_driver(&ov5642_driver);
}

module_init(ov5642_mod_init);
module_exit(ov5642_mod_exit);

MODULE_DESCRIPTION("OmniVision OV5642 Camera Driver");
MODULE_AUTHOR("Qing Xu");
MODULE_LICENSE("GPL");


^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: soc-camera jpeg support?
  2011-01-18  6:06               ` Qing Xu
@ 2011-01-18 17:30                 ` Guennadi Liakhovetski
  2011-01-19  2:50                   ` Qing Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-18 17:30 UTC (permalink / raw)
  To: Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

Thanks for the code! With it at hand it is going to be easier to 
understand and evaluate changes, that you propose to the generic modules.

On Mon, 17 Jan 2011, Qing Xu wrote:

> Hi, Guennadi,
> 
> Oh, yes, I agree with you, you are right, it is really not that simple, 
> JPEG is always a headache,:(, as it is quite different from original 
> yuv/rgb format, it has neither fixed bits-per-sample, nor fixed 
> packing/bytes-per-line/per-frame, so when coding, I just hack value of 
> .bits_per_sample and .packing, in fact, you will see in our host driver, 
> if we find it is JPEG, will ignore bytes-per-line value, for example, in 
> pxa955_videobuf_prepare(), for jpeg, we always allocate fixed buffer 
> size for it (or, a better method is to allocate buffer size = 
> width*height/jpeg-compress-ratio).
> 
> I have 2 ideas:
> 1) Do you think it is reasonable to add a not-fixed define into soc_mbus_packing:
> enum soc_mbus_packing {
>         SOC_MBUS_PACKING_NOT_FIXED,
>         ...
> };
> And then, .bits_per_sample      = 0, /* indicate that sample bits is not-fixed*/
> And, in function:
> s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
> {
>         switch (mf->packing) {
>                 case SOC_MBUS_PACKING_NOT_FIXED:
>                         return 0;
>                 case SOC_MBUS_PACKING_NONE:
>                         return width * mf->bits_per_sample / 8;
>                 ...
>         }
>         return -EINVAL;
> }
> 
> 2) Or, an workaround in sensor(ov5642.c), is to implement:
> int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt);
> use the member (fmt->pix->pixelformat == V4L2_PIX_FMT_JPEG) to find out 
> if it is jpeg. And in host driver, it calls try_fmt() into sensor. In 
> this way, no need to change soc-camera common code, but I feel that it 
> goes against with your xxx_mbus_xxx design purpose, right?

I actually prefer this one, but with an addition of V4L2_MBUS_FMT_JPEG_1X8 
as per your additional üatch, but, please, also add a new packing, e.g., 
SOC_MBUS_PACKING_COMPRESSED (or SOC_MBUS_PACKING_VARIABLE?). So, that we 
can cleanly translate between V4L2_MBUS_FMT_JPEG_1X8 and 
V4L2_PIX_FMT_JPEG, but host drivers, that want to support this, will have 
to know to calculate frame sizes in some special way, not just aborting, 
if soc_mbus_bytes_per_line() returned an error. We might also consider 
returning a different error code for this case, e.g., we could change the 
general error case to return -ENOENT, and use -EINVAL for cases like JPEG, 
where data is valid, but no valid calculation is possible.

Thanks
Guennadi

> What do you think?
> 
> Please check the attachment, it is our host camera controller driver and sensor.
> 
> Thanks a lot!
> -Qing
> 
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011Äê1ÔÂ18ÈÕ 1:39
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: Re: soc-camera jpeg support?
> 
> On Mon, 17 Jan 2011, Qing Xu wrote:
> 
> > Hi,
> >
> > Many of our sensors support directly outputting JPEG data to camera
> > controller, do you feel it's reasonable to add jpeg support into
> > soc-camera? As it seems that there is no define in v4l2-mediabus.h which
> > is suitable for our case.
> 
> In principle I have nothing against this, but, I'm afraid, it is not quite
> that simple. I haven't worked with such sensors myself, but, AFAIU, the
> JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame
> values. If you add it like you propose, this would mean, that it just
> delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line()
> would just return -EINVAL for your JPEG format, so, unsupporting drivers
> would just error out, which is not all that bad. When the first driver
> decides to support JPEG, they will have to handle that error. But maybe
> we'll want to return a special error code for it.
> 
> But, in fact, that is in a way my problem with your patches: you propose
> extensions to generic code without showing how this is going to be used in
> specific drivers. Could you show us your host driver, please? I don't
> think this is still the pxa27x driver, is it?
> 
> Thanks
> Guennadi
> 
> >
> > Such as:
> > --- a/drivers/media/video/soc_mediabus.c
> > +++ b/drivers/media/video/soc_mediabus.c
> > @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
> >                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
> >                 .order                  = SOC_MBUS_ORDER_BE,
> >         },
> > +       [MBUS_IDX(JPEG_1X8)] = {
> > +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> > +               .name                   = "JPEG",
> > +               .bits_per_sample        = 8,
> > +               .packing                = SOC_MBUS_PACKING_NONE,
> > +               .order                  = SOC_MBUS_ORDER_LE,
> > +       },
> >  };
> >
> > --- a/include/media/v4l2-mediabus.h
> > +++ b/include/media/v4l2-mediabus.h
> > @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
> >         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
> >         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
> >         V4L2_MBUS_FMT_SGRBG8_1X8,
> > +       V4L2_MBUS_FMT_JPEG_1X8,
> >  };
> >
> > Any ideas will be appreciated!
> > Thanks!
> > Qing Xu
> >
> > Email: qingx@marvell.com
> > Application Processor Systems Engineering,
> > Marvell Technology Group Ltd.
> >
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: soc-camera jpeg support?
  2011-01-18 17:30                 ` Guennadi Liakhovetski
@ 2011-01-19  2:50                   ` Qing Xu
  2011-01-19  6:32                     ` How to support MIPI CSI-2 controller in soc-camera framework? Qing Xu
  2011-01-19 16:00                     ` soc-camera jpeg support? Guennadi Liakhovetski
  0 siblings, 2 replies; 21+ messages in thread
From: Qing Xu @ 2011-01-19  2:50 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Laurent Pinchart, Linux Media Mailing List


Sorry, which solution you would like me to implement?
(1) is to add SOC_MBUS_PACKING_VARIABLE define and add error code in soc_mbus_bytes_per_line(),
and (2) is to implement int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt); in subdev, directly get V4L2_PIX_FMT_JPEG info from host driver.

-Qing

-----Original Message-----
From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
Sent: 2011年1月19日 1:31
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: RE: soc-camera jpeg support?

Thanks for the code! With it at hand it is going to be easier to
understand and evaluate changes, that you propose to the generic modules.

On Mon, 17 Jan 2011, Qing Xu wrote:

> Hi, Guennadi,
>
> Oh, yes, I agree with you, you are right, it is really not that simple,
> JPEG is always a headache,:(, as it is quite different from original
> yuv/rgb format, it has neither fixed bits-per-sample, nor fixed
> packing/bytes-per-line/per-frame, so when coding, I just hack value of
> .bits_per_sample and .packing, in fact, you will see in our host driver,
> if we find it is JPEG, will ignore bytes-per-line value, for example, in
> pxa955_videobuf_prepare(), for jpeg, we always allocate fixed buffer
> size for it (or, a better method is to allocate buffer size =
> width*height/jpeg-compress-ratio).
>
> I have 2 ideas:
> 1) Do you think it is reasonable to add a not-fixed define into soc_mbus_packing:
> enum soc_mbus_packing {
>         SOC_MBUS_PACKING_NOT_FIXED,
>         ...
> };
> And then, .bits_per_sample      = 0, /* indicate that sample bits is not-fixed*/
> And, in function:
> s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
> {
>         switch (mf->packing) {
>                 case SOC_MBUS_PACKING_NOT_FIXED:
>                         return 0;
>                 case SOC_MBUS_PACKING_NONE:
>                         return width * mf->bits_per_sample / 8;
>                 ...
>         }
>         return -EINVAL;
> }
>
> 2) Or, an workaround in sensor(ov5642.c), is to implement:
> int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt);
> use the member (fmt->pix->pixelformat == V4L2_PIX_FMT_JPEG) to find out
> if it is jpeg. And in host driver, it calls try_fmt() into sensor. In
> this way, no need to change soc-camera common code, but I feel that it
> goes against with your xxx_mbus_xxx design purpose, right?

I actually prefer this one, but with an addition of V4L2_MBUS_FMT_JPEG_1X8
as per your additional üatch, but, please, also add a new packing, e.g.,
SOC_MBUS_PACKING_COMPRESSED (or SOC_MBUS_PACKING_VARIABLE?). So, that we
can cleanly translate between V4L2_MBUS_FMT_JPEG_1X8 and
V4L2_PIX_FMT_JPEG, but host drivers, that want to support this, will have
to know to calculate frame sizes in some special way, not just aborting,
if soc_mbus_bytes_per_line() returned an error. We might also consider
returning a different error code for this case, e.g., we could change the
general error case to return -ENOENT, and use -EINVAL for cases like JPEG,
where data is valid, but no valid calculation is possible.

Thanks
Guennadi

> What do you think?
>
> Please check the attachment, it is our host camera controller driver and sensor.
>
> Thanks a lot!
> -Qing
>
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011Äê1ÔÂ18ÈÕ 1:39
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: Re: soc-camera jpeg support?
>
> On Mon, 17 Jan 2011, Qing Xu wrote:
>
> > Hi,
> >
> > Many of our sensors support directly outputting JPEG data to camera
> > controller, do you feel it's reasonable to add jpeg support into
> > soc-camera? As it seems that there is no define in v4l2-mediabus.h which
> > is suitable for our case.
>
> In principle I have nothing against this, but, I'm afraid, it is not quite
> that simple. I haven't worked with such sensors myself, but, AFAIU, the
> JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame
> values. If you add it like you propose, this would mean, that it just
> delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line()
> would just return -EINVAL for your JPEG format, so, unsupporting drivers
> would just error out, which is not all that bad. When the first driver
> decides to support JPEG, they will have to handle that error. But maybe
> we'll want to return a special error code for it.
>
> But, in fact, that is in a way my problem with your patches: you propose
> extensions to generic code without showing how this is going to be used in
> specific drivers. Could you show us your host driver, please? I don't
> think this is still the pxa27x driver, is it?
>
> Thanks
> Guennadi
>
> >
> > Such as:
> > --- a/drivers/media/video/soc_mediabus.c
> > +++ b/drivers/media/video/soc_mediabus.c
> > @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
> >                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
> >                 .order                  = SOC_MBUS_ORDER_BE,
> >         },
> > +       [MBUS_IDX(JPEG_1X8)] = {
> > +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> > +               .name                   = "JPEG",
> > +               .bits_per_sample        = 8,
> > +               .packing                = SOC_MBUS_PACKING_NONE,
> > +               .order                  = SOC_MBUS_ORDER_LE,
> > +       },
> >  };
> >
> > --- a/include/media/v4l2-mediabus.h
> > +++ b/include/media/v4l2-mediabus.h
> > @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
> >         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
> >         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
> >         V4L2_MBUS_FMT_SGRBG8_1X8,
> > +       V4L2_MBUS_FMT_JPEG_1X8,
> >  };
> >
> > Any ideas will be appreciated!
> > Thanks!
> > Qing Xu
> >
> > Email: qingx@marvell.com
> > Application Processor Systems Engineering,
> > Marvell Technology Group Ltd.
> >
>
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
>

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* How to support MIPI CSI-2 controller in soc-camera framework?
  2011-01-19  2:50                   ` Qing Xu
@ 2011-01-19  6:32                     ` Qing Xu
  2011-01-19 16:20                       ` Guennadi Liakhovetski
  2011-01-19 16:00                     ` soc-camera jpeg support? Guennadi Liakhovetski
  1 sibling, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-19  6:32 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi,

Our chip support both MIPI and parallel interface. The HW connection logic is
sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2 things) -> our camera controller (handle DMA transmitting/ fmt/ size things). Now, I find the driver of sh_mobile_csi2.c, it seems like a CSI-2 driver, but I don't quite understand how it works:
1) how the host controller call into this driver?
2) how the host controller/sensor negotiate MIPI variable with this driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen), number of lanes...?

Thanks a lot!
Qing Xu

Email: qingx@marvell.com
Application Processor Systems Engineering,
Marvell Technology Group Ltd.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: soc-camera jpeg support?
  2011-01-19  2:50                   ` Qing Xu
  2011-01-19  6:32                     ` How to support MIPI CSI-2 controller in soc-camera framework? Qing Xu
@ 2011-01-19 16:00                     ` Guennadi Liakhovetski
  2011-04-07  3:12                       ` Kassey Li
  1 sibling, 1 reply; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-19 16:00 UTC (permalink / raw)
  To: Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

On Tue, 18 Jan 2011, Qing Xu wrote:

> 
> Sorry, which solution you would like me to implement?
> (1) is to add SOC_MBUS_PACKING_VARIABLE define and add error code in 
> soc_mbus_bytes_per_line(),
> and (2) is to implement int (*try_fmt)(struct v4l2_subdev *sd, struct 
> v4l2_format *fmt); in subdev, directly get V4L2_PIX_FMT_JPEG info from 
> host driver.

Number (1), please. Sensor drivers should not use v4l2_format, only 
mediabus formats. Your host driver should proceed as usual: if the user is 
requesting the JPEG fourcc format from it, it should look in the format 
translation table, find there the respective JPEG mediabus code, and 
configure the sensor with it. Only when calculating buffer sizes the host 
driver will have to treat JPEG specially. This way you have to add 3 
things to the generic code: definitions for JPEG mediabus code and a 
variable-line length packing, and an entry to the fourcc-mediabus 
translation table.

Thanks
Guennadi

> 
> -Qing
> 
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011年1月19日 1:31
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: RE: soc-camera jpeg support?
> 
> Thanks for the code! With it at hand it is going to be easier to
> understand and evaluate changes, that you propose to the generic modules.
> 
> On Mon, 17 Jan 2011, Qing Xu wrote:
> 
> > Hi, Guennadi,
> >
> > Oh, yes, I agree with you, you are right, it is really not that simple,
> > JPEG is always a headache,:(, as it is quite different from original
> > yuv/rgb format, it has neither fixed bits-per-sample, nor fixed
> > packing/bytes-per-line/per-frame, so when coding, I just hack value of
> > .bits_per_sample and .packing, in fact, you will see in our host driver,
> > if we find it is JPEG, will ignore bytes-per-line value, for example, in
> > pxa955_videobuf_prepare(), for jpeg, we always allocate fixed buffer
> > size for it (or, a better method is to allocate buffer size =
> > width*height/jpeg-compress-ratio).
> >
> > I have 2 ideas:
> > 1) Do you think it is reasonable to add a not-fixed define into soc_mbus_packing:
> > enum soc_mbus_packing {
> >         SOC_MBUS_PACKING_NOT_FIXED,
> >         ...
> > };
> > And then, .bits_per_sample      = 0, /* indicate that sample bits is not-fixed*/
> > And, in function:
> > s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
> > {
> >         switch (mf->packing) {
> >                 case SOC_MBUS_PACKING_NOT_FIXED:
> >                         return 0;
> >                 case SOC_MBUS_PACKING_NONE:
> >                         return width * mf->bits_per_sample / 8;
> >                 ...
> >         }
> >         return -EINVAL;
> > }
> >
> > 2) Or, an workaround in sensor(ov5642.c), is to implement:
> > int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt);
> > use the member (fmt->pix->pixelformat == V4L2_PIX_FMT_JPEG) to find out
> > if it is jpeg. And in host driver, it calls try_fmt() into sensor. In
> > this way, no need to change soc-camera common code, but I feel that it
> > goes against with your xxx_mbus_xxx design purpose, right?
> 
> I actually prefer this one, but with an addition of V4L2_MBUS_FMT_JPEG_1X8
> as per your additional üatch, but, please, also add a new packing, e.g.,
> SOC_MBUS_PACKING_COMPRESSED (or SOC_MBUS_PACKING_VARIABLE?). So, that we
> can cleanly translate between V4L2_MBUS_FMT_JPEG_1X8 and
> V4L2_PIX_FMT_JPEG, but host drivers, that want to support this, will have
> to know to calculate frame sizes in some special way, not just aborting,
> if soc_mbus_bytes_per_line() returned an error. We might also consider
> returning a different error code for this case, e.g., we could change the
> general error case to return -ENOENT, and use -EINVAL for cases like JPEG,
> where data is valid, but no valid calculation is possible.
> 
> Thanks
> Guennadi
> 
> > What do you think?
> >
> > Please check the attachment, it is our host camera controller driver and sensor.
> >
> > Thanks a lot!
> > -Qing
> >
> > -----Original Message-----
> > From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> > Sent: 2011Äê1ÔÂ18ÈÕ 1:39
> > To: Qing Xu
> > Cc: Laurent Pinchart; Linux Media Mailing List
> > Subject: Re: soc-camera jpeg support?
> >
> > On Mon, 17 Jan 2011, Qing Xu wrote:
> >
> > > Hi,
> > >
> > > Many of our sensors support directly outputting JPEG data to camera
> > > controller, do you feel it's reasonable to add jpeg support into
> > > soc-camera? As it seems that there is no define in v4l2-mediabus.h which
> > > is suitable for our case.
> >
> > In principle I have nothing against this, but, I'm afraid, it is not quite
> > that simple. I haven't worked with such sensors myself, but, AFAIU, the
> > JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame
> > values. If you add it like you propose, this would mean, that it just
> > delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line()
> > would just return -EINVAL for your JPEG format, so, unsupporting drivers
> > would just error out, which is not all that bad. When the first driver
> > decides to support JPEG, they will have to handle that error. But maybe
> > we'll want to return a special error code for it.
> >
> > But, in fact, that is in a way my problem with your patches: you propose
> > extensions to generic code without showing how this is going to be used in
> > specific drivers. Could you show us your host driver, please? I don't
> > think this is still the pxa27x driver, is it?
> >
> > Thanks
> > Guennadi
> >
> > >
> > > Such as:
> > > --- a/drivers/media/video/soc_mediabus.c
> > > +++ b/drivers/media/video/soc_mediabus.c
> > > @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
> > >                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
> > >                 .order                  = SOC_MBUS_ORDER_BE,
> > >         },
> > > +       [MBUS_IDX(JPEG_1X8)] = {
> > > +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> > > +               .name                   = "JPEG",
> > > +               .bits_per_sample        = 8,
> > > +               .packing                = SOC_MBUS_PACKING_NONE,
> > > +               .order                  = SOC_MBUS_ORDER_LE,
> > > +       },
> > >  };
> > >
> > > --- a/include/media/v4l2-mediabus.h
> > > +++ b/include/media/v4l2-mediabus.h
> > > @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
> > >         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
> > >         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
> > >         V4L2_MBUS_FMT_SGRBG8_1X8,
> > > +       V4L2_MBUS_FMT_JPEG_1X8,
> > >  };
> > >
> > > Any ideas will be appreciated!
> > > Thanks!
> > > Qing Xu
> > >
> > > Email: qingx@marvell.com
> > > Application Processor Systems Engineering,
> > > Marvell Technology Group Ltd.
> > >
> >
> > ---
> > Guennadi Liakhovetski, Ph.D.
> > Freelance Open-Source Software Developer
> > http://www.open-technology.de/
> >
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to support MIPI CSI-2 controller in soc-camera framework?
  2011-01-19  6:32                     ` How to support MIPI CSI-2 controller in soc-camera framework? Qing Xu
@ 2011-01-19 16:20                       ` Guennadi Liakhovetski
  2011-01-20  2:26                         ` Qing Xu
  0 siblings, 1 reply; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-01-19 16:20 UTC (permalink / raw)
  To: Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

(a general request: could you please configure your mailer to wrap lines 
at somewhere around 70 characters?)

On Tue, 18 Jan 2011, Qing Xu wrote:

> Hi,
> 
> Our chip support both MIPI and parallel interface. The HW connection logic is
> sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2 
> things) -> our camera controller (handle DMA transmitting/ fmt/ size 
> things). Now, I find the driver of sh_mobile_csi2.c, it seems like a 
> CSI-2 driver, but I don't quite understand how it works:
> 1) how the host controller call into this driver?

This is a normal v4l2-subdev driver. Platform data for the 
sh_mobile_ceu_camera driver provides a link to CSI2 driver data, then the 
host driver loads the CSI2 driver, which then links itself into the 
subdevice list. Look at arch/arm/mach-shmobile/board-ap4evb.c how the data 
is linked:

static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
	.flags = SH_CEU_FLAG_USE_8BIT_BUS,
	.csi2_dev = &csi2_device.dev,
};

and in the hosz driver drivers/media/video/sh_mobile_ceu_camera.c look in 
the sh_mobile_ceu_probe function below the lines:

	csi2 = pcdev->pdata->csi2_dev;
	if (csi2) {
...


> 2) how the host controller/sensor negotiate MIPI variable with this 
> driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen), 
> number of lanes...?

Since I only had a limited number of MIPI setups, I haven't implemented 
maximum flexibility. A part of the parameters is hard-coded, another part 
is provided in the platform driver, yet another part is calculated 
dynamically.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to support MIPI CSI-2 controller in soc-camera framework?
  2011-01-19 16:20                       ` Guennadi Liakhovetski
@ 2011-01-20  2:26                         ` Qing Xu
  2011-02-04  7:45                           ` Guennadi Liakhovetski
  0 siblings, 1 reply; 21+ messages in thread
From: Qing Xu @ 2011-01-20  2:26 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi,

(a general request: could you please configure your mailer to wrap
Lines at somewhere around 70 characters?)
very sorry for the un-convenience!

Thanks for your description! I could verify and try your way on our
CSI-2 driver.
Also, our another chip's camera controller support both MIPI and
traditional parallel(H_sync/V_sync) interface, we hope host can
negotiate with sensor on MIPI configure, as the sensor could be
parallel interface or MIPI interface, so I have a proposal as
follow:

in soc_camera.h, SOCAM_XXX defines all HW connection properties,
I thing MIPI(1/2/3/4 lanes) is also a kind of HW connection
property, and it is mutex with parallel properties(if sensor
support mipi connection, the HW signal has no parallel property
any more), once host controller find subdev support MIPI, it will
enable MIPI functional itself, and if subdev only support parallel,
it will enable parallel functional itself.
(you can find the proposal in the code which I have sent, refer to pxa955_cam_set_bus_param() in pxa955_cam.c, ov5642_query_bus_param
In ov5642.c)

--- a/drivers/media/video/soc_camera.c
+++ b/drivers/media/video/soc_camera.c
unsigned long soc_camera_apply_sensor_flags(struct soc_camera_link *icl,
                if (f == SOCAM_PCLK_SAMPLE_RISING || f == SOCAM_PCLK_SAMPLE_FALLING)
                        flags ^= SOCAM_PCLK_SAMPLE_RISING | SOCAM_PCLK_SAMPLE_FALLING;
        }
+       if (icl->flags & SOCAM_MIPI) {
+               flags &= SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
+                                       | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE;
+       }

        return flags;
 }

--- a/include/media/soc_camera.h
+++ b/include/media/soc_camera.h

 #define SOCAM_DATA_ACTIVE_HIGH         (1 << 14)
 #define SOCAM_DATA_ACTIVE_LOW          (1 << 15)

+#define SOCAM_MIPI             (1 << 16)
+#define SOCAM_MIPI_1LANE               (1 << 17)
+#define SOCAM_MIPI_2LANE               (1 << 18)
+#define SOCAM_MIPI_3LANE               (1 << 19)
+#define SOCAM_MIPI_4LANE               (1 << 20)
+

 static inline unsigned long soc_camera_bus_param_compatible(
                        unsigned long camera_flags, unsigned long bus_flags)
 {
-       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode;
+       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode, mipi;

        common_flags = camera_flags & bus_flags;

@@ -261,8 +267,10 @@ static inline unsigned long soc_camera_bus_param_compatible(
        data = common_flags & (SOCAM_DATA_ACTIVE_HIGH | SOCAM_DATA_ACTIVE_LOW);
        mode = common_flags & (SOCAM_MASTER | SOCAM_SLAVE);
        buswidth = common_flags & SOCAM_DATAWIDTH_MASK;
+       mipi = common_flags & (SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
+                                               | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE);

-       return (!hsync || !vsync || !pclk || !data || !mode || !buswidth) ? 0 :
+       return ((!hsync || !vsync || !pclk || !data || !mode || !buswidth) && (!mipi)) ? 0 :
                common_flags;
 }


-----Original Message-----
From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
Sent: 2011年1月20日 0:20
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: Re: How to support MIPI CSI-2 controller in soc-camera framework?

(a general request: could you please configure your mailer to wrap lines
at somewhere around 70 characters?)

On Tue, 18 Jan 2011, Qing Xu wrote:

> Hi,
>
> Our chip support both MIPI and parallel interface. The HW connection logic is
> sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2
> things) -> our camera controller (handle DMA transmitting/ fmt/ size
> things). Now, I find the driver of sh_mobile_csi2.c, it seems like a
> CSI-2 driver, but I don't quite understand how it works:
> 1) how the host controller call into this driver?

This is a normal v4l2-subdev driver. Platform data for the
sh_mobile_ceu_camera driver provides a link to CSI2 driver data, then the
host driver loads the CSI2 driver, which then links itself into the
subdevice list. Look at arch/arm/mach-shmobile/board-ap4evb.c how the data
is linked:

static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
        .flags = SH_CEU_FLAG_USE_8BIT_BUS,
        .csi2_dev = &csi2_device.dev,
};

and in the hosz driver drivers/media/video/sh_mobile_ceu_camera.c look in
the sh_mobile_ceu_probe function below the lines:

        csi2 = pcdev->pdata->csi2_dev;
        if (csi2) {
...


> 2) how the host controller/sensor negotiate MIPI variable with this
> driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen),
> number of lanes...?

Since I only had a limited number of MIPI setups, I haven't implemented
maximum flexibility. A part of the parameters is hard-coded, another part
is provided in the platform driver, yet another part is calculated
dynamically.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to support MIPI CSI-2 controller in soc-camera framework?
  2011-01-20  2:26                         ` Qing Xu
@ 2011-02-04  7:45                           ` Guennadi Liakhovetski
  2011-02-09  3:36                             ` Qing Xu
  2011-04-07  5:21                             ` Kassey Li
  0 siblings, 2 replies; 21+ messages in thread
From: Guennadi Liakhovetski @ 2011-02-04  7:45 UTC (permalink / raw)
  To: Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi

Sorry for the delay first of all.

On Wed, 19 Jan 2011, Qing Xu wrote:

> Hi,
> 
> (a general request: could you please configure your mailer to wrap
> Lines at somewhere around 70 characters?)
> very sorry for the un-convenience!
> 
> Thanks for your description! I could verify and try your way on our
> CSI-2 driver.
> Also, our another chip's camera controller support both MIPI and
> traditional parallel(H_sync/V_sync) interface, we hope host can
> negotiate with sensor on MIPI configure, as the sensor could be
> parallel interface or MIPI interface, so I have a proposal as
> follow:
> 
> in soc_camera.h, SOCAM_XXX defines all HW connection properties,
> I thing MIPI(1/2/3/4 lanes) is also a kind of HW connection
> property, and it is mutex with parallel properties(if sensor
> support mipi connection, the HW signal has no parallel property
> any more), once host controller find subdev support MIPI, it will
> enable MIPI functional itself, and if subdev only support parallel,
> it will enable parallel functional itself.

I think, yes, we can add MIPI definitions to soc_camera.h, similar to your 
proposal below, but I don't think we need the "SOCAM_MIPI" macro itself, 
maybe define it as a mask

#define SOCAM_MIPI (SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE)

Also, the decision "if MIPI supported by the client always prefer it over 
the parallel connection" doesn't seem to be a meaningful thing to do in 
the driver to me. I would leave the choice of a connection to the platform 
code. In that case your addition to the soc_camera_apply_sensor_flags() 
function becomes unneeded.

Makes sense?

Thanks
Guennadi

> (you can find the proposal in the code which I have sent, refer to 
> pxa955_cam_set_bus_param() in pxa955_cam.c, ov5642_query_bus_param
> In ov5642.c)
> 
> --- a/drivers/media/video/soc_camera.c
> +++ b/drivers/media/video/soc_camera.c
> unsigned long soc_camera_apply_sensor_flags(struct soc_camera_link *icl,
>                 if (f == SOCAM_PCLK_SAMPLE_RISING || f == SOCAM_PCLK_SAMPLE_FALLING)
>                         flags ^= SOCAM_PCLK_SAMPLE_RISING | SOCAM_PCLK_SAMPLE_FALLING;
>         }
> +       if (icl->flags & SOCAM_MIPI) {
> +               flags &= SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                       | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE;
> +       }
> 
>         return flags;
>  }
> 
> --- a/include/media/soc_camera.h
> +++ b/include/media/soc_camera.h
> 
>  #define SOCAM_DATA_ACTIVE_HIGH         (1 << 14)
>  #define SOCAM_DATA_ACTIVE_LOW          (1 << 15)
> 
> +#define SOCAM_MIPI             (1 << 16)
> +#define SOCAM_MIPI_1LANE               (1 << 17)
> +#define SOCAM_MIPI_2LANE               (1 << 18)
> +#define SOCAM_MIPI_3LANE               (1 << 19)
> +#define SOCAM_MIPI_4LANE               (1 << 20)
> +
> 
>  static inline unsigned long soc_camera_bus_param_compatible(
>                         unsigned long camera_flags, unsigned long bus_flags)
>  {
> -       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode;
> +       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode, mipi;
> 
>         common_flags = camera_flags & bus_flags;
> 
> @@ -261,8 +267,10 @@ static inline unsigned long soc_camera_bus_param_compatible(
>         data = common_flags & (SOCAM_DATA_ACTIVE_HIGH | SOCAM_DATA_ACTIVE_LOW);
>         mode = common_flags & (SOCAM_MASTER | SOCAM_SLAVE);
>         buswidth = common_flags & SOCAM_DATAWIDTH_MASK;
> +       mipi = common_flags & (SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                               | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE);
> 
> -       return (!hsync || !vsync || !pclk || !data || !mode || !buswidth) ? 0 :
> +       return ((!hsync || !vsync || !pclk || !data || !mode || !buswidth) && (!mipi)) ? 0 :
>                 common_flags;
>  }
> 
> 
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011Äê1ÔÂ20ÈÕ 0:20
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: Re: How to support MIPI CSI-2 controller in soc-camera framework?
> 
> (a general request: could you please configure your mailer to wrap lines
> at somewhere around 70 characters?)
> 
> On Tue, 18 Jan 2011, Qing Xu wrote:
> 
> > Hi,
> >
> > Our chip support both MIPI and parallel interface. The HW connection logic is
> > sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2
> > things) -> our camera controller (handle DMA transmitting/ fmt/ size
> > things). Now, I find the driver of sh_mobile_csi2.c, it seems like a
> > CSI-2 driver, but I don't quite understand how it works:
> > 1) how the host controller call into this driver?
> 
> This is a normal v4l2-subdev driver. Platform data for the
> sh_mobile_ceu_camera driver provides a link to CSI2 driver data, then the
> host driver loads the CSI2 driver, which then links itself into the
> subdevice list. Look at arch/arm/mach-shmobile/board-ap4evb.c how the data
> is linked:
> 
> static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
>         .flags = SH_CEU_FLAG_USE_8BIT_BUS,
>         .csi2_dev = &csi2_device.dev,
> };
> 
> and in the hosz driver drivers/media/video/sh_mobile_ceu_camera.c look in
> the sh_mobile_ceu_probe function below the lines:
> 
>         csi2 = pcdev->pdata->csi2_dev;
>         if (csi2) {
> ...
> 
> 
> > 2) how the host controller/sensor negotiate MIPI variable with this
> > driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen),
> > number of lanes...?
> 
> Since I only had a limited number of MIPI setups, I haven't implemented
> maximum flexibility. A part of the parameters is hard-coded, another part
> is provided in the platform driver, yet another part is calculated
> dynamically.
> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to support MIPI CSI-2 controller in soc-camera framework?
  2011-02-04  7:45                           ` Guennadi Liakhovetski
@ 2011-02-09  3:36                             ` Qing Xu
  2011-04-07  5:21                             ` Kassey Li
  1 sibling, 0 replies; 21+ messages in thread
From: Qing Xu @ 2011-02-09  3:36 UTC (permalink / raw)
  To: Guennadi Liakhovetski; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi, Guennadi,

Yes, you are right, the implementation in soc_camera_apply_sensor_flags
is not well considered, it would be removed, it is better like
SOCAM_DATAWIDTH_MASK implementation, what do you think?

And, it is reasonable that if sensor and host controller support both
MIPI/parallel interface, we can leave the choice into platform code,
and if sensor and host controller support only one interface, such as
our controller support MIPI interface, not support parallel interface,
but sensor supports both parallel and MIPI interface, we could directly
use soc_camera_bus_param_compatible() to find out which is the common
property supported by both. What do you think?

-Qing

-----Original Message-----
From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
Sent: 2011年2月4日 15:45
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: RE: How to support MIPI CSI-2 controller in soc-camera framework?

Hi

Sorry for the delay first of all.

On Wed, 19 Jan 2011, Qing Xu wrote:

> Hi,
>
> (a general request: could you please configure your mailer to wrap
> Lines at somewhere around 70 characters?)
> very sorry for the un-convenience!
>
> Thanks for your description! I could verify and try your way on our
> CSI-2 driver.
> Also, our another chip's camera controller support both MIPI and
> traditional parallel(H_sync/V_sync) interface, we hope host can
> negotiate with sensor on MIPI configure, as the sensor could be
> parallel interface or MIPI interface, so I have a proposal as
> follow:
>
> in soc_camera.h, SOCAM_XXX defines all HW connection properties,
> I thing MIPI(1/2/3/4 lanes) is also a kind of HW connection
> property, and it is mutex with parallel properties(if sensor
> support mipi connection, the HW signal has no parallel property
> any more), once host controller find subdev support MIPI, it will
> enable MIPI functional itself, and if subdev only support parallel,
> it will enable parallel functional itself.

I think, yes, we can add MIPI definitions to soc_camera.h, similar to your
proposal below, but I don't think we need the "SOCAM_MIPI" macro itself,
maybe define it as a mask

#define SOCAM_MIPI (SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE)

Also, the decision "if MIPI supported by the client always prefer it over
the parallel connection" doesn't seem to be a meaningful thing to do in
the driver to me. I would leave the choice of a connection to the platform
code. In that case your addition to the soc_camera_apply_sensor_flags()
function becomes unneeded.

Makes sense?

Thanks
Guennadi

> (you can find the proposal in the code which I have sent, refer to
> pxa955_cam_set_bus_param() in pxa955_cam.c, ov5642_query_bus_param
> In ov5642.c)
>
> --- a/drivers/media/video/soc_camera.c
> +++ b/drivers/media/video/soc_camera.c
> unsigned long soc_camera_apply_sensor_flags(struct soc_camera_link *icl,
>                 if (f == SOCAM_PCLK_SAMPLE_RISING || f == SOCAM_PCLK_SAMPLE_FALLING)
>                         flags ^= SOCAM_PCLK_SAMPLE_RISING | SOCAM_PCLK_SAMPLE_FALLING;
>         }
> +       if (icl->flags & SOCAM_MIPI) {
> +               flags &= SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                       | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE;
> +       }
>
>         return flags;
>  }
>
> --- a/include/media/soc_camera.h
> +++ b/include/media/soc_camera.h
>
>  #define SOCAM_DATA_ACTIVE_HIGH         (1 << 14)
>  #define SOCAM_DATA_ACTIVE_LOW          (1 << 15)
>
> +#define SOCAM_MIPI             (1 << 16)
> +#define SOCAM_MIPI_1LANE               (1 << 17)
> +#define SOCAM_MIPI_2LANE               (1 << 18)
> +#define SOCAM_MIPI_3LANE               (1 << 19)
> +#define SOCAM_MIPI_4LANE               (1 << 20)
> +
>
>  static inline unsigned long soc_camera_bus_param_compatible(
>                         unsigned long camera_flags, unsigned long bus_flags)
>  {
> -       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode;
> +       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode, mipi;
>
>         common_flags = camera_flags & bus_flags;
>
> @@ -261,8 +267,10 @@ static inline unsigned long soc_camera_bus_param_compatible(
>         data = common_flags & (SOCAM_DATA_ACTIVE_HIGH | SOCAM_DATA_ACTIVE_LOW);
>         mode = common_flags & (SOCAM_MASTER | SOCAM_SLAVE);
>         buswidth = common_flags & SOCAM_DATAWIDTH_MASK;
> +       mipi = common_flags & (SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                               | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE);
>
> -       return (!hsync || !vsync || !pclk || !data || !mode || !buswidth) ? 0 :
> +       return ((!hsync || !vsync || !pclk || !data || !mode || !buswidth) && (!mipi)) ? 0 :
>                 common_flags;
>  }
>
>
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011Äê1ÔÂ20ÈÕ 0:20
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: Re: How to support MIPI CSI-2 controller in soc-camera framework?
>
> (a general request: could you please configure your mailer to wrap lines
> at somewhere around 70 characters?)
>
> On Tue, 18 Jan 2011, Qing Xu wrote:
>
> > Hi,
> >
> > Our chip support both MIPI and parallel interface. The HW connection logic is
> > sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2
> > things) -> our camera controller (handle DMA transmitting/ fmt/ size
> > things). Now, I find the driver of sh_mobile_csi2.c, it seems like a
> > CSI-2 driver, but I don't quite understand how it works:
> > 1) how the host controller call into this driver?
>
> This is a normal v4l2-subdev driver. Platform data for the
> sh_mobile_ceu_camera driver provides a link to CSI2 driver data, then the
> host driver loads the CSI2 driver, which then links itself into the
> subdevice list. Look at arch/arm/mach-shmobile/board-ap4evb.c how the data
> is linked:
>
> static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
>         .flags = SH_CEU_FLAG_USE_8BIT_BUS,
>         .csi2_dev = &csi2_device.dev,
> };
>
> and in the hosz driver drivers/media/video/sh_mobile_ceu_camera.c look in
> the sh_mobile_ceu_probe function below the lines:
>
>         csi2 = pcdev->pdata->csi2_dev;
>         if (csi2) {
> ...
>
>
> > 2) how the host controller/sensor negotiate MIPI variable with this
> > driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen),
> > number of lanes...?
>
> Since I only had a limited number of MIPI setups, I haven't implemented
> maximum flexibility. A part of the parameters is hard-coded, another part
> is provided in the platform driver, yet another part is calculated
> dynamically.
>
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
>

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: soc-camera jpeg support?
  2011-01-19 16:00                     ` soc-camera jpeg support? Guennadi Liakhovetski
@ 2011-04-07  3:12                       ` Kassey Li
  0 siblings, 0 replies; 21+ messages in thread
From: Kassey Li @ 2011-04-07  3:12 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi, Guennadi:

	Do you mean this ?

Signed-off-by: Qing Xu <qingx@marvell.com>
Signed-off-by: Kassey Lee <ygli@marvell.com>
---
 drivers/media/video/soc_mediabus.c |    9 +++++++++
 include/media/soc_mediabus.h       |    1 +
 include/media/v4l2-mediabus.h      |    1 +
 3 files changed, 11 insertions(+), 0 deletions(-)

diff --git a/drivers/media/video/soc_mediabus.c b/drivers/media/video/soc_mediabus.c
index 9139121..a33f9ec 100644
--- a/drivers/media/video/soc_mediabus.c
+++ b/drivers/media/video/soc_mediabus.c
@@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
 		.packing		= SOC_MBUS_PACKING_2X8_PADLO,
 		.order			= SOC_MBUS_ORDER_BE,
 	},
+	[MBUS_IDX(JPEG_1X8)] = {
+		.fourcc                 = V4L2_PIX_FMT_JPEG,
+		.name                   = "JPEG",
+		.bits_per_sample        = 8,
+		.packing                = SOC_MBUS_PACKING_VARIABLE,
+		.order                  = SOC_MBUS_ORDER_LE,
+	},
 };
 
 s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf) @@ -141,6 +148,8 @@ s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
 	case SOC_MBUS_PACKING_2X8_PADLO:
 	case SOC_MBUS_PACKING_EXTEND16:
 		return width * 2;
+	case SOC_MBUS_PACKING_VARIABLE:
+		return 0;
 	}
 	return -EINVAL;
 }
diff --git a/include/media/soc_mediabus.h b/include/media/soc_mediabus.h index 037cd7b..d351408 100644
--- a/include/media/soc_mediabus.h
+++ b/include/media/soc_mediabus.h
@@ -29,6 +29,7 @@ enum soc_mbus_packing {
 	SOC_MBUS_PACKING_2X8_PADHI,
 	SOC_MBUS_PACKING_2X8_PADLO,
 	SOC_MBUS_PACKING_EXTEND16,
+	SOC_MBUS_PACKING_VARIABLE,
 };
 
 /**
diff --git a/include/media/v4l2-mediabus.h b/include/media/v4l2-mediabus.h index 8e65598..f0d9146 100644
--- a/include/media/v4l2-mediabus.h
+++ b/include/media/v4l2-mediabus.h
@@ -54,6 +54,7 @@ enum v4l2_mbus_pixelcode {
 	V4L2_MBUS_FMT_YVYU8_1_5X8,
 	V4L2_MBUS_FMT_UYVY8_1_5X8,
 	V4L2_MBUS_FMT_VYUY8_1_5X8,
+	V4L2_MBUS_FMT_JPEG_1X8,
 };


Best regards
Kassey Lee
Email: ygli@marvell.com
Application Processor Systems Engineering, Marvell Technology Group Ltd.
Shanghai, China.

-----Original Message-----
From: linux-media-owner@vger.kernel.org [mailto:linux-media-owner@vger.kernel.org] On Behalf Of Guennadi Liakhovetski
Sent: 2011年1月20日 0:00
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: RE: soc-camera jpeg support?

On Tue, 18 Jan 2011, Qing Xu wrote:

> 
> Sorry, which solution you would like me to implement?
> (1) is to add SOC_MBUS_PACKING_VARIABLE define and add error code in 
> soc_mbus_bytes_per_line(),
> and (2) is to implement int (*try_fmt)(struct v4l2_subdev *sd, struct 
> v4l2_format *fmt); in subdev, directly get V4L2_PIX_FMT_JPEG info from 
> host driver.

Number (1), please. Sensor drivers should not use v4l2_format, only 
mediabus formats. Your host driver should proceed as usual: if the user is 
requesting the JPEG fourcc format from it, it should look in the format 
translation table, find there the respective JPEG mediabus code, and 
configure the sensor with it. Only when calculating buffer sizes the host 
driver will have to treat JPEG specially. This way you have to add 3 
things to the generic code: definitions for JPEG mediabus code and a 
variable-line length packing, and an entry to the fourcc-mediabus 
translation table.

Thanks
Guennadi

> 
> -Qing
> 
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011年1月19日 1:31
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: RE: soc-camera jpeg support?
> 
> Thanks for the code! With it at hand it is going to be easier to
> understand and evaluate changes, that you propose to the generic modules.
> 
> On Mon, 17 Jan 2011, Qing Xu wrote:
> 
> > Hi, Guennadi,
> >
> > Oh, yes, I agree with you, you are right, it is really not that simple,
> > JPEG is always a headache,:(, as it is quite different from original
> > yuv/rgb format, it has neither fixed bits-per-sample, nor fixed
> > packing/bytes-per-line/per-frame, so when coding, I just hack value of
> > .bits_per_sample and .packing, in fact, you will see in our host driver,
> > if we find it is JPEG, will ignore bytes-per-line value, for example, in
> > pxa955_videobuf_prepare(), for jpeg, we always allocate fixed buffer
> > size for it (or, a better method is to allocate buffer size =
> > width*height/jpeg-compress-ratio).
> >
> > I have 2 ideas:
> > 1) Do you think it is reasonable to add a not-fixed define into soc_mbus_packing:
> > enum soc_mbus_packing {
> >         SOC_MBUS_PACKING_NOT_FIXED,
> >         ...
> > };
> > And then, .bits_per_sample      = 0, /* indicate that sample bits is not-fixed*/
> > And, in function:
> > s32 soc_mbus_bytes_per_line(u32 width, const struct soc_mbus_pixelfmt *mf)
> > {
> >         switch (mf->packing) {
> >                 case SOC_MBUS_PACKING_NOT_FIXED:
> >                         return 0;
> >                 case SOC_MBUS_PACKING_NONE:
> >                         return width * mf->bits_per_sample / 8;
> >                 ...
> >         }
> >         return -EINVAL;
> > }
> >
> > 2) Or, an workaround in sensor(ov5642.c), is to implement:
> > int (*try_fmt)(struct v4l2_subdev *sd, struct v4l2_format *fmt);
> > use the member (fmt->pix->pixelformat == V4L2_PIX_FMT_JPEG) to find out
> > if it is jpeg. And in host driver, it calls try_fmt() into sensor. In
> > this way, no need to change soc-camera common code, but I feel that it
> > goes against with your xxx_mbus_xxx design purpose, right?
> 
> I actually prefer this one, but with an addition of V4L2_MBUS_FMT_JPEG_1X8
> as per your additional üatch, but, please, also add a new packing, e.g.,
> SOC_MBUS_PACKING_COMPRESSED (or SOC_MBUS_PACKING_VARIABLE?). So, that we
> can cleanly translate between V4L2_MBUS_FMT_JPEG_1X8 and
> V4L2_PIX_FMT_JPEG, but host drivers, that want to support this, will have
> to know to calculate frame sizes in some special way, not just aborting,
> if soc_mbus_bytes_per_line() returned an error. We might also consider
> returning a different error code for this case, e.g., we could change the
> general error case to return -ENOENT, and use -EINVAL for cases like JPEG,
> where data is valid, but no valid calculation is possible.
> 
> Thanks
> Guennadi
> 
> > What do you think?
> >
> > Please check the attachment, it is our host camera controller driver and sensor.
> >
> > Thanks a lot!
> > -Qing
> >
> > -----Original Message-----
> > From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> > Sent: 2011Äê1ÔÂ18ÈÕ 1:39
> > To: Qing Xu
> > Cc: Laurent Pinchart; Linux Media Mailing List
> > Subject: Re: soc-camera jpeg support?
> >
> > On Mon, 17 Jan 2011, Qing Xu wrote:
> >
> > > Hi,
> > >
> > > Many of our sensors support directly outputting JPEG data to camera
> > > controller, do you feel it's reasonable to add jpeg support into
> > > soc-camera? As it seems that there is no define in v4l2-mediabus.h which
> > > is suitable for our case.
> >
> > In principle I have nothing against this, but, I'm afraid, it is not quite
> > that simple. I haven't worked with such sensors myself, but, AFAIU, the
> > JPEG image format doesn't have fixed bytes-per-line and bytes-per-frame
> > values. If you add it like you propose, this would mean, that it just
> > delivers "normal" 8 bits per pixel images. OTOH, soc_mbus_bytes_per_line()
> > would just return -EINVAL for your JPEG format, so, unsupporting drivers
> > would just error out, which is not all that bad. When the first driver
> > decides to support JPEG, they will have to handle that error. But maybe
> > we'll want to return a special error code for it.
> >
> > But, in fact, that is in a way my problem with your patches: you propose
> > extensions to generic code without showing how this is going to be used in
> > specific drivers. Could you show us your host driver, please? I don't
> > think this is still the pxa27x driver, is it?
> >
> > Thanks
> > Guennadi
> >
> > >
> > > Such as:
> > > --- a/drivers/media/video/soc_mediabus.c
> > > +++ b/drivers/media/video/soc_mediabus.c
> > > @@ -130,6 +130,13 @@ static const struct soc_mbus_pixelfmt mbus_fmt[] = {
> > >                 .packing                = SOC_MBUS_PACKING_2X8_PADLO,
> > >                 .order                  = SOC_MBUS_ORDER_BE,
> > >         },
> > > +       [MBUS_IDX(JPEG_1X8)] = {
> > > +               .fourcc                 = V4L2_PIX_FMT_JPEG,
> > > +               .name                   = "JPEG",
> > > +               .bits_per_sample        = 8,
> > > +               .packing                = SOC_MBUS_PACKING_NONE,
> > > +               .order                  = SOC_MBUS_ORDER_LE,
> > > +       },
> > >  };
> > >
> > > --- a/include/media/v4l2-mediabus.h
> > > +++ b/include/media/v4l2-mediabus.h
> > > @@ -41,6 +41,7 @@ enum v4l2_mbus_pixelcode {
> > >         V4L2_MBUS_FMT_SBGGR10_2X8_PADHI_BE,
> > >         V4L2_MBUS_FMT_SBGGR10_2X8_PADLO_BE,
> > >         V4L2_MBUS_FMT_SGRBG8_1X8,
> > > +       V4L2_MBUS_FMT_JPEG_1X8,
> > >  };
> > >
> > > Any ideas will be appreciated!
> > > Thanks!
> > > Qing Xu
> > >
> > > Email: qingx@marvell.com
> > > Application Processor Systems Engineering,
> > > Marvell Technology Group Ltd.
> > >
> >
> > ---
> > Guennadi Liakhovetski, Ph.D.
> > Freelance Open-Source Software Developer
> > http://www.open-technology.de/
> >
> 
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* RE: How to support MIPI CSI-2 controller in soc-camera framework?
  2011-02-04  7:45                           ` Guennadi Liakhovetski
  2011-02-09  3:36                             ` Qing Xu
@ 2011-04-07  5:21                             ` Kassey Li
  1 sibling, 0 replies; 21+ messages in thread
From: Kassey Li @ 2011-04-07  5:21 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Qing Xu; +Cc: Laurent Pinchart, Linux Media Mailing List

Hi, Guennadi:
	Sorry for the late!
	"I would leave the choice of a connection to the platform 
code."  I agree with you on this, since HW board will only support MIPI or parallel even the sensor
can support both.

	Please review ! thanks for your time.

Subject: [PATCH] V4L/DVB: V4L2: add MIPI bus flags

Signed-off-by: Kassey Lee <ygli@marvell.com>
Signed-off-by: Qing Xu <qingx@marvell.com>
---
 include/media/soc_camera.h |   13 +++++++++----
 1 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/media/soc_camera.h b/include/media/soc_camera.h
index 9386db8..480e3e9 100644
--- a/include/media/soc_camera.h
+++ b/include/media/soc_camera.h
@@ -95,7 +95,12 @@ struct soc_camera_host_ops {
 #define SOCAM_SENSOR_INVERT_HSYNC	(1 << 2)
 #define SOCAM_SENSOR_INVERT_VSYNC	(1 << 3)
 #define SOCAM_SENSOR_INVERT_DATA	(1 << 4)
-
+#define SOCAM_MIPI_1LANE	(1 << 17)
+#define SOCAM_MIPI_2LANE	(1 << 18)
+#define SOCAM_MIPI_3LANE	(1 << 19)
+#define SOCAM_MIPI_4LANE	(1 << 20)
+#define SOCAM_MIPI	(SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE | \
+			SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE)
 struct i2c_board_info;
 struct regulator_bulk_data;
 
@@ -259,7 +264,7 @@ static inline unsigned long soc_camera_bus_param_compatible(
 			unsigned long camera_flags, unsigned long bus_flags)
 {
 	unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode;
-
+	unsigned long mipi;
 	common_flags = camera_flags & bus_flags;
 
 	hsync = common_flags & (SOCAM_HSYNC_ACTIVE_HIGH | SOCAM_HSYNC_ACTIVE_LOW);
@@ -268,8 +273,8 @@ static inline unsigned long soc_camera_bus_param_compatible(
 	data = common_flags & (SOCAM_DATA_ACTIVE_HIGH | SOCAM_DATA_ACTIVE_LOW);
 	mode = common_flags & (SOCAM_MASTER | SOCAM_SLAVE);
 	buswidth = common_flags & SOCAM_DATAWIDTH_MASK;
-
-	return (!hsync || !vsync || !pclk || !data || !mode || !buswidth) ? 0 :
+	mipi = common_flags & SOCAM_MIPI;
+	return ((!hsync || !vsync || !pclk || !data || !mode || !buswidth) && (!mipi)) ? 0 :
 		common_flags;
 }


Best regards
Kassey 
Email: ygli@marvell.com
Application Processor Systems Engineering, Marvell Technology Group Ltd.
Shanghai, China.


-----Original Message-----
From: linux-media-owner@vger.kernel.org [mailto:linux-media-owner@vger.kernel.org] On Behalf Of Guennadi Liakhovetski
Sent: 2011年2月4日 15:45
To: Qing Xu
Cc: Laurent Pinchart; Linux Media Mailing List
Subject: RE: How to support MIPI CSI-2 controller in soc-camera framework?

Hi

Sorry for the delay first of all.

On Wed, 19 Jan 2011, Qing Xu wrote:

> Hi,
> 
> (a general request: could you please configure your mailer to wrap
> Lines at somewhere around 70 characters?)
> very sorry for the un-convenience!
> 
> Thanks for your description! I could verify and try your way on our
> CSI-2 driver.
> Also, our another chip's camera controller support both MIPI and
> traditional parallel(H_sync/V_sync) interface, we hope host can
> negotiate with sensor on MIPI configure, as the sensor could be
> parallel interface or MIPI interface, so I have a proposal as
> follow:
> 
> in soc_camera.h, SOCAM_XXX defines all HW connection properties,
> I thing MIPI(1/2/3/4 lanes) is also a kind of HW connection
> property, and it is mutex with parallel properties(if sensor
> support mipi connection, the HW signal has no parallel property
> any more), once host controller find subdev support MIPI, it will
> enable MIPI functional itself, and if subdev only support parallel,
> it will enable parallel functional itself.

I think, yes, we can add MIPI definitions to soc_camera.h, similar to your 
proposal below, but I don't think we need the "SOCAM_MIPI" macro itself, 
maybe define it as a mask

#define SOCAM_MIPI (SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE)

Also, the decision "if MIPI supported by the client always prefer it over 
the parallel connection" doesn't seem to be a meaningful thing to do in 
the driver to me. I would leave the choice of a connection to the platform 
code. In that case your addition to the soc_camera_apply_sensor_flags() 
function becomes unneeded.

Makes sense?

Thanks
Guennadi

> (you can find the proposal in the code which I have sent, refer to 
> pxa955_cam_set_bus_param() in pxa955_cam.c, ov5642_query_bus_param
> In ov5642.c)
> 
> --- a/drivers/media/video/soc_camera.c
> +++ b/drivers/media/video/soc_camera.c
> unsigned long soc_camera_apply_sensor_flags(struct soc_camera_link *icl,
>                 if (f == SOCAM_PCLK_SAMPLE_RISING || f == SOCAM_PCLK_SAMPLE_FALLING)
>                         flags ^= SOCAM_PCLK_SAMPLE_RISING | SOCAM_PCLK_SAMPLE_FALLING;
>         }
> +       if (icl->flags & SOCAM_MIPI) {
> +               flags &= SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                       | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE;
> +       }
> 
>         return flags;
>  }
> 
> --- a/include/media/soc_camera.h
> +++ b/include/media/soc_camera.h
> 
>  #define SOCAM_DATA_ACTIVE_HIGH         (1 << 14)
>  #define SOCAM_DATA_ACTIVE_LOW          (1 << 15)
> 
> +#define SOCAM_MIPI             (1 << 16)
> +#define SOCAM_MIPI_1LANE               (1 << 17)
> +#define SOCAM_MIPI_2LANE               (1 << 18)
> +#define SOCAM_MIPI_3LANE               (1 << 19)
> +#define SOCAM_MIPI_4LANE               (1 << 20)
> +
> 
>  static inline unsigned long soc_camera_bus_param_compatible(
>                         unsigned long camera_flags, unsigned long bus_flags)
>  {
> -       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode;
> +       unsigned long common_flags, hsync, vsync, pclk, data, buswidth, mode, mipi;
> 
>         common_flags = camera_flags & bus_flags;
> 
> @@ -261,8 +267,10 @@ static inline unsigned long soc_camera_bus_param_compatible(
>         data = common_flags & (SOCAM_DATA_ACTIVE_HIGH | SOCAM_DATA_ACTIVE_LOW);
>         mode = common_flags & (SOCAM_MASTER | SOCAM_SLAVE);
>         buswidth = common_flags & SOCAM_DATAWIDTH_MASK;
> +       mipi = common_flags & (SOCAM_MIPI | SOCAM_MIPI_1LANE | SOCAM_MIPI_2LANE
> +                                               | SOCAM_MIPI_3LANE | SOCAM_MIPI_4LANE);
> 
> -       return (!hsync || !vsync || !pclk || !data || !mode || !buswidth) ? 0 :
> +       return ((!hsync || !vsync || !pclk || !data || !mode || !buswidth) && (!mipi)) ? 0 :
>                 common_flags;
>  }
> 
> 
> -----Original Message-----
> From: Guennadi Liakhovetski [mailto:g.liakhovetski@gmx.de]
> Sent: 2011Äê1ÔÂ20ÈÕ 0:20
> To: Qing Xu
> Cc: Laurent Pinchart; Linux Media Mailing List
> Subject: Re: How to support MIPI CSI-2 controller in soc-camera framework?
> 
> (a general request: could you please configure your mailer to wrap lines
> at somewhere around 70 characters?)
> 
> On Tue, 18 Jan 2011, Qing Xu wrote:
> 
> > Hi,
> >
> > Our chip support both MIPI and parallel interface. The HW connection logic is
> > sensor(such as ov5642) -> our MIPI controller(handle DPHY timing/ CSI-2
> > things) -> our camera controller (handle DMA transmitting/ fmt/ size
> > things). Now, I find the driver of sh_mobile_csi2.c, it seems like a
> > CSI-2 driver, but I don't quite understand how it works:
> > 1) how the host controller call into this driver?
> 
> This is a normal v4l2-subdev driver. Platform data for the
> sh_mobile_ceu_camera driver provides a link to CSI2 driver data, then the
> host driver loads the CSI2 driver, which then links itself into the
> subdevice list. Look at arch/arm/mach-shmobile/board-ap4evb.c how the data
> is linked:
> 
> static struct sh_mobile_ceu_info sh_mobile_ceu_info = {
>         .flags = SH_CEU_FLAG_USE_8BIT_BUS,
>         .csi2_dev = &csi2_device.dev,
> };
> 
> and in the hosz driver drivers/media/video/sh_mobile_ceu_camera.c look in
> the sh_mobile_ceu_probe function below the lines:
> 
>         csi2 = pcdev->pdata->csi2_dev;
>         if (csi2) {
> ...
> 
> 
> > 2) how the host controller/sensor negotiate MIPI variable with this
> > driver, such as D-PHY timing(hs_settle/hs_termen/clk_settle/clk_termen),
> > number of lanes...?
> 
> Since I only had a limited number of MIPI setups, I haven't implemented
> maximum flexibility. A part of the parameters is hard-coded, another part
> is provided in the platform driver, yet another part is calculated
> dynamically.
> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply related	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2011-04-07  5:25 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-01-07  2:49 [PATCH] [media] v4l: soc-camera: add enum-frame-size ioctl Qing Xu
2011-01-07 14:37 ` Guennadi Liakhovetski
2011-01-07 14:50   ` Laurent Pinchart
2011-01-07 15:11     ` Guennadi Liakhovetski
2011-01-09 19:24   ` Guennadi Liakhovetski
2011-01-10  2:45     ` Qing Xu
2011-01-10  2:47     ` Qing Xu
     [not found]       ` <Pine.LNX.4.64.1101100853490.24479@axis700.grange>
2011-01-10 10:33         ` Laurent Pinchart
2011-01-17  9:53           ` soc-camera jpeg support? Qing Xu
2011-01-17 17:38             ` Guennadi Liakhovetski
2011-01-18  6:06               ` Qing Xu
2011-01-18 17:30                 ` Guennadi Liakhovetski
2011-01-19  2:50                   ` Qing Xu
2011-01-19  6:32                     ` How to support MIPI CSI-2 controller in soc-camera framework? Qing Xu
2011-01-19 16:20                       ` Guennadi Liakhovetski
2011-01-20  2:26                         ` Qing Xu
2011-02-04  7:45                           ` Guennadi Liakhovetski
2011-02-09  3:36                             ` Qing Xu
2011-04-07  5:21                             ` Kassey Li
2011-01-19 16:00                     ` soc-camera jpeg support? Guennadi Liakhovetski
2011-04-07  3:12                       ` Kassey Li

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.