* [RFC 0/6] Common Display Framework-T
@ 2012-12-14 14:27 ` Tomi Valkeinen
  0 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Hi,

I have been testing the Common Display Framework on OMAP, and making changes
that I've discussed in the posts I've sent in reply to the CDF series from
Laurent. While my CDF code is rather hacky and not at all ready, I wanted to
post the code for comments, and also as reference code for my posts.

So here is CDF-T (Tomi-edition =). 

Changes compared to Laurent's CDF
---------------------------------

- DBI bus is removed. I don't have any DBI devices, and as I said in my posts,
  I didn't see a need for a real bus for DBI (or DSI). I thus also removed the
  DBI panels.

- Split display_entity into two parts: video_source, which is used to
  connect the display chips and panels, and display_entity, which represents
  either the panel or the whole pipeline (I'm not sure which is better).

- Added ops for DVI and DSI

- Added driver for TFP410, a DPI to DVI converter chip
- Added driver for generic DVI monitor
- Added driver for Taal, a DSI command mode panel

- Removed DISPLAY_ENTITY_STREAM_SINGLE_SHOT; instead there's an update() op.
  It's perhaps possible to use the single-shot method, but I went for a
  separate op to get an "update done" callback implemented easily.

Notes about the code
--------------------

As I said, the code is rather untidy, but I think it's more or less
understandable. I've skipped adding any helper funcs, for example to call the
ops, so it's easier for me to change the API.

I also (ab)used some of the omapdss panel headers to ease my hacking for
omapdss+cdf. These should be quite clear.

The code, including hacks to omapdss to make it use CDF, can be found at:
git://gitorious.org/linux-omap-dss2/linux.git work/dss-dev-model-cdf

The DSI part is very unfinished. I am able to use it to start up the Taal
panel, and send frames to the panel, but it's really missing lots of stuff.

About display_entity & video_source
-----------------------------------

I've discussed this split in previous posts, but I'll describe it here again.

As I see it, the connections between the display blocks, and the use of the
panel (and thus the whole pipeline), are very different things, and I think
they should be separated. So in my version I have struct video_source, used to
connect the blocks, and display_entity, representing the panel or the whole
pipeline. display_entity is probably not a good name for it anymore, but I
haven't come up with a better one yet.

As an example, let's look at chip-tfp410.c and panel-dvi.c.

tfp410 uses two video_sources, one for input and one for output. The input
comes from some other display block, in my case from OMAP display subsystem.
OMAP DSS has registered a DPI video_source, and the tfp410 driver will get this
source using video_source_find().

tfp410 registers its output as a video source, using video_source_register().
panel-dvi will in turn use video_source_find() to get it.

Both drivers use video_source to configure the input bus. For example, tfp410
configures the DPI bus between OMAP DSS and TFP410 using the set_data_lines op
to set the number of data lines on the DPI bus.

With the video_sources in place, the whole video pipeline can be used.
However, we still need to expose an API so that somebody can actually use the
pipeline. This is done with display_entity, which contains higher-level ops
that are not bus specific. The display_entity is registered by the panel at
the end of the chain.

In my model the display_entity ops go to the panel driver, which then calls
ops in the input video source to do the work. Laurent has objected to this
model, and would rather have the display_entity ops go to the DSS side (if I
understood right), from where they would propagate forward towards the panel.
I have still kept my model, as I don't see its downsides, nor do I see any use
for propagating the ops from DSS to the panel. But I'm happy to hear examples
of how that would be beneficial and could be used.


About the bus model
-------------------

Laurent's version uses a Linux bus for DBI. The idea here is that DBI is the
control bus for the panel/chip, and should thus be represented by a real bus.
While I agreed with this approach when we discussed it, I am now slightly
against it.

My reason is that DBI (or DSI or any other similar bus) is not really a
control bus, it is a video bus. It _can_ be used for control, but video is its
main purpose. This leads to the following practical issues:

- There's never a use for DBI for control only. DBI is always used either for
  video only, or for control+video.

- If DBI is used only for video, there's no DBI bus. How would DBI be
  configured in this case?

- If DBI is used for both control and video, we have two separate APIs for the
  bus. In theory it's possible to handle this, but in practice it may be
  impossible, especially for more complex buses like DSI.

So in my version I added DSI as a plain video_source, without a real Linux
bus. I think this model is a lot simpler, and works better.

 Tomi

Tomi Valkeinen (6):
  video: add display-core
  video: add generic dpi panel
  video: add tfp410
  video: add generic dvi monitor
  video: add taal panel
  video: add makefile & kconfig

 drivers/video/Kconfig                |    1 +
 drivers/video/Makefile               |    1 +
 drivers/video/display/Kconfig        |   26 +++
 drivers/video/display/Makefile       |    5 +
 drivers/video/display/chip-tfp410.c  |  164 +++++++++++++++
 drivers/video/display/display-core.c |  207 ++++++++++++++++++
 drivers/video/display/panel-dpi.c    |  155 ++++++++++++++
 drivers/video/display/panel-dvi.c    |  164 +++++++++++++++
 drivers/video/display/panel-taal.c   |  383 ++++++++++++++++++++++++++++++++++
 include/video/display.h              |  166 +++++++++++++++
 include/video/omap-panel-nokia-dsi.h |    4 +-
 include/video/omap-panel-tfp410.h    |    4 +
 include/video/panel-dpi.h            |   25 +++
 include/video/panel-dvi.h            |   18 ++
 14 files changed, 1321 insertions(+), 2 deletions(-)
 create mode 100644 drivers/video/display/Kconfig
 create mode 100644 drivers/video/display/Makefile
 create mode 100644 drivers/video/display/chip-tfp410.c
 create mode 100644 drivers/video/display/display-core.c
 create mode 100644 drivers/video/display/panel-dpi.c
 create mode 100644 drivers/video/display/panel-dvi.c
 create mode 100644 drivers/video/display/panel-taal.c
 create mode 100644 include/video/display.h
 create mode 100644 include/video/panel-dpi.h
 create mode 100644 include/video/panel-dvi.h

-- 
1.7.10.4


* [RFC 1/6] video: add display-core
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/display/display-core.c |  207 ++++++++++++++++++++++++++++++++++
 include/video/display.h              |  166 +++++++++++++++++++++++++++
 2 files changed, 373 insertions(+)
 create mode 100644 drivers/video/display/display-core.c
 create mode 100644 include/video/display.h

diff --git a/drivers/video/display/display-core.c b/drivers/video/display/display-core.c
new file mode 100644
index 0000000..5f8be30
--- /dev/null
+++ b/drivers/video/display/display-core.c
@@ -0,0 +1,207 @@
+/*
+ * Display Core
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/export.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/videomode.h>
+
+#include <video/display.h>
+
+/* -----------------------------------------------------------------------------
+ * Display Entity
+ */
+
+static LIST_HEAD(display_entity_list);
+static DEFINE_MUTEX(display_entity_mutex);
+
+struct display_entity *display_entity_get_first(void)
+{
+	if (list_empty(&display_entity_list))
+		return NULL;
+
+	return list_first_entry(&display_entity_list, struct display_entity,
+			list);
+}
+EXPORT_SYMBOL(display_entity_get_first);
+
+int display_entity_set_state(struct display_entity *entity,
+			     enum display_entity_state state)
+{
+	int ret;
+
+	if (entity->state == state)
+		return 0;
+
+	if (!entity->ops || !entity->ops->set_state)
+		return 0;
+
+	ret = entity->ops->set_state(entity, state);
+	if (ret < 0)
+		return ret;
+
+	entity->state = state;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(display_entity_set_state);
+
+int display_entity_get_modes(struct display_entity *entity,
+			     const struct videomode **modes)
+{
+	if (!entity->ops || !entity->ops->get_modes)
+		return 0;
+
+	return entity->ops->get_modes(entity, modes);
+}
+EXPORT_SYMBOL_GPL(display_entity_get_modes);
+
+int display_entity_get_size(struct display_entity *entity,
+			    unsigned int *width, unsigned int *height)
+{
+	if (!entity->ops || !entity->ops->get_size)
+		return -EOPNOTSUPP;
+
+	return entity->ops->get_size(entity, width, height);
+}
+EXPORT_SYMBOL_GPL(display_entity_get_size);
+
+static void display_entity_release(struct kref *ref)
+{
+	struct display_entity *entity =
+		container_of(ref, struct display_entity, ref);
+
+	if (entity->release)
+		entity->release(entity);
+}
+
+struct display_entity *display_entity_get(struct display_entity *entity)
+{
+	if (entity == NULL)
+		return NULL;
+
+	kref_get(&entity->ref);
+	return entity;
+}
+EXPORT_SYMBOL_GPL(display_entity_get);
+
+void display_entity_put(struct display_entity *entity)
+{
+	kref_put(&entity->ref, display_entity_release);
+}
+EXPORT_SYMBOL_GPL(display_entity_put);
+
+int __must_check __display_entity_register(struct display_entity *entity,
+					   struct module *owner)
+{
+	kref_init(&entity->ref);
+	entity->owner = owner;
+	entity->state = DISPLAY_ENTITY_STATE_OFF;
+
+	mutex_lock(&display_entity_mutex);
+	list_add(&entity->list, &display_entity_list);
+
+	mutex_unlock(&display_entity_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__display_entity_register);
+
+void display_entity_unregister(struct display_entity *entity)
+{
+	mutex_lock(&display_entity_mutex);
+
+	list_del(&entity->list);
+	mutex_unlock(&display_entity_mutex);
+
+	display_entity_put(entity);
+}
+EXPORT_SYMBOL_GPL(display_entity_unregister);
+
+/* -----------------------------------------------------------------------------
+ * Video Source
+ */
+
+static LIST_HEAD(video_source_list);
+static DEFINE_MUTEX(video_source_mutex);
+
+int __must_check __video_source_register(struct video_source *src,
+					   struct module *owner)
+{
+	kref_init(&src->ref);
+	src->owner = owner;
+
+	mutex_lock(&video_source_mutex);
+	list_add(&src->list, &video_source_list);
+
+	mutex_unlock(&video_source_mutex);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(__video_source_register);
+
+void video_source_unregister(struct video_source *src)
+{
+	mutex_lock(&video_source_mutex);
+
+	list_del(&src->list);
+	mutex_unlock(&video_source_mutex);
+
+	video_source_put(src);
+}
+EXPORT_SYMBOL_GPL(video_source_unregister);
+
+
+static void video_source_release(struct kref *ref)
+{
+	struct video_source *src =
+		container_of(ref, struct video_source, ref);
+
+	if (src->release)
+		src->release(src);
+}
+
+struct video_source *video_source_get(struct video_source *src)
+{
+	if (src == NULL)
+		return NULL;
+
+	kref_get(&src->ref);
+	return src;
+}
+EXPORT_SYMBOL_GPL(video_source_get);
+
+void video_source_put(struct video_source *src)
+{
+	kref_put(&src->ref, video_source_release);
+}
+EXPORT_SYMBOL_GPL(video_source_put);
+
+struct video_source *video_source_find(const char *name)
+{
+	struct video_source *src;
+
+	list_for_each_entry(src, &video_source_list, list) {
+		if (strcmp(src->name, name) == 0) {
+			kref_get(&src->ref);
+			return src;
+		}
+	}
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(video_source_find);
+
+MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>");
+MODULE_DESCRIPTION("Display Core");
+MODULE_LICENSE("GPL");
diff --git a/include/video/display.h b/include/video/display.h
new file mode 100644
index 0000000..b639fd0
--- /dev/null
+++ b/include/video/display.h
@@ -0,0 +1,166 @@
+/*
+ * Display Core
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __DISPLAY_H__
+#define __DISPLAY_H__
+
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <video/omapdss.h>
+
+struct display_entity;
+struct video_source;
+struct videomode;
+
+/* -----------------------------------------------------------------------------
+ * Display Entity
+ */
+
+/* Hack to get the first registered display entity */
+struct display_entity *display_entity_get_first(void);
+
+enum display_entity_state {
+	DISPLAY_ENTITY_STATE_OFF,
+	DISPLAY_ENTITY_STATE_STANDBY,
+	DISPLAY_ENTITY_STATE_ON,
+};
+
+struct display_entity_control_ops {
+	int (*set_state)(struct display_entity *ent,
+			 enum display_entity_state state);
+	int (*update)(struct display_entity *ent,
+			void (*callback)(int, void *), void *data);
+	int (*get_modes)(struct display_entity *ent,
+			 const struct videomode **modes);
+	int (*get_size)(struct display_entity *ent,
+			unsigned int *width, unsigned int *height);
+};
+
+struct display_entity {
+	struct list_head list;
+	struct device *dev;
+	struct module *owner;
+	struct kref ref;
+
+	const struct display_entity_control_ops *ops;
+
+	void(*release)(struct display_entity *ent);
+
+	enum display_entity_state state;
+};
+
+int display_entity_set_state(struct display_entity *entity,
+			     enum display_entity_state state);
+int display_entity_get_modes(struct display_entity *entity,
+			     const struct videomode **modes);
+int display_entity_get_size(struct display_entity *entity,
+			    unsigned int *width, unsigned int *height);
+
+struct display_entity *display_entity_get(struct display_entity *entity);
+void display_entity_put(struct display_entity *entity);
+
+int __must_check __display_entity_register(struct display_entity *entity,
+					   struct module *owner);
+void display_entity_unregister(struct display_entity *entity);
+
+#define display_entity_register(display_entity) \
+	__display_entity_register(display_entity, THIS_MODULE)
+
+
+/* -----------------------------------------------------------------------------
+ * Video Source
+ */
+
+enum video_source_stream_state {
+	DISPLAY_ENTITY_STREAM_STOPPED,
+	DISPLAY_ENTITY_STREAM_CONTINUOUS,
+};
+
+struct common_video_source_ops {
+	int (*set_stream)(struct video_source *src,
+			 enum video_source_stream_state state);
+};
+
+struct dpi_video_source_ops {
+	int (*set_videomode)(struct video_source *src,
+			const struct videomode *vm);
+	int (*set_data_lines)(struct video_source *src, int lines);
+};
+
+struct dsi_video_source_ops {
+	/* enable/disable dsi bus */
+	int (*enable)(struct video_source *src);
+	void (*disable)(struct video_source *src);
+
+	/* bus configuration */
+	int (*configure_pins)(struct video_source *src,
+			const struct omap_dsi_pin_config *pins);
+	int (*set_clocks)(struct video_source *src,
+			unsigned long ddr_clk,
+			unsigned long lp_clk);
+
+	void (*set_operation_mode)(struct video_source *src,
+			enum omap_dss_dsi_mode mode);
+	void (*set_pixel_format)(struct video_source *src,
+			enum omap_dss_dsi_pixel_format fmt);
+	void (*set_size)(struct video_source *src, u16 w, u16 h);
+
+	void (*enable_hs)(struct video_source *src, bool enable);
+
+	/* data transfer */
+	int (*dcs_write)(struct video_source *src, int channel,
+			u8 *data, size_t len);
+	int (*dcs_read)(struct video_source *src, int channel, u8 dcs_cmd,
+			u8 *data, size_t len);
+	int (*update)(struct video_source *src, int channel,
+			void (*callback)(int, void *), void *data);
+};
+
+struct dvi_video_source_ops {
+	int (*set_videomode)(struct video_source *src,
+			const struct videomode *vm);
+};
+
+struct video_source {
+	struct list_head list;
+	struct device *dev;
+	struct module *owner;
+	struct kref ref;
+
+	const char *name;
+
+	const struct common_video_source_ops *common_ops;
+
+	union {
+		const struct dpi_video_source_ops *dpi;
+		const struct dsi_video_source_ops *dsi;
+		const struct dvi_video_source_ops *dvi;
+	} ops;
+
+	void(*release)(struct video_source *src);
+};
+
+
+#define video_source_register(video_source) \
+	__video_source_register(video_source, THIS_MODULE)
+
+int __must_check __video_source_register(struct video_source *entity,
+					   struct module *owner);
+void video_source_unregister(struct video_source *entity);
+
+struct video_source *video_source_get(struct video_source *src);
+void video_source_put(struct video_source *src);
+
+struct video_source *video_source_find(const char *name);
+
+#endif /* __DISPLAY_H__ */
-- 
1.7.10.4


+};
+
+struct dvi_video_source_ops {
+	int (*set_videomode)(struct video_source *src,
+			const struct videomode *vm);
+};
+
+struct video_source {
+	struct list_head list;
+	struct device *dev;
+	struct module *owner;
+	struct kref ref;
+
+	const char *name;
+
+	const struct common_video_source_ops *common_ops;
+
+	union {
+		const struct dpi_video_source_ops *dpi;
+		const struct dsi_video_source_ops *dsi;
+		const struct dvi_video_source_ops *dvi;
+	} ops;
+
+	void (*release)(struct video_source *src);
+};
+
+
+#define video_source_register(video_source) \
+	__video_source_register(video_source, THIS_MODULE)
+
+int __must_check __video_source_register(struct video_source *entity,
+					   struct module *owner);
+void video_source_unregister(struct video_source *entity);
+
+struct video_source *video_source_get(struct video_source *src);
+void video_source_put(struct video_source *src);
+
+struct video_source *video_source_find(const char *name);
+
+#endif /* __DISPLAY_H__ */
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [RFC 2/6] video: add generic dpi panel
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/display/panel-dpi.c |  155 +++++++++++++++++++++++++++++++++++++
 include/video/panel-dpi.h         |   25 ++++++
 2 files changed, 180 insertions(+)
 create mode 100644 drivers/video/display/panel-dpi.c
 create mode 100644 include/video/panel-dpi.h

diff --git a/drivers/video/display/panel-dpi.c b/drivers/video/display/panel-dpi.c
new file mode 100644
index 0000000..824cd88
--- /dev/null
+++ b/drivers/video/display/panel-dpi.c
@@ -0,0 +1,155 @@
+/*
+ * DPI Display Panel
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <video/display.h>
+#include <video/panel-dpi.h>
+
+struct panel_dpi {
+	struct display_entity entity;
+	struct video_source *src;
+	const struct panel_dpi_platform_data *pdata;
+};
+
+#define to_panel_dpi(p)		container_of(p, struct panel_dpi, entity)
+
+static int panel_dpi_set_state(struct display_entity *entity,
+			       enum display_entity_state state)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+	struct video_source *src = panel->src;
+
+	switch (state) {
+	case DISPLAY_ENTITY_STATE_OFF:
+	case DISPLAY_ENTITY_STATE_STANDBY:
+		src->common_ops->set_stream(src,
+				DISPLAY_ENTITY_STREAM_STOPPED);
+		break;
+
+	case DISPLAY_ENTITY_STATE_ON:
+		src->common_ops->set_stream(src,
+				DISPLAY_ENTITY_STREAM_CONTINUOUS);
+		break;
+	}
+
+	return 0;
+}
+
+static int panel_dpi_get_modes(struct display_entity *entity,
+			       const struct videomode **modes)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+
+	*modes = panel->pdata->mode;
+	return 1;
+}
+
+static int panel_dpi_get_size(struct display_entity *entity,
+			      unsigned int *width, unsigned int *height)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+
+	*width = panel->pdata->width;
+	*height = panel->pdata->height;
+	return 0;
+}
+
+static const struct display_entity_control_ops panel_dpi_control_ops = {
+	.set_state = panel_dpi_set_state,
+	.get_modes = panel_dpi_get_modes,
+	.get_size = panel_dpi_get_size,
+};
+
+static void panel_dpi_release(struct display_entity *entity)
+{
+	struct panel_dpi *panel = to_panel_dpi(entity);
+
+	kfree(panel);
+}
+
+static int panel_dpi_remove(struct platform_device *pdev)
+{
+	struct panel_dpi *panel = platform_get_drvdata(pdev);
+
+	display_entity_unregister(&panel->entity);
+
+	if (panel->src) {
+		video_source_put(panel->src);
+		panel->src = NULL;
+	}
+
+	platform_set_drvdata(pdev, NULL);
+
+	return 0;
+}
+
+static int __devinit panel_dpi_probe(struct platform_device *pdev)
+{
+	const struct panel_dpi_platform_data *pdata = pdev->dev.platform_data;
+	struct panel_dpi *panel;
+	int ret;
+
+	if (pdata == NULL)
+		return -ENODEV;
+
+	panel = kzalloc(sizeof(*panel), GFP_KERNEL);
+	if (panel == NULL)
+		return -ENOMEM;
+
+	panel->pdata = pdata;
+
+	panel->src = video_source_find(pdata->video_source);
+	if (panel->src == NULL) {
+		printk("failed to get video source\n");
+		return -EINVAL;
+	}
+
+	panel->src->ops.dpi->set_data_lines(panel->src, 24);
+	panel->src->ops.dpi->set_videomode(panel->src, pdata->mode);
+
+	panel->entity.dev = &pdev->dev;
+	panel->entity.release = panel_dpi_release;
+	panel->entity.ops = &panel_dpi_control_ops;
+
+	ret = display_entity_register(&panel->entity);
+	if (ret < 0) {
+		kfree(panel);
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, panel);
+
+	return 0;
+}
+
+static const struct dev_pm_ops panel_dpi_dev_pm_ops = {
+};
+
+static struct platform_driver panel_dpi_driver = {
+	.probe = panel_dpi_probe,
+	.remove = panel_dpi_remove,
+	.driver = {
+		.name = "panel_dpi",
+		.owner = THIS_MODULE,
+		.pm = &panel_dpi_dev_pm_ops,
+	},
+};
+
+module_platform_driver(panel_dpi_driver);
+
+MODULE_AUTHOR("Laurent Pinchart <laurent.pinchart@ideasonboard.com>");
+MODULE_DESCRIPTION("DPI Display Panel");
+MODULE_LICENSE("GPL");
diff --git a/include/video/panel-dpi.h b/include/video/panel-dpi.h
new file mode 100644
index 0000000..0c5856e
--- /dev/null
+++ b/include/video/panel-dpi.h
@@ -0,0 +1,25 @@
+/*
+ * DPI Display Panel
+ *
+ * Copyright (C) 2012 Renesas Solutions Corp.
+ *
+ * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __PANEL_DPI_H__
+#define __PANEL_DPI_H__
+
+#include <linux/videomode.h>
+
+struct panel_dpi_platform_data {
+	const char *video_source;
+	unsigned long width;		/* Panel width in mm */
+	unsigned long height;		/* Panel height in mm */
+	const struct videomode *mode;
+};
+
+#endif /* __PANEL_DPI_H__ */
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [RFC 3/6] video: add tfp410
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/display/chip-tfp410.c |  164 +++++++++++++++++++++++++++++++++++
 include/video/omap-panel-tfp410.h   |    4 +
 2 files changed, 168 insertions(+)
 create mode 100644 drivers/video/display/chip-tfp410.c

diff --git a/drivers/video/display/chip-tfp410.c b/drivers/video/display/chip-tfp410.c
new file mode 100644
index 0000000..2f8bdb6
--- /dev/null
+++ b/drivers/video/display/chip-tfp410.c
@@ -0,0 +1,164 @@
+/*
+ * TFP410 DPI-to-DVI bridge
+ *
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Contacts: Tomi Valkeinen <tomi.valkeinen@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/gpio.h>
+#include <linux/platform_device.h>
+
+#include <video/display.h>
+#include <video/omap-panel-tfp410.h>
+
+struct tfp410_data {
+	struct video_source *in;
+
+	struct video_source out;
+
+	int power_down_gpio;
+};
+
+#define to_tfp410(p) container_of(p, struct tfp410_data, out)
+
+static int tfp410_set_stream(struct video_source *src,
+				     enum video_source_stream_state state)
+{
+	struct tfp410_data *data = to_tfp410(src);
+	struct video_source *in = data->in;
+	int r;
+
+	r = in->common_ops->set_stream(in, state);
+	if (r)
+		return r;
+
+	switch (state) {
+	case DISPLAY_ENTITY_STREAM_STOPPED:
+		printk("tfp410 set_stream: STOPPED\n");
+
+		gpio_set_value_cansleep(data->power_down_gpio, 0);
+
+		break;
+
+	case DISPLAY_ENTITY_STREAM_CONTINUOUS:
+		printk("tfp410 set_stream: CONTINUOUS\n");
+
+		gpio_set_value_cansleep(data->power_down_gpio, 1);
+
+		break;
+
+	default:
+		printk("tfp410 set_stream error\n");
+		break;
+	}
+
+	return 0;
+}
+
+static int tfp410_set_vm(struct video_source *src, const struct videomode *vm)
+{
+	struct tfp410_data *data = to_tfp410(src);
+	struct video_source *in = data->in;
+
+	printk("tfp410 set vm\n");
+
+	return in->ops.dpi->set_videomode(in, vm);
+}
+
+static const struct common_video_source_ops tfp410_common_ops = {
+	.set_stream = tfp410_set_stream,
+};
+
+static const struct dvi_video_source_ops tfp410_dvi_ops = {
+	.set_videomode = tfp410_set_vm,
+};
+
+static void tfp410_release(struct video_source *src)
+{
+	printk("tfp410 entity release\n");
+}
+
+static int __devinit tfp410_probe(struct platform_device *pdev)
+{
+	const struct tfp410_platform_data *pdata = pdev->dev.platform_data;
+	struct tfp410_data *data;
+	int r;
+
+	if (pdata == NULL)
+		return -ENODEV;
+
+	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+	if (data == NULL)
+		return -ENOMEM;
+
+	data->power_down_gpio = pdata->power_down_gpio;
+
+	r = devm_gpio_request_one(&pdev->dev, pdata->power_down_gpio,
+			GPIOF_OUT_INIT_LOW, "tfp410 pd");
+	if (r) {
+		printk("failed to request pd gpio\n");
+		return r;
+	}
+
+	/* setup input */
+
+	data->in = video_source_find(pdata->video_source);
+	if (data->in == NULL) {
+		printk("failed to get video source\n");
+		return -EINVAL;
+	}
+
+	data->in->ops.dpi->set_data_lines(data->in, 24);
+
+	/* setup output */
+
+	data->out.dev = &pdev->dev;
+	data->out.release = tfp410_release;
+	data->out.common_ops = &tfp410_common_ops;
+	data->out.ops.dvi = &tfp410_dvi_ops;
+	data->out.name = pdata->video_output;
+
+	r = video_source_register(&data->out);
+	if (r < 0) {
+		printk("tfp410 entity register failed\n");
+		video_source_put(data->in);
+		return r;
+	}
+
+	platform_set_drvdata(pdev, data);
+
+	return 0;
+}
+
+static int tfp410_remove(struct platform_device *pdev)
+{
+	struct tfp410_data *data = platform_get_drvdata(pdev);
+
+	video_source_unregister(&data->out);
+
+	video_source_put(data->in);
+
+	return 0;
+}
+
+static struct platform_driver tfp410_driver = {
+	.probe = tfp410_probe,
+	.remove = tfp410_remove,
+	.driver = {
+		.name = "tfp410",
+		.owner = THIS_MODULE,
+	},
+};
+
+module_platform_driver(tfp410_driver);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkeinen@ti.com>");
+MODULE_DESCRIPTION("TFP410 DPI-to-DVI Bridge");
+MODULE_LICENSE("GPL");
diff --git a/include/video/omap-panel-tfp410.h b/include/video/omap-panel-tfp410.h
index b5b05f4..18f2b46 100644
--- a/include/video/omap-panel-tfp410.h
+++ b/include/video/omap-panel-tfp410.h
@@ -30,6 +30,10 @@ struct omap_dss_device;
  */
 struct tfp410_platform_data {
 	const char *name;
+
+	const char *video_source;
+	const char *video_output;
+
 	u16 i2c_bus_num;
 	int power_down_gpio;
 };
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [RFC 4/6] video: add generic dvi monitor
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/display/panel-dvi.c |  164 +++++++++++++++++++++++++++++++++++++
 include/video/panel-dvi.h         |   18 ++++
 2 files changed, 182 insertions(+)
 create mode 100644 drivers/video/display/panel-dvi.c
 create mode 100644 include/video/panel-dvi.h

diff --git a/drivers/video/display/panel-dvi.c b/drivers/video/display/panel-dvi.c
new file mode 100644
index 0000000..01cea09
--- /dev/null
+++ b/drivers/video/display/panel-dvi.c
@@ -0,0 +1,164 @@
+/*
+ * Generic DVI monitor
+ *
+ * Copyright (C) 2012 Texas Instruments
+ *
+ * Contacts: Tomi Valkeinen <tomi.valkeinen@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+
+#include <video/display.h>
+#include <video/panel-dvi.h>
+
+struct panel_data {
+	struct display_entity entity;
+	struct video_source *in;
+};
+
+#define to_panel(p) container_of(p, struct panel_data, entity)
+
+static int panel_dvi_set_state(struct display_entity *entity,
+			       enum display_entity_state state)
+{
+	struct panel_data *data = to_panel(entity);
+	struct video_source *in = data->in;
+
+	switch (state) {
+	case DISPLAY_ENTITY_STATE_OFF:
+	case DISPLAY_ENTITY_STATE_STANDBY:
+		in->common_ops->set_stream(in, DISPLAY_ENTITY_STREAM_STOPPED);
+		break;
+
+	case DISPLAY_ENTITY_STATE_ON:
+		in->common_ops->set_stream(in, DISPLAY_ENTITY_STREAM_CONTINUOUS);
+		break;
+	}
+
+	return 0;
+}
+
+static const struct videomode vga_mode = {
+	.pixelclock = 23500,
+
+	.hactive = 640,
+	.hfront_porch = 48,
+	.hback_porch = 80,
+	.hsync_len = 32,
+
+	.vactive = 480,
+	.vfront_porch = 3,
+	.vback_porch = 7,
+	.vsync_len = 4,
+
+	.hah = true,
+	.vah = true,
+	.de = true,
+};
+
+static int panel_dvi_get_modes(struct display_entity *entity,
+			       const struct videomode **modes)
+{
+	//struct panel_data *data = to_panel(entity);
+
+	*modes = &vga_mode;
+	return 1;
+}
+
+static int panel_dvi_get_size(struct display_entity *entity,
+			      unsigned int *width, unsigned int *height)
+{
+	//struct panel_data *data = to_panel(entity);
+
+	*width = 10;
+	*height = 10;
+	return 0;
+}
+
+static const struct display_entity_control_ops panel_dvi_control_ops = {
+	.set_state = panel_dvi_set_state,
+	.get_modes = panel_dvi_get_modes,
+	.get_size = panel_dvi_get_size,
+};
+
+static void panel_dvi_release(struct display_entity *entity)
+{
+	printk("panel dvi release\n");
+}
+
+static int __devinit panel_dvi_probe(struct platform_device *pdev)
+{
+	const struct panel_dvi_platform_data *pdata = pdev->dev.platform_data;
+	struct panel_data *data;
+	int ret;
+
+	if (pdata == NULL)
+		return -ENODEV;
+
+	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
+	if (data == NULL)
+		return -ENOMEM;
+
+	/* setup input */
+	data->in = video_source_find(pdata->video_source);
+	if (data->in == NULL) {
+		printk("failed to get video source\n");
+		return -EINVAL;
+	}
+
+	/* setup default mode */
+	data->in->ops.dvi->set_videomode(data->in, &vga_mode);
+
+	/* setup panel entity */
+
+	data->entity.dev = &pdev->dev;
+	data->entity.release = panel_dvi_release;
+	data->entity.ops = &panel_dvi_control_ops;
+
+	ret = display_entity_register(&data->entity);
+	if (ret < 0) {
+		video_source_put(data->in);
+		return ret;
+	}
+
+	platform_set_drvdata(pdev, data);
+
+	return 0;
+}
+
+static int panel_dvi_remove(struct platform_device *pdev)
+{
+	struct panel_data *data = platform_get_drvdata(pdev);
+
+	display_entity_unregister(&data->entity);
+
+	video_source_put(data->in);
+
+	return 0;
+}
+
+
+static const struct dev_pm_ops panel_dvi_dev_pm_ops = {
+};
+
+static struct platform_driver panel_dvi_driver = {
+	.probe = panel_dvi_probe,
+	.remove = panel_dvi_remove,
+	.driver = {
+		.name = "panel_dvi",
+		.owner = THIS_MODULE,
+		.pm = &panel_dvi_dev_pm_ops,
+	},
+};
+
+module_platform_driver(panel_dvi_driver);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkeinen@ti.com>");
+MODULE_DESCRIPTION("DVI Monitor");
+MODULE_LICENSE("GPL");
diff --git a/include/video/panel-dvi.h b/include/video/panel-dvi.h
new file mode 100644
index 0000000..ab88380
--- /dev/null
+++ b/include/video/panel-dvi.h
@@ -0,0 +1,18 @@
+/*
+ * DVI Display Panel
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __PANEL_DVI_H__
+#define __PANEL_DVI_H__
+
+#include <linux/videomode.h>
+
+struct panel_dvi_platform_data {
+	const char *video_source;
+};
+
+#endif /* __PANEL_DVI_H__ */
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread


* [RFC 5/6] video: add taal panel
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/display/panel-taal.c   |  383 ++++++++++++++++++++++++++++++++++
 include/video/omap-panel-nokia-dsi.h |    4 +-
 2 files changed, 385 insertions(+), 2 deletions(-)
 create mode 100644 drivers/video/display/panel-taal.c

diff --git a/drivers/video/display/panel-taal.c b/drivers/video/display/panel-taal.c
new file mode 100644
index 0000000..f1c2196
--- /dev/null
+++ b/drivers/video/display/panel-taal.c
@@ -0,0 +1,383 @@
+/*
+ * Taal DSI command mode panel
+ *
+ * Copyright (C) 2012 Texas Instruments
+ * Author: Tomi Valkeinen <tomi.valkeinen@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define DEBUG
+
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/sched.h>
+#include <linux/gpio.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/videomode.h>
+
+#include <video/omapdss.h>
+#include <video/display.h>
+#include <video/omap-panel-nokia-dsi.h>
+#include <video/mipi_display.h>
+
+/* DSI Virtual channel. Hardcoded for now. */
+#define TCH 0
+
+#define DCS_READ_NUM_ERRORS	0x05
+#define DCS_BRIGHTNESS		0x51
+#define DCS_CTRL_DISPLAY	0x53
+#define DCS_WRITE_CABC		0x55
+#define DCS_READ_CABC		0x56
+#define DCS_GET_ID1		0xda
+#define DCS_GET_ID2		0xdb
+#define DCS_GET_ID3		0xdc
+
+struct taal_data {
+	struct platform_device *pdev;
+	struct video_source *src;
+	struct display_entity entity;
+
+	struct mutex lock;
+
+	unsigned long	hw_guard_end;	/* next value of jiffies when we can
+					 * issue the next sleep in/out command
+					 */
+	unsigned long	hw_guard_wait;	/* max guard time in jiffies */
+
+	/* panel HW configuration from DT or platform data */
+	int reset_gpio;
+
+	/* runtime variables */
+	bool enabled;
+
+	bool te_enabled;
+
+	int channel;
+
+	bool cabc_broken;
+	unsigned cabc_mode;
+
+	bool intro_printed;
+};
+
+static void hw_guard_start(struct taal_data *td, int guard_msec)
+{
+	td->hw_guard_wait = msecs_to_jiffies(guard_msec);
+	td->hw_guard_end = jiffies + td->hw_guard_wait;
+}
+
+static void hw_guard_wait(struct taal_data *td)
+{
+	unsigned long wait = td->hw_guard_end - jiffies;
+
+	if ((long)wait > 0 && wait <= td->hw_guard_wait) {
+		set_current_state(TASK_UNINTERRUPTIBLE);
+		schedule_timeout(wait);
+	}
+}
+
+static int taal_dcs_read_1(struct taal_data *td, u8 dcs_cmd, u8 *data)
+{
+	int r;
+	u8 buf[1];
+	struct video_source *src = td->src;
+
+	r = src->ops.dsi->dcs_read(src, td->channel, dcs_cmd, buf, 1);
+	if (r < 0)
+		return r;
+
+	*data = buf[0];
+
+	return 0;
+}
+
+static int taal_dcs_write_0(struct taal_data *td, u8 dcs_cmd)
+{
+	struct video_source *src = td->src;
+
+	return src->ops.dsi->dcs_write(src, td->channel, &dcs_cmd, 1);
+}
+
+static int taal_sleep_out(struct taal_data *td)
+{
+	int r;
+
+	hw_guard_wait(td);
+
+	r = taal_dcs_write_0(td, MIPI_DCS_EXIT_SLEEP_MODE);
+	if (r)
+		return r;
+
+	hw_guard_start(td, 120);
+
+	msleep(5);
+
+	return 0;
+}
+
+static int taal_get_id(struct taal_data *td, u8 *id1, u8 *id2, u8 *id3)
+{
+	int r;
+
+	r = taal_dcs_read_1(td, DCS_GET_ID1, id1);
+	if (r)
+		return r;
+	r = taal_dcs_read_1(td, DCS_GET_ID2, id2);
+	if (r)
+		return r;
+	r = taal_dcs_read_1(td, DCS_GET_ID3, id3);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static void taal_hw_reset(struct taal_data *td)
+{
+	if (!gpio_is_valid(td->reset_gpio))
+		return;
+
+	gpio_set_value(td->reset_gpio, 1);
+	udelay(10);
+	/* reset the panel */
+	gpio_set_value(td->reset_gpio, 0);
+	/* assert reset */
+	udelay(10);
+	gpio_set_value(td->reset_gpio, 1);
+	/* wait after releasing reset */
+	msleep(5);
+}
+
+#define to_panel(p) container_of(p, struct taal_data, entity)
+
+static int taal_set_state(struct display_entity *entity,
+			       enum display_entity_state state)
+{
+	struct taal_data *td = to_panel(entity);
+	struct video_source *src = td->src;
+	int r;
+
+	switch (state) {
+	case DISPLAY_ENTITY_STATE_OFF:
+	case DISPLAY_ENTITY_STATE_STANDBY:
+		r = taal_dcs_write_0(td, MIPI_DCS_SET_DISPLAY_OFF);
+		if (r)
+			printk("display off failed\n");
+
+		src->ops.dsi->disable(src);
+
+		break;
+
+	case DISPLAY_ENTITY_STATE_ON:
+		r = src->ops.dsi->enable(src);
+		if (r)
+			printk("failed to enable bus\n");
+
+		taal_hw_reset(td);
+
+		r = taal_sleep_out(td);
+		if (r)
+			printk("sleep out failed\n");
+
+		src->ops.dsi->enable_hs(src, true);
+
+
+		r = taal_dcs_write_0(td, MIPI_DCS_SET_DISPLAY_ON);
+		if (r)
+			printk("display on failed\n");
+		break;
+	}
+
+	return 0;
+}
+
+static const struct videomode taal_mode = {
+	.hactive = 864,
+	.vactive = 480,
+};
+
+static int taal_get_modes(struct display_entity *entity,
+			       const struct videomode **modes)
+{
+	//struct panel_data *data = to_panel(entity);
+
+	*modes = &taal_mode;
+	return 1;
+}
+
+static int taal_get_size(struct display_entity *entity,
+			      unsigned int *width, unsigned int *height)
+{
+	//struct panel_data *data = to_panel(entity);
+
+	*width = 10;
+	*height = 10;
+	return 0;
+}
+
+static int taal_update(struct display_entity *entity,
+		void (*callback)(int, void *), void *data)
+{
+	struct taal_data *td = to_panel(entity);
+	struct video_source *src = td->src;
+
+	return src->ops.dsi->update(src, td->channel, callback, data);
+}
+
+static const struct display_entity_control_ops taal_control_ops = {
+	.set_state = taal_set_state,
+	.get_modes = taal_get_modes,
+	.get_size = taal_get_size,
+	.update = taal_update,
+};
+
+static void panel_taal_release(struct display_entity *entity)
+{
+	printk("panel taal release\n");
+}
+
+static int taal_probe(struct platform_device *pdev)
+{
+	const struct nokia_dsi_panel_data *pdata = pdev->dev.platform_data;
+	struct taal_data *td;
+	int r;
+	u8 id1, id2, id3;
+	struct video_source *src;
+
+	dev_dbg(&pdev->dev, "probe\n");
+
+	td = devm_kzalloc(&pdev->dev, sizeof(*td), GFP_KERNEL);
+	if (!td)
+		return -ENOMEM;
+
+	td->pdev = pdev;
+
+
+	td->reset_gpio = pdata->reset_gpio;
+
+	platform_set_drvdata(pdev, td);
+
+	mutex_init(&td->lock);
+
+	if (gpio_is_valid(td->reset_gpio)) {
+		r = devm_gpio_request_one(&pdev->dev, td->reset_gpio,
+				GPIOF_OUT_INIT_LOW, "taal rst");
+		if (r) {
+			dev_err(&pdev->dev, "failed to request reset gpio\n");
+			return r;
+		}
+	}
+
+
+	/* setup input */
+	src = video_source_find(pdata->video_source);
+	if (src == NULL) {
+		printk("failed to get video source\n");
+		return -EINVAL;
+	}
+
+	td->src = src;
+
+	r = src->ops.dsi->configure_pins(src, &pdata->pin_config);
+	if (r)
+		dev_err(&pdev->dev, "failed to configure DSI pins\n");
+
+	r = src->ops.dsi->set_clocks(src, 216000000, 10000000);
+	if (r)
+		dev_err(&pdev->dev, "failed to set HS and LP clocks\n");
+
+	src->ops.dsi->set_size(src, 864, 480);
+	src->ops.dsi->set_pixel_format(src, OMAP_DSS_DSI_FMT_RGB888);
+	src->ops.dsi->set_operation_mode(src, OMAP_DSS_DSI_CMD_MODE);
+
+	/* setup panel entity */
+
+	td->entity.dev = &pdev->dev;
+	td->entity.release = panel_taal_release;
+	td->entity.ops = &taal_control_ops;
+
+	r = display_entity_register(&td->entity);
+	if (r < 0) {
+		printk("failed to register display entity\n");
+		return r;
+	}
+
+	/* show version */
+
+	r = src->ops.dsi->enable(src);
+	if (r)
+		dev_err(&pdev->dev, "failed to enable bus\n");
+
+	taal_hw_reset(td);
+
+	r = taal_get_id(td, &id1, &id2, &id3);
+	if (r)
+		return r;
+
+	dev_info(&pdev->dev, "panel revision %02x.%02x.%02x\n", id1, id2, id3);
+
+	src->ops.dsi->disable(src);
+
+
+	return 0;
+#if 0
+	r = omap_dsi_request_vc(dssdev, &td->channel);
+	if (r) {
+		dev_err(&pdev->dev, "failed to get virtual channel\n");
+		goto err_req_vc;
+	}
+
+	r = omap_dsi_set_vc_id(dssdev, td->channel, TCH);
+	if (r) {
+		dev_err(&pdev->dev, "failed to set VC_ID\n");
+		goto err_vc_id;
+	}
+#endif
+}
+
+static int taal_remove(struct platform_device *pdev)
+{
+	struct taal_data *td = platform_get_drvdata(pdev);
+
+	dev_dbg(&pdev->dev, "remove\n");
+
+	display_entity_unregister(&td->entity);
+
+	video_source_put(td->src);
+
+	/* reset, to be sure that the panel is in a valid state */
+	taal_hw_reset(td);
+
+#if 0
+	omap_dsi_release_vc(dssdev, td->channel);
+#endif
+	return 0;
+}
+
+static struct platform_driver taal_driver = {
+	.probe	= taal_probe,
+	.remove	= taal_remove,
+	.driver	= {
+		.name	= "taal",
+		.owner	= THIS_MODULE,
+	},
+};
+
+module_platform_driver(taal_driver);
+
+MODULE_AUTHOR("Tomi Valkeinen <tomi.valkeinen@ti.com>");
+MODULE_DESCRIPTION("Taal Driver");
+MODULE_LICENSE("GPL");
diff --git a/include/video/omap-panel-nokia-dsi.h b/include/video/omap-panel-nokia-dsi.h
index 225a841..fe274a5 100644
--- a/include/video/omap-panel-nokia-dsi.h
+++ b/include/video/omap-panel-nokia-dsi.h
@@ -14,6 +14,8 @@ struct omap_dss_device;
  * @pin_config: DSI pin configuration
  */
 struct nokia_dsi_panel_data {
+	const char *video_source;
+
 	const char *name;
 
 	int reset_gpio;
@@ -27,8 +29,6 @@ struct nokia_dsi_panel_data {
 	bool use_dsi_backlight;
 
 	struct omap_dsi_pin_config pin_config;
-
-	void *video_source;
 };
 
 #endif /* __OMAP_NOKIA_DSI_PANEL_H */
-- 
1.7.10.4




* [RFC 6/6] video: add makefile & kconfig
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-14 14:27   ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-14 14:27 UTC (permalink / raw)
  To: Laurent Pinchart, linux-fbdev, dri-devel; +Cc: Tomi Valkeinen

Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
---
 drivers/video/Kconfig          |    1 +
 drivers/video/Makefile         |    1 +
 drivers/video/display/Kconfig  |   26 ++++++++++++++++++++++++++
 drivers/video/display/Makefile |    5 +++++
 4 files changed, 33 insertions(+)
 create mode 100644 drivers/video/display/Kconfig
 create mode 100644 drivers/video/display/Makefile

diff --git a/drivers/video/Kconfig b/drivers/video/Kconfig
index c5b7bcf..e91f03e 100644
--- a/drivers/video/Kconfig
+++ b/drivers/video/Kconfig
@@ -2442,6 +2442,7 @@ source "drivers/video/omap/Kconfig"
 source "drivers/video/omap2/Kconfig"
 source "drivers/video/exynos/Kconfig"
 source "drivers/video/backlight/Kconfig"
+source "drivers/video/display/Kconfig"
 
 if VT
 	source "drivers/video/console/Kconfig"
diff --git a/drivers/video/Makefile b/drivers/video/Makefile
index b936b00..0a4cfea 100644
--- a/drivers/video/Makefile
+++ b/drivers/video/Makefile
@@ -14,6 +14,7 @@ fb-objs                           := $(fb-y)
 obj-$(CONFIG_VT)		  += console/
 obj-$(CONFIG_LOGO)		  += logo/
 obj-y				  += backlight/
+obj-y				  += display/
 
 obj-$(CONFIG_EXYNOS_VIDEO)     += exynos/
 
diff --git a/drivers/video/display/Kconfig b/drivers/video/display/Kconfig
new file mode 100644
index 0000000..1d1a590
--- /dev/null
+++ b/drivers/video/display/Kconfig
@@ -0,0 +1,26 @@
+menuconfig DISPLAY_CORE
+	tristate "Display Core"
+	---help---
+	  Support common display framework for graphics devices.
+
+if DISPLAY_CORE
+
+config DISPLAY_PANEL_DPI
+	tristate "DPI (Parallel) Display Panels"
+	---help---
+	  Support for simple digital (parallel) pixel interface panels. Those
+	  panels receive pixel data through a parallel bus and have no control
+	  bus.
+
+	  If you are in doubt, say N.
+
+config DISPLAY_PANEL_DVI
+	tristate "DVI Monitor"
+
+config DISPLAY_PANEL_TAAL
+	tristate "Taal DSI command mode panel"
+
+config DISPLAY_CHIP_TFP410
+	tristate "DPI to DVI chip"
+
+endif # DISPLAY_CORE
diff --git a/drivers/video/display/Makefile b/drivers/video/display/Makefile
new file mode 100644
index 0000000..ac97dfd
--- /dev/null
+++ b/drivers/video/display/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_DISPLAY_CORE) += display-core.o
+obj-$(CONFIG_DISPLAY_PANEL_DPI) += panel-dpi.o
+obj-$(CONFIG_DISPLAY_PANEL_DVI) += panel-dvi.o
+obj-$(CONFIG_DISPLAY_PANEL_TAAL) += panel-taal.o
+obj-$(CONFIG_DISPLAY_CHIP_TFP410) += chip-tfp410.o
-- 
1.7.10.4




* Re: [RFC 0/6] Common Display Framework-T
  2012-12-14 14:27 ` Tomi Valkeinen
@ 2012-12-19 13:21   ` Laurent Pinchart
  -1 siblings, 0 replies; 26+ messages in thread
From: Laurent Pinchart @ 2012-12-19 13:21 UTC (permalink / raw)
  To: Tomi Valkeinen; +Cc: linux-fbdev, dri-devel

Hi Tomi,

On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> Hi,
> 
> I have been testing Common Display Framework on OMAP, and making changes
> that I've discussed in the posts I've sent in reply to the CDF series from
> Laurent. While my CDF code is rather hacky and not at all ready, I wanted
> to post the code for comments and also as a reference code to my posts.
> 
> So here is CDF-T (Tomi-edition =).

We've discussed your approach extensively face-to-face today so I won't review 
the patches in detail, but I will instead summarize our discussion to make 
sure we understood each other (and let other developers jump in).

For the purpose of this discussion the term "display controller driver" (or 
just "display controller") refer to both the low-level driver layer that 
communicates directly with the display controller hardware, and to the higher-
level driver layer that implements and exposes the userspace API (FBDEV, KMS 
and/or V4L). Those layers can be implemented in multiple kernel modules (such 
as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm, 
omapfb and omapvout for the API-level layer) or a single kernel module.

Control model
-------------

The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-model.png
shows the CDF control model.

The panel object depicted on the figure doesn't need to be a panel in the 
stricter sense but could be any chain of off-SoC (both on-board or off-board) 
display entities. It however helps thinking about it as a panel and doesn't 
hurt the model.

The panel is controlled through abstract control requests. Those requests are 
used to retrieve panel information (such as the physical size, the supported 
video modes, EDID information, ...), set the panel configuration (such as the 
active video timings) or control the panel operation state (enabling/disabling 
the panel, controlling panel blanking and power management, ...). They are 
exposed by the panel using function pointers, and called by other kernel 
components in response to userspace requests (through the FBDEV, KMS or V4L2 
APIs) or in-kernel events (for instance hotplug notifications).

In response to the control requests the panel driver will communicate with the 
panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown 
on the figure) and will control the video stream it receives on its input.

The panel is connected at the hardware level to a video source (shown as a 
green hashed rectangle) that provides it with a video stream. The video stream 
flows from the video source to the panel and is directly controlled by its 
source, as shown by the green arrow from the display controller to the video 
stream. The video source exposes stream control operations as function 
pointers that are used by the panel to control the video stream, as shown by 
the green arrow from the panel to the video source.

The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-model-2.png
shows the call flow across entities when the panel is a pipeline
made of more than a single entity. In this case the SoC (on the left of the 
dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS 
transmitter. The output of the DSI to LVDS transmitter is connected to an LVDS 
panel (or, more accurately, an LVDS panel module made of an LVDS panel 
controller and a panel).

The transmitter and panel module are seen by the display controller and 
userspace API implementations as a single entity that exposes control request 
operations and controls its input video stream. When a control request is 
performed (outermost green arrow) the DSI to LVDS transmitter will propagate 
it to the panel, possibly mangling the input parameters or the response. For 
panel operation state control requests the last entity in the pipeline will 
likely want to control the video stream it receives on its input. The video 
stream control calls will be propagated from right to left as shown by the red 
arrows.

Every entity in the call stack can communicate with its hardware device 
through the corresponding control bus, and/or control the video stream it 
receives on its input.

This model allows filtering out modes and timings supported by the panel but 
unsupported by the transmitter and mangling the modes and timings according to 
the transmitter limitations. It has no complexity drawback for simple devices, 
as the corresponding drivers can just forward the calls directly. Similar use 
cases could exist for control operations other than mode and information 
retrieval.

Discovery
---------

Before being able to issue control requests, panel devices need to be 
discovered and associated with the connected display controller(s).

Panels and display controllers are cross-dependent. There is no way around 
that, as the display controller needs a reference to the panel to call control 
requests in response to userspace API, and the panel needs a reference to the 
display controller to call video stream control functions (in addition to 
requiring generic resources such as clocks, GPIOs or even regulators that 
could be provided by the display controller).

As we can't probe the display controller and the panel together, a probe order 
needs to be defined. The decision was to consider video sources as resources 
and defer panel probing until all required resources (video stream source, 
clocks, GPIOs, regulators and more) are available. Display controller probing 
must succeed without the panel being available. This mimics the hotpluggable 
monitor model (VGA, HDMI, DP) that doesn't prevent display controllers from 
being successfully probed without a connected monitor.

Our design goal is to handle panel discovery in a similar (if not identical) 
way as HDMI/DP hotplug in order to implement a single display discovery method 
in display controller drivers. This might not be achievable, in which case 
we'll reconsider the design requirement.

When the display controller driver probes the device it will register the 
video source(s) at the output of the display controller with the CDF core. 
Those sources will be identified by the display controller dev_name() and a 
source integer index. A new structure, likely called display_entity_port, will 
be used to represent a source or sink video port on a display entity.

Panel drivers will handle video sources as resources. They will retrieve at 
probe time the video source the panel is connected to using a phandle or a 
source name (depending on whether the platform uses DT). If the source isn't 
available the probe function will return -EPROBE_DEFER.
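A minimal, userspace-compilable sketch of that probe logic follows. The toy
registry, the function names and the source name are all made up; only the
-EPROBE_DEFER deferral idea comes from the text above:

```c
#include <stddef.h>
#include <string.h>

#define EPROBE_DEFER 517	/* matches the kernel's errno value */

/* toy stand-in for the CDF core's registry of video sources */
struct video_source { const char *name; };

static struct video_source *sources[8];

static int video_source_register(struct video_source *src)
{
	for (size_t i = 0; i < 8; i++) {
		if (!sources[i]) {
			sources[i] = src;
			return 0;
		}
	}
	return -1;
}

static struct video_source *video_source_get(const char *name)
{
	for (size_t i = 0; i < 8; i++)
		if (sources[i] && strcmp(sources[i]->name, name) == 0)
			return sources[i];
	return NULL;
}

/* panel probe defers until its video source has been registered */
static int panel_probe(const char *source_name)
{
	if (!video_source_get(source_name))
		return -EPROBE_DEFER;	/* retried later by the driver core */
	return 0;
}
```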

In addition to the video stream control operations mentioned above, ports will 
also expose a connect/disconnect operation used to notify them of 
connection/disconnection events. After retrieving the connected video source 
panel drivers call the connect/disconnect operation on the video source to 
notify it that the panel is available.

When the panel is a pipeline made of more than a single entity, entities are 
probed in video source to video sink order. An out-of-order probe will be 
deferred as explained above, since its video source won't be available yet, 
which enforces the source-to-sink probe order. Entities should not call the 
connect operation of their video source at probe time in that case, but only 
when their own connect operation for the video source(s) they provide to the 
next entity is called by the next entity. Connect operations will thus be 
called in sink to source order starting at the entity at the end of the 
pipeline and going all the way back to the display controller.
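The sink-to-source connect ordering could be sketched like this (all names are
hypothetical; the point is only that each entity notifies its own source once
its downstream neighbour has connected to it, so the calls ripple from the
panel back to the display controller):

```c
#include <stddef.h>

/* Hypothetical pipeline entity: connecting an entity makes it connect
 * itself to its upstream video source in turn, so connect calls
 * propagate in sink-to-source order. */
struct entity {
	const char *name;
	struct entity *source;	/* upstream neighbour, NULL for dispc */
	int connected;
};

static void entity_connect(struct entity *e)
{
	e->connected = 1;
	if (e->source)
		entity_connect(e->source);	/* propagate sink -> source */
}
```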

This notification system is a hotplug mechanism that replaces the display 
entity notifier system from my previous RFC. Alan Cox rightly objected to the 
notification system, arguing that such system-wide notifications were used by 
FBDEV and very subject to abuse. I agree with his argument, this new mechanism 
should result in a cleaner implementation as video sources will only be 
notified of connect/disconnect events for the entity they're connected to.

DBI/DSI busses
--------------

My RFC introduced a DBI bus using the Linux device and bus model. Its purpose 
was threefold:

- Support (un)registration, matching and binding of devices and drivers.

- Provide power management (suspend/resume) services through the standard 
Linux PM bus/device model, to make sure that DBI devices will be 
suspended/resumed after/before their DBI bus controller.

- Provide bus services to access the connected devices. For DBI that took the 
form of command read and data read/write functions.

A DSI bus implementation using the same model was also planned.

Tomi's patches removed the DBI bus and replaced DBI devices with platform 
devices, moving the bus services implementation to the video source. DBI and 
DSI busses are always either pure video or video + control busses (although 
controlling a DPI panel through DSI is conceivable, nobody in his right mind, 
not even a hardware engineer, would likely implement that), so there will 
always be a video source to provide the DBI/DSI control operations.

(Un)registration, matching and binding of devices and drivers is provided by 
the platform device bus. Bus services to access connected devices are provided 
by the video source; wrapper functions will be used to handle serialization 
and locking, and possibly to offer higher level services (such as DCS for 
instance).
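The serialization wrapper idea could look roughly like this. A pthread mutex
stands in for the kernel mutex API so the sketch compiles in userspace, and
the DCS write operation is a stub, since the real operation set wasn't
specified:

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical video source exposing a raw control-bus write; the
 * wrapper adds locking so concurrent callers are serialized. */
struct video_source {
	pthread_mutex_t lock;
	int (*dcs_write)(struct video_source *src,
			 const unsigned char *buf, size_t len);
};

/* higher-level DCS service built on top of the raw bus operation */
static int video_source_dcs_write(struct video_source *src,
				  const unsigned char *buf, size_t len)
{
	int ret;

	pthread_mutex_lock(&src->lock);
	ret = src->dcs_write(src, buf, len);
	pthread_mutex_unlock(&src->lock);
	return ret;
}

/* stub bus implementation that pretends all bytes were written */
static int toy_dcs_write(struct video_source *src,
			 const unsigned char *buf, size_t len)
{
	(void)src;
	(void)buf;
	return (int)len;
}
```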

One drawback of using the platform bus is that PM relationships between the 
bus master and slaves will not be taken into account during suspend/resume. 
However, a similar issue exists for DPI panels, and PM relationships at the 
video bus level for DBI and DSI are not handled by the DBI/DSI busses either. 
As we need a generic solution to handle those (likely through early suspend 
and late resume), the same solution can be used to handle DBI and DSI control 
bus PM relationships without requiring a Linux DBI or DSI bus.

Even though I still like the idea of DBI and DSI busses, I agree with Tomi 
that they're not strictly needed and I will drop them.

Entity model
------------

Tomi's proposal split the display entities into video sources (struct 
video_source) and display entities (struct display_entity). To make generic 
pipeline operations easier, we agreed to merge the video source and the 
display entity back. struct display_entity thus models a display entity that 
has any number of sink and/or source ports, modeled as struct 
display_entity_port instances.

Video stream operations will be exposed by the display entity as function 
pointers and will take a port reference as argument (this could take the form 
of struct display_entity * and port index, or struct display_entity_port *). 
The DVI and DSI operations model proposed by Tomi in this patch series will be 
kept.
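As a sketch of the merged model: struct display_entity and struct
display_entity_port come from the discussion above, but every field and
operation name here is a guess, shown only to make the port-reference idea
concrete:

```c
#include <stddef.h>

struct display_entity;

enum display_port_type { DISPLAY_PORT_SINK, DISPLAY_PORT_SOURCE };

/* A source or sink video port on a display entity. */
struct display_entity_port {
	struct display_entity *entity;
	enum display_port_type type;
	unsigned int index;
};

/* Stream operations take a port reference, here as a port pointer
 * (the alternative being an entity pointer plus a port index). */
struct display_entity_ops {
	int (*enable_stream)(struct display_entity_port *port);
	void (*disable_stream)(struct display_entity_port *port);
};

/* A display entity with any number of sink and/or source ports. */
struct display_entity {
	const struct display_entity_ops *ops;
	struct display_entity_port *ports;
	unsigned int num_ports;
};
```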

Points that we forgot to discuss
--------------------------------

- DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation

I'll look into that.

Please let me know if I've forgotten anything.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
  2012-12-19 13:21   ` Laurent Pinchart
@ 2012-12-19 14:53     ` Tomi Valkeinen
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-19 14:53 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-fbdev, dri-devel

On 2012-12-19 15:21, Laurent Pinchart wrote:
> Hi Tomi,
> 
> On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
>> Hi,
>>
>> I have been testing Common Display Framework on OMAP, and making changes
>> that I've discussed in the posts I've sent in reply to the CDF series from
>> Laurent. While my CDF code is rather hacky and not at all ready, I wanted
>> to post the code for comments and also as a reference code to my posts.
>>
>> So here is CDF-T (Tomi-edition =).
> 
> We've discussed your approach extensively face-to-face today so I won't review 
> the patches in detail, but I will instead summarize our discussion to make 
> sure we understood each other (and let other developers jump in).

I have some comments =). But mostly it looks good.

> For the purpose of this discussion the term "display controller driver" (or 
> just "display controller") refer to both the low-level driver layer that 
> communicates directly with the display controller hardware, and to the higher-
> level driver layer that implements and exposes the userspace API (FBDEV, KMS 
> and/or V4L). Those layers can be implemented in multiple kernel modules (such 
> as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm, 
> omapfb and omapvout for the API-level layer) or a single kernel module.
> 
> Control model
> -------------
> 
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model.png shows the CDF control model.
> 
> The panel object depicted on the figure doesn't need to be a panel in the 
> stricter sense but could be any chain of off-SoC (both on-board or off-board) 
> display entities. It however helps thinking about it as a panel and doesn't 
> hurt the model.

I don't think it needs to be off-SoC. The dispc and the panel in the
figure could be any two components, in- or off-SoC.

> The panel is controlled through abstract control requests. Those requests are 
> used to retrieve panel information (such as the physical size, the supported 
> video modes, EDID information, ...), set the panel configuration (such as the 
> active video timings) or control the panel operation state (enabling/disabling 
> the panel, controlling panel blanking and power management, ...). They are 
> exposed by the panel using function pointers, and called by other kernel 
> components in response to userspace requests (through the FBDEV, KMS or V4L2 
> APIs) or in-kernel events (for instance hotplug notifications).
> 
> In response to the control requests the panel driver will communicate with the 
> panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown 
> on the figure) and will control the video stream it receives on its input.
> 
> The panel is connected at the hardware level to a video source (shown as a 
> green hashed rectangle) that provides it with a video stream. The video stream 
> flows from the video source to the panel and is directly controlled by its 
> source, as shown by the green arrow from the display controller to the video 
> stream. The video source exposes stream control operations as function 
> pointers that are used by the panel to control the video stream, as shown by 
> the green arrow from the panel to the video source.
> 
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model-2.png shows the call flow across entities when the panel is a pipeline 
> made of more than a single entity. In this case the SoC (on the left of the 
> dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS 
> transmitter. The output of the DSI to LVDS transmitter is connected to an LVDS 
> panel (or, more accurately, an LVDS panel module made of an LVDS panel 
> controller and a panel).

Here, too, I see no reason to separate in-SoC and off-SoC components.
I think the call from DISPC, which now goes to the transmitter, should
first go to the DPI/DSI block. Whether the DPI/DSI block is in- or
off-SoC should be irrelevant as far as CDF is concerned.

> The transmitter and panel module are seen by the display controller and 
> userspace API implementations as a single entity that exposes control request 
> operations and controls its input video stream. When a control request is 

I don't like the sound of this. I think the CDF shouldn't care how the
userspace API is implemented. There's no reason at the CDF level to
separate in-SoC and off-SoC entities, nor to expose only one entity. If
DRM requires mapping its crtc, encoder and connector to, respectively,
dispc, DPI/DSI and the rest, it should be able to do that, but CDF
shouldn't force that model.

Of course, the implementor of the particular SoC display driver could
decide to use one display entity that covers both dispc and DPI/DSI
blocks. But at least I would like to have OMAP's DISPC as a display
entity (or actually multiple entities, one for each DISPC output), and
the various in-SoC DPI-to-something encoders as display entities.

> performed (outermost green arrow) the DSI to LVDS transmitter will propagate 
> it to the panel, possibly mangling the input parameters or the response. For 
> panel operation state control requests the last entity in the pipeline will 
> likely want to control the video stream it receives on its input. The video 
> stream control calls will be propagated from right to left as shown by the red 
> arrows.
> 
> Every entity in the call stack can communicate with its hardware device 
> through the corresponding control bus, and/or control the video stream it 
> receives on its input.
> 
> This model allows filtering out modes and timings supported by the panel but 
> unsupported by the transmitter and mangling the modes and timings according to 
> the transmitter limitations. It has no complexity drawback for simple devices, 
> as the corresponding drivers can just forward the calls directly. Similar use 
> cases could exist for other control operations than mode and information 
> retrieval.
> 
> Discovery
> ---------
> 
> Before being able to issue control requests, panel devices need to be 
> discovered and associated with the connected display controller(s).
> 
> Panels and display controllers are cross-dependent. There is no way around 

Perhaps semantics, but I don't think they are cross-dependent. True,
they will call ops in each other, but the dispc only gets the pointer to
the panel when the panel connects itself to the dispc. And when the
panel disconnects, the dispc loses its reference to the panel.

For me, cross-dependent would mean that dispc could have a reference to
the panel regardless of what the panel does. In our case it is not so,
and there's no harm in the dispc's reference to the panel, as the
panel can remove it (almost) at any time.

> that, as the display controller needs a reference to the panel to call control 
> requests in response to userspace API, and the panel needs a reference to the 
> display controller to call video stream control functions (in addition to 
> requiring generic resources such as clocks, GPIOs or even regulators that 
> could be provided by the display controller).
> 
> As we can't probe the display controller and the panel together, a probe order 
> needs to be defined. The decision was to consider video sources as resources 
> and defer panel probing until all required resources (video stream source, 
> clocks, GPIOs, regulators and more) are available. Display controller probing 
> must succeed without the panel being available. This mimics the hotpluggable 
> monitor model (VGA, HDMI, DP) that doesn't prevent display controllers from 
> being successfully probed without a connected monitor.
> 
> Our design goal is to handle panel discovery in a similar (if not identical) 
> way as HDMI/DP hotplug in order to implement a single display discovery method 
> in display controller drivers. This might not be achievable, in which case 
> we'll reconsider the design requirement.
> 
> When the display controller driver probes the device it will register the 
> video source(s) at the output of the display controller with the CDF core. 
> Those sources will be identified by the display controller dev_name() and a 
> source integer index. A new structure, likely called display_entity_port, will 
> be used to represent a source or sink video port on a display entity.
> 
> Panel drivers will handle video sources as resources. They will retrieve at 
> probe time the video source the panel is connected to using a phandle or a 
> source name (depending on whether the platform uses DT). If the source isn't 
> available the probe function will return -EPROBE_DEFER.
> 
> In addition to the video stream control operations mentioned above, ports will 
> also expose a connect/disconnect operation used to notify them of 
> connection/disconnection events. After retrieving the connected video source 
> panel drivers call the connect/disconnect operation on the video source to 
> notify it that the panel is available.
> 
> When the panel is a pipeline made of more than a single entity, entities are 
> probed in video source to video sink order. Out-of-order probe will result in 
> probe deferral as explained above due to the video source not being available, 
> resulting in the source to sink probe order. Entities should not call the 
> connect operation of their video source at probe time in that case, but only 
> when their own connect operation for the video source(s) they provide to the 
> next entity is called by the next entity. Connect operations will thus be 
> called in sink to source order starting at the entity at the end of the 
> pipeline and going all the way back to the display controller.
> 
> This notification system is a hotplug mechanism that replaces the display 
> entity notifier system from my previous RFC. Alan Cox rightly objected to the 
> notification system, arguing that such system-wide notifications were used by 
> FBDEV and very subject to abuse. I agree with his argument, this new mechanism 
> should result in a cleaner implementation as video sources will only be 
> notified of connect/disconnect events for the entity they're connected to.
> 
> DBI/DSI busses
> --------------
> 
> My RFC introduced a DBI bus using the Linux device and bus model. Its purpose 
> was multifold:
> 
> - Support (un)registration, matching and binding of devices and drivers.
> 
> - Provide power management (suspend/resume) services through the standard 
> Linux PM bus/device model, to make sure that DBI devices will be 
> suspended/resumed after/before their DBI bus controller.
> 
> - Provide bus services to access the connected devices. For DBI that took the 
> form of command read and data read/write functions.
> 
> A DSI bus implementation using the same model was also planned.
> 
> Tomi's patches removed the DBI bus and replaced DBI devices with platform 
> devices, moving the bus services implementation to the video source. DBI and 
> DSI busses are always either pure video or video + control busses (although 
> controlling a DPI panel through DSI is conceivable, nobody in his right mind, 
> not even a hardware engineer, would likely implement that), so there will 
> always be a video source to provide the DBI/DSI control operations.
> 
> (Un)registration, matching and binding of devices and drivers is provided by 
> the platform device bus. Bus services to access connected devices are provided 
> by the video source, wrapper functions will be used to handle serialization 
> and locking, and possibly to offer higher level services (such as DCS for 
> instance).
> 
> One drawback of using the platform bus is that PM relationships between the 
> bus master and slaves will not be taken into account during suspend/resume. 
> However, a similar issue exists for DPI panels, and PM relationships at the 
> video bus level for DBI and DSI are not handled by the DBI/DSI busses either. 
> As we need a generic solution to handle those (likely through early suspend 
> and late resume), the same solution can be used to handle DBI and DSI control 
> bus PM relationships without requiring a Linux DBI or DSI bus.
> 
> Even though I still like the idea of DBI and DSI busses, I agree with Tomi 
> that they're not strictly needed and I will drop them.

I'd like to highlight two points I made about the bus model:

- If DBI is used only for video, there's no DBI bus. How would DBI be
configured in that case?

- If DBI is used for control and video, we have two separate APIs for
the bus. In theory it's possible to handle this, but in practice it may
be impossible, especially for more complex busses like DSI.

I think both of those issues would make the bus model very difficult to
implement. I have no idea how it could be done neatly. So as I see it,
it's not only about "not strictly needed", but that the bus model
wouldn't work without complex code.

> Entity model
> ------------
> 
> Tomi's proposal split the display entities into video sources (struct  
> video_source) and display entities (struct display_entity). To make generic 
> pipeline operations easier, we agreed to merge the video source and the 
> display entity back. struct display_entity thus models a display entity that 
> has any number of sink and/or source ports, modeled as struct 
> display_entity_port instances.
> 
> Video stream operations will be exposed by the display entity as function 
> pointers and will take a port reference as argument (this could take the form 
> of struct display_entity * and port index, or struct display_entity_port *). 

I'd very much like to have only one parameter to pass, as there may be
lots of ops for some busses. Having two parameters to refer to the
source is just extra code that has no extra benefit when using video
source ops. Then again, having separate port index parameter could be
perhaps simpler to implement for the one handling the video source ops,
so...
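
For illustration, the two prototype styles under discussion could look as
follows in plain C (invented names, not from any patch). Note that with a
back-pointer from the port to its entity, the single-parameter form still lets
the implementation recover both the entity and the index:

```c
#include <assert.h>

struct display_entity;

struct display_entity_port {
	struct display_entity *entity;	/* back-pointer to the owner */
	unsigned int index;		/* port number on that entity */
};

struct display_entity {
	struct display_entity_port ports[2];
	int stream_enabled[2];
};

/* option A: two parameters, entity plus port index */
typedef int (*stream_enable_a)(struct display_entity *ent, unsigned int port);

/* option B: one parameter, the port carries all the context */
typedef int (*stream_enable_b)(struct display_entity_port *port);

static int enable_via_port(struct display_entity_port *port)
{
	/* both the entity and the index are recoverable from the port */
	port->entity->stream_enabled[port->index] = 1;
	return 0;
}
```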

> The DVI and DSI operations model proposed by Tomi in this patch series will be 
> kept.
> 
> Points that we forgot to discuss
> --------------------------------
> 
> - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation

Ah, yes, we missed that. I think it's possible to use SINGLE_SHOT, but
then it requires some kind of system to inform about finished update. Or
we could, of course, decide that informing about the update is done in
dispc-specific code, like handling VSYNC.

Hmm, except that probably won't work, as the panel (or any DSI device)
may need to know if the DSI bus is currently used or not.
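
A minimal sketch of the update()-with-callback idea, in plain C with invented
names (a real driver would fire the callback from its framedone interrupt
rather than synchronously as this stub does):

```c
#include <assert.h>
#include <stddef.h>

struct video_source;

typedef void (*update_done_cb)(struct video_source *src, void *data);

struct video_source_ops {
	int (*update)(struct video_source *src, update_done_cb cb, void *data);
};

struct video_source {
	const struct video_source_ops *ops;
	int busy;	/* lets a DSI device ask whether the bus is in use */
};

static int stub_update(struct video_source *src, update_done_cb cb, void *data)
{
	if (src->busy)
		return -1;	/* would be -EBUSY in the kernel */
	src->busy = 1;
	/* ... push one frame over the link ... */
	src->busy = 0;
	if (cb)
		cb(src, data);
	return 0;
}

static void mark_done(struct video_source *src, void *data)
{
	(void)src;
	*(int *)data = 1;	/* the "update done" notification */
}
```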

 Tomi




^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
@ 2012-12-19 14:53     ` Tomi Valkeinen
  0 siblings, 0 replies; 26+ messages in thread
From: Tomi Valkeinen @ 2012-12-19 14:53 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-fbdev, dri-devel



On 2012-12-19 15:21, Laurent Pinchart wrote:
> Hi Tomi,
> 
> On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
>> Hi,
>>
>> I have been testing Common Display Framework on OMAP, and making changes
>> that I've discussed in the posts I've sent in reply to the CDF series from
>> Laurent. While my CDF code is rather hacky and not at all ready, I wanted
>> to post the code for comments and also as a reference code to my posts.
>>
>> So here is CDF-T (Tomi-edition =).
> 
> We've discussed your approach extensively face-to-face today so I won't review 
> the patches in detail, but I will instead summarize our discussion to make 
> sure we understood each other (and let other developers jump in).

I have some comments =). But mostly it looks good.

> For the purpose of this discussion the term "display controller driver" (or 
> just "display controller") refers to both the low-level driver layer that 
> communicates directly with the display controller hardware, and to the higher-
> level driver layer that implements and exposes the userspace API (FBDEV, KMS 
> and/or V4L). Those layers can be implemented in multiple kernel modules (such 
> as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm, 
> omapfb and omapvout for the API-level layer) or a single kernel module.
> 
> Control model
> -------------
> 
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model.png shows the CDF control model.
> 
> The panel object depicted on the figure doesn't need to be a panel in the 
> stricter sense but could be any chain of off-SoC (both on-board or off-board) 
> display entities. It however helps thinking about it as a panel and doesn't 
> hurt the model.

I don't think it needs to be off-soc. The dispc and the panel in the
image could be any two components, in- or off-soc.

> The panel is controlled through abstract control requests. Those requests are 
> used to retrieve panel information (such as the physical size, the supported 
> video modes, EDID information, ...), set the panel configuration (such as the 
> active video timings) or control the panel operation state (enabling/disabling 
> the panel, controlling panel blanking and power management, ...). They are 
> exposed by the panel using function pointers, and called by other kernel 
> components in response to userspace requests (through the FBDEV, KMS or V4L2 
> APIs) or in-kernel events (for instance hotplug notifications).
> 
> In response to the control requests the panel driver will communicate with the 
> panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown 
> on the figure) and will control the video stream it receives on its input.
> 
> The panel is connected at the hardware level to a video source (shown as a 
> green hashed rectangle) that provides it with a video stream. The video stream 
> flows from the video source to the panel and is directly controlled by its 
> source, as shown by the green arrow from the display controller to the video 
> stream. The video source exposes stream control operations as function 
> pointers that are used by the panel to control the video stream, as shown by 
> the green arrow from the panel to the video source.
> 
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model-2.png shows the call flow across entities when the panel is a pipeline 
> made of more than a single entity. In this case the SoC (on the left of the 
> dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS 
> transmitter. The output of the DSI to LVDS transmitter is connected to an LVDS 
> panel (or, more accurately, an LVDS panel module made of an LVDS panel 
> controller and a panel).

Here also I don't see any reason to separate in- or off-soc components.
I think the call from DISPC, which now goes to the transmitter, should
first go to the DPI/DSI block. Whether the DPI/DSI block is in- or
off-soc should be irrelevant regarding CDF.

> The transmitter and panel module are seen by the display controller and 
> userspace API implementations as a single entity that exposes control request 
> operations and controls its input video stream. When a control request is 

I don't like the sound of this. I think the CDF shouldn't care how the
userspace API is implemented. There's no reason in CDF level to separate
in- or off-soc entities, nor expose only one entity. If DRM requires
mapping DRM's crtc, encoder and connector to, respectively, dispc,
DPI/DSI, the-rest, it should be able to do that, but CDF shouldn't force
that model.

Of course, the implementor of the particular SoC display driver could
decide to use one display entity that covers both dispc and DPI/DSI
blocks. But at least I would like to have OMAP's DISPC as a display
entity (or actually multiple entities, one for each DISPC output), and
the various in-SoC DPI-to-something encoders as display entities.

> performed (outermost green arrow) the DSI to LVDS transmitter will propagate 
> it to the panel, possibly mangling the input parameters or the response. For 
> panel operation state control requests the last entity in the pipeline will 
> likely want to control the video stream it receives on its input. The video 
> stream control calls will be propagated from right to left as shown by the red 
> arrows.
> 
> Every entity in the call stack can communicate with its hardware device 
> through the corresponding control bus, and/or control the video stream it 
> receives on its input.
> 
> This model allows filtering out modes and timings supported by the panel but 
> unsupported by the transmitter and mangling the modes and timings according to 
> the transmitter limitations. It has no complexity drawback for simple devices, 
> as the corresponding drivers can just forward the calls directly. Similar use 
> cases could exist for other control operations than mode and information 
> retrieval.
> 
> Discovery
> ---------
> 
> Before being able to issue control requests, panel devices need to be 
> discovered and associated with the connected display controller(s).
> 
> Panels and display controllers are cross-dependent. There is no way around 

Perhaps semantics, but I don't think they are cross-dependent. True,
they will call ops in each other, but the dispc will get the pointer to
the panel when the panel connects the dispc and the panel. And when the
panel disconnects, dispc will lose the reference to the panel.

For me, cross-dependent would mean that dispc could have a reference to
the panel regardless of what the panel does. In our case it is not so,
and there's no harm with the dispc's reference to the panel, as the
panel can remove it (almost) at any time.
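
As a concrete (and heavily simplified) picture of the "video source as a
resource" model from Laurent's summary, the deferred-probe flow could look
like the following plain C sketch. cdf_register_video_source(),
cdf_get_video_source() and the "omapdss.dispc" name are all invented for
illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define EPROBE_DEFER 517	/* same value the kernel uses */

struct video_source {
	const char *dev_name;	/* identifies the providing device */
	int index;		/* output index on that device */
};

/* tiny registry standing in for the CDF core */
static struct video_source *registry[4];

static void cdf_register_video_source(struct video_source *src, int slot)
{
	registry[slot] = src;
}

static struct video_source *cdf_get_video_source(const char *name, int index)
{
	for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); i++)
		if (registry[i] && strcmp(registry[i]->dev_name, name) == 0 &&
		    registry[i]->index == index)
			return registry[i];
	return NULL;
}

/* panel probe: defer until the named source has been registered */
static int panel_probe(const char *src_name, int src_index,
		       struct video_source **out)
{
	struct video_source *src = cdf_get_video_source(src_name, src_index);

	if (!src)
		return -EPROBE_DEFER;
	*out = src;
	return 0;
}
```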

> that, as the display controller needs a reference to the panel to call control 
> requests in response to userspace API, and the panel needs a reference to the 
> display controller to call video stream control functions (in addition to 
> requiring generic resources such as clocks, GPIOs or even regulators that 
> could be provided by the display controller).
> 
> As we can't probe the display controller and the panel together, a probe order 
> needs to be defined. The decision was to consider video sources as resources 
> and defer panel probing until all required resources (video stream source, 
> clocks, GPIOs, regulators and more) are available. Display controller probing 
> must succeed without the panel being available. This mimics the hotpluggable 
> monitor model (VGA, HDMI, DP) that doesn't prevent display controllers from 
> being successfully probed without a connected monitor.
> 
> Our design goal is to handle panel discovery in a similar (if not identical) 
> way as HDMI/DP hotplug in order to implement a single display discovery method 
> in display controller drivers. This might not be achievable, in which case 
> we'll reconsider the design requirement.
> 
> When the display controller driver probes the device it will register the 
> video source(s) at the output of the display controller with the CDF core. 
> Those sources will be identified by the display controller dev_name() and a 
> source integer index. A new structure, likely called display_entity_port, will 
> be used to represent a source or sink video port on a display entity.
> 
> Panel drivers will handle video sources as resources. They will retrieve at 
> probe time the video source the panel is connected to using a phandle or a 
> source name (depending on whether the platform uses DT). If the source isn't 
> available the probe function will return -EPROBE_DEFER.
> 
> In addition to the video stream control operations mentioned above, ports will 
> also expose a connect/disconnect operation used to notify them of 
> connection/disconnection events. After retrieving the connected video source 
> panel drivers call the connect/disconnect operation on the video source to 
> notify it that the panel is available.
> 
> When the panel is a pipeline made of more than a single entity, entities are 
> probed in video source to video sink order. Out-of-order probe will result in 
> probe deferral as explained above due to the video source not being available, 
> resulting in the source to sink probe order. Entities should not call the 
> connect operation of their video source at probe time in that case, but only 
> when their own connect operation for the video source(s) they provide to the 
> next entity is called by the next entity. Connect operations will thus be 
> called in sink to source order starting at the entity at the end of the 
> pipeline and going all the way back to the display controller.
> 
> This notification system is a hotplug mechanism that replaces the display 
> entity notifier system from my previous RFC. Alan Cox rightly objected to the 
> notification system, arguing that such system-wide notifications were used by 
> FBDEV and very subject to abuse. I agree with his argument, this new mechanism 
> should result in a cleaner implementation as video sources will only be 
> notified of connect/disconnect events for the entity they're connected to.
> 
> DBI/DSI busses
> --------------
> 
> My RFC introduced a DBI bus using the Linux device and bus model. Its purpose 
> was multifold:
> 
> - Support (un)registration, matching and binding of devices and drivers.
> 
> - Provide power management (suspend/resume) services through the standard 
> Linux PM bus/device model, to make sure that DBI devices will be 
> suspended/resumed after/before their DBI bus controller.
> 
> - Provide bus services to access the connected devices. For DBI that took the 
> form of command read and data read/write functions.
> 
> A DSI bus implementation using the same model was also planned.
> 
> Tomi's patches removed the DBI bus and replaced DBI devices with platform 
> devices, moving the bus services implementation to the video source. DBI and 
> DSI busses are always either pure video or video + control busses (although 
> controlling a DPI panel through DSI is conceivable, nobody in his right mind, 
> not even a hardware engineer, would likely implement that), so there will 
> always be a video source to provide the DBI/DSI control operations.
> 
> (Un)registration, matching and binding of devices and drivers is provided by 
> the platform device bus. Bus services to access connected devices are provided 
> by the video source, wrapper functions will be used to handle serialization 
> and locking, and possibly to offer higher level services (such as DCS for 
> instance).
> 
> One drawback of using the platform bus is that PM relationships between the 
> bus master and slaves will not be taken into account during suspend/resume. 
> However, a similar issue exists for DPI panels, and PM relationships at the 
> video bus level for DBI and DSI are not handled by the DBI/DSI busses either. 
> As we need a generic solution to handle those (likely through early suspend 
> and late resume), the same solution can be used to handle DBI and DSI control 
> bus PM relationships without requiring a Linux DBI or DSI bus.
> 
> Even though I still like the idea of DBI and DSI busses, I agree with Tomi 
> that they're not strictly needed and I will drop them.

I'd like to highlight two points I made about the bus model:

- If DBI is used only for video, there's no DBI bus. How to configure
DBI in this case?

- If DBI is used for control and video, we have two separate APIs for
the bus. In theory it's possible to handle this, but in practice it may
be impossible, especially for more complex busses like DSI.

I think both of those issues would make the bus model very difficult to
implement. I have no idea how it could be done neatly. So as I see it,
it's not only about "not strictly needed", but that the bus model
wouldn't work without complex code.

> Entity model
> ------------
> 
> Tomi's proposal split the display entities into video sources (struct  
> video_source) and display entities (struct display_entity). To make generic 
> pipeline operations easier, we agreed to merge the video source and the 
> display entity back. struct display_entity thus models a display entity that 
> has any number of sink and/or source ports, modeled as struct 
> display_entity_port instances.
> 
> Video stream operations will be exposed by the display entity as function 
> pointers and will take a port reference as argument (this could take the form 
> of struct display_entity * and port index, or struct display_entity_port *). 

I'd very much like to have only one parameter to pass, as there may be
lots of ops for some busses. Having two parameters to refer to the
source is just extra code that has no extra benefit when using video
source ops. Then again, having separate port index parameter could be
perhaps simpler to implement for the one handling the video source ops,
so...

> The DVI and DSI operations model proposed by Tomi in this patch series will be 
> kept.
> 
> Points that we forgot to discuss
> --------------------------------
> 
> - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation

Ah, yes, we missed that. I think it's possible to use SINGLE_SHOT, but
then it requires some kind of system to inform about finished update. Or
we could, of course, decide that informing about the update is done in
dispc-specific code, like handling VSYNC.

Hmm, except that probably won't work, as the panel (or any DSI device)
may need to know if the DSI bus is currently used or not.

 Tomi




_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
  2012-12-19 14:53     ` Tomi Valkeinen
@ 2012-12-19 23:00       ` Laurent Pinchart
  -1 siblings, 0 replies; 26+ messages in thread
From: Laurent Pinchart @ 2012-12-19 22:59 UTC (permalink / raw)
  To: Tomi Valkeinen; +Cc: linux-fbdev, dri-devel


Hi Tomi,

On Wednesday 19 December 2012 16:53:10 Tomi Valkeinen wrote:
> On 2012-12-19 15:21, Laurent Pinchart wrote:
> > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> >> Hi,
> >> 
> >> I have been testing Common Display Framework on OMAP, and making changes
> >> that I've discussed in the posts I've sent in reply to the CDF series
> >> from Laurent. While my CDF code is rather hacky and not at all ready, I
> >> wanted to post the code for comments and also as a reference code to my
> >> posts.
> >> 
> >> So here is CDF-T (Tomi-edition =).
> > 
> > We've discussed your approach extensively face-to-face today so I won't
> > review the patches in detail, but I will instead summarize our discussion
> > to make sure we understood each other (and let other developers jump in).
> 
> I have some comments =). But mostly it looks good.

Thanks :-)

> > For the purpose of this discussion the term "display controller driver"
> > (or just "display controller") refers to both the low-level driver layer
> > that communicates directly with the display controller hardware, and to
> > the higher-level driver layer that implements and exposes the userspace
> > API (FBDEV, KMS and/or V4L). Those layers can be implemented in multiple
> > kernel modules (such as in the OMAP DSS case, with omapdss for the
> > low-level layer and omapdrm, omapfb and omapvout for the API-level layer)
> > or a single kernel module.
> > 
> > Control model
> > -------------
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model.png shows the CDF control model.
> > 
> > The panel object depicted on the figure doesn't need to be a panel in the
> > stricter sense but could be any chain of off-SoC (both on-board or
> > off-board) display entities. It however helps to think of it as a
> > panel and doesn't hurt the model.
> 
> I don't think it needs to be off-soc. The dispc and the panel in the
> image could be any two components, in- or off-soc.

That's correct, whether the components are on-SoC or off-SoC doesn't matter 
here.

> > The panel is controlled through abstract control requests. Those requests
> > are used to retrieve panel information (such as the physical size, the
> > supported video modes, EDID information, ...), set the panel
> > configuration (such as the active video timings) or control the panel
> > operation state (enabling/disabling the panel, controlling panel blanking
> > and power management, ...). They are exposed by the panel using function
> > pointers, and called by other kernel components in response to userspace
> > requests (through the FBDEV, KMS or V4L2 APIs) or in-kernel events (for
> > instance hotplug notifications).
> > 
> > In response to the control requests the panel driver will communicate with
> > the panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ...,
> > not shown on the figure) and will control the video stream it receives on
> > its input.
> > 
> > The panel is connected at the hardware level to a video source (shown as a
> > green hashed rectangle) that provides it with a video stream. The video
> > stream flows from the video source to the panel and is directly
> > controlled by its source, as shown by the green arrow from the display
> > controller to the video stream. The video source exposes stream control
> > operations as function pointers that are used by the panel to control the
> > video stream, as shown by the green arrow from the panel to the video
> > source.
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model-2.png shows the call flow across entities when the panel is a
> > pipeline made of more than a single entity. In this case the SoC (on the
> > left of the dashed line) outputs a video stream on a DSI bus connected to
> > a DSI to LVDS transmitter. The output of the DSI to LVDS transmitter is
> > connected to an LVDS panel (or, more accurately, an LVDS panel module
> > made of an LVDS panel controller and a panel).
> 
> Here also I don't see any reason to separate in- or off-soc components.
> I think the call from DISPC, which now goes to the transmitter, should
> first go to the DPI/DSI block. Whether the DPI/DSI block is in- or
> off-soc should be irrelevant regarding CDF.

Agreed.

> > The transmitter and panel module are seen by the display controller and
> > userspace API implementations as a single entity that exposes control
> > request operations and controls its input video stream. When a control
> > request is
>
> I don't like the sound of this. I think the CDF shouldn't care how the
> userspace API is implemented. There's no reason in CDF level to separate
> in- or off-soc entities, nor expose only one entity. If DRM requires
> mapping DRM's crtc, encoder and connector to, respectively, dispc,
> DPI/DSI, the-rest, it should be able to do that, but CDF shouldn't force
> that model.
> 
> Of course, the implementor of the particular SoC display driver could decide
> to use one display entity that covers both dispc and DPI/DSI blocks. But at
> least I would like to have OMAP's DISPC as a display entity (or actually
> multiple entities, one for each DISPC output), and the various in-SoC DPI-
> to-something encoders as display entities.

OK, I haven't expressed myself correctly. I tried to describe here how the 
"Russian doll model" of cascaded calls across entities works. I didn't want to 
force any in-/off-SoC model.

> > performed (outermost green arrow) the DSI to LVDS transmitter will
> > propagate it to the panel, possibly mangling the input parameters or the
> > response. For panel operation state control requests the last entity in
> > the pipeline will likely want to control the video stream it receives on
> > its input. The video stream control calls will be propagated from right
> > to left as shown by the red arrows.
> > 
> > Every entity in the call stack can communicate with its hardware device
> > through the corresponding control bus, and/or control the video stream it
> > receives on its input.
> > 
> > This model allows filtering out modes and timings supported by the panel
> > but unsupported by the transmitter and mangling the modes and timings
> > according to the transmitter limitations. It has no complexity drawback
> > for simple devices, as the corresponding drivers can just forward the
> > calls directly. Similar use cases could exist for other control
> > operations than mode and information retrieval.
> > 
> > Discovery
> > ---------
> > 
> > Before being able to issue control requests, panel devices need to be
> > discovered and associated with the connected display controller(s).
> > 
> > Panels and display controllers are cross-dependent. There is no way around
> 
> Perhaps semantics, but I don't think they are cross-dependent. True,
> they will call ops in each other, but the dispc will get the pointer to
> the panel when the panel connects the dispc and the panel. And when the
> panel disconnects, dispc will lose the reference to the panel.
> 
> For me, cross-dependent would mean that dispc could have a reference to
> the panel regardless of what the panel does. In our case it is not so,
> and there's no harm with the dispc's reference to the panel, as the
> panel can remove it (almost) at any time.

The panel can ask the dispc to release its reference to the panel by calling 
the disconnect function, but it can't disappear all of a sudden. I will test 
the model for RFCv3.
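
The reference handshake could be as small as this sketch (plain C, invented
names): the source only ever holds the panel pointer between connect and
disconnect, so the panel module must stay around at least until its
disconnect call has returned:

```c
#include <assert.h>
#include <stddef.h>

struct display_entity;

struct video_source {
	struct display_entity *sink;	/* e.g. dispc's reference to the panel */
};

struct display_entity {
	struct video_source *src;	/* the panel's reference back */
};

static void cdf_connect(struct video_source *src, struct display_entity *panel)
{
	src->sink = panel;
	panel->src = src;
}

static void cdf_disconnect(struct video_source *src, struct display_entity *panel)
{
	/* after this returns the panel may safely go away */
	src->sink = NULL;
	panel->src = NULL;
}
```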

> > that, as the display controller needs a reference to the panel to call
> > control requests in response to userspace API, and the panel needs a
> > reference to the display controller to call video stream control
> > functions (in addition to requiring generic resources such as clocks,
> > GPIOs or even regulators that could be provided by the display
> > controller).
> > 
> > As we can't probe the display controller and the panel together, a probe
> > order needs to be defined. The decision was to consider video sources as
> > resources and defer panel probing until all required resources (video
> > stream source, clocks, GPIOs, regulators and more) are available. Display
> > controller probing must succeed without the panel being available. This
> > mimics the hotpluggable monitor model (VGA, HDMI, DP) that doesn't
> > prevent display controllers from being successfully probed without a
> > connected monitor.
> > 
> > Our design goal is to handle panel discovery in a similar (if not
> > identical) way as HDMI/DP hotplug in order to implement a single display
> > discovery method in display controller drivers. This might not be
> > achievable, in which case we'll reconsider the design requirement.
> > 
> > When the display controller driver probes the device it will register the
> > video source(s) at the output of the display controller with the CDF core.
> > Those sources will be identified by the display controller dev_name() and
> > a source integer index. A new structure, likely called
> > display_entity_port, will be used to represent a source or sink video port
> > on a display entity.
> > 
> > Panel drivers will handle video sources as resources. They will retrieve
> > at probe time the video source the panel is connected to using a phandle
> > or a source name (depending on whether the platform uses DT). If the
> > source isn't available the probe function will return -EPROBE_DEFER.
> > 
> > In addition to the video stream control operations mentioned above, ports
> > will also expose a connect/disconnect operation used to notify them of
> > connection/disconnection events. After retrieving the connected video
> > source panel drivers call the connect/disconnect operation on the video
> > source to notify it that the panel is available.
> > 
> > When the panel is a pipeline made of more than a single entity, entities
> > are probed in video source to video sink order. Out-of-order probe will
> > result in probe deferral as explained above due to the video source not
> > being available, resulting in the source to sink probe order. Entities
> > should not call the connect operation of their video source at probe time
> > in that case, but only when their own connect operation for the video
> > source(s) they provide to the next entity is called by the next entity.
> > Connect operations will thus be called in sink to source order starting
> > at the entity at the end of the pipeline and going all the way back to
> > the display controller.
> > 
> > This notification system is a hotplug mechanism that replaces the display
> > entity notifier system from my previous RFC. Alan Cox rightly objected to
> > the notification system, arguing that such system-wide notifications were
> > used by FBDEV and very subject to abuse. I agree with his argument, this
> > new mechanism should result in a cleaner implementation as video sources
> > will only be notified of connect/disconnect events for the entity they're
> > connected to.
> > 
> > DBI/DSI busses
> > --------------
> > 
> > My RFC introduced a DBI bus using the Linux device and bus model. Its
> > purpose was multifold:
> > 
> > - Support (un)registration, matching and binding of devices and drivers.
> > 
> > - Provide power management (suspend/resume) services through the standard
> > Linux PM bus/device model, to make sure that DBI devices will be
> > suspended/resumed after/before their DBI bus controller.
> > 
> > - Provide bus services to access the connected devices. For DBI that took
> > the form of command read and data read/write functions.
> > 
> > A DSI bus implementation using the same model was also planned.
> > 
> > Tomi's patches removed the DBI bus and replaced DBI devices with platform
> > devices, moving the bus services implementation to the video source. DBI
> > and DSI busses are always either pure video or video + control busses
> > (although controlling a DPI panel through DSI is conceivable, nobody in
> > his right mind, not even a hardware engineer, would likely implement
> > that), so there will always be a video source to provide the DBI/DSI
> > control operations.
> > 
> > (Un)registration, matching and binding of devices and drivers is provided
> > by the platform device bus. Bus services to access connected devices are
> > provided by the video source, wrapper functions will be used to handle
> > serialization and locking, and possibly to offer higher level services
> > (such as DCS for instance).
> > 
> > One drawback of using the platform bus is that PM relationships between
> > the bus master and slaves will not be taken into account during
> > suspend/resume. However, a similar issue exists for DPI panels, and PM
> > relationships at the video bus level for DBI and DSI are not handled by
> > the DBI/DSI busses either. As we need a generic solution to handle those
> > (likely through early suspend and late resume), the same solution can be
> > used to handle DBI and DSI control bus PM relationships without requiring
> > a Linux DBI or DSI bus.
> > 
> > Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> > that they're not strictly needed and I will drop them.
> 
> I'd like to highlight two points I made about the bus model:
> 
> - If DBI is used only for video, there's no DBI bus. How to configure
> DBI in this case?

I was planning to handle DBI video configuration through video stream 
operations, so we agree here.
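
One reading of that, sketched in plain C with invented names: the DBI bus
parameters simply travel inside the stream configuration instead of through a
bus-level API:

```c
#include <assert.h>

struct dbi_config {
	int bus_width;		/* data lines: 8, 9 or 16 */
	int cycle_time_ns;	/* write-cycle timing */
};

struct video_source;

struct video_source_ops {
	int (*set_stream_config)(struct video_source *src,
				 const struct dbi_config *cfg);
};

struct video_source {
	const struct video_source_ops *ops;
	struct dbi_config cfg;
};

static int stub_set_stream_config(struct video_source *src,
				  const struct dbi_config *cfg)
{
	if (cfg->bus_width != 8 && cfg->bus_width != 9 && cfg->bus_width != 16)
		return -1;	/* would be -EINVAL in the kernel */
	src->cfg = *cfg;
	return 0;
}
```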

> - If DBI is used for control and video, we have two separate APIs for
> the bus. In theory it's possible to handle this, but in practice it may
> be impossible, especially for more complex busses like DSI.
> 
> I think both of those issues would make the bus model very difficult to
> implement. I have no idea how it could be done neatly. So as I see it,
> it's not only about "not strictly needed", but that the bus model
> wouldn't work without complex code.
> 
> > Entity model
> > ------------
> > 
> > Tomi's proposal split the display entities into video sources (struct
> > video_source) and display entities (struct display_entity). To make
> > generic pipeline operations easier, we agreed to merge the video source
> > and the display entity back. struct display_entity thus models a display
> > entity that has any number of sink and/or source ports, modeled as struct
> > display_entity_port instances.
> > 
> > Video stream operations will be exposed by the display entity as function
> > pointers and will take a port reference as argument (this could take the
> > form of struct display_entity * and port index, or struct
> > display_entity_port *).
>
> I'd very much like to have only one parameter to pass, as there may be lots
> of ops for some busses. Having two parameters to refer to the source is just
> extra code that has no extra benefit when using video source ops. Then
> again, having separate port index parameter could be perhaps simpler to
> implement for the one handling the video source ops, so...
> 
> > The DVI and DSI operations model proposed by Tomi in this patch series
> > will be kept.
> > 
> > Points that we forgot to discuss
> > --------------------------------
> > 
> > - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
> 
> Ah, yes, we missed that. I think it's possible to use SINGLE_SHOT, but
> then it requires some kind of system to inform about finished update. Or
> we could, of course, decide that informing about the update is done in
> dispc-specific code, like handling VSYNC.
> 
> Hmm, except that won't probably work, as the panel (or any DSI device) may
> need to know if the DSI bus is currently used or not.

-- 
Regards,

Laurent Pinchart

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 490 bytes --]

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
@ 2012-12-19 23:00       ` Laurent Pinchart
  0 siblings, 0 replies; 26+ messages in thread
From: Laurent Pinchart @ 2012-12-19 23:00 UTC (permalink / raw)
  To: Tomi Valkeinen; +Cc: linux-fbdev, dri-devel


[-- Attachment #1.1: Type: text/plain, Size: 15634 bytes --]

Hi Tomi,

On Wednesday 19 December 2012 16:53:10 Tomi Valkeinen wrote:
> On 2012-12-19 15:21, Laurent Pinchart wrote:
> > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> >> Hi,
> >> 
> >> I have been testing Common Display Framework on OMAP, and making changes
> >> that I've discussed in the posts I've sent in reply to the CDF series
> >> from Laurent. While my CDF code is rather hacky and not at all ready, I
> >> wanted to post the code for comments and also as a reference code to my
> >> posts.
> >> 
> >> So here is CDF-T (Tomi-edition =).
> > 
> > We've discussed your approach extensively face-to-face today so I won't
> > review the patches in detail, but I will instead summarize our discussion
> > to make sure we understood each other (and let other developers jump in).
> 
> I have some comments =). But mostly it looks good.

Thanks :-)

> > For the purpose of this discussion the term "display controller driver"
> > (or just "display controller") refer to both the low-level driver layer
> > that communicates directly with the display controller hardware, and to
> > the higher- level driver layer that implements and exposes the userspace
> > API (FBDEV, KMS and/or V4L). Those layers can be implemented in multiple
> > kernel modules (such as in the OMAP DSS case, with omapdss for the
> > low-level layer and omapdrm, omapfb and omapvout for the API-level layer)
> > or a single kernel module.
> > 
> > Control model
> > -------------
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model.png shows the CDF control model.
> > 
> > The panel object depicted on the figure doesn't need to be a panel in the
> > stricter sense but could be any chain of off-SoC (both on-board or
> > off-board) display entities. It however helps thinking about it as a
> > panel and doesn't hurt the model.
> 
> I don't think it needs to be off-soc. The dispc and the panel in the
> image could be any two components, in- or off-soc.

That's correct, whether the components are on-SoC or off-SoC doesn't matter 
here.

> > The panel is controlled through abstract control requests. Those requests
> > are used to retrieve panel information (such as the physical size, the
> > supported video modes, EDID information, ...), set the panel
> > configuration (such as the active video timings) or control the panel
> > operation state (enabling/disabling the panel, controlling panel blanking
> > and power management, ...). They are exposed by the panel using function
> > pointers, and called by other kernel components in response to userspace
> > requests (through the FBDEV, KMS or V4L2 APIs) or in-kernel events (for
> > instance hotplug notifications).
> > 
> > In response to the control requests the panel driver will communicate with
> > the panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ...,
> > not shown on the figure) and will control the video stream it receives on
> > its input.
> > 
> > The panel is connected at the hardware level to a video source (shown as a
> > green hashed rectangle) that provides it with a video stream. The video
> > stream flows from the video source to the panel and is directly
> > controlled by its source, as shown by the green arrow from the display
> > controller to the video stream. The video source exposes stream control
> > operations as function pointers that are used by the panel to control the
> > video stream, as shown by the green arrow from the panel to the video
> > source.
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model-2.png shows the call flow across entities when the panel is a
> > pipeline made of more than a single entity. In this case the SoC (on the
> > left of the dashed line) outputs a video stream on a DSI bus connected to
> > a DSI to LVDS transmitter. The output of the DSI to LVDS transmitter is
> > connected to an LVDS panel (or, more accurately, an LVDS panel module
> > made of an LVDS panel controller and a panel).
> 
> Here also I don't see any reason to separate in- or off-soc components.
> I think the call from DISPC, which now goes to the transmitter, should
> first go to the DPI/DSI block. Whether the DPI/DSI block is in- or
> off-soc should be irrelevant regarding CDF.

Agreed.

> > The transmitter and panel module are seen by the display controller and
> > userspace API implementations as a single entity that exposes control
> > request operations and controls its input video stream. When a control
> > request is
>
> I don't like the sound of this. I think the CDF shouldn't care how the
> userspace API is implemented. There's no reason in CDF level to separate
> in- or out-soc entities, nor expose only one entity. If DRM requires
> mapping DRM's crtc, encoder and connector to, respectively, dispc,
> DPI/DSI, the-rest, it should be able to do that, but CDF shouldn't force
> that model.
> 
> Of course, the implementor of the particular SoC display driver could decide
> to use one display entity that covers both dispc and DPI/DSI blocks. But at
> least I would like to have OMAP's DISPC as a display entity (or actually
> multiple entities, one for each DISPC output), and the various in-SoC DPI-
> to-something encoders as display entities.

OK, I haven't expressed myself correctly. I tried to describe here how the 
"russian doll model" of cascaded calls across entities works. I didn't want to 
force any in-/off-SoC model.

> > performed (outermost green arrow) the DSI to LVDS transmitter will
> > propagate it to the panel, possibly mangling the input parameters or the
> > response. For panel operation state control requests the last entity in
> > the pipeline will likely want to control the video stream it receives on
> > its input. The video stream control calls will be propagated from right
> > to left as shown by the red arrows.
> > 
> > Every entity in the call stack can communicate with its hardware device
> > through the corresponding control bus, and/or control the video stream it
> > receives on its input.
> > 
> > This model allows filtering out modes and timings supported by the panel
> > but unsupported by the transmitter and mangling the modes and timings
> > according to the transmitter limitations. It has no complexity drawback
> > for simple devices, as the corresponding drivers can just forward the
> > calls directly. Similar use cases could exist for other control
> > operations than mode and information retrieval.
> > 
> > Discovery
> > ---------
> > 
> > Before being able to issue control requests, panel devices need to be
> > discovered and associated with the connected display controller(s).
> > 
> > Panels and display controllers are cross-dependent. There is no way around
> 
> Perhaps semantics, but I don't think they are cross-dependent. True,
> they will call ops in each other, but the dispc will get the pointer to
> the panel when the panel connects the dispc and the panel. And when the
> panel disconnects, dispc will lose the reference to the panel.
> 
> For me, cross-dependent would mean that dispc could have a reference to
> the panel regardless of what the panel does. In our case it is not so,
> and there's no harm with the dispc's reference to the panel, as the
> panel can remove it (almost) at any time.

The panel can ask the dispc to release its reference to the panel by calling 
the disconnect function, but it can't disappear all of a sudden. I will test 
the model for RFCv3.

> > that, as the display controller needs a reference to the panel to call
> > control requests in response to userspace API, and the panel needs a
> > reference to the display controller to call video stream control
> > functions (in addition to requiring generic resources such as clocks,
> > GPIOs or even regulators that could be provided by the display
> > controller).
> > 
> > As we can't probe the display controller and the panel together, a probe
> > order needs to be defined. The decision was to consider video sources as
> > resources and defer panel probing until all required resources (video
> > stream source, clocks, GPIOs, regulators and more) are available. Display
> > controller probing must succeed without the panel being available. This
> > mimics the hotpluggable monitor model (VGA, HDMI, DP) that doesn't
> > prevent display controllers from being successfully probed without a
> > connected monitor.
> > 
> > Our design goal is to handle panel discovery in a similar (if not
> > identical) way as HDMI/DP hotplug in order to implement a single display
> > discovery method in display controller drivers. This might not be
> > achievable, in which case we'll reconsider the design requirement.
> > 
> > When the display controller driver probes the device it will register the
> > video source(s) at the output of the display controller with the CDF core.
> > Those sources will be identified by the display controller dev_name() and
> > a source integer index. A new structure, likely called
> > display_entity_port, will be used to represent a source or sink video port
> > on a display entity.
> > 
> > Panel drivers will handle video sources as resources. They will retrieve
> > at probe time the video source the panel is connected to using a phandle
> > or a source name (depending on whether the platform uses DT). If the
> > source isn't available the probe function will return -EPROBE_DEFER.
> > 
> > In addition to the video stream control operations mentioned above, ports
> > will also expose a connect/disconnect operation used to notify them of
> > connection/disconnection events. After retrieving the connected video
> > source panel drivers call the connect/disconnect operation on the video
> > source to notify it that the panel is available.
> > 
> > When the panel is a pipeline made of more than a single entity, entities
> > are probed in video source to video sink order. Out-of-order probe will
> > result in probe deferral as explained above due to the video source not
> > being available, resulting in the source to sink probe order. Entities
> > should not call the connect operation of their video source at probe time
> > in that case, but only when their own connect operation for the video
> > source(s) they provide to the next entity is called by the next entity.
> > Connect operations will thus be called in sink to source order starting
> > at the entity at the end of the pipeline and going all the way back to
> > the display controller.
> > 
> > This notification system is a hotplug mechanism that replaces the display
> > entity notifier system from my previous RFC. Alan Cox rightly objected to
> > the notification system, arguing that such system-wide notifications were
> > used by FBDEV and very subject to abuse. I agree with his argument, this
> > new mechanism should result in a cleaner implementation as video sources
> > will only be notified of connect/disconnect events for the entity they're
> > connected to.
> > 
> > DBI/DSI busses
> > --------------
> > 
> > My RFC introduced a DBI bus using the Linux device and bus model. Its
> > purpose was multifold:
> > 
> > - Support (un)registration, matching and binding of devices and drivers.
> > 
> > - Provide power management (suspend/resume) services through the standard
> > Linux PM bus/device model, to make sure that DBI devices will be
> > suspended/resumed after/before their DBI bus controller.
> > 
> > - Provide bus services to access the connected devices. For DBI that took
> > the form of command read and data read/write functions.
> > 
> > A DSI bus implementation using the same model was also planned.
> > 
> > Tomi's patches removed the DBI bus and replaced DBI devices with platform
> > devices, moving the bus services implementation to the video source. DBI
> > and DSI busses are always either pure video or video + control busses
> > (although controlling a DPI panel through DSI is conceivable, nobody in
> > his right mind, not even a hardware engineer, would likely implement
> > that), so there will always be a video source to provide the DBI/DSI
> > control operations.
> > 
> > (Un)registration, matching and binding of devices and drivers is provided
> > by the platform device bus. Bus services to access connected devices are
> > provided by the video source, wrapper functions will be used to handle
> > serialization and locking, and possibly to offer higher level services
> > (such as DCS for instance).
> > 
> > One drawback of using the platform bus is that PM relationships between
> > the bus master and slaves will not be taken into account during
> > suspend/resume. However, a similar issue exists for DPI panels, and PM
> > relationships at the video bus level for DBI and DSI are not handled by
> > the DBI/DSI busses either. As we need a generic solution to handle those
> > (likely through early suspend and late resume), the same solution can be
> > used to handle DBI and DSI control bus PM relationships without requiring
> > a Linux DBI or DSI bus.
> > 
> > Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> > that they're not strictly needed and I will drop them.
> 
> I'd like to highlight two points I made about the bus model:
> 
> - If DBI is used only for video, there's no DBI bus. How to configure
> DBI in this case?

I was planning to handle DBI video configuration through video stream 
operations, so we agree here.

> - If DBI is used for control and video, we have two separate APIs for
> the bus. In theory it's possible to handle this, but in practice it may
> be impossible, especially for more complex busses like DSI.
> 
> I think both of those issues would make the bus model very difficult to
> implement. I have no idea how it could be done neatly. So as I see it,
> it's not only about "not strictly needed", but that the bus model
> wouldn't work without complex code.
> 
> > Entity model
> > ------------
> > 
> > Tomi's proposal split the display entities into video sources (struct
> > video_source) and display entities (struct display_entity). To make
> > generic pipeline operations easier, we agreed to merge the video source
> > and the display entity back. struct display_entity thus models a display
> > entity that has any number of sink and/or source ports, modeled as struct
> > display_entity_port instances.
> > 
> > Video stream operations will be exposed by the display entity as function
> > pointers and will take a port reference as argument (this could take the
> > form of struct display_entity * and port index, or struct
> > display_entity_port *).
>
> I'd very much like to have only one parameter to pass, as there may be lots
> of ops for some busses. Having two parameters to refer to the source is just
> extra code that has no extra benefit when using video source ops. Then
> again, having a separate port index parameter could perhaps be simpler to
> implement for the one handling the video source ops, so...
> 
> > The DVI and DSI operations model proposed by Tomi in this patch series
> > will be kept.
> > 
> > Points that we forgot to discuss
> > --------------------------------
> > 
> > - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
> 
> Ah, yes, we missed that. I think it's possible to use SINGLE_SHOT, but
> then it requires some kind of system to inform about finished update. Or
> we could, of course, decide that informing about the update is done in
> dispc-specific code, like handling VSYNC.
> 
> Hmm, except that won't probably work, as the panel (or any DSI device) may
> need to know if the DSI bus is currently used or not.

-- 
Regards,

Laurent Pinchart

[-- Attachment #1.2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 490 bytes --]

[-- Attachment #2: Type: text/plain, Size: 159 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
  2012-12-19 13:21   ` Laurent Pinchart
@ 2012-12-24  7:15     ` Vikas Sajjan
  0 siblings, 0 replies; 26+ messages in thread
From: Vikas Sajjan @ 2012-12-24  7:15 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-fbdev, Tomi Valkeinen, dri-devel

Hi Laurent,

On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi Tomi,
>
> On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
>> Hi,
>>
>> I have been testing Common Display Framework on OMAP, and making changes
>> that I've discussed in the posts I've sent in reply to the CDF series from
>> Laurent. While my CDF code is rather hacky and not at all ready, I wanted
>> to post the code for comments and also as a reference code to my posts.
>>
>> So here is CDF-T (Tomi-edition =).
>
> We've discussed your approach extensively face-to-face today so I won't review
> the patches in detail, but I will instead summarize our discussion to make
> sure we understood each other (and let other developers jump in).
>
> For the purpose of this discussion the term "display controller driver" (or
> just "display controller") refers to both the low-level driver layer that
> communicates directly with the display controller hardware, and to the higher-
> level driver layer that implements and exposes the userspace API (FBDEV, KMS
> and/or V4L). Those layers can be implemented in multiple kernel modules (such
> as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm,
> omapfb and omapvout for the API-level layer) or a single kernel module.
>
> Control model
> -------------
>
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model.png shows the CDF control model.
>
> The panel object depicted on the figure doesn't need to be a panel in the
> stricter sense but could be any chain of off-SoC (both on-board or off-board)
> display entities. It however helps thinking about it as a panel and doesn't
> hurt the model.
>
> The panel is controlled through abstract control requests. Those requests are
> used to retrieve panel information (such as the physical size, the supported
> video modes, EDID information, ...), set the panel configuration (such as the
> active video timings) or control the panel operation state (enabling/disabling
> the panel, controlling panel blanking and power management, ...). They are
> exposed by the panel using function pointers, and called by other kernel
> components in response to userspace requests (through the FBDEV, KMS or V4L2
> APIs) or in-kernel events (for instance hotplug notifications).
>
> In response to the control requests the panel driver will communicate with the
> panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown
> on the figure) and will control the video stream it receives on its input.
>
> The panel is connected at the hardware level to a video source (shown as a
> green hashed rectangle) that provides it with a video stream. The video stream
> flows from the video source to the panel and is directly controlled by its
> source, as shown by the green arrow from the display controller to the video
> stream. The video source exposes stream control operations as function
> pointers that are used by the panel to control the video stream, as shown by
> the green arrow from the panel to the video source.
>
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model-2.png shows the call flow across entities when the panel is a pipeline
> made of more than a single entity. In this case the SoC (on the left of the
> dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS
> transmitter. The output of the DSI to LVDS transmitter is connected to an LVDS
> panel (or, more accurately, an LVDS panel module made of an LVDS panel
> controller and a panel).
>
> The transmitter and panel module are seen by the display controller and
> userspace API implementations as a single entity that exposes control request
> operations and controls its input video stream. When a control request is
> performed (outermost green arrow) the DSI to LVDS transmitter will propagate
> it to the panel, possibly mangling the input parameters or the response. For
> panel operation state control requests the last entity in the pipeline will
> likely want to control the video stream it receives on its input. The video
> stream control calls will be propagated from right to left as shown by the red
> arrows.
>
> Every entity in the call stack can communicate with its hardware device
> through the corresponding control bus, and/or control the video stream it
> receives on its input.
>
> This model allows filtering out modes and timings supported by the panel but
> unsupported by the transmitter and mangling the modes and timings according to
> the transmitter limitations. It has no complexity drawback for simple devices,
> as the corresponding drivers can just forward the calls directly. Similar use
> cases could exist for other control operations than mode and information
> retrieval.
>
> Discovery
> ---------
>
> Before being able to issue control requests, panel devices need to be
> discovered and associated with the connected display controller(s).
>
> Panels and display controllers are cross-dependent. There is no way around
> that, as the display controller needs a reference to the panel to call control
> requests in response to userspace API, and the panel needs a reference to the
> display controller to call video stream control functions (in addition to
> requiring generic resources such as clocks, GPIOs or even regulators that
> could be provided by the display controller).
>
> As we can't probe the display controller and the panel together, a probe order
> needs to be defined. The decision was to consider video sources as resources
> and defer panel probing until all required resources (video stream source,
> clocks, GPIOs, regulators and more) are available. Display controller probing
> must succeed without the panel being available. This mimics the hotpluggable
> monitor model (VGA, HDMI, DP) that doesn't prevent display controllers from
> being successfully probed without a connected monitor.
>
> Our design goal is to handle panel discovery in a similar (if not identical)
> way as HDMI/DP hotplug in order to implement a single display discovery method
> in display controller drivers. This might not be achievable, in which case
> we'll reconsider the design requirement.
>
> When the display controller driver probes the device it will register the
> video source(s) at the output of the display controller with the CDF core.
> Those sources will be identified by the display controller dev_name() and a
> source integer index. A new structure, likely called display_entity_port, will
> be used to represent a source or sink video port on a display entity.
>
> Panel drivers will handle video sources as resources. They will retrieve at
> probe time the video source the panel is connected to using a phandle or a
> source name (depending on whether the platform uses DT). If the source isn't
> available the probe function will return -EPROBE_DEFER.
>
> In addition to the video stream control operations mentioned above, ports will
> also expose a connect/disconnect operation used to notify them of
> connection/disconnection events. After retrieving the connected video source
> panel drivers call the connect/disconnect operation on the video source to
> notify it that the panel is available.
>
> When the panel is a pipeline made of more than a single entity, entities are
> probed in video source to video sink order. Out-of-order probe will result in
> probe deferral as explained above due to the video source not being available,
> resulting in the source to sink probe order. Entities should not call the
> connect operation of their video source at probe time in that case, but only
> when their own connect operation for the video source(s) they provide to the
> next entity is called by the next entity. Connect operations will thus be
> called in sink to source order starting at the entity at the end of the
> pipeline and going all the way back to the display controller.
>
> This notification system is a hotplug mechanism that replaces the display
> entity notifier system from my previous RFC. Alan Cox rightly objected to the
> notification system, arguing that such system-wide notifications were used by
> FBDEV and very subject to abuse. I agree with his argument, this new mechanism
> should result in a cleaner implementation as video sources will only be
> notified of connect/disconnect events for the entity they're connected to.
>
> DBI/DSI busses
> --------------
>
> My RFC introduced a DBI bus using the Linux device and bus model. Its purpose
> was multifold:
>
> - Support (un)registration, matching and binding of devices and drivers.
>
> - Provide power management (suspend/resume) services through the standard
> Linux PM bus/device model, to make sure that DBI devices will be
> suspended/resumed after/before their DBI bus controller.
>
> - Provide bus services to access the connected devices. For DBI that took the
> form of command read and data read/write functions.
>
> A DSI bus implementation using the same model was also planned.
>
> Tomi's patches removed the DBI bus and replaced DBI devices with platform
> devices, moving the bus services implementation to the video source. DBI and
> DSI busses are always either pure video or video + control busses (although
> controlling a DPI panel through DSI is conceivable, nobody in his right mind,
> not even a hardware engineer, would likely implement that), so there will
> always be a video source to provide the DBI/DSI control operations.
>
> (Un)registration, matching and binding of devices and drivers is provided by
> the platform device bus. Bus services to access connected devices are provided
> by the video source, wrapper functions will be used to handle serialization
> and locking, and possibly to offer higher level services (such as DCS for
> instance).
>
> One drawback of using the platform bus is that PM relationships between the
> bus master and slaves will not be taken into account during suspend/resume.
> However, a similar issue exists for DPI panels, and PM relationships at the
> video bus level for DBI and DSI are not handled by the DBI/DSI busses either.
> As we need a generic solution to handle those (likely through early suspend
> and late resume), the same solution can be used to handle DBI and DSI control
> bus PM relationships without requiring a Linux DBI or DSI bus.
>
> Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> that they're not strictly needed and I will drop them.
>
> Entity model
> ------------
>
> Tomi's proposal split the display entities into video sources (struct
> video_source) and display entities (struct display_entity). To make generic
> pipeline operations easier, we agreed to merge the video source and the
> display entity back. struct display_entity thus models a display entity that
> has any number of sink and/or source ports, modeled as struct
> display_entity_port instances.
>
Looking at Tomi's patchset, he has treated the panel as a "display entity"
and the MIPI DSI block as a "video source entity". If we are planning to merge
the two back, how should we treat the panel and MIPI DSI? I mean, should we
consider the panel and MIPI DSI as two different display entities, i.e.
should each of these drivers register a display entity with CDF during its
probe?

> Video stream operations will be exposed by the display entity as function
> pointers and will take a port reference as argument (this could take the form
> of struct display_entity * and port index, or struct display_entity_port *).
> The DVI and DSI operations model proposed by Tomi in this patch series will be
> kept.
>

So you mean you will be adding these "ops" as part of "struct display_entity"
rather than as video source ops like these:

static const struct dsi_video_source_ops dsi_dsi_ops = {

	.update = dsi_bus_update,
	.dcs_write = dsi_bus_dcs_write,
	.dcs_read = dsi_bus_dcs_read,
	.configure_pins = dsi_bus_configure_pins,
	.set_clocks = dsi_bus_set_clocks,
	.enable = dsi_bus_enable,
	.disable = dsi_bus_disable,
	.set_size = dsi_bus_set_size,
	.set_operation_mode = dsi_bus_set_operation_mode,
	.set_pixel_format = dsi_bus_set_pixel_format,
	.enable_hs = dsi_bus_enable_hs,
};

If you can post the CDF v3 patches early, it will give us more clarity
with respect to the discussions you and Tomi had.

> Points that we forgot to discuss
> --------------------------------
>
> - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
>
> I'll look into that.
>
> Please let me know if I've forgotten anything.
>
> --
> Regards,
>
> Laurent Pinchart


Regards
Vikas Sajjan
Samsung India Software Operations Pvt Ltd.
Bangalore , India.
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
@ 2012-12-24  7:15     ` Vikas Sajjan
  0 siblings, 0 replies; 26+ messages in thread
From: Vikas Sajjan @ 2012-12-24  7:15 UTC (permalink / raw)
  To: Laurent Pinchart; +Cc: linux-fbdev, Tomi Valkeinen, dri-devel

Hi Laurent,

On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi Tomi,
>
> On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
>> Hi,
>>
>> I have been testing Common Display Framework on OMAP, and making changes
>> that I've discussed in the posts I've sent in reply to the CDF series from
>> Laurent. While my CDF code is rather hacky and not at all ready, I wanted
>> to post the code for comments and also as a reference code to my posts.
>>
>> So here is CDF-T (Tomi-edition =).
>
> We've discussed your approach extensively face-to-face today so I won't review
> the patches in detail, but I will instead summarize our discussion to make
> sure we understood each other (and let other developers jump in).
>
> For the purpose of this discussion the term "display controller driver" (or
> just "display controller") refers to both the low-level driver layer that
> communicates directly with the display controller hardware, and to the higher-
> level driver layer that implements and exposes the userspace API (FBDEV, KMS
> and/or V4L). Those layers can be implemented in multiple kernel modules (such
> as in the OMAP DSS case, with omapdss for the low-level layer and omapdrm,
> omapfb and omapvout for the API-level layer) or a single kernel module.
>
> Control model
> -------------
>
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model.png shows the CDF control model.
>
> The panel object depicted on the figure doesn't need to be a panel in the
> stricter sense but could be any chain of off-SoC (both on-board or off-board)
> display entities. It however helps thinking about it as a panel and doesn't
> hurt the model.
>
> The panel is controlled through abstract control requests. Those requests are
> used to retrieve panel information (such as the physical size, the supported
> video modes, EDID information, ...), set the panel configuration (such as the
> active video timings) or control the panel operation state (enabling/disabling
> the panel, controlling panel blanking and power management, ...). They are
> exposed by the panel using function pointers, and called by other kernel
> components in response to userspace requests (through the FBDEV, KMS or V4L2
> APIs) or in-kernel events (for instance hotplug notifications).
>
> In response to the control requests the panel driver will communicate with the
> panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ..., not shown
> on the figure) and will control the video stream it receives on its input.
>
> The panel is connected at the hardware level to a video source (shown as a
> green hashed rectangle) that provides it with a video stream. The video stream
> flows from the video source to the panel and is directly controlled by its
> source, as shown by the green arrow from the display controller to the video
> stream. The video source exposes stream control operations as function
> pointers that are used by the panel to control the video stream, as shown by
> the green arrow from the panel to the video source.
>
> The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> model-2.png shows the call flow across entities when the panel is a pipeline
> made of more than a single entity. In this case the SoC (on the left of the
> dashed line) outputs a video stream on a DSI bus connected to a DSI to LVDS
> transmitter. The output of the DSI to LVDS transmitter is connected to an LVDS
> panel (or, more accurately, an LVDS panel module made of an LVDS panel
> controller and a panel).
>
> The transmitter and panel module are seen by the display controller and
> userspace API implementations as a single entity that exposes control request
> operations and controls its input video stream. When a control request is
> performed (outermost green arrow) the DSI to LVDS transmitter will propagate
> it to the panel, possibly mangling the input parameters or the response. For
> panel operation state control requests the last entity in the pipeline will
> likely want to control the video stream it receives on its input. The video
> stream control calls will be propagated from right to left as shown by the red
> arrows.
>
> Every entity in the call stack can communicate with its hardware device
> through the corresponding control bus, and/or control the video stream it
> receives on its input.
>
> This model allows filtering out modes and timings supported by the panel but
> unsupported by the transmitter and mangling the modes and timings according to
> the transmitter limitations. It has no complexity drawback for simple devices,
> as the corresponding drivers can just forward the calls directly. Similar use
> cases could exist for other control operations than mode and information
> retrieval.
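[The mode filtering and mangling described above can be pictured with a small, self-contained C model. All struct names, the ops layout and the 100 MHz limit below are illustrative assumptions, not the actual CDF API.]

```c
#include <assert.h>

/*
 * Toy userspace model of control-request propagation: the transmitter
 * forwards get_modes to the panel behind it and drops modes it cannot
 * carry.  Names are guesses, not real CDF code.
 */

struct video_mode {
	unsigned int width;
	unsigned int height;
	unsigned int pixclock;		/* kHz */
};

struct display_entity;

struct display_entity_ops {
	/* Fill in supported modes, return how many were written. */
	int (*get_modes)(struct display_entity *e,
			 struct video_mode *modes, int max_modes);
};

struct display_entity {
	const struct display_entity_ops *ops;
	void *priv;			/* here: the downstream entity */
};

/* A panel supporting two fixed modes. */
static int panel_get_modes(struct display_entity *e,
			   struct video_mode *modes, int max_modes)
{
	static const struct video_mode panel_modes[] = {
		{ 1280,  720,  74250 },
		{ 1920, 1080, 148500 },
	};
	int n = 0;

	while (n < 2 && n < max_modes) {
		modes[n] = panel_modes[n];
		n++;
	}
	return n;
}

/*
 * A hypothetical DSI-to-LVDS transmitter limited to a 100 MHz pixel
 * clock: it forwards the request and filters the response, which is
 * the "mangling" the text describes.
 */
static int transmitter_get_modes(struct display_entity *e,
				 struct video_mode *modes, int max_modes)
{
	struct display_entity *panel = e->priv;
	int n, i, out = 0;

	n = panel->ops->get_modes(panel, modes, max_modes);
	for (i = 0; i < n; i++)
		if (modes[i].pixclock <= 100000)
			modes[out++] = modes[i];
	return out;
}

static const struct display_entity_ops panel_entity_ops = {
	.get_modes = panel_get_modes,
};

static const struct display_entity_ops transmitter_entity_ops = {
	.get_modes = transmitter_get_modes,
};
```

[A simple device whose driver just forwards the call without the filter loop incurs no extra complexity, as noted above.]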
>
> Discovery
> ---------
>
> Before being able to issue control requests, panel devices need to be
> discovered and associated with the connected display controller(s).
>
> Panels and display controllers are cross-dependent. There is no way around
> that, as the display controller needs a reference to the panel to call control
> requests in response to userspace API, and the panel needs a reference to the
> display controller to call video stream control functions (in addition to
> requiring generic resources such as clocks, GPIOs or even regulators that
> could be provided by the display controller).
>
> As we can't probe the display controller and the panel together, a probe order
> needs to be defined. The decision was to consider video sources as resources
> and defer panel probing until all required resources (video stream source,
> clocks, GPIOs, regulators and more) are available. Display controller probing
> must succeed without the panel being available. This mimics the hotpluggable
> monitor model (VGA, HDMI, DP) that doesn't prevent display controllers from
> being successfully probed without a connected monitor.
>
> Our design goal is to handle panel discovery in a similar (if not identical)
> way as HDMI/DP hotplug in order to implement a single display discovery method
> in display controller drivers. This might not be achievable, in which case
> we'll reconsider the design requirement.
>
> When the display controller driver probes the device it will register the
> video source(s) at the output of the display controller with the CDF core.
> Those sources will be identified by the display controller dev_name() and a
> source integer index. A new structure, likely called display_entity_port, will
> be used to represent a source or sink video port on a display entity.
>
> Panel drivers will handle video sources as resources. They will retrieve at
> probe time the video source the panel is connected to using a phandle or a
> source name (depending on whether the platform uses DT). If the source isn't
> available the probe function will return -EPROBE_DEFER.
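[A rough userspace model of that deferral scheme is sketched below. cdf_register_source(), cdf_find_source() and the panel_probe() signature are invented for illustration; only the -EPROBE_DEFER behaviour mirrors the text.]

```c
#include <string.h>

#define EPROBE_DEFER 517	/* same value the kernel uses */

/* Toy registry standing in for the CDF core; all names are guesses. */
struct video_source {
	const char *name;	/* dev_name() of the DC plus output index */
};

#define MAX_SOURCES 8
static struct video_source *sources[MAX_SOURCES];
static int num_sources;

/* Display controller probe: register each output video source. */
static void cdf_register_source(struct video_source *src)
{
	if (num_sources < MAX_SOURCES)
		sources[num_sources++] = src;
}

static struct video_source *cdf_find_source(const char *name)
{
	for (int i = 0; i < num_sources; i++)
		if (strcmp(sources[i]->name, name) == 0)
			return sources[i];
	return NULL;
}

/*
 * Panel probe: the video source is treated as a resource.  If it is
 * not registered yet, defer so the core retries the probe after the
 * display controller has probed.
 */
static int panel_probe(const char *source_name)
{
	struct video_source *src = cdf_find_source(source_name);

	if (src == NULL)
		return -EPROBE_DEFER;
	/* ... acquire clocks, GPIOs, regulators, register the panel ... */
	return 0;
}
```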
>
> In addition to the video stream control operations mentioned above, ports will
> also expose a connect/disconnect operation used to notify them of
> connection/disconnection events. After retrieving the connected video source
> panel drivers call the connect/disconnect operation on the video source to
> notify it that the panel is available.
>
> When the panel is a pipeline made of more than a single entity, entities are
> probed in video source to video sink order. Out-of-order probe will result in
> probe deferral as explained above due to the video source not being available,
> resulting in the source to sink probe order. Entities should not call the
> connect operation of their video source at probe time in that case, but only
> when their own connect operation for the video source(s) they provide to the
> next entity is called by the next entity. Connect operations will thus be
> called in sink to source order starting at the entity at the end of the
> pipeline and going all the way back to the display controller.
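[The sink-to-source connect order can be modelled with a few lines of C. entity_connect() and the chain layout are illustrative only; the point is that each entity announces itself to its own video source only after the entity downstream of it has announced itself.]

```c
/*
 * Toy model of the connect-notification order: the last entity in the
 * pipeline calls connect on its video source, which then connects to
 * its own source, back to the display controller (source == NULL).
 */

struct entity {
	const char *name;
	struct entity *source;		/* upstream video source */
};

#define LOG_MAX 8
static const char *connect_log[LOG_MAX];
static int connect_log_len;

/* "connect" operation: record the call, then notify our own source. */
static void entity_connect(struct entity *e)
{
	if (connect_log_len < LOG_MAX)
		connect_log[connect_log_len++] = e->name;
	if (e->source)
		entity_connect(e->source);
}
```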
>
> This notification system is a hotplug mechanism that replaces the display
> entity notifier system from my previous RFC. Alan Cox rightly objected to the
> notification system, arguing that such system-wide notifications were used by
> FBDEV and very subject to abuse. I agree with his argument, this new mechanism
> should result in a cleaner implementation as video sources will only be
> notified of connect/disconnect events for the entity they're connected to.
>
> DBI/DSI busses
> --------------
>
> My RFC introduced a DBI bus using the Linux device and bus model. Its purpose
> was multifold:
>
> - Support (un)registration, matching and binding of devices and drivers.
>
> - Provide power management (suspend/resume) services through the standard
> Linux PM bus/device model, to make sure that DBI devices will be
> suspended/resumed after/before their DBI bus controller.
>
> - Provide bus services to access the connected devices. For DBI that took the
> form of command read and data read/write functions.
>
> A DSI bus implementation using the same model was also planned.
>
> Tomi's patches removed the DBI bus and replaced DBI devices with platform
> devices, moving the bus services implementation to the video source. DBI and
> DSI busses are always either pure video or video + control busses (although
> controlling a DPI panel through DSI is conceivable, nobody in his right mind,
> not even a hardware engineer, would likely implement that), so there will
> always be a video source to provide the DBI/DSI control operations.
>
> (Un)registration, matching and binding of devices and drivers is provided by
> the platform device bus. Bus services to access connected devices are provided
> by the video source, wrapper functions will be used to handle serialization
> and locking, and possibly to offer higher level services (such as DCS for
> instance).
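[The wrapper idea can be sketched as below. dsi_dcs_write() and the dsi_bus layout are invented for illustration, and a userspace pthread mutex stands in for the kernel mutex that would actually provide the serialization.]

```c
#include <pthread.h>

/*
 * Illustrative bus-service wrapper: callers use dsi_dcs_write(),
 * which serializes access to the low-level op exposed by the video
 * source.  Names and layout are assumptions, not CDF code.
 */

struct dsi_bus {
	pthread_mutex_t lock;
	int (*dcs_write_raw)(struct dsi_bus *bus,
			     const unsigned char *buf, int len);
	int raw_writes;			/* demo counter */
};

/* Higher-level DCS service: lock, forward to the video source, unlock. */
static int dsi_dcs_write(struct dsi_bus *bus,
			 const unsigned char *buf, int len)
{
	int ret;

	pthread_mutex_lock(&bus->lock);
	ret = bus->dcs_write_raw(bus, buf, len);
	pthread_mutex_unlock(&bus->lock);
	return ret;
}

/* Demo low-level implementation provided by the video source. */
static int demo_dcs_write_raw(struct dsi_bus *bus,
			      const unsigned char *buf, int len)
{
	(void)buf;
	bus->raw_writes++;
	return len;
}
```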
>
> One drawback of using the platform bus is that PM relationships between the
> bus master and slaves will not be taken into account during suspend/resume.
> However, a similar issue exists for DPI panels, and PM relationships at the
> video bus level for DBI and DSI are not handled by the DBI/DSI busses either.
> As we need a generic solution to handle those (likely through early suspend
> and late resume), the same solution can be used to handle DBI and DSI control
> bus PM relationships without requiring a Linux DBI or DSI bus.
>
> Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> that they're not strictly needed and I will drop them.
>
> Entity model
> ------------
>
> Tomi's proposal split the display entities into video sources (struct
> video_source) and display entities (struct display_entity). To make generic
> pipeline operations easier, we agreed to merge the video source and the
> display entity back. struct display_entity thus models a display entity that
> has any number of sink and/or source ports, modeled as struct
> display_entity_port instances.
>
Looking at Tomi's patchset, he has treated the panel as a "display entity"
and MIPI DSI as a "video source entity". So if we are planning to merge them
back, how should we treat the panel and MIPI DSI? I mean, should we consider
the panel and MIPI DSI as two different display entities, i.e. during the
probe of each of these drivers, should we register a display entity with
CDF?

> Video stream operations will be exposed by the display entity as function
> pointers and will take a port reference as argument (this could take the form
> of struct display_entity * and port index, or struct display_entity_port *).
> The DVI and DSI operations model proposed by Tomi in this patch series will be
> kept.
>

So you mean you will be adding these "ops" as part of "struct
display_entity" rather than video source ops such as:

static const struct dsi_video_source_ops dsi_dsi_ops = {

	.update = dsi_bus_update,
	.dcs_write = dsi_bus_dcs_write,
	.dcs_read = dsi_bus_dcs_read,
	.configure_pins = dsi_bus_configure_pins,
	.set_clocks = dsi_bus_set_clocks,
	.enable = dsi_bus_enable,
	.disable = dsi_bus_disable,
	.set_size = dsi_bus_set_size,
	.set_operation_mode = dsi_bus_set_operation_mode,
	.set_pixel_format = dsi_bus_set_pixel_format,
	.enable_hs = dsi_bus_enable_hs,
};

If you can post the CDF v3 patches early, it will give us more clarity
w.r.t. the discussions you and Tomi had.

> Points that we forgot to discuss
> --------------------------------
>
> - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
>
> I'll look into that.
>
> Please let me know if I've forgotten anything.
>
> --
> Regards,
>
> Laurent Pinchart


Regards
Vikas Sajjan
Samsung India Software Operations Pvt Ltd.
Bangalore, India.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
  2012-12-24  7:15     ` Vikas Sajjan
@ 2012-12-26 12:14       ` Laurent Pinchart
  -1 siblings, 0 replies; 26+ messages in thread
From: Laurent Pinchart @ 2012-12-26 12:14 UTC (permalink / raw)
  To: Vikas Sajjan; +Cc: linux-fbdev, Tomi Valkeinen, dri-devel

Hi Vikas,

On Monday 24 December 2012 12:33:50 Vikas Sajjan wrote:
> On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart wrote:
> > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> >> Hi,
> >> 
> >> I have been testing Common Display Framework on OMAP, and making changes
> >> that I've discussed in the posts I've sent in reply to the CDF series
> >> from Laurent. While my CDF code is rather hacky and not at all ready, I
> >> wanted to post the code for comments and also as a reference code to my
> >> posts.
> >> 
> >> So here is CDF-T (Tomi-edition =).
> > 
> > We've discussed your approach extensively face-to-face today so I won't
> > review the patches in detail, but I will instead summarize our discussion
> > to make sure we understood each other (and let other developers jump in).
> > 
> > For the purpose of this discussion the term "display controller driver"
> > (or just "display controller") refers to both the low-level driver layer
> > that communicates directly with the display controller hardware, and to
> > the higher- level driver layer that implements and exposes the userspace
> > API (FBDEV, KMS and/or V4L). Those layers can be implemented in multiple
> > kernel modules (such as in the OMAP DSS case, with omapdss for the
> > low-level layer and omapdrm, omapfb and omapvout for the API-level layer)
> > or a single kernel module.
> > 
> > Control model
> > -------------
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model.png shows the CDF control model.
> > 
> > The panel object depicted on the figure doesn't need to be a panel in the
> > stricter sense but could be any chain of off-SoC (both on-board or
> > off-board) display entities. It nevertheless helps to think of it as a
> > panel, and doing so doesn't hurt the model.
> > 
> > The panel is controlled through abstract control requests. Those requests
> > are used to retrieve panel information (such as the physical size, the
> > supported video modes, EDID information, ...), set the panel
> > configuration (such as the active video timings) or control the panel
> > operation state (enabling/disabling the panel, controlling panel blanking
> > and power management, ...). They are exposed by the panel using function
> > pointers, and called by other kernel components in response to userspace
> > requests (through the FBDEV, KMS or V4L2 APIs) or in-kernel events (for
> > instance hotplug notifications).
> > 
> > In response to the control requests the panel driver will communicate with
> > the panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ...,
> > not shown on the figure) and will control the video stream it receives on
> > its input.
> > 
> > The panel is connected at the hardware level to a video source (shown as a
> > green hashed rectangle) that provides it with a video stream. The video
> > stream flows from the video source to the panel and is directly
> > controlled by its source, as shown by the green arrow from the display
> > controller to the video stream. The video source exposes stream control
> > operations as function pointers that are used by the panel to control the
> > video stream, as shown by the green arrow from the panel to the video
> > source.
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model-2.png shows the call flow across entities when the panel is a
> > pipeline made of more than a single entity. In this case the SoC (on the
> > left of the dashed line) outputs a video stream on a DSI bus connected to
> > a DSI to LVDS transmitter. The output of the DSI to LVDS transmitter is
> > connected to an LVDS panel (or, more accurately, an LVDS panel module
> > made of an LVDS panel controller and a panel).
> > 
> > The transmitter and panel module are seen by the display controller and
> > userspace API implementations as a single entity that exposes control
> > request operations and controls its input video stream. When a control
> > request is performed (outermost green arrow) the DSI to LVDS transmitter
> > will propagate it to the panel, possibly mangling the input parameters or
> > the response. For panel operation state control requests the last entity
> > in the pipeline will likely want to control the video stream it receives
> > on its input. The video stream control calls will be propagated from
> > right to left as shown by the red arrows.
> > 
> > Every entity in the call stack can communicate with its hardware device
> > through the corresponding control bus, and/or control the video stream it
> > receives on its input.
> > 
> > This model allows filtering out modes and timings supported by the panel
> > but unsupported by the transmitter and mangling the modes and timings
> > according to the transmitter limitations. It has no complexity drawback
> > for simple devices, as the corresponding drivers can just forward the
> > calls directly. Similar use cases could exist for other control
> > operations than mode and information retrieval.
> > 
> > Discovery
> > ---------
> > 
> > Before being able to issue control requests, panel devices need to be
> > discovered and associated with the connected display controller(s).
> > 
> > Panels and display controllers are cross-dependent. There is no way around
> > that, as the display controller needs a reference to the panel to call
> > control requests in response to userspace API, and the panel needs a
> > reference to the display controller to call video stream control
> > functions (in addition to requiring generic resources such as clocks,
> > GPIOs or even regulators that could be provided by the display
> > controller).
> > 
> > As we can't probe the display controller and the panel together, a probe
> > order needs to be defined. The decision was to consider video sources as
> > resources and defer panel probing until all required resources (video
> > stream source, clocks, GPIOs, regulators and more) are available. Display
> > controller probing must succeed without the panel being available. This
> > mimics the hotpluggable monitor model (VGA, HDMI, DP) that doesn't
> > prevent display controllers from being successfully probed without a
> > connected monitor.
> > 
> > Our design goal is to handle panel discovery in a similar (if not
> > identical) way as HDMI/DP hotplug in order to implement a single display
> > discovery method in display controller drivers. This might not be
> > achievable, in which case we'll reconsider the design requirement.
> > 
> > When the display controller driver probes the device it will register the
> > video source(s) at the output of the display controller with the CDF core.
> > Those sources will be identified by the display controller dev_name() and
> > a source integer index. A new structure, likely called
> > display_entity_port, will be used to represent a source or sink video port
> > on a display entity.
> > 
> > Panel drivers will handle video sources as resources. They will retrieve
> > at probe time the video source the panel is connected to using a phandle
> > or a source name (depending on whether the platform uses DT). If the
> > source isn't available the probe function will return -EPROBE_DEFER.
> > 
> > In addition to the video stream control operations mentioned above, ports
> > will also expose a connect/disconnect operation used to notify them of
> > connection/disconnection events. After retrieving the connected video
> > source panel drivers call the connect/disconnect operation on the video
> > source to notify it that the panel is available.
> > 
> > When the panel is a pipeline made of more than a single entity, entities
> > are probed in video source to video sink order. Out-of-order probe will
> > result in probe deferral as explained above due to the video source not
> > being available, resulting in the source to sink probe order. Entities
> > should not call the connect operation of their video source at probe time
> > in that case, but only when their own connect operation for the video
> > source(s) they provide to the next entity is called by the next entity.
> > Connect operations will thus be called in sink to source order starting
> > at the entity at the end of the pipeline and going all the way back to
> > the display controller.
> > 
> > This notification system is a hotplug mechanism that replaces the display
> > entity notifier system from my previous RFC. Alan Cox rightly objected to
> > the notification system, arguing that such system-wide notifications were
> > used by FBDEV and very subject to abuse. I agree with his argument, this
> > new mechanism should result in a cleaner implementation as video sources
> > will only be notified of connect/disconnect events for the entity they're
> > connected to.
> > 
> > DBI/DSI busses
> > --------------
> > 
> > My RFC introduced a DBI bus using the Linux device and bus model. Its
> > purpose was multifold:
> > 
> > - Support (un)registration, matching and binding of devices and drivers.
> > 
> > - Provide power management (suspend/resume) services through the standard
> > Linux PM bus/device model, to make sure that DBI devices will be
> > suspended/resumed after/before their DBI bus controller.
> > 
> > - Provide bus services to access the connected devices. For DBI that took
> > the form of command read and data read/write functions.
> > 
> > A DSI bus implementation using the same model was also planned.
> > 
> > Tomi's patches removed the DBI bus and replaced DBI devices with platform
> > devices, moving the bus services implementation to the video source. DBI
> > and DSI busses are always either pure video or video + control busses
> > (although controlling a DPI panel through DSI is conceivable, nobody in
> > his right mind, not even a hardware engineer, would likely implement
> > that), so there will always be a video source to provide the DBI/DSI
> > control operations.
> > 
> > (Un)registration, matching and binding of devices and drivers is provided
> > by the platform device bus. Bus services to access connected devices are
> > provided by the video source, wrapper functions will be used to handle
> > serialization and locking, and possibly to offer higher level services
> > (such as DCS for instance).
> > 
> > One drawback of using the platform bus is that PM relationships between
> > the bus master and slaves will not be taken into account during
> > suspend/resume. However, a similar issue exists for DPI panels, and PM
> > relationships at the video bus level for DBI and DSI are not handled by
> > the DBI/DSI busses either. As we need a generic solution to handle those
> > (likely through early suspend and late resume), the same solution can be
> > used to handle DBI and DSI control bus PM relationships without requiring
> > a Linux DBI or DSI bus.
> > 
> > Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> > that they're not strictly needed and I will drop them.
> > 
> > Entity model
> > ------------
> > 
> > Tomi's proposal split the display entities into video sources (struct
> > video_source) and display entities (struct display_entity). To make
> > generic pipeline operations easier, we agreed to merge the video source
> > and the display entity back. struct display_entity thus models a display
> > entity that has any number of sink and/or source ports, modeled as struct
> > display_entity_port instances.
> 
> Looking at Tomi's patchset, he has treated the panel as a "display entity"
> and MIPI DSI as a "video source entity". So if we are planning to merge them
> back, how should we treat the panel and MIPI DSI? I mean, should we consider
> the panel and MIPI DSI as two different display entities, i.e. during the
> probe of each of these drivers, should we register a display entity with
> CDF?

Both the DSI encoder and the DSI panel would be modeled as display entities. 
The DSI encoder would have a source port that models its DSI video source, and 
the DSI panel would have a sink port.
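[That split could look roughly like this in C. display_entity_port is named after the discussion above ("a new structure, likely called display_entity_port"), but its exact layout and the helper are guesses.]

```c
#include <stddef.h>

/*
 * Sketch of the merged entity model applied to the DSI case: both the
 * DSI encoder and the DSI panel are display entities; the encoder has
 * a source port, the panel a sink port.  Field names are assumptions.
 */

enum display_entity_port_type {
	DISPLAY_ENTITY_PORT_SINK,
	DISPLAY_ENTITY_PORT_SOURCE,
};

struct display_entity_port {
	enum display_entity_port_type type;
	unsigned int index;
};

struct display_entity {
	const char *name;
	struct display_entity_port *ports;
	unsigned int num_ports;
};

/* Find the first port of the requested type, or NULL. */
static struct display_entity_port *
display_entity_get_port(struct display_entity *e,
			enum display_entity_port_type type)
{
	for (unsigned int i = 0; i < e->num_ports; i++)
		if (e->ports[i].type == type)
			return &e->ports[i];
	return NULL;
}

/* The DSI encoder exposes its DSI output as a source port. */
static struct display_entity_port dsi_encoder_ports[] = {
	{ DISPLAY_ENTITY_PORT_SOURCE, 0 },
};

/* The DSI panel has a single sink port fed by the encoder. */
static struct display_entity_port dsi_panel_ports[] = {
	{ DISPLAY_ENTITY_PORT_SINK, 0 },
};
```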

> > Video stream operations will be exposed by the display entity as function
> > pointers and will take a port reference as argument (this could take the
> > form of struct display_entity * and port index, or struct
> > display_entity_port *). The DVI and DSI operations model proposed by Tomi
> > in this patch series will be kept.
> 
> So you mean you will be adding these "ops" as part of "struct
> display_entity" rather than video source ops such as:

That's correct.

> static const struct dsi_video_source_ops dsi_dsi_ops = {
> 
> 	.update = dsi_bus_update,
> 	.dcs_write = dsi_bus_dcs_write,
> 	.dcs_read = dsi_bus_dcs_read,
> 	.configure_pins = dsi_bus_configure_pins,
> 	.set_clocks = dsi_bus_set_clocks,
> 	.enable = dsi_bus_enable,
> 	.disable = dsi_bus_disable,
> 	.set_size = dsi_bus_set_size,
> 	.set_operation_mode = dsi_bus_set_operation_mode,
> 	.set_pixel_format = dsi_bus_set_pixel_format,
> 	.enable_hs = dsi_bus_enable_hs,
> };
> 
> If you can post the CDF v3 patches early, it will give us more clarity
> w.r.t. the discussions you and Tomi had.

I'm working on that.

> > Points that we forgot to discuss
> > --------------------------------
> > 
> > - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
> > 
> > I'll look into that.
> > 
> > Please let me know if I've forgotten anything.

-- 
Regards,

Laurent Pinchart


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
@ 2012-12-26 12:14       ` Laurent Pinchart
  0 siblings, 0 replies; 26+ messages in thread
From: Laurent Pinchart @ 2012-12-26 12:14 UTC (permalink / raw)
  To: Vikas Sajjan; +Cc: linux-fbdev, Tomi Valkeinen, dri-devel

Hi Vikas,

On Monday 24 December 2012 12:33:50 Vikas Sajjan wrote:
> On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart wrote:
> > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> >> Hi,
> >> 
> >> I have been testing Common Display Framework on OMAP, and making changes
> >> that I've discussed in the posts I've sent in reply to the CDF series
> >> from Laurent. While my CDF code is rather hacky and not at all ready, I
> >> wanted to post the code for comments and also as a reference code to my
> >> posts.
> >> 
> >> So here is CDF-T (Tomi-edition =).
> > 
> > We've discussed your approach extensively face-to-face today so I won't
> > review the patches in detail, but I will instead summarize our discussion
> > to make sure we understood each other (and let other developers jump in).
> > 
> > For the purpose of this discussion the term "display controller driver"
> > (or just "display controller") refer to both the low-level driver layer
> > that communicates directly with the display controller hardware, and to
> > the higher- level driver layer that implements and exposes the userspace
> > API (FBDEV, KMS and/or V4L). Those layers can be implemented in multiple
> > kernel modules (such as in the OMAP DSS case, with omapdss for the
> > low-level layer and omapdrm, omapfb and omapvout for the API-level layer)
> > or a single kernel module.
> > 
> > Control model
> > -------------
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model.png shows the CDF control model.
> > 
> > The panel object depicted on the figure doesn't need to be a panel in the
> > stricter sense but could be any chain of off-SoC (both on-board or
> > off-board) display entities. It however helps thinking about it as a
> > panel and doesn't hurt the model.
> > 
> > The panel is controlled through abstract control requests. Those requests
> > are used to retrieve panel information (such as the physical size, the
> > supported video modes, EDID information, ...), set the panel
> > configuration (such as the active video timings) or control the panel
> > operation state (enabling/disabling the panel, controlling panel blanking
> > and power management, ...). They are exposed by the panel using function
> > pointers, and called by other kernel components in response to userspace
> > requests (through the FBDEV, KMS or V4L2 APIs) or in-kernel events (for
> > instance hotplug notifications).
> > 
> > In response to the control requests the panel driver will communicate with
> > the panel through the panel control bus (I2C, SPI, DBI, DSI, GPIO, ...,
> > not shown on the figure) and will control the video stream it receives on
> > its input.
> > 
> > The panel is connected at the hardware level to a video source (shown as a
> > green hashed rectangle) that provides it with a video stream. The video
> > stream flows from the video source to the panel and is directly
> > controlled by its source, as shown by the green arrow from the display
> > controller to the video stream. The video source exposes stream control
> > operations as function pointers that are used by the panel to control the
> > video stream, as shown by the green arrow from the panel to the video
> > source.
> > 
> > The figure at http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > model-2.png shows the call flow across entities when the panel is a
> > pipeline made of more than a single entity. In this case the SoC (on the
> > left of the dashed line) outputs a video stream on a DSI bus connected to
> > a DSI to LVDS transmitter. The output of the DSI to LVDS transmitter is
> > connected to an LVDS panel (or, more accurately, an LVDS panel module
> > made of an LVDS panel controller and a panel).
> > 
> > The transmitter and panel module are seen by the display controller and
> > userspace API implementations as a single entity that exposes control
> > request operations and controls its input video stream. When a control
> > request is performed (outermost green arrow) the DSI to LVDS transmitter
> > will propagate it to the panel, possibly mangling the input parameters or
> > the response. For panel operation state control requests the last entity
> > in the pipeline will likely want to control the video stream it receives
> > on its input. The video stream control calls will be propagated from
> > right to left as shown by the red arrows.
> > 
> > Every entity in the call stack can communicate with its hardware device
> > through the corresponding control bus, and/or control the video stream it
> > receives on its input.
> > 
> > This model allows filtering out modes and timings supported by the panel
> > but unsupported by the transmitter and mangling the modes and timings
> > according to the transmitter limitations. It has no complexity drawback
> > for simple devices, as the corresponding drivers can just forward the
> > calls directly. Similar use cases could exist for other control
> > operations than mode and information retrieval.
> > 
> > Discovery
> > ---------
> > 
> > Before being able to issue control requests, panel devices need to be
> > discovered and associated with the connected display controller(s).
> > 
> > Panels and display controllers are cross-dependent. There is no way around
> > that, as the display controller needs a reference to the panel to call
> > control requests in response to userspace API, and the panel needs a
> > reference to the display controller to call video stream control
> > functions (in addition to requiring generic resources such as clocks,
> > GPIOs or even regulators that could be provided by the display
> > controller).
> > 
> > As we can't probe the display controller and the panel together, a probe
> > order needs to be defined. The decision was to consider video sources as
> > resources and defer panel probing until all required resources (video
> > stream source, clocks, GPIOs, regulators and more) are available. Display
> > controller probing must succeed without the panel being available. This
> > mimicks the hotpluggable monitor model (VGA, HDMI, DP) that doesn't
> > prevent display controllers from being successfully probed without a
> > connected monitor.
> > 
> > Our design goal is to handle panel discovery in a similar (if not
> > identical) way as HDMI/DP hotplug in order to implement a single display
> > discovery method in display controller drivers. This might not be
> > achievable, in which case we'll reconsider the design requirement.
> > 
> > When the display controller driver probes the device it will register the
> > video source(s) at the output of the display controller with the CDF core.
> > Those sources will be identified by the display controller dev_name() and
> > a source integer index. A new structure, likely called
> > display_entity_port, will be used to represent a source or sink video port
> > on a display entity.
> > 
> > Panel drivers will handle video sources as resources. They will retrieve
> > at probe time the video source the panel is connected to using a phandle
> > or a source name (depending on whether the platform uses DT). If the
> > source isn't available the probe function will return -EPROBE_DEFER.
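
As a toy illustration of that probe contract (userspace C with the kernel's
EPROBE_DEFER value hard-coded; the helper names are made up):

```c
#include <assert.h>

#define EPROBE_DEFER 517	/* value from include/linux/errno.h */

/* Simulated availability of the panel's video source. */
static int video_source_ready;

/*
 * A panel probe fails with -EPROBE_DEFER until its video source (and
 * clocks, GPIOs, regulators, ...) are available; the driver core then
 * retries the probe later, once more resources have been registered.
 */
static int panel_probe(void)
{
	if (!video_source_ready)
		return -EPROBE_DEFER;
	/* ... acquire remaining resources and register the panel ... */
	return 0;
}
```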
> > 
> > In addition to the video stream control operations mentioned above, ports
> > will also expose a connect/disconnect operation used to notify them of
> > connection/disconnection events. After retrieving the connected video
> > source panel drivers call the connect/disconnect operation on the video
> > source to notify it that the panel is available.
> > 
> > When the panel is a pipeline made of more than a single entity, entities
> > are probed in video source to video sink order. Out-of-order probe will
> > result in probe deferral as explained above due to the video source not
> > being available, resulting in the source to sink probe order. Entities
> > should not call the connect operation of their video source at probe time
> > in that case, but only when their own connect operation for the video
> > source(s) they provide to the next entity is called by the next entity.
> > Connect operations will thus be called in sink to source order starting
> > at the entity at the end of the pipeline and going all the way back to
> > the display controller.
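
The sink-to-source propagation could be modelled roughly as below
(hypothetical names, userspace C): each entity notifies its upstream source
only after it has itself been connected to from downstream, so the calls
walk from the panel back to the display controller:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct entity {
	const char *name;
	struct entity *source;	/* upstream entity, NULL for the DC */
};

static char order[64];		/* records the connect call order */

/*
 * Invoked starting at the sink end of the pipeline.  Each entity records
 * itself and then calls the connect operation of its own video source,
 * so the order is panel -> transmitter -> display controller.
 */
static void entity_connect(struct entity *e)
{
	strcat(order, e->name);
	strcat(order, " ");
	if (e->source)
		entity_connect(e->source);
}
```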
> > 
> > This notification system is a hotplug mechanism that replaces the display
> > entity notifier system from my previous RFC. Alan Cox rightly objected to
> > the notification system, arguing that such system-wide notifications were
> > used by FBDEV and very subject to abuse. I agree with his argument, this
> > new mechanism should result in a cleaner implementation as video sources
> > will only be notified of connect/disconnect events for the entity they're
> > connected to.
> > 
> > DBI/DSI busses
> > --------------
> > 
> > My RFC introduced a DBI bus using the Linux device and bus model. Its
> > purpose was threefold:
> > 
> > - Support (un)registration, matching and binding of devices and drivers.
> > 
> > - Provide power management (suspend/resume) services through the standard
> > Linux PM bus/device model, to make sure that DBI devices will be
> > suspended/resumed after/before their DBI bus controller.
> > 
> > - Provide bus services to access the connected devices. For DBI that took
> > the form of command read and data read/write functions.
> > 
> > A DSI bus implementation using the same model was also planned.
> > 
> > Tomi's patches removed the DBI bus and replaced DBI devices with platform
> > devices, moving the bus services implementation to the video source. DBI
> > and DSI busses are always either pure video or video + control busses
> > (although controlling a DPI panel through DSI is conceivable, nobody in
> > his right mind, not even a hardware engineer, would likely implement
> > that), so there will always be a video source to provide the DBI/DSI
> > control operations.
> > 
> > (Un)registration, matching and binding of devices and drivers is provided
> > by the platform device bus. Bus services to access connected devices are
> > provided by the video source, wrapper functions will be used to handle
> > serialization and locking, and possibly to offer higher level services
> > (such as DCS for instance).
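
A wrapper of the kind mentioned here might look as follows. This is a
userspace sketch with a pthread mutex standing in for a kernel mutex, and
the op, struct and wrapper names are assumptions, not the real API:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

struct video_source;

struct video_source_ops {
	int (*dcs_write)(struct video_source *src,
			 const unsigned char *buf, size_t len);
};

struct video_source {
	pthread_mutex_t lock;	/* serializes control-bus access */
	const struct video_source_ops *ops;
};

/*
 * Higher-level DCS write service: callers never invoke src->ops
 * directly, so serialization and locking live in one place.
 */
static int dsi_dcs_write(struct video_source *src,
			 const unsigned char *buf, size_t len)
{
	int ret;

	pthread_mutex_lock(&src->lock);
	ret = src->ops->dcs_write(src, buf, len);
	pthread_mutex_unlock(&src->lock);
	return ret;
}

/* Minimal mock transport, for demonstration only. */
static size_t last_len;
static int mock_dcs_write(struct video_source *src,
			  const unsigned char *buf, size_t len)
{
	(void)src; (void)buf;
	last_len = len;
	return 0;
}
```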
> > 
> > One drawback of using the platform bus is that PM relationships between
> > the bus master and slaves will not be taken into account during
> > suspend/resume. However, a similar issue exists for DPI panels, and PM
> > relationships at the video bus level for DBI and DSI are not handled by
> > the DBI/DSI busses either. As we need a generic solution to handle those
> > (likely through early suspend and late resume), the same solution can be
> > used to handle DBI and DSI control bus PM relationships without requiring
> > a Linux DBI or DSI bus.
> > 
> > Even though I still like the idea of DBI and DSI busses, I agree with Tomi
> > that they're not strictly needed and I will drop them.
> > 
> > Entity model
> > ------------
> > 
> > Tomi's proposal split the display entities into video sources (struct
> > video_source) and display entities (struct display_entity). To make
> > generic pipeline operations easier, we agreed to merge the video source
> > and the display entity back. struct display_entity thus models a display
> > entity that has any number of sink and/or source ports, modeled as struct
> > display_entity_port instances.
> 
> Looking at Tomi's patchset, he has considered the panel as a "display
> entity" and MIPI DSI as a "video source entity". So if we are planning to
> merge them back, how should we treat the panel and MIPI DSI? I mean,
> should we consider the panel and MIPI DSI as 2 different display entities,
> i.e., during the probe of each of these drivers, should we register a
> display entity with CDF?

Both the DSI encoder and the DSI panel would be modeled as display entities. 
The DSI encoder would have a source port that models its DSI video source, and 
the DSI panel would have a sink port.
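
Concretely, the merged model could be sketched like this (all names are
hypothetical, the real v3 API may differ):

```c
#include <assert.h>
#include <stddef.h>

enum port_dir { PORT_SINK, PORT_SOURCE };

struct display_entity_port {
	enum port_dir dir;
	struct display_entity_port *peer;	/* connected remote port */
};

/* A display entity with any number of sink and/or source ports. */
struct display_entity {
	const char *name;
	unsigned int num_ports;
	struct display_entity_port *ports;
};

/* Wire a source port to a sink port, e.g. DSI encoder -> DSI panel. */
static int display_entity_link(struct display_entity_port *src,
			       struct display_entity_port *sink)
{
	if (src->dir != PORT_SOURCE || sink->dir != PORT_SINK)
		return -1;
	src->peer = sink;
	sink->peer = src;
	return 0;
}
```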

> > Video stream operations will be exposed by the display entity as function
> > pointers and will take a port reference as argument (this could take the
> > form of struct display_entity * and port index, or struct
> > display_entity_port *). The DVI and DSI operations model proposed by Tomi
> > in this patch series will be kept.
> 
> so you mean you will be adding these "ops" as part of "struct
> display_entity" rather than the video source ops?

That's correct.

> static const struct dsi_video_source_ops dsi_dsi_ops = {
> 
> 	.update = dsi_bus_update,
> 	.dcs_write = dsi_bus_dcs_write,
> 	.dcs_read = dsi_bus_dcs_read,
> 	.configure_pins = dsi_bus_configure_pins,
> 	.set_clocks = dsi_bus_set_clocks,
> 	.enable = dsi_bus_enable,
> 	.disable = dsi_bus_disable,
> 	.set_size = dsi_bus_set_size,
> 	.set_operation_mode = dsi_bus_set_operation_mode,
> 	.set_pixel_format = dsi_bus_set_pixel_format,
> 	.enable_hs = dsi_bus_enable_hs,
> };
> 
> if you can post CDF v3 patches early, it will give us more clarity
> w.r.t to discussions you and Tomi had.

I'm working on that.

> > Points that we forgot to discuss
> > --------------------------------
> > 
> > - DISPLAY_ENTITY_STREAM_SINGLE_SHOT vs. update() operation
> > 
> > I'll look into that.
> > 
> > Please let me know if I've forgotten anything.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
  2012-12-26 12:14       ` Laurent Pinchart
@ 2012-12-31 11:36         ` Tomasz Figa
  -1 siblings, 0 replies; 26+ messages in thread
From: Tomasz Figa @ 2012-12-31 11:36 UTC (permalink / raw)
  To: dri-devel; +Cc: linux-fbdev, Tomi Valkeinen, Laurent Pinchart

Hi Laurent,

On Wednesday 26 of December 2012 13:14:46 Laurent Pinchart wrote:
> Hi Vikas,
> 
> On Monday 24 December 2012 12:33:50 Vikas Sajjan wrote:
> > On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart wrote:
> > > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> > >> Hi,
> > >> 
> > >> I have been testing Common Display Framework on OMAP, and making
> > >> changes that I've discussed in the posts I've sent in reply to the
> > >> CDF series from Laurent. While my CDF code is rather hacky and not
> > >> at all ready, I wanted to post the code for comments and also as a
> > >> reference code to my posts.
> > >> 
> > >> So here is CDF-T (Tomi-edition =).
> > > 
> > > We've discussed your approach extensively face-to-face today so I
> > > won't
> > > review the patches in detail, but I will instead summarize our
> > > discussion to make sure we understood each other (and let other
> > > developers jump in).
> > > 
> > > For the purpose of this discussion the term "display controller
> > > driver"
> > > (or just "display controller") refers to both the low-level driver
> > > layer
> > > that communicates directly with the display controller hardware, and
> > > to
> > > the higher-level driver layer that implements and exposes the
> > > userspace API (FBDEV, KMS and/or V4L). Those layers can be
> > > implemented in multiple kernel modules (such as in the OMAP DSS
> > > case, with omapdss for the low-level layer and omapdrm, omapfb and
> > > omapvout for the API-level layer) or a single kernel module.
> > > 
> > > Control model
> > > -------------
> > > 
> > > The figure at
> > > http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > > model.png shows the CDF control model.
> > > 
> > > The panel object depicted on the figure doesn't need to be a panel
> > > in the stricter sense but could be any chain of off-SoC (both
> > > on-board or off-board) display entities. It however helps thinking
> > > about it as a panel and doesn't hurt the model.
> > > 
> > > The panel is controlled through abstract control requests. Those
> > > requests are used to retrieve panel information (such as the
> > > physical size, the supported video modes, EDID information, ...),
> > > set the panel
> > > configuration (such as the active video timings) or control the
> > > panel
> > > operation state (enabling/disabling the panel, controlling panel
> > > blanking and power management, ...). They are exposed by the panel
> > > using function pointers, and called by other kernel components in
> > > response to userspace requests (through the FBDEV, KMS or V4L2
> > > APIs) or in-kernel events (for instance hotplug notifications).
> > > 
> > > In response to the control requests the panel driver will
> > > communicate with the panel through the panel control bus (I2C, SPI,
> > > DBI, DSI, GPIO, ..., not shown on the figure) and will control the
> > > video stream it receives on its input.
> > > 
> > > The panel is connected at the hardware level to a video source
> > > (shown as a green hashed rectangle) that provides it with a video
> > > stream. The video stream flows from the video source to the panel
> > > and is directly controlled by its source, as shown by the green
> > > arrow from the display controller to the video stream. The video
> > > source exposes stream control operations as function pointers that
> > > are used by the panel to control the video stream, as shown by the
> > > green arrow from the panel to the video source.
> > > 
> > > The figure at
> > > http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > > model-2.png shows the call flow across entities when the panel is a
> > > pipeline made of more than a single entity. In this case the SoC (on
> > > the left of the dashed line) outputs a video stream on a DSI bus
> > > connected to a DSI to LVDS transmitter. The output of the DSI to
> > > LVDS transmitter is connected to an LVDS panel (or, more
> > > accurately, an LVDS panel module made of an LVDS panel controller
> > > and a panel).
> > > 
> > > The transmitter and panel module are seen by the display controller
> > > and
> > > userspace API implementations as a single entity that exposes
> > > control
> > > request operations and controls its input video stream. When a
> > > control
> > > request is performed (outermost green arrow) the DSI to LVDS
> > > transmitter will propagate it to the panel, possibly mangling the
> > > input parameters or the response. For panel operation state control
> > > requests the last entity in the pipeline will likely want to
> > > control the video stream it receives on its input. The video stream
> > > control calls will be propagated from right to left as shown by the
> > > red arrows.
> > > 
> > > Every entity in the call stack can communicate with its hardware
> > > device
> > > through the corresponding control bus, and/or control the video
> > > stream it receives on its input.
> > > 
> > > This model allows filtering out modes and timings supported by the
> > > panel but unsupported by the transmitter and mangling the modes and
> > > timings according to the transmitter limitations. It has no
> > > complexity drawback for simple devices, as the corresponding
> > > drivers can just forward the calls directly. Similar use cases
> > > could exist for control operations other than mode and information
> > > retrieval.
> > > 
> > > Discovery
> > > ---------
> > > 
> > > Before being able to issue control requests, panel devices need to
> > > be
> > > discovered and associated with the connected display controller(s).
> > > 
> > > Panels and display controllers are cross-dependent. There is no way
> > > around that, as the display controller needs a reference to the
> > > panel to call control requests in response to userspace API, and
> > > the panel needs a reference to the display controller to call video
> > > stream control functions (in addition to requiring generic
> > > resources such as clocks, GPIOs or even regulators that could be
> > > provided by the display controller).
> > > 
> > > As we can't probe the display controller and the panel together, a
> > > probe order needs to be defined. The decision was to consider video
> > > sources as resources and defer panel probing until all required
> > > resources (video stream source, clocks, GPIOs, regulators and more)
> > > are available. Display controller probing must succeed without the
> > > panel being available. This mimics the hotpluggable monitor model
> > > (VGA, HDMI, DP) that doesn't prevent display controllers from being
> > > successfully probed without a connected monitor.
> > > 
> > > Our design goal is to handle panel discovery in a similar (if not
> > > identical) way as HDMI/DP hotplug in order to implement a single
> > > display discovery method in display controller drivers. This might
> > > not be achievable, in which case we'll reconsider the design
> > > requirement.
> > > 
> > > When the display controller driver probes the device it will
> > > register the video source(s) at the output of the display
> > > controller with the CDF core. Those sources will be identified by
> > > the display controller dev_name() and a source integer index. A new
> > > structure, likely called
> > > display_entity_port, will be used to represent a source or sink
> > > video port on a display entity.
> > > 
> > > Panel drivers will handle video sources as resources. They will
> > > retrieve at probe time the video source the panel is connected to
> > > using a phandle or a source name (depending on whether the platform
> > > uses DT). If the source isn't available the probe function will
> > > return -EPROBE_DEFER.
> > > 
> > > In addition to the video stream control operations mentioned above,
> > > ports will also expose a connect/disconnect operation used to notify
> > > them of connection/disconnection events. After retrieving the
> > > connected video source panel drivers call the connect/disconnect
> > > operation on the video source to notify it that the panel is
> > > available.
> > > 
> > > When the panel is a pipeline made of more than a single entity,
> > > entities are probed in video source to video sink order.
> > > Out-of-order probe will result in probe deferral as explained above
> > > due to the video source not being available, resulting in the
> > > source to sink probe order. Entities should not call the connect
> > > operation of their video source at probe time in that case, but
> > > only when their own connect operation for the video source(s) they
> > > provide to the next entity is called by the next entity. Connect
> > > operations will thus be called in sink to source order starting at
> > > the entity at the end of the pipeline and going all the way back to
> > > the display controller.
> > > 
> > > This notification system is a hotplug mechanism that replaces the
> > > display entity notifier system from my previous RFC. Alan Cox
> > > rightly objected to the notification system, arguing that such
> > > system-wide notifications were used by FBDEV and very subject to
> > > abuse. I agree with his argument, this new mechanism should result
> > > in a cleaner implementation as video sources will only be notified
> > > of connect/disconnect events for the entity they're connected to.
> > > 
> > > DBI/DSI busses
> > > --------------
> > > 
> > > My RFC introduced a DBI bus using the Linux device and bus model.
> > > Its
> > > purpose was threefold:
> > > 
> > > - Support (un)registration, matching and binding of devices and
> > > drivers.
> > > 
> > > - Provide power management (suspend/resume) services through the
> > > standard Linux PM bus/device model, to make sure that DBI devices
> > > will be suspended/resumed after/before their DBI bus controller.
> > > 
> > > - Provide bus services to access the connected devices. For DBI that
> > > took the form of command read and data read/write functions.
> > > 
> > > A DSI bus implementation using the same model was also planned.
> > > 
> > > Tomi's patches removed the DBI bus and replaced DBI devices with
> > > platform devices, moving the bus services implementation to the
> > > video source. DBI and DSI busses are always either pure video or
> > > video + control busses (although controlling a DPI panel through
> > > DSI is conceivable, nobody in his right mind, not even a hardware
> > > engineer, would likely implement that), so there will always be a
> > > video source to provide the DBI/DSI control operations.
> > > 
> > > (Un)registration, matching and binding of devices and drivers is
> > > provided by the platform device bus. Bus services to access
> > > connected devices are provided by the video source, wrapper
> > > functions will be used to handle serialization and locking, and
> > > possibly to offer higher level services (such as DCS for instance).
> > > 
> > > One drawback of using the platform bus is that PM relationships
> > > between
> > > the bus master and slaves will not be taken into account during
> > > suspend/resume. However, a similar issue exists for DPI panels, and
> > > PM
> > > relationships at the video bus level for DBI and DSI are not handled
> > > by
> > > the DBI/DSI busses either. As we need a generic solution to handle
> > > those (likely through early suspend and late resume), the same
> > > solution can be used to handle DBI and DSI control bus PM
> > > relationships without requiring a Linux DBI or DSI bus.
> > > 
> > > Even though I still like the idea of DBI and DSI busses, I agree
> > > with Tomi that they're not strictly needed and I will drop them.
> > > 
> > > Entity model
> > > ------------
> > > 
> > > Tomi's proposal split the display entities into video sources
> > > (struct
> > > video_source) and display entities (struct display_entity). To make
> > > generic pipeline operations easier, we agreed to merge the video
> > > source
> > > and the display entity back. struct display_entity thus models a
> > > display entity that has any number of sink and/or source ports,
> > > modeled as struct display_entity_port instances.
> > 
> > Looking at Tomi's patchset, he has considered the panel as a "display
> > entity" and MIPI DSI as a "video source entity". So if we are planning
> > to merge them back, how should we treat the panel and MIPI DSI? I mean,
> > should we consider the panel and MIPI DSI as 2 different display
> > entities, i.e., during the probe of each of these drivers, should we
> > register a display entity with CDF?
> 
> Both the DSI encoder and the DSI panel would be modeled as display
> entities. The DSI encoder would have a source port that models its DSI
> video source, and the DSI panel would have a sink port.
> 
> > > Video stream operations will be exposed by the display entity as
> > > function pointers and will take a port reference as argument (this
> > > could take the form of struct display_entity * and port index, or
> > > struct
> > > display_entity_port *). The DVI and DSI operations model proposed by
> > > Tomi in this patch series will be kept.
> > 
> > so you mean you will be adding these "ops" as part of "struct
> > display_entity" rather than the video source ops?
> 
> That's correct.
> 
> > static const struct dsi_video_source_ops dsi_dsi_ops = {
> > 
> > 	.update = dsi_bus_update,
> > 	.dcs_write = dsi_bus_dcs_write,
> > 	.dcs_read = dsi_bus_dcs_read,
> > 	.configure_pins = dsi_bus_configure_pins,
> > 	.set_clocks = dsi_bus_set_clocks,
> > 	.enable = dsi_bus_enable,
> > 	.disable = dsi_bus_disable,
> > 	.set_size = dsi_bus_set_size,
> > 	.set_operation_mode = dsi_bus_set_operation_mode,
> > 	.set_pixel_format = dsi_bus_set_pixel_format,
> > 	.enable_hs = dsi_bus_enable_hs,
> > 
> > };
> > 
> > if you can post CDF v3 patches early, it will give us more clarity
> > w.r.t to discussions you and Tomi had.
> 
> I'm working on that.

May I ask you to add me on CC in v3?

This would allow me to track the development, without missing anything 
like it happened with the discussion about bus-less design.

Best regards,
-- 
Tomasz Figa
Samsung Poland R&D Center
SW Solution Development, Linux Platform


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC 0/6] Common Display Framework-T
@ 2012-12-31 11:36         ` Tomasz Figa
  0 siblings, 0 replies; 26+ messages in thread
From: Tomasz Figa @ 2012-12-31 11:36 UTC (permalink / raw)
  To: dri-devel; +Cc: linux-fbdev, Tomi Valkeinen, Laurent Pinchart

Hi Laurent,

On Wednesday 26 of December 2012 13:14:46 Laurent Pinchart wrote:
> Hi Vikas,
> 
> On Monday 24 December 2012 12:33:50 Vikas Sajjan wrote:
> > On Wed, Dec 19, 2012 at 6:51 PM, Laurent Pinchart wrote:
> > > On Friday 14 December 2012 16:27:26 Tomi Valkeinen wrote:
> > >> Hi,
> > >> 
> > >> I have been testing Common Display Framework on OMAP, and making
> > >> changes that I've discussed in the posts I've sent in reply to the
> > >> CDF series from Laurent. While my CDF code is rather hacky and not
> > >> at all ready, I wanted to post the code for comments and also as a
> > >> reference code to my posts.
> > >> 
> > >> So here is CDF-T (Tomi-edition =).
> > > 
> > > We've discussed your approach extensively face-to-face today so I
> > > won't
> > > review the patches in detail, but I will instead summarize our
> > > discussion to make sure we understood each other (and let other
> > > developers jump in).
> > > 
> > > For the purpose of this discussion the term "display controller
> > > driver"
> > > (or just "display controller") refer to both the low-level driver
> > > layer
> > > that communicates directly with the display controller hardware, and
> > > to
> > > the higher- level driver layer that implements and exposes the
> > > userspace API (FBDEV, KMS and/or V4L). Those layers can be
> > > implemented in multiple kernel modules (such as in the OMAP DSS
> > > case, with omapdss for the low-level layer and omapdrm, omapfb and
> > > omapvout for the API-level layer) or a single kernel module.
> > > 
> > > Control model
> > > -------------
> > > 
> > > The figure at
> > > http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > > model.png shows the CDF control model.
> > > 
> > > The panel object depicted on the figure doesn't need to be a panel
> > > in the stricter sense but could be any chain of off-SoC (both
> > > on-board or off-board) display entities. It however helps thinking
> > > about it as a panel and doesn't hurt the model.
> > > 
> > > The panel is controlled through abstract control requests. Those
> > > requests are used to retrieve panel information (such as the
> > > physical size, the supported video modes, EDID information, ...),
> > > set the panel
> > > configuration (such as the active video timings) or control the
> > > panel
> > > operation state (enabling/disabling the panel, controlling panel
> > > blanking and power management, ...). They are exposed by the panel
> > > using function pointers, and called by other kernel components in
> > > response to userspace requests (through the FBDEV, KMS or V4L2
> > > APIs) or in-kernel events (for instance hotplug notifications).
> > > 
> > > In response to the control requests the panel driver will
> > > communicate with the panel through the panel control bus (I2C, SPI,
> > > DBI, DSI, GPIO, ..., not shown on the figure) and will control the
> > > video stream it receives on its input.
> > > 
> > > The panel is connected at the hardware level to a video source
> > > (shown as a green hashed rectangle) that provides it with a video
> > > stream. The video stream flows from the video source to the panel
> > > and is directly controlled by its source, as shown by the green
> > > arrow from the display controller to the video stream. The video
> > > source exposes stream control operations as function pointers that
> > > are used by the panel to control the video stream, as shown by the
> > > green arrow from the panel to the video source.
> > > 
> > > The figure at
> > > http://www.ideasonboard.org/media/cdf/cdf-panel-control-
> > > model-2.png shows the call flow across entities when the panel is a
> > > pipeline made of more than a single entity. In this case the SoC (on
> > > the left of the dashed line) outputs a video stream on a DSI bus
> > > connected to a DSI to LVDS transmitter. The output of the DSI to
> > > LVDS transmitter is connected to an LVDS panel (or, more
> > > accurately, an LVDS panel module made of an LVDS panel controller
> > > and a panel).
> > > 
> > > The transmitter and panel module are seen by the display controller
> > > and
> > > userspace API implementations as a single entity that exposes
> > > control
> > > request operations and controls its input video stream. When a
> > > control
> > > request is performed (outermost green arrow) the DSI to LVDS
> > > transmitter will propagate it to the panel, possibly mangling the
> > > input parameters or the response. For panel operation state control
> > > requests the last entity in the pipeline will likely want to
> > > control the video stream it receives on its input. The video stream
> > > control calls will be propagated from right to left as shown by the
> > > red arrows.
> > > 
> > > Every entity in the call stack can communicate with its hardware
> > > device
> > > through the corresponding control bus, and/or control the video
> > > stream it receives on its input.
> > > 
> > > This model allows filtering out modes and timings supported by the
> > > panel but unsupported by the transmitter and mangling the modes and
> > > timings according to the transmitter limitations. It has no
> > > complexity drawback for simple devices, as the corresponding
> > > drivers can just forward the calls directly. Similar use cases
> > > could exist for other control operations than mode and information
> > > retrieval.
> > > 
> > > Discovery
> > > ---------
> > > 
> > > Before being able to issue control requests, panel devices need to
> > > be
> > > discovered and associated with the connected display controller(s).
> > > 
> > > Panels and display controllers are cross-dependent. There is no way
> > > around that, as the display controller needs a reference to the
> > > panel to call control requests in response to userspace API, and
> > > the panel needs a reference to the display controller to call video
> > > stream control functions (in addition to requiring generic
> > > resources such as clocks, GPIOs or even regulators that could be
> > > provided by the display controller).
> > > 
> > > As we can't probe the display controller and the panel together, a
> > > probe order needs to be defined. The decision was to consider video
> > > sources as resources and defer panel probing until all required
> > > resources (video stream source, clocks, GPIOs, regulators and more)
> > > are available. Display controller probing must succeed without the
> > > panel being available. This mimicks the hotpluggable monitor model
> > > (VGA, HDMI, DP) that doesn't prevent display controllers from being
> > > successfully probed without a connected monitor.
> > > 
> > > Our design goal is to handle panel discovery in a similar (if not
> > > identical) way as HDMI/DP hotplug in order to implement a single
> > > display discovery method in display controller drivers. This might
> > > not be achievable, in which case we'll reconsider the design
> > > requirement.
> > > 
> > > When the display controller driver probes the device it will
> > > register the video source(s) at the output of the display
> > > controller with the CDF core. Those sources will be identified by
> > > the display controller dev_name() and a source integer index. A new
> > > structure, likely called
> > > display_entity_port, will be used to represent a source or sink
> > > video port on a display entity.
> > > 
> > > Panel drivers will handle video sources as resources. They will
> > > retrieve at probe time the video source the panel is connected to
> > > using a phandle or a source name (depending on whether the platform
> > > uses DT). If the source isn't available the probe function will
> > > return -EPROBE_DEFER.
> > > 
> > > In addition to the video stream control operations mentioned above,
> > > ports will also expose a connect/disconnect operation use to notify
> > > them of connection/disconnection events. After retrieving the
> > > connected video source panel drivers call the connect/disconnect
> > > operation on the video source to notify it that the panel is
> > > available.
> > > 
> > > When the panel is a pipeline made of more than a single entity,
> > > entities are probed in video source to video sink order.
> > > Out-of-order probe will result in probe deferral as explained above
> > > due to the video source not being available, resulting in the
> > > source to sink probe order. Entities should not call the connect
> > > operation of their video source at probe time in that case, but
> > > only when their own connect operation for the video source(s) they
> > > provide to the next entity is called by the next entity. Connect
> > > operations will thus be called in sink to source order starting at
> > > the entity at the end of the pipeline and going all the way back to
> > > the display controller.
> > > 
> > > This notification system is a hotplug mechanism that replaces the
> > > display entity notifier system from my previous RFC. Alan Cox
> > > rightly objected to the notification system, arguing that such
> > > system-wide notifications were used by FBDEV and very subject to
> > > abuse. I agree with his argument, this new mechanism should result
> > > in a cleaner implementation as video sources will only be notified
> > > of connect/disconnect events for the entity they're connected to.
> > > 
> > > DBI/DSI busses
> > > --------------
> > > 
> > > My RFC introduced a DBI bus using the Linux device and bus model.
> > > Its
> > > purpose was multifold:
> > > 
> > > - Support (un)registration, matching and binding of devices and
> > > drivers.
> > > 
> > > - Provide power management (suspend/resume) services through the
> > > standard Linux PM bus/device model, to make sure that DBI devices
> > > will be suspended/resumed after/before their DBI bus controller.
> > > 
> > > - Provide bus services to access the connected devices. For DBI that
> > > took the form of command read and data read/write functions.
> > > 
> > > A DSI bus implementation using the same model was also planned.
> > > 
> > > Tomi's patches removed the DBI bus and replaced DBI devices with
> > > platform devices, moving the bus services implementation to the
> > > video source. DBI and DSI busses are always either pure video or
> > > video + control busses (although controlling a DPI panel through
> > > DSI is conceivable, nobody in his right mind, not even a hardware
> > > engineer, would likely implement that), so there will always be a
> > > video source to provide the DBI/DSI control operations.
> > > 
> > > (Un)registration, matching and binding of devices and drivers is
> > > provided by the platform device bus. Bus services to access
> > > connected devices are provided by the video source, wrapper
> > > functions will be used to handle serialization and locking, and
> > > possibly to offer higher level services (such as DCS for instance).
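As a rough illustration of such a wrapper (the types and names below are invented for this sketch, not the real CDF interfaces), a DCS write helper could serialize access to the video source and build the command buffer before calling the source's low-level write op. A plain flag stands in for a real mutex here to keep the sketch self-contained:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Invented stand-in for a video source exposing a low-level write op. */
struct video_source {
	int locked;	/* stand-in for a real mutex in this sketch */
	int (*write)(struct video_source *src,
		     const unsigned char *buf, size_t len);
};

static void source_lock(struct video_source *src)
{
	assert(!src->locked);	/* a real implementation would block here */
	src->locked = 1;
}

static void source_unlock(struct video_source *src)
{
	src->locked = 0;
}

/* Higher-level DCS service: prepend the DCS command byte to the parameters
 * and call the low-level op with the source locked, so that concurrent
 * callers are serialized. */
static int dcs_write(struct video_source *src, unsigned char cmd,
		     const unsigned char *param, size_t len)
{
	unsigned char buf[64];
	int ret;

	if (len + 1 > sizeof(buf))
		return -1;
	buf[0] = cmd;
	memcpy(buf + 1, param, len);

	source_lock(src);
	ret = src->write(src, buf, len + 1);
	source_unlock(src);
	return ret;
}

/* Test backend that captures the last buffer written. */
static unsigned char last_buf[64];
static size_t last_len;

static int capture_write(struct video_source *src,
			 const unsigned char *buf, size_t len)
{
	(void)src;
	memcpy(last_buf, buf, len);
	last_len = len;
	return 0;
}
```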
> > > 
> > > One drawback of using the platform bus is that PM relationships
> > > between the bus master and slaves will not be taken into account
> > > during suspend/resume. However, a similar issue exists for DPI
> > > panels, and PM relationships at the video bus level for DBI and DSI
> > > are not handled by the DBI/DSI busses either. As we need a generic
> > > solution to handle those (likely through early suspend and late
> > > resume), the same solution can be used to handle DBI and DSI control
> > > bus PM relationships without requiring a Linux DBI or DSI bus.
> > > 
> > > Even though I still like the idea of DBI and DSI busses, I agree
> > > with Tomi that they're not strictly needed and I will drop them.
> > > 
> > > Entity model
> > > ------------
> > > 
> > > Tomi's proposal split the display entities into video sources
> > > (struct video_source) and display entities (struct display_entity).
> > > To make generic pipeline operations easier, we agreed to merge the
> > > video source and the display entity back. struct display_entity thus
> > > models a display entity that has any number of sink and/or source
> > > ports, modeled as struct display_entity_port instances.
> > 
> > Looking at Tomi's patchset, he has modeled the panel as a "display
> > entity" and MIPI DSI as a "video source entity". If we are planning to
> > merge them back, how should we treat the panel and MIPI DSI? Should we
> > consider the panel and MIPI DSI to be two different display entities,
> > i.e. should each of these drivers register a display entity with CDF
> > during its probe?
> 
> Both the DSI encoder and the DSI panel would be modeled as display
> entities. The DSI encoder would have a source port that models its DSI
> video source, and the DSI panel would have a sink port.
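To illustrate the merged model (with invented names, not the final CDF definitions), a DSI encoder exposing a source port could be connected to a DSI panel's sink port roughly like this:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the merged entity model discussed above; field and type
 * names are guesses for illustration, not the final CDF definitions. */

enum display_port_type {
	DISPLAY_PORT_SINK,
	DISPLAY_PORT_SOURCE,
};

struct display_entity;

struct display_entity_port {
	enum display_port_type type;
	struct display_entity *entity;		/* entity this port belongs to */
	struct display_entity_port *peer;	/* connected remote port */
};

struct display_entity {
	const char *name;
	struct display_entity_port *ports;
	unsigned int num_ports;
};

/* Connect a source port to a sink port, rejecting mismatched directions. */
static int display_port_connect(struct display_entity_port *src,
				struct display_entity_port *sink)
{
	if (src->type != DISPLAY_PORT_SOURCE || sink->type != DISPLAY_PORT_SINK)
		return -1;
	src->peer = sink;
	sink->peer = src;
	return 0;
}
```

In this picture the DSI encoder entity owns a DISPLAY_PORT_SOURCE port and the DSI panel entity owns a DISPLAY_PORT_SINK port, matching the description above.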
> 
> > > Video stream operations will be exposed by the display entity as
> > > function pointers and will take a port reference as argument (this
> > > could take the form of a struct display_entity * and port index, or
> > > a struct display_entity_port *). The DVI and DSI operations model
> > > proposed by Tomi in this patch series will be kept.
> > 
> > So you mean you will be adding these "ops" as part of struct
> > display_entity rather than as video source ops such as:
> 
> That's correct.
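A minimal sketch of ops hanging off the display entity and taking a port index could look like the following; every name here is invented for illustration and the real v3 signatures may differ:

```c
#include <assert.h>

struct display_entity;

/* Stream ops exposed by the entity itself; each op takes the entity and a
 * port index, per the discussion above. Illustrative only. */
struct display_entity_ops {
	int (*set_stream)(struct display_entity *ent,
			  unsigned int port, int enable);
};

struct display_entity {
	const struct display_entity_ops *ops;
	int streaming[2];	/* per-port state for this toy entity */
};

/* Helper wrapping the op, as the generic code would call it. */
static int display_entity_set_stream(struct display_entity *ent,
				     unsigned int port, int enable)
{
	return ent->ops->set_stream(ent, port, enable);
}

/* Toy implementation that just records per-port streaming state. */
static int demo_set_stream(struct display_entity *ent,
			   unsigned int port, int enable)
{
	if (port >= 2)
		return -1;
	ent->streaming[port] = enable;
	return 0;
}

static const struct display_entity_ops demo_ops = {
	.set_stream = demo_set_stream,
};
```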
> 
> > static const struct dsi_video_source_ops dsi_dsi_ops = {
> > 	.update = dsi_bus_update,
> > 	.dcs_write = dsi_bus_dcs_write,
> > 	.dcs_read = dsi_bus_dcs_read,
> > 	.configure_pins = dsi_bus_configure_pins,
> > 	.set_clocks = dsi_bus_set_clocks,
> > 	.enable = dsi_bus_enable,
> > 	.disable = dsi_bus_disable,
> > 	.set_size = dsi_bus_set_size,
> > 	.set_operation_mode = dsi_bus_set_operation_mode,
> > 	.set_pixel_format = dsi_bus_set_pixel_format,
> > 	.enable_hs = dsi_bus_enable_hs,
> > };
> > 
> > If you can post the CDF v3 patches early, it will give us more clarity
> > w.r.t. the discussions you and Tomi had.
> 
> I'm working on that.

May I ask you to add me on CC in v3?

This would allow me to track the development without missing anything, as 
happened with the discussion about the bus-less design.

Best regards,
-- 
Tomasz Figa
Samsung Poland R&D Center
SW Solution Development, Linux Platform


Thread overview: 26+ messages
2012-12-14 14:27 [RFC 0/6] Common Display Framework-T Tomi Valkeinen
2012-12-14 14:27 ` [RFC 1/6] video: add display-core Tomi Valkeinen
2012-12-14 14:27 ` [RFC 2/6] video: add generic dpi panel Tomi Valkeinen
2012-12-14 14:27 ` [RFC 3/6] video: add tfp410 Tomi Valkeinen
2012-12-14 14:27 ` [RFC 4/6] video: add generic dvi monitor Tomi Valkeinen
2012-12-14 14:27 ` [RFC 5/6] video: add taal panel Tomi Valkeinen
2012-12-14 14:27 ` [RFC 6/6] video: add makefile & kconfig Tomi Valkeinen
2012-12-19 13:21 ` [RFC 0/6] Common Display Framework-T Laurent Pinchart
2012-12-19 14:53   ` Tomi Valkeinen
2012-12-19 22:59     ` Laurent Pinchart
2012-12-24  7:03   ` Vikas Sajjan
2012-12-26 12:14     ` Laurent Pinchart
2012-12-31 11:36       ` Tomasz Figa
