* Proposal for a low-level Linux display framework
From: Tomi Valkeinen @ 2011-09-15 12:07 UTC
  To: linux-fbdev, linux-kernel, dri-devel, linaro-dev
  Cc: Clark, Rob, Archit Taneja, Nishant Kamat

Hi,

I am the author of the OMAP display driver, and while developing it I've
often felt that something is missing in Linux's display support. I've
been planning to write a post about this for a few years, but never got
around to it. So here it is at last!

---

First I want to briefly describe what we have on OMAP, to give some
background for my point of view and to have example hardware to refer to.

The display subsystem (DSS) hardware on OMAP only handles showing pixels
on a display; it doesn't contain anything that produces pixels, such as a
3D block or an accelerated blitter. All it does is fetch pixels from
SDRAM, possibly apply some modifications (color format conversions etc.),
and output them to a display.

The hardware has multiple overlays, which are like hardware windows.
They fetch pixels from SDRAM and output them to a certain area of the
display (possibly with scaling). Multiple overlays can be composited
into one output.

So we may have something like this, where all overlays read pixels from
separate areas in memory and all overlays go to the LCD display:

 .-----.         .------.           .------.
 | mem |-------->| ovl0 |-----.---->| LCD  |
 '-----'         '------'     |     '------'
 .-----.         .------.     |
 | mem |-------->| ovl1 |-----|
 '-----'         '------'     |
 .-----.         .------.     |     .------.
 | mem |-------->| ovl2 |-----'     |  TV  |
 '-----'         '------'           '------'

The LCD display can be a rather simple one, like a standard monitor or a
simple panel connected directly to the parallel RGB output, or a more
complex one. A complex panel needs more than just turn-it-on-and-go. This
may involve sending and receiving messages between the OMAP and the
panel, but more generally, there's a need for custom code that handles
the particular panel. And the complex panel is not necessarily a panel at
all; it may be a buffer chip between the OMAP and the actual panel.

The software side can be divided into three parts: the lower level
omapdss driver, the lower level panel drivers, and higher level drivers
like omapfb, v4l2 and omapdrm.

The omapdss driver handles the OMAP DSS hardware and offers a
kernel-internal API which the higher level drivers use. omapdss does not
know anything about fb or drm; it just offers core display services.

The panel drivers handle particular panels/chips. A panel driver may be
very simple in the case of a conventional display, doing pretty much
nothing, or a bigger piece of code that handles communication with the
panel.

The higher level drivers handle buffers, tell omapdss things like where
to find the pixels and what size the overlays should be, and use the
omapdss API to turn displays on/off, etc.
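
To make the division concrete, here is a rough sketch of how a higher
level driver might drive such a kernel-internal API. All names and
functions below are invented for illustration; they are not the real
omapdss interface.

/*
 * Illustrative sketch only: a made-up kernel-internal API in the spirit
 * of omapdss, as seen from a higher level (omapfb-like) driver.
 */
#include <linux/types.h>

struct ovl_config {
	dma_addr_t paddr;		/* pixel data in SDRAM */
	u16 width, height;		/* source size in pixels */
	u16 pos_x, pos_y;		/* position on the display */
	u16 out_width, out_height;	/* output size; != source means scaling */
};

/* Assumed to be provided by the low level driver. */
int lowlevel_overlay_setup(int ovl_id, const struct ovl_config *cfg);
int lowlevel_overlay_enable(int ovl_id);
int lowlevel_display_enable(int disp_id);

/* A higher level driver showing one full-screen buffer on overlay 0. */
static int show_frame(dma_addr_t buf, u16 w, u16 h)
{
	struct ovl_config cfg = {
		.paddr = buf,
		.width = w, .height = h,
		.pos_x = 0, .pos_y = 0,
		.out_width = w, .out_height = h,	/* no scaling */
	};
	int ret;

	ret = lowlevel_overlay_setup(0, &cfg);
	if (ret)
		return ret;

	ret = lowlevel_overlay_enable(0);
	if (ret)
		return ret;

	return lowlevel_display_enable(0);
}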

---

There are two things that I'm proposing to improve the Linux display
support:

First, there should be a set of common video structs and helpers that
are independent of any higher level framework. Things like video
timings, mode databases, and EDID handling seem to be implemented
multiple times in the kernel. Nothing in them depends on any particular
display framework, so they could be implemented just once and shared by
all the frameworks.
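
As an example of what such a shared, framework-neutral piece could look
like, here is a sketch of a video timing struct and a small helper. The
field names are illustrative only and not taken from any existing kernel
struct.

/* Sketch only: framework-neutral video timings that fb, DRM and V4L2
 * could in principle share. */
#include <linux/types.h>

struct video_timing {
	u32 pixel_clock;	/* kHz */

	u16 hactive;		/* active pixels per line */
	u16 hfront_porch;	/* pixels from active to hsync */
	u16 hsync_len;		/* hsync length in pixels */
	u16 hback_porch;	/* pixels from hsync to active */

	u16 vactive;		/* active lines per frame */
	u16 vfront_porch;	/* lines from active to vsync */
	u16 vsync_len;		/* vsync length in lines */
	u16 vback_porch;	/* lines from vsync to active */

	unsigned int interlaced:1;
	unsigned int hsync_pol_positive:1;
	unsigned int vsync_pol_positive:1;
};

/* Vertical refresh in Hz: pixel clock divided by total pixels per frame. */
static inline u32 video_timing_refresh(const struct video_timing *t)
{
	u32 htotal = t->hactive + t->hfront_porch + t->hsync_len + t->hback_porch;
	u32 vtotal = t->vactive + t->vfront_porch + t->vsync_len + t->vback_porch;

	return (t->pixel_clock * 1000U) / (htotal * vtotal);
}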

Second, I think there could be use for a common low-level display
framework. Currently the lower level code (display HW handling, etc.) and
higher level code (buffer management, policies, etc.) are usually tied
together, as in the fb framework or DRM. Granted, the frameworks do not
force that, and for OMAP we indeed have omapfb and omapdrm using the
lower level omapdss. But I don't see anything OMAP specific in that
split.

I think the lower level framework could have components along these lines
(the naming is OMAP oriented, of course):

overlay - a hardware "window", gets pixels from memory, possibly does
format conversions, scaling, etc.

overlay compositor - composes multiple overlays into one output,
possibly doing things like translucency.

output - gets the pixels from the overlay compositor and sends them out
according to particular video timings when using a conventional video
interface, or by other means when using non-conventional video buses such
as DSI command mode.

display - handles an external display. For conventional displays this
wouldn't do much, but for complex ones it does whatever is needed by that
particular display.

This is similar to what DRM has, I believe. The biggest difference is
that the display can be a full-blown driver for a complex piece of HW.
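
Expressed as kernel ops structures, the components might look roughly
like this. This is only a sketch of the idea above; every identifier is
invented for illustration.

/* Sketch only: possible ops for the four proposed components. */
#include <linux/types.h>

struct lld_overlay;
struct lld_compositor;
struct lld_output;
struct lld_display;
struct video_timing;		/* the shared timing struct sketched above */

struct lld_overlay_ops {	/* hardware "window" */
	int (*set_buffer)(struct lld_overlay *ovl, dma_addr_t paddr);
	int (*set_geometry)(struct lld_overlay *ovl,
			    u16 x, u16 y, u16 w, u16 h);
	int (*enable)(struct lld_overlay *ovl, bool enable);
};

struct lld_compositor_ops {	/* blends overlays into one pixel stream */
	int (*attach)(struct lld_compositor *comp, struct lld_overlay *ovl);
	int (*set_alpha)(struct lld_compositor *comp,
			 struct lld_overlay *ovl, u8 alpha);
};

struct lld_output_ops {		/* sends the composed pixels out */
	int (*set_timings)(struct lld_output *out,
			   const struct video_timing *t);
	int (*enable)(struct lld_output *out, bool enable);
	/* for command-mode buses such as DSI */
	int (*update)(struct lld_output *out, u16 x, u16 y, u16 w, u16 h);
};

struct lld_display_ops {	/* the external panel or chip */
	int (*enable)(struct lld_display *disp, bool enable);
	int (*set_backlight)(struct lld_display *disp, unsigned int level);
};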

This kind of low-level framework would be good for two reasons: 1) it's
a good division generally, keeping the low level HW driver separate from
the higher level buffer/policy management, and 2) fb, drm, v4l2 or any
possible future framework could all use the same low level framework.

---

Now, I'm quite sure the above framework could work well with any
OMAP-like hardware with unified memory (i.e. the video buffers are in
SDRAM), where the 3D blocks and similar components are separate. What I'm
not sure about is how the desktop world's gfx cards change things. Most
probably all the above components can be found there too in some form,
but are there interdependencies between 3D/buffer management/something
else and the video output side?

This was a very rough and quite short proposal, but I'm happy to improve
and extend it if it's not totally shot down.

 Tomi



* Re: Proposal for a low-level Linux display framework
From: Keith Packard @ 2011-09-15 14:59 UTC
  To: Tomi Valkeinen, linux-fbdev, linux-kernel, dri-devel, linaro-dev
  Cc: Clark, Rob, Archit Taneja


On Thu, 15 Sep 2011 15:07:05 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

> This was a very rough and quite short proposal, but I'm happy to improve
> and extend it if it's not totally shot down.

Jesse Barnes has put together a proposal much like this to work within
the existing DRM environment. This is pretty much the last piece of
missing mode-setting functionality that we know of, making DRM capable
of fully supporting existing (and planned) devices.

Here's a link to some older discussion on the issue; things have changed
a bit since then, and we had a long talk about this during the X
Developers' Conference this week in Chicago. Expect an update to his
proposal in the coming weeks.

http://lists.freedesktop.org/archives/dri-devel/2011-April/010559.html
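
For reference, the overlay/plane work referred to here eventually landed
in libdrm as drmModeSetPlane(). A minimal userspace sketch follows; the
device path and the plane/CRTC/framebuffer IDs are made up for
illustration.

/* Illustration only: showing a 640x480 framebuffer on a plane. */
#include <stdint.h>
#include <fcntl.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	uint32_t plane_id = 31, crtc_id = 3, fb_id = 42;	/* made up IDs */

	if (fd < 0)
		return 1;

	/* Place the buffer at 100,100 on the CRTC; source coordinates
	 * are in 16.16 fixed point. */
	drmModeSetPlane(fd, plane_id, crtc_id, fb_id, 0,
			100, 100, 640, 480,
			0, 0, 640 << 16, 480 << 16);

	drmClose(fd);
	return 0;
}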

-- 
keith.packard@intel.com


* Re: Proposal for a low-level Linux display framework
From: Kyungmin Park @ 2011-09-15 15:03 UTC
  To: Tomi Valkeinen
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Archit Taneja,
	대인기

Hi Tomi,

On Thu, Sep 15, 2011 at 9:07 PM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> Hi,
>
> I am the author of OMAP display driver, and while developing it I've
> often felt that there's something missing in Linux's display area. I've
> been planning to write a post about this for a few years already, but I
> never got to it. So here goes at last!
>
> ---
>
> First I want to (try to) describe shortly what we have on OMAP, to give
> a bit of a background for my point of view, and to have an example HW.
>
> The display subsystem (DSS) hardware on OMAP handles only showing pixels
> on a display, so it doesn't contain anything that produces pixels like
> 3D stuff or accelerated copying. All it does is fetch pixels from SDRAM,
> possibly do some modifications for them (color format conversions etc),
> and output them to a display.
>
> The hardware has multiple overlays, which are like hardware windows.
> They fetch pixels from SDRAM, and output them in a certain area on the
> display (possibly with scaling). Multiple overlays can be composited
> into one output.
>
> So we may have something like this, when all overlays read pixels from
> separate areas in the memory, and all overlays are on LCD display:
>
>  .-----.         .------.           .------.
>  | mem |-------->| ovl0 |-----.---->| LCD  |
>  '-----'         '------'     |     '------'
>  .-----.         .------.     |
>  | mem |-------->| ovl1 |-----|
>  '-----'         '------'     |
>  .-----.         .------.     |     .------.
>  | mem |-------->| ovl2 |-----'     |  TV  |
>  '-----'         '------'           '------'
>
The Samsung display subsystem has the same feature.

> The LCD display can be rather simple one, like a standard monitor or a
> simple panel directly connected to parallel RGB output, or a more
> complex one. A complex panel needs something else than just
> turn-it-on-and-go. This may involve sending and receiving messages
> between OMAP and the panel, but more generally, there's need to have
> custom code that handles the particular panel. And the complex panel is
> not necessarily a panel at all, it may be a buffer chip between OMAP and
> the actual panel.
>
> The software side can be divided into three parts: the lower level
> omapdss driver, the lower level panel drivers, and higher level drivers
> like omapfb, v4l2 and omapdrm.

The current omapdrm code uses the omapfb and omapdss code. Even though
omapdrm is located in drivers/staging and should later move to
drivers/gpu/drm/omap, it will still use the drivers/video/omap2/dss code.
The Samsung DRM driver is in a similar situation: it has almost the same
low-level access code as drivers/video/s3c-fb.c for FIMD and
drivers/media/video/s5p-tv for HDMI.


>
> The omapdss driver handles the OMAP DSS hardware, and offers a kernel
> internal API which the higher level drivers use. The omapdss does not
> know anything about fb or drm, it just offers core display services.
>
> The panel drivers handle particular panels/chips. The panel driver may
> be very simple in case of a conventional display, basically doing pretty
> much nothing, or bigger piece of code, handling communication with the
> panel.
>
> The higher level drivers handle buffers and tell omapdss things like
> where to find the pixels, what size the overlays should be, and use the
> omapdss API to turn displays on/off, etc.
>
> ---
>
> There are two things that I'm proposing to improve the Linux display
> support:
>
> First, there should be a bunch of common video structs and helpers that
> are independent of any higher level framework. Things like video
> timings, mode databases, and EDID seem to be implemented multiple times
> in the kernel. But there shouldn't be anything in those things that
> depend on any particular display framework, so they could be implemented
> just once and all the frameworks could use them.
>
> Second, I think there could be use for a common low level display
> framework. Currently the lower level code (display HW handling, etc.)
> and higher level code (buffer management, policies, etc) seem to be
> usually tied together, like the fb framework or the drm. Granted, the
> frameworks do not force that, and for OMAP we indeed have omapfb and
> omapdrm using the lower level omapdss. But I don't see that it's
> anything OMAP specific as such.

So I suggest creating drivers/graphics for the low-level code, and having
each framework (DRM, V4L2 and FB) use that low-level code.

Thank you,
Kyungmin Park
>
> I think the lower level framework could have components something like
> this (the naming is OMAP oriented, of course):
>
> overlay - a hardware "window", gets pixels from memory, possibly does
> format conversions, scaling, etc.
>
> overlay compositor - composes multiple overlays into one output,
> possibly doing things like translucency.
>
> output - gets the pixels from overlay compositor, and sends them out
> according to particular video timings when using conventional video
> interface, or via any other mean when using non-conventional video buses
> like DSI command mode.
>
> display - handles an external display. For conventional displays this
> wouldn't do much, but for complex ones it does whatever needed by that
> particular display.
>
> This is something similar to what DRM has, I believe. The biggest
> difference is that the display can be a full blown driver for a complex
> piece of HW.
>
> This kind of low level framework would be good for two purposes: 1) I
> think it's a good division generally, having the low level HW driver
> separate from the higher level buffer/policy management and 2) fb, drm,
> v4l2 or any possible future framework could all use the same low level
> framework.
>
> ---
>
> Now, I'm quite sure the above framework could work quite well with any
> OMAP like hardware, with unified memory (i.e. the video buffers are in
> SDRAM) and 3D chips and similar components are separate. But what I'm
> not sure is how desktop world's gfx cards change things. Most probably
> all the above components can be found from there also in some form, but
> are there some interdependencies between 3D/buffer management/something
> else and the video output side?
>
> This was a very rough and quite short proposal, but I'm happy to improve
> and extend it if it's not totally shot down.
>
>  Tomi
>
>
>
> _______________________________________________
> linaro-dev mailing list
> linaro-dev@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/linaro-dev
>


* Re: Proposal for a low-level Linux display framework
From: Tomi Valkeinen @ 2011-09-15 15:29 UTC
  To: Keith Packard
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja

On Thu, 2011-09-15 at 09:59 -0500, Keith Packard wrote:
> On Thu, 15 Sep 2011 15:07:05 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
> > This was a very rough and quite short proposal, but I'm happy to improve
> > and extend it if it's not totally shot down.
> 
> Jesse Barnes has put together a proposal much like this to work within
> the existing DRM environment. This is pretty much the last piece of
> missing mode-setting functionality that we know of, making DRM capable
> of fully supporting existing (and planned) devices.
> 
> Here's a link to some older discussion on the issue, things have changed
> a bit since then and we had a long talk about this during the X
> Developers' Conference this week in Chicago. Expect an update to his
> proposal in the coming weeks.
> 
> http://lists.freedesktop.org/archives/dri-devel/2011-April/010559.html
> 

Thanks for the link.

Right, DRM already has the components I described in my proposal, and
adding overlays brings it even closer. However, I see two major
differences:

1) It's part of DRM, so it doesn't help fb or v4l2 drivers, unless the
plan is to make DRM the core Linux display framework upon which
everything else is built, with fb and v4l2 changed to use DRM.

But even if it were done like that, it would still combine two separate
things: 1) the lower level HW control, and 2) the upper level buffer
management, policies and userspace interfaces.

2) It's missing the panel driver part. This is rather important on
embedded systems, as the panels often are not "dummy" panels; they need
things like custom initialization, commands to adjust the backlight, and
so on.
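
To illustrate what such a "smart" panel driver has to do, here is a
sketch of a command-mode (DSI) panel's enable and backlight paths. The
dcs_write() helper is an assumption for this example; the command codes
are the standard MIPI DCS opcodes.

/* Sketch of a command-mode panel driver.  dcs_write() is an assumed
 * helper that sends a DCS command over the panel's control bus. */
#include <linux/types.h>
#include <linux/delay.h>

int dcs_write(u8 cmd, const u8 *data, size_t len);	/* assumed helper */

static int smart_panel_enable(void)
{
	int ret;

	ret = dcs_write(0x11, NULL, 0);		/* MIPI DCS exit_sleep_mode */
	if (ret)
		return ret;
	msleep(120);				/* panel-specific wake-up delay */

	return dcs_write(0x29, NULL, 0);	/* MIPI DCS set_display_on */
}

static int smart_panel_set_backlight(u8 level)
{
	/* MIPI DCS set_display_brightness */
	return dcs_write(0x51, &level, 1);
}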

 Tomi




* Re: Proposal for a low-level Linux display framework
From: Keith Packard @ 2011-09-15 15:50 UTC
  To: Tomi Valkeinen
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja


On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

> 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
> the plan is to make DRM the core Linux display framework, upon which
> everything else is built, and fb and v4l2 are changed to use DRM.

I'd like to think we could make DRM the underlying display framework;
it already exposes an fb interface, and with overlays, a bit more of the
v4l2 stuff is done as well. Certainly eliminating three copies of mode
setting infrastructure would be nice...

> But even if it was done like that, I see that it's combining two
> separate things: 1) the lower level HW control, and 2) the upper level
> buffer management, policies and userspace interfaces.

Those are split between the DRM layer and the underlying device driver,
which provides both kernel (via fb) and user space interfaces.

> 2) It's missing the panel driver part. This is rather important on
> embedded systems, as the panels often are not "dummy" panels, but they
> need things like custom initialization, sending commands to adjust
> backlight, etc.

We integrate the panel (and other video output) drivers into the device
drivers. With desktop chips, they're not easily separable. None of the
desktop output drivers are simple; things like DisplayPort require link
training, and everyone needs EDID. We share some of that code in the DRM
layer today, and it would be nice to share even more.
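
As an example of that sharing, a KMS driver's get_modes path can use the
EDID helpers in the DRM core; a rough sketch (the DDC i2c adapter is
assumed to come from the driver's own setup):

/* Sketch: using the shared DRM EDID helpers from a connector's
 * get_modes path.  Error handling is minimal. */
#include <linux/i2c.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_edid.h>

static int example_get_modes(struct drm_connector *connector,
			     struct i2c_adapter *ddc)
{
	struct edid *edid;
	int count = 0;

	edid = drm_get_edid(connector, ddc);	/* read and validate EDID */
	if (edid) {
		drm_mode_connector_update_edid_property(connector, edid);
		count = drm_add_edid_modes(connector, edid);
		kfree(edid);
	}
	return count;				/* number of modes added */
}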

We should figure out if the DRM interfaces are sufficient for your
needs; they're pretty flexible at this point.

Of course, backlight remains a mess in the desktop world, with many
custom backlight drivers alongside the generic ACPI one and then
per-video-device drivers as well.

-- 
keith.packard@intel.com


* Re: Proposal for a low-level Linux display framework
From: Alan Cox @ 2011-09-15 17:05 UTC
  To: Keith Packard
  Cc: Tomi Valkeinen, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

On Thu, 15 Sep 2011 10:50:32 -0500
Keith Packard <keithp@keithp.com> wrote:

> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
> > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
> > the plan is to make DRM the core Linux display framework, upon which
> > everything else is built, and fb and v4l2 are changed to use DRM.
> 
> I'd like to think we could make DRM the underlying display framework;
> it already exposes an fb interface, and with overlays, a bit more of the
> v4l2 stuff is done as well. Certainly eliminating three copies of mode
> setting infrastructure would be nice...

V4L2 needs to interface with DRM anyway. Lots of current hardware wants
things like 1080i/p camera buffers shared with the video pipeline, in
order to do preview on video and the like.

In my semi-perfect world vision, fb would be a legacy layer on top of
DRM. DRM would get the silly recovery failure cases fixed, and a kernel
console would be attachable to a GEM object of your choice.

Alan




* Re: Proposal for a low-level Linux display framework
From: Florian Tobias Schandinat @ 2011-09-15 17:12 UTC
  To: Keith Packard
  Cc: Tomi Valkeinen, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

On 09/15/2011 03:50 PM, Keith Packard wrote:
> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
>> 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>> the plan is to make DRM the core Linux display framework, upon which
>> everything else is built, and fb and v4l2 are changed to use DRM.
> 
> I'd like to think we could make DRM the underlying display framework;
> it already exposes an fb interface, and with overlays, a bit more of the
> v4l2 stuff is done as well. Certainly eliminating three copies of mode
> setting infrastructure would be nice...

Interesting that this comes from the people who pushed the latest mode
setting code into the kernel. But I don't think that this will happen:
the exposed user interfaces will be around for decades, and only the
infrastructure code could be shared, in theory.
For fb and V4L2 I think we'll develop some level of interoperability,
share concepts and maybe even some code. The FOURCC pixel formats and
overlays are such examples. As Laurent is really interested in it, I
think we can make some nice progress here.
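
The FOURCC idea the subsystems share is simply packing four ASCII
characters into a 32-bit pixel format code; a small stand-alone
illustration of the standard encoding (not tied to any particular
header):

#include <stdint.h>
#include <stdio.h>

/* Same encoding as the v4l2_fourcc() macro: four ASCII chars, LSB first. */
#define fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

int main(void)
{
	/* Prints 0x56595559, the value V4L2 uses for V4L2_PIX_FMT_YUYV. */
	printf("YUYV = 0x%08x\n", fourcc('Y', 'U', 'Y', 'V'));
	return 0;
}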
For fb and DRM the situation is entirely different. The last proposal I
remember ended with the DRM people stating that only their implementation
is acceptable as is and we could use it. Such an attitude is not helpful,
and as I don't see any serious intention from the DRM guys to cooperate,
I think those subsystems are more likely to diverge. At least I'll never
accept any change to the fb infrastructure that requires DRM.


Regards,

Florian Tobias Schandinat


* Re: Proposal for a low-level Linux display framework
From: Alan Cox @ 2011-09-15 17:18 UTC
  To: Florian Tobias Schandinat
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

> is and we could use it. Such attitude is not helpful and as I don't see any
> serious intention of the DRM guys to cooperate I think those subsystems are more
> likely to diverge. At least I'll never accept any change to the fb
> infrastructure that requires DRM.

There are aspects of the fb code that want changing for DRM (and indeed
modern hardware) but which won't break for other stuff. Given the move to
using main memory for video and the need for the OS to do buffer
management for framebuffers I suspect a move to DRM is pretty much
inevitable, along with having to fix the fb layer to cope with
discontiguous framebuffers.

Alan


* Re: Proposal for a low-level Linux display framework
  2011-09-15 15:50       ` Keith Packard
@ 2011-09-15 17:21         ` Tomi Valkeinen
  -1 siblings, 0 replies; 143+ messages in thread
From: Tomi Valkeinen @ 2011-09-15 17:21 UTC (permalink / raw)
  To: Keith Packard
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja

On Thu, 2011-09-15 at 10:50 -0500, Keith Packard wrote:
> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
> > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
> > the plan is to make DRM the core Linux display framework, upon which
> > everything else is built, and fb and v4l2 are changed to use DRM.
> 
> I'd like to think we could make DRM the underlying display framework;
> it already exposes an fb interface, and with overlays, a bit more of the
> v4l2 stuff is done as well. Certainly eliminating three copies of mode
> setting infrastructure would be nice...

Ok, sounds good to me. We (as in OMAP display people) are already
planning to take DRM into use, so no problem there.

> > But even if it was done like that, I see that it's combining two
> > separate things: 1) the lower level HW control, and 2) the upper level
> > buffer management, policies and userspace interfaces.
> 
> Those are split between the DRM layer and the underlying device driver,
> which provides both kernel (via fb) and user space interfaces.

I'm not so familiar with DRM, but by device driver you mean a driver
for the hardware which handles display output (gfx cards or whatever
it is on that platform)?

If so, it sounds good. That matches quite well what the omapdss driver
currently does for us. But we still have the semi-complex omapdrm between
omapdss and the standard DRM layer.

Rob, would you say omapdrm is more of a DRM wrapper for omapdss than a
real separate entity? If so, then we could possibly in the future (when
nobody else uses omapdss) change omapdss to support DRM natively. (or
make omapdrm support omap HW natively, whichever way =).

> > 2) It's missing the panel driver part. This is rather important on
> > embedded systems, as the panels often are not "dummy" panels, but they
> > need things like custom initialization, sending commands to adjust
> > backlight, etc.
> 
> We integrate the panel (and other video output) drivers into the device
> drivers. With desktop chips, they're not easily separable. None of the
> desktop output drivers are simple; things like DisplayPort require link
> training, and everyone needs EDID. We share some of that code in the DRM
> layer today, and it would be nice to share even more.

I don't think we speak of similar panel drivers. I think there are two
different drivers here:

1) output drivers, which handle the output from the SoC / gfx card, for
example DVI, DisplayPort, MIPI DPI/DBI/DSI.

2) panel drivers, which handle panel-specific things. Each panel may support
custom commands and features, for which we need a dedicated driver. And
this driver is not platform specific, but should work with any platform
which has the output used with the panel.

As an example, DSI command mode displays can be quite complex:

The DSI bus is a half-duplex serial bus, and while it's designed for
displays, you could easily use it for any communication between the SoC
and the peripheral.

The panel could have a feature like content adaptive backlight control,
and this would be configured via the DSI bus, sending a particular
command to the panel (possibly by first reading something from the
panel). The panel driver would accomplish this more or less the same way
one uses, say, i2c, so it would use the platform's DSI support to send
and receive packets.
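
To make that concrete, here is a rough sketch of what such a call could
look like. None of this is existing omapdss code; the dsi_ops abstraction,
the structure names and the command value are made up purely for
illustration:

#include <linux/types.h>

struct dsi_ops {
	/* send/receive a DCS packet on a virtual channel, in LP or HS mode */
	int (*dcs_write)(void *bus, int vc, bool lp, const u8 *buf, size_t len);
	int (*dcs_read)(void *bus, int vc, bool lp, u8 cmd, u8 *buf, size_t len);
};

struct my_panel {
	void *dsi_bus;			/* handle to the platform's DSI support */
	const struct dsi_ops *dsi;
	int vc;				/* virtual channel ID this panel expects */
};

/* enable content adaptive backlight control on this particular panel */
static int my_panel_enable_cabc(struct my_panel *p, u8 mode)
{
	/* 0x55 is only an example command value, check the panel datasheet */
	u8 buf[2] = { 0x55, mode };

	/* this hypothetical panel wants its configuration sent in LP mode */
	return p->dsi->dcs_write(p->dsi_bus, p->vc, true, buf, sizeof(buf));
}

The point is just that the panel driver is a consumer of the platform's DSI
transport, much like an i2c client driver is a consumer of an i2c adapter.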

Or a more complex scenario (but still a realistic scenario, been there,
done that) is sending the image to the panel in multiple parts, and
between each part sending configuration commands to the panel. (and
still getting it done in time so we avoid tearing).

And to complicate things more, there are DSI bus features like LP mode
(low power, basically a low-speed mode) and HS mode (high speed), virtual
channel IDs, and whatnot, which each panel may need to be used in a
particular manner. Some panels may require the initial configuration to be
done in LP, or configuration commands sent to a certain virtual channel ID.

The point is that we cannot have a standard "MIPI DSI command mode panel
driver" which would work for all DSI cmd mode panels, but we need (in
the worst case) a separate driver for each panel.

The same goes, to a lesser extent, for other panels as well. Some are
configured via i2c or spi.

 Tomi




* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:18           ` Alan Cox
@ 2011-09-15 17:47             ` Florian Tobias Schandinat
  -1 siblings, 0 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-15 17:47 UTC (permalink / raw)
  To: Alan Cox
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

Hi Alan,

On 09/15/2011 05:18 PM, Alan Cox wrote:
>> is and we could use it. Such attitude is not helpful and as I don't see any
>> serious intention of the DRM guys to cooperate I think those subsystems are more
>> likely to diverge. At least I'll never accept any change to the fb
>> infrastructure that requires DRM.
> 
> There are aspects of the fb code that want changing for DRM (and indeed
> modern hardware) but which won't break for other stuff. Given the move to
> using main memory for video and the need for the OS to do buffer
> management for framebuffers I suspect a move to DRM is pretty much
> inevitable, along with having to fix the fb layer to cope with
> discontiguous framebuffers.

What is your problem with discontiguous framebuffers? (I assume discontiguous
refers to the pages the framebuffer is composed of.)
Sounds to me like you should implement your own fb_mmap and either map it
contiguously to screen_base or implement your own fb_read/write.
In theory you could even have each pixel at a completely different memory
location, although some userspace wouldn't be happy if it could no longer mmap
the framebuffer.
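
For what it's worth, a minimal sketch of such a driver-specific fb_mmap,
assuming the driver keeps an array of its scattered pages (the my_par
bookkeeping below is hypothetical), could look roughly like this:

#include <linux/fb.h>
#include <linux/mm.h>

struct my_par {
	struct page **pages;		/* one entry per framebuffer page */
	unsigned long nr_pages;
};

static int my_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
	struct my_par *par = info->par;
	unsigned long npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
	unsigned long addr = vma->vm_start;
	unsigned long i;

	if (vma->vm_pgoff + npages > par->nr_pages)
		return -EINVAL;

	/* insert the scattered pages one by one into the user mapping */
	for (i = 0; i < npages; i++) {
		int ret = vm_insert_page(vma, addr,
					 par->pages[vma->vm_pgoff + i]);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
	}
	return 0;
}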


Best regards,

Florian Tobias Schandinat


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:12         ` Florian Tobias Schandinat
@ 2011-09-15 17:52           ` Alex Deucher
  -1 siblings, 0 replies; 143+ messages in thread
From: Alex Deucher @ 2011-09-15 17:52 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Keith Packard, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, Clark, Rob

On Thu, Sep 15, 2011 at 1:12 PM, Florian Tobias Schandinat
<FlorianSchandinat@gmx.de> wrote:
> On 09/15/2011 03:50 PM, Keith Packard wrote:
>> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>
>>> 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>>> the plan is to make DRM the core Linux display framework, upon which
>>> everything else is built, and fb and v4l2 are changed to use DRM.
>>
>> I'd like to think we could make DRM the underlying display framework;
>> it already exposes an fb interface, and with overlays, a bit more of the
>> v4l2 stuff is done as well. Certainly eliminating three copies of mode
>> setting infrastructure would be nice...
>
> Interesting that this comes from the people that pushed the latest mode setting
> code into the kernel. But I don't think that this will happen, the exposed user
> interfaces will be around for decades and the infrastructure code could be
> shared, in theory.
> For fb and V4L2 I think we'll develop some level of interoperability, share
> concepts and maybe even some code. The FOURCC pixel formats and overlays are
> such examples. As Laurent is really interested in it I think we can get some
> nice progress here.
> For fb and DRM the situation is entirely different. The last proposal I remember
> ended in the DRM people stating that only their implementation is acceptable as
> is and we could use it. Such attitude is not helpful and as I don't see any
> serious intention of the DRM guys to cooperate I think those subsystems are more
> likely to diverge. At least I'll never accept any change to the fb
> infrastructure that requires DRM.

Not exactly.  The point was that the DRM modesetting and EDID
handling was derived from X, which has had 20+ years of quirks and
things added to it to deal with tons of wonky monitors and such.  That
information should be preserved.  As mode structs and EDID handling
are pretty self-contained, why not use the DRM variants of that code
rather than writing a new version?
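
To illustrate how self-contained a mode description is, the information
involved is roughly the following (a generic sketch, not a verbatim copy of
struct drm_display_mode):

#include <linux/types.h>

struct display_mode {
	char name[32];		/* e.g. "1920x1080" */
	u32  pixel_clock;	/* in kHz */

	/* horizontal timings, in pixels */
	u16  hdisplay, hsync_start, hsync_end, htotal;

	/* vertical timings, in lines */
	u16  vdisplay, vsync_start, vsync_end, vtotal;

	u32  flags;		/* sync polarities, interlace, doublescan, ... */
};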

While the DRM has historically targeted 3D acceleration, that is not a
requirement to use the DRM KMS modesetting API.  The current fb API
has no concept of display controllers or connectors or overlays, etc.
To match it to modern hardware, it needs a major overhaul.  Why create
a new modern fb interface that's largely the same as DRM KMS?  What if
we just consider the KMS API as the new fb API?  If there are any
inadequacies in the DRM KMS API we would be happy to work out any
changes.

Please don't claim that the DRM developers do not want to cooperate.
I realize that people have strong opinions about existing APIs, but
there has been just as much, if not more, obstinacy from the v4l and fb
people.

Alex


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:52           ` Alex Deucher
@ 2011-09-15 17:56             ` Geert Uytterhoeven
  -1 siblings, 0 replies; 143+ messages in thread
From: Geert Uytterhoeven @ 2011-09-15 17:56 UTC (permalink / raw)
  To: Alex Deucher
  Cc: Florian Tobias Schandinat, Keith Packard, linux-fbdev,
	linaro-dev, linux-kernel, dri-devel, Archit Taneja, Clark, Rob

On Thu, Sep 15, 2011 at 19:52, Alex Deucher <alexdeucher@gmail.com> wrote:
> While the DRM has historically targeted 3D acceleration, that is not a
> requirement to use the DRM KMS modesetting API.  The current fb API
> has no concept of display controllers or connectors or overlays, etc.
> To match it to modern hardware, it needs a major overhaul.  Why create
> a new modern fb interface that's largely the same as DRM KMS?  What if
> we just consider the KMS API as the new fb API?  If there are any
> inadequacies in the DRM KMS API we would be happy to work out any
> changes.

I admit I didn't look for it, but does there exist a sample DRM KMS driver
for dumb frame buffer hardware with one fixed video mode?

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:56             ` Geert Uytterhoeven
@ 2011-09-15 18:04               ` Alex Deucher
  -1 siblings, 0 replies; 143+ messages in thread
From: Alex Deucher @ 2011-09-15 18:04 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Florian Tobias Schandinat, Keith Packard, linux-fbdev,
	linaro-dev, linux-kernel, dri-devel, Archit Taneja, Clark, Rob

On Thu, Sep 15, 2011 at 1:56 PM, Geert Uytterhoeven
<geert@linux-m68k.org> wrote:
> On Thu, Sep 15, 2011 at 19:52, Alex Deucher <alexdeucher@gmail.com> wrote:
>> While the DRM has historically targeted 3D acceleration, that is not a
>> requirement to use the DRM KMS modesetting API.  The current fb API
>> has no concept of display controllers or connectors or overlays, etc.
>> To match it to modern hardware, it needs a major overhaul.  Why create
>> a new modern fb interface that's largely the same as DRM KMS?  What if
>> we just consider the KMS API as the new fb API?  If there are any
>> inadequacies in the DRM KMS API we would be happy to work out any
>> changes.
>
> I admit I didn't look for it, but does there exist a sample DRM KMS driver
> for dumb frame buffer hardware with one fixed video mode?

Not at the moment.  However, there are drivers for AMD, Intel, and nvidia
chips, as well as patches for a number of ARM SoCs that are in the process
of moving upstream.  Also, Matt Turner wrote a KMS driver for 3Dlabs
GLINT hardware that is pretty simple (single display controller,
single DAC, etc.); however, it hasn't been merged upstream yet.
http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz&can=2&q=
His kernel git tree was on kernel.org so it's down at the moment,
hence the link to the tarball.

Alex



* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:04               ` Alex Deucher
@ 2011-09-15 18:07               ` Corbin Simpson
  -1 siblings, 0 replies; 143+ messages in thread
From: Corbin Simpson @ 2011-09-15 18:07 UTC (permalink / raw)
  To: Alex Deucher
  Cc: linux-fbdev, linaro-dev, Florian Tobias Schandinat, linux-kernel,
	dri-devel, Archit Taneja, Clark, Rob, Geert Uytterhoeven



Wasn't there a driver for qemu cirrus "hardware"?

Sending from a mobile, pardon my terseness. ~ C.
On Sep 15, 2011 1:05 PM, "Alex Deucher" <alexdeucher@gmail.com> wrote:
> On Thu, Sep 15, 2011 at 1:56 PM, Geert Uytterhoeven
> <geert@linux-m68k.org> wrote:
>> On Thu, Sep 15, 2011 at 19:52, Alex Deucher <alexdeucher@gmail.com> wrote:
>>> While the DRM has historically targeted 3D acceleration, that is not a
>>> requirement to use the DRM KMS modesetting API.  The current fb API
>>> has no concept of display controllers or connectors or overlays, etc.
>>> To match it to modern hardware, it needs a major overhaul.  Why create
>>> a new modern fb interface that's largely the same as DRM KMS?  What if
>>> we just consider the KMS API as the new fb API?  If there are any
>>> inadequacies in the DRM KMS API we would be happy to work out any
>>> changes.
>>
>> I admit I didn't look for it, but does there exist a sample DRM KMS driver
>> for dumb frame buffer hardware with one fixed video mode?
>
> Not at the moment. However, there drivers for AMD, Intel, and nvidia
> chips as well patches for a number of ARM SoCs that are in the process
> of moving upstream. Also, Matt Turner wrote a KMS driver for 3D labs
> glint hardware that is pretty simple (single display controller,
> single DAC, etc.), however it hasn't been merged upstream yet.
> http://code.google.com/p/google-summer-of-code-2010-xorg/downloads/detail?name=Matt_Turner.tar.gz&can=2&q=
> His kernel git tree was on kernel.org so it's down at the moment,
> hence the link to the tarball.
>
> Alex


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:12         ` Florian Tobias Schandinat
@ 2011-09-15 18:12           ` Keith Packard
  -1 siblings, 0 replies; 143+ messages in thread
From: Keith Packard @ 2011-09-15 18:12 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Tomi Valkeinen, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja


On Thu, 15 Sep 2011 17:12:43 +0000, Florian Tobias Schandinat <FlorianSchandinat@gmx.de> wrote:

> Interesting that this comes from the people that pushed the latest mode setting
> code into the kernel. But I don't think that this will happen, the exposed user
> interfaces will be around for decades and the infrastructure code could be
> shared, in theory.

We moved mode setting code from user space to kernel space -- the DRM
stuff comes directly from X, which has a fairly long history of
complicated display environments.

The DRM code does expose fb interfaces to both kernel and user mode,
figuring out how to integrate v4l2 and drm seems like the remaining
challenge.

> For fb and V4L2 I think we'll develop some level of interoperability, share
> concepts and maybe even some code. The FOURCC pixel formats and overlays are
> such examples. As Laurent is really interested in it I think we can get some
> nice progress here.

Jesse's design for the DRM overlay code will expose the pixel formats as
FOURCC codes so that DRM and v4l2 can interoperate -- we've got a lot of
hardware that has both video decode and 3D acceleration, so those are
going to get integrated somehow. And, we have to figure out how to share
buffers between these APIs to avoid copying data with the CPU.
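
(As a side note, the nice property of FOURCC codes is that both sides can
agree on a plain 32-bit value; the packing below mirrors the rule v4l2
already uses for its pixel formats, with the macro name here being just an
illustration.)

#include <linux/types.h>

/* pack four characters into a little-endian u32, the way v4l2_fourcc() does */
#define FOURCC(a, b, c, d) \
	((__u32)(a) | ((__u32)(b) << 8) | ((__u32)(c) << 16) | ((__u32)(d) << 24))

/* 'N','V','1','2' -> 0x3231564e, usable for a DRM plane or a V4L2 buffer */
#define FMT_NV12 FOURCC('N', 'V', '1', '2')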

DRM provides fb interfaces, so you don't need to change fb at all --
hardware that requires the capabilities provided by DRM will use that
and not use any of the other fb code in the kernel.

-- 
keith.packard@intel.com


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:21         ` Tomi Valkeinen
@ 2011-09-15 18:32           ` Rob Clark
  -1 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-15 18:32 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: Keith Packard, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja

On Thu, Sep 15, 2011 at 12:21 PM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> On Thu, 2011-09-15 at 10:50 -0500, Keith Packard wrote:
>> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>
>> > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>> > the plan is to make DRM the core Linux display framework, upon which
>> > everything else is built, and fb and v4l2 are changed to use DRM.
>>
>> I'd like to think we could make DRM the underlying display framework;
>> it already exposes an fb interface, and with overlays, a bit more of the
>> v4l2 stuff is done as well. Certainly eliminating three copies of mode
>> setting infrastructure would be nice...
>
> Ok, sounds good to me. We (as in OMAP display people) are already
> planning to take DRM into use, so no problem there.
>
>> > But even if it was done like that, I see that it's combining two
>> > separate things: 1) the lower level HW control, and 2) the upper level
>> > buffer management, policies and userspace interfaces.
>>
>> Those are split between the DRM layer and the underlying device driver,
>> which provides both kernel (via fb) and user space interfaces.
>
> I'm not so familiar with DRM, but with device driver you mean a driver
> for the the hardware which handles display output (gfx cards or whatever
> it is on that platform)?

I think he is referring more to the DRM core and the individual device drivers.

We are (AFAIK) unique in having a two-layer driver, where the DRM part
is more of a wrapper (for the KMS parts)... but I see that as more of
a transition thing... eventually we should be able to merge it all into
the DRM layer.

> If so, it sounds good. That quite well matches what omapdss driver does
> currently for us. But we still have semi-complex omapdrm between omapdss
> and the standard drm layer.
>
> Rob, would you say omapdrm is more of a DRM wrapper for omapdss than a
> real separate entity? If so, then we could possibly in the future (when
> nobody else uses omapdss) change omapdss to support DRM natively. (or
> make omapdrm support omap HW natively, which ever way =).

Yeah, I think eventually it would make sense to merge all into one.
Although I'm not sure about how best to handle various different
custom DSI panels..

BR,
-R


>> > 2) It's missing the panel driver part. This is rather important on
>> > embedded systems, as the panels often are not "dummy" panels, but they
>> > need things like custom initialization, sending commands to adjust
>> > backlight, etc.
>>
>> We integrate the panel (and other video output) drivers into the device
>> drivers. With desktop chips, they're not easily separable. None of the
>> desktop output drivers are simple; things like DisplayPort require link
>> training, and everyone needs EDID. We share some of that code in the DRM
>> layer today, and it would be nice to share even more.
>
> I don't think we speak of similar panel drivers. I think there are two
> different drivers here:
>
> 1) output drivers, handles the output from the SoC / gfx card. For
> example DVI, DisplayPort, MIPI DPI/DBI/DSI.
>
> 2) panel drivers, handles panel specific things. Each panel may support
> custom commands and features, for which we need a dedicated driver. And
> this driver is not platform specific, but should work with any platform
> which has the output used with the panel.
>
> As an example, DSI command mode displays can be quite complex:
>
> DSI bus is a half-duplex serial bus, and while it's designed for
> displays you could use it easily for any communication between the SoC
> and the peripheral.
>
> The panel could have a feature like content adaptive backlight control,
> and this would be configured via the DSI bus, sending a particular
> command to the panel (possibly by first reading something from the
> panel). The panel driver would accomplish this more or less the same way
> one uses, say, i2c, so it would use the platform's DSI support to send
> and receive packets.
>
> Or a more complex scenario (but still a realistic scenario, been there,
> done that) is sending the image to the panel in multiple parts, and
> between each part sending configuration commands to the panel. (and
> still getting it done in time so we avoid tearing).
>
> And to complicate things more, there are DSI bus features like LP mode
> (low power, basically low speed mode) and HS mode (high speed), virtual
> channel IDs, and whatnot, which each panel may need to be used in
> particular manner. Some panels may require initial configuration done in
> LP, or configuration commands sent to a certain virtual channel ID.
>
> The point is that we cannot have standard "MIPI DSI command mode panel
> driver" which would work for all DSI cmd mode panels, but we need (in
> the worst case) separate driver for each panel.
>
> The same goes to lesser extent for other panels also. Some are
> configured via i2c or spi.
>
>  Tomi
>

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
@ 2011-09-15 18:32           ` Rob Clark
  0 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-15 18:32 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: linaro-dev, linux-fbdev, dri-devel, linux-kernel, Archit Taneja

On Thu, Sep 15, 2011 at 12:21 PM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> On Thu, 2011-09-15 at 10:50 -0500, Keith Packard wrote:
>> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>
>> > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>> > the plan is to make DRM the core Linux display framework, upon which
>> > everything else is built, and fb and v4l2 are changed to use DRM.
>>
>> I'd like to think we could make DRM the underlying display framework;
>> it already exposes an fb interface, and with overlays, a bit more of the
>> v4l2 stuff is done as well. Certainly eliminating three copies of mode
>> setting infrastructure would be nice...
>
> Ok, sounds good to me. We (as in OMAP display people) are already
> planning to take DRM into use, so no problem there.
>
>> > But even if it was done like that, I see that it's combining two
>> > separate things: 1) the lower level HW control, and 2) the upper level
>> > buffer management, policies and userspace interfaces.
>>
>> Those are split between the DRM layer and the underlying device driver,
>> which provides both kernel (via fb) and user space interfaces.
>
> I'm not so familiar with DRM, but with device driver you mean a driver
> for the the hardware which handles display output (gfx cards or whatever
> it is on that platform)?

I think he is more referring to the DRM core and the individual device drivers..

We are (AFAIK) unique in having a two layer driver, where the DRM part
is more of a wrapper (for the KMS parts)... but I see that as more of
a transition thing.. eventually we should be able to merge it all into
the DRM layer.

> If so, it sounds good. That quite well matches what omapdss driver does
> currently for us. But we still have semi-complex omapdrm between omapdss
> and the standard drm layer.
>
> Rob, would you say omapdrm is more of a DRM wrapper for omapdss than a
> real separate entity? If so, then we could possibly in the future (when
> nobody else uses omapdss) change omapdss to support DRM natively. (or
> make omapdrm support omap HW natively, which ever way =).

Yeah, I think eventually it would make sense to merge all into one.
Although I'm not sure about how best to handle various different
custom DSI panels..

BR,
-R


>> > 2) It's missing the panel driver part. This is rather important on
>> > embedded systems, as the panels often are not "dummy" panels, but they
>> > need things like custom initialization, sending commands to adjust
>> > backlight, etc.
>>
>> We integrate the panel (and other video output) drivers into the device
>> drivers. With desktop chips, they're not easily separable. None of the
>> desktop output drivers are simple; things like DisplayPort require link
>> training, and everyone needs EDID. We share some of that code in the DRM
>> layer today, and it would be nice to share even more.
>
> I don't think we speak of similar panel drivers. I think there are two
> different drivers here:
>
> 1) output drivers, handles the output from the SoC / gfx card. For
> example DVI, DisplayPort, MIPI DPI/DBI/DSI.
>
> 2) panel drivers, handles panel specific things. Each panel may support
> custom commands and features, for which we need a dedicated driver. And
> this driver is not platform specific, but should work with any platform
> which has the output used with the panel.
>
> As an example, DSI command mode displays can be quite complex:
>
> DSI bus is a half-duplex serial bus, and while it's designed for
> displays you could use it easily for any communication between the SoC
> and the peripheral.
>
> The panel could have a feature like content adaptive backlight control,
> and this would be configured via the DSI bus, sending a particular
> command to the panel (possibly by first reading something from the
> panel). The panel driver would accomplish this more or less the same way
> one uses, say, i2c, so it would use the platform's DSI support to send
> and receive packets.
>
> Or a more complex scenario (but still a realistic scenario, been there,
> done that) is sending the image to the panel in multiple parts, and
> between each part sending configuration commands to the panel. (and
> still getting it done in time so we avoid tearing).
>
> And to complicate things more, there are DSI bus features like LP mode
> (low power, basically low speed mode) and HS mode (high speed), virtual
> channel IDs, and whatnot, which each panel may need to be used in
> particular manner. Some panels may require initial configuration done in
> LP, or configuration commands sent to a certain virtual channel ID.
>
> The point is that we cannot have standard "MIPI DSI command mode panel
> driver" which would work for all DSI cmd mode panels, but we need (in
> the worst case) separate driver for each panel.
>
> The same goes to lesser extent for other panels also. Some are
> configured via i2c or spi.
>
>  Tomi
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
@ 2011-09-15 18:32           ` Rob Clark
  0 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-15 18:32 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: linaro-dev, linux-fbdev, dri-devel, linux-kernel, Archit Taneja

On Thu, Sep 15, 2011 at 12:21 PM, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> On Thu, 2011-09-15 at 10:50 -0500, Keith Packard wrote:
>> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>
>> > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>> > the plan is to make DRM the core Linux display framework, upon which
>> > everything else is built, and fb and v4l2 are changed to use DRM.
>>
>> I'd like to think we could make DRM the underlying display framework;
>> it already exposes an fb interface, and with overlays, a bit more of the
>> v4l2 stuff is done as well. Certainly eliminating three copies of mode
>> setting infrastructure would be nice...
>
> Ok, sounds good to me. We (as in OMAP display people) are already
> planning to take DRM into use, so no problem there.
>
>> > But even if it was done like that, I see that it's combining two
>> > separate things: 1) the lower level HW control, and 2) the upper level
>> > buffer management, policies and userspace interfaces.
>>
>> Those are split between the DRM layer and the underlying device driver,
>> which provides both kernel (via fb) and user space interfaces.
>
> I'm not so familiar with DRM, but with device driver you mean a driver
> for the the hardware which handles display output (gfx cards or whatever
> it is on that platform)?

I think he is more referring to the DRM core and the individual device drivers..

We are (AFAIK) unique in having a two layer driver, where the DRM part
is more of a wrapper (for the KMS parts)... but I see that as more of
a transition thing.. eventually we should be able to merge it all into
the DRM layer.

> If so, it sounds good. That quite well matches what omapdss driver does
> currently for us. But we still have semi-complex omapdrm between omapdss
> and the standard drm layer.
>
> Rob, would you say omapdrm is more of a DRM wrapper for omapdss than a
> real separate entity? If so, then we could possibly in the future (when
> nobody else uses omapdss) change omapdss to support DRM natively. (or
> make omapdrm support omap HW natively, which ever way =).

Yeah, I think eventually it would make sense to merge all into one.
Although I'm not sure about how best to handle various different
custom DSI panels..

BR,
-R


>> > 2) It's missing the panel driver part. This is rather important on
>> > embedded systems, as the panels often are not "dummy" panels, but they
>> > need things like custom initialization, sending commands to adjust
>> > backlight, etc.
>>
>> We integrate the panel (and other video output) drivers into the device
>> drivers. With desktop chips, they're not easily separable. None of the
>> desktop output drivers are simple; things like DisplayPort require link
>> training, and everyone needs EDID. We share some of that code in the DRM
>> layer today, and it would be nice to share even more.
>
> I don't think we speak of similar panel drivers. I think there are two
> different drivers here:
>
> 1) output drivers, handles the output from the SoC / gfx card. For
> example DVI, DisplayPort, MIPI DPI/DBI/DSI.
>
> 2) panel drivers, handles panel specific things. Each panel may support
> custom commands and features, for which we need a dedicated driver. And
> this driver is not platform specific, but should work with any platform
> which has the output used with the panel.
>
> As an example, DSI command mode displays can be quite complex:
>
> DSI bus is a half-duplex serial bus, and while it's designed for
> displays you could use it easily for any communication between the SoC
> and the peripheral.
>
> The panel could have a feature like content adaptive backlight control,
> and this would be configured via the DSI bus, sending a particular
> command to the panel (possibly by first reading something from the
> panel). The panel driver would accomplish this more or less the same way
> one uses, say, i2c, so it would use the platform's DSI support to send
> and receive packets.
>
> Or a more complex scenario (but still a realistic scenario, been there,
> done that) is sending the image to the panel in multiple parts, and
> between each part sending configuration commands to the panel. (and
> still getting it done in time so we avoid tearing).
>
> And to complicate things more, there are DSI bus features like LP mode
> (low power, basically low speed mode) and HS mode (high speed), virtual
> channel IDs, and whatnot, which each panel may need to be used in
> particular manner. Some panels may require initial configuration done in
> LP, or configuration commands sent to a certain virtual channel ID.
>
> The point is that we cannot have standard "MIPI DSI command mode panel
> driver" which would work for all DSI cmd mode panels, but we need (in
> the worst case) separate driver for each panel.
>
> The same goes, to a lesser extent, for other panels also. Some are
> configured via i2c or spi.
>
>  Tomi
>
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>
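
To make the quoted DSI scenario a bit more concrete, here is a minimal sketch
of what such a per-panel driver could look like on top of a platform DSI
helper. Everything in it is hypothetical: the dsi_dcs_write()/dsi_dcs_read()
helpers, the AcmeCorp function name and the command bytes are invented for
illustration and are not an existing kernel API.

/*
 * Hypothetical sketch only: a panel driver enabling content adaptive
 * backlight control over the DSI command bus.  The dsi_dcs_write/read
 * helpers, the virtual channel handling and the command bytes are all
 * invented for illustration; this is not an existing kernel API.
 */
#include <linux/errno.h>
#include <linux/types.h>

struct dsi_device;	/* opaque handle from the platform's DSI support */

int dsi_dcs_write(struct dsi_device *dsi, int vc, const u8 *buf, size_t len);
int dsi_dcs_read(struct dsi_device *dsi, int vc, u8 cmd, u8 *buf, size_t len);

static int acme_l33t_enable_cabc(struct dsi_device *dsi, int vc)
{
	u8 id;
	u8 cmd[2] = { 0x55, 0x02 };	/* made-up "enable CABC" command */
	int r;

	/* read something back from the panel first, e.g. an ID register */
	r = dsi_dcs_read(dsi, vc, 0xda, &id, 1);
	if (r < 0)
		return r;

	/* then send the panel-specific configuration command */
	return dsi_dcs_write(dsi, vc, cmd, sizeof(cmd));
}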

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:52           ` Alex Deucher
  (?)
  (?)
@ 2011-09-15 18:39           ` Florian Tobias Schandinat
  2011-09-15 18:58               ` Alan Cox
                               ` (2 more replies)
  -1 siblings, 3 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-15 18:39 UTC (permalink / raw)
  To: Alex Deucher
  Cc: Keith Packard, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, Clark, Rob

On 09/15/2011 05:52 PM, Alex Deucher wrote:
> On Thu, Sep 15, 2011 at 1:12 PM, Florian Tobias Schandinat
> <FlorianSchandinat@gmx.de> wrote:
>> On 09/15/2011 03:50 PM, Keith Packard wrote:
>>> On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
>>>
>>>> 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
>>>> the plan is to make DRM the core Linux display framework, upon which
>>>> everything else is built, and fb and v4l2 are changed to use DRM.
>>>
>>> I'd like to think we could make DRM the underlying display framework;
>>> it already exposes an fb interface, and with overlays, a bit more of the
>>> v4l2 stuff is done as well. Certainly eliminating three copies of mode
>>> setting infrastructure would be nice...
>>
>> Interesting that this comes from the people who pushed the latest mode setting
>> code into the kernel. But I don't think that this will happen: the exposed user
>> interfaces will be around for decades, and the infrastructure code could, in
>> theory, be shared.
>> For fb and V4L2 I think we'll develop some level of interoperability, share
>> concepts and maybe even some code. The FOURCC pixel formats and overlays are
>> such examples. As Laurent is really interested in it I think we can get some
>> nice progress here.
>> For fb and DRM the situation is entirely different. The last proposal I remember
>> ended in the DRM people stating that only their implementation is acceptable
>> as-is and we could use it. Such an attitude is not helpful and, as I don't see any
>> serious intention of the DRM guys to cooperate, I think those subsystems are more
>> likely to diverge. At least I'll never accept any change to the fb
>> infrastructure that requires DRM.
> 
> Not exactly.  The point was that the drm modesetting and EDID
> handling was derived from X, which has had 20+ years of quirks and
> things added to it to deal with tons of wonky monitors and such.  That
> information should be preserved.  As mode structs and EDID handling
> are pretty self-contained, why not use the DRM variants of that code
> rather than writing a new version?

Well, I'm not against sharing the code and not against taking DRM's current
implementation as a base but the steps required to make it generally acceptable
would be to split it off, probably as a standalone module and strip all DRM
specific things off. Then all things that require EDID can use it, DRM can add
DRM-specific things on top and fb can add fb-specific things.
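
As a rough sketch of what such a split-out, framework-neutral EDID helper could
look like (the names and fields below are invented for illustration, not an
existing interface):

/*
 * Hypothetical framework-neutral EDID helper, usable by both DRM and fb.
 * All names and fields are illustrative only.
 */
#include <linux/types.h>

struct display_timing {
	u32 pixclock_khz;
	u16 hactive, hfront_porch, hsync_len, hback_porch;
	u16 vactive, vfront_porch, vsync_len, vback_porch;
};

struct edid_info {
	char monitor_name[14];
	unsigned int num_timings;
	struct display_timing timings[32];
};

/*
 * Parse a raw EDID blob into the neutral structures above; DRM and fb
 * would each convert the result into their own mode representations
 * and add their framework-specific bits on top.
 */
int edid_parse(const u8 *raw, size_t len, struct edid_info *out);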

> While the DRM has historically targeted 3D acceleration, that is not a
> requirement to use the DRM KMS modesetting API.  The current fb API
> has no concept of display controllers or connectors or overlays, etc.
> To match it to modern hardware, it needs a major overhaul.  Why create
> a new modern fb interface that's largely the same as DRM KMS?  What if
> we just consider the KMS API as the new fb API?  If there are any
> inadequacies in the DRM KMS API we would be happy to work out any
> changes.

Well, I rather think that the fb API is more user centric to allow every program
to use it directly in contrast to the KMS/DRM API which aims to support every
feature the hardware has. For this the fb API should not change much, but I
understand some additions were needed for some special users, probably limited
to X and wayland.
One of my biggest problems with KMS is that it has (naturally) a lot more
complexity than the fb API which leads to instability. Basically it's very
difficult to implement a framebuffer in a way that crashes your machine
during operation, which is quite a contrast to my KMS/DRM experience on my toy
(on my work machines I use framebuffer only). And I really hate it when I have
to type my passwords again just because the KMS/DRM thing allowed a program to
crash my machine. Yes, those are driver bugs, but the API encourages them, and I
have not yet found the feature/config option DOES_NOT_CRASH or SLOW_BUT_STABLE.
And as I said already, I think the fb API is a lot better for direct interaction
with userspace programs, and it certainly has more direct users at the moment.

> Please don't claim that the DRM developers do not want to cooperate.
> I realize that people have strong opinions about existing APIs, but
> there has been just as much, if not more, obstinacy from the v4l and fb
> people.

Well, I think it's too late to really fix this thing. We now have 3 APIs in the
kernel that have to be kept. Probably the best we can do now is figure out how
we can reduce code duplication and do extensions to those APIs in a way that
they are compatible with each other or completely independent and can be used
across the APIs.


Best regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:39           ` Florian Tobias Schandinat
  2011-09-15 18:58               ` Alan Cox
@ 2011-09-15 18:58               ` Alan Cox
  2011-09-17 23:12               ` Laurent Pinchart
  2 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-15 18:58 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Alex Deucher, Keith Packard, linux-fbdev, linaro-dev,
	linux-kernel, dri-devel, Archit Taneja, Clark, Rob

> Well, I rather think that the fb API is more user centric to allow every program
> to use it directly in contrast to the KMS/DRM API which aims to support every
> feature the hardware has. For this the fb API should not change much, but I
> understand some additions were needed for some special users, probably limited
> to X and wayland.

Wayland needs vblank frame buffer switching and the like. Likewise given
you want to composite buffers really any serious accelerated device ends
up needing a full memory manager and that ends up needing a buffer
manager. Wayland needs clients to be doing their own rendering into
objects which means authorisation and management of the render engine
which ends up looking much like DRM.

> One of my biggest problems with KMS is that it has (naturally) a lot more
> complexity than the fb API which leads to instability. Basically it's very

It shouldn't do - and a sample of one (your machine) is not a
statistically valid set. Fb is pretty much unusable in contrast on my
main box, but that's not a statistically valid sample either.

I'm not that convinced by the complexity either. For a simple video card
setup such as those that the fb layer can kind of cope with (ie linear
buffer, simple mode changes, no client rendering, no vblank flipping,
limited mode management, no serious multi-head) a DRM driver is also
pretty tiny and simple.

> Well, I think it's too late to really fix this thing. We now have 3 APIs in the
> kernel that have to be kept. Probably the best we can do now is figure out how
> we can reduce code duplication and do extensions to those APIs in a way that
> they are compatible with each other or completely independent and can be used
> across the APIs.

I think it comes down to 'when nobody is using the old fb drivers they can
drop into staging and oblivion'. Right now the fb layer is essentially
compatibility glue on most modern x86 platforms.

Alan

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:47             ` Florian Tobias Schandinat
  (?)
@ 2011-09-15 19:05               ` Alan Cox
  -1 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-15 19:05 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

> What is your problem with discontiguous framebuffers? (I assume discontiguous
> refers to the pages the framebuffer is composed of)
> Sounds to me like you should implement your own fb_mmap and either map it
> contiguously to screen_base or implement your own fb_read/write.
> In theory you could even have each pixel at a completely different memory
> location although some userspace wouldn't be happy when it could no longer mmap
> the framebuffer.

The mmap side is trivial; the problem is that the fb layer's default
implementations of blits, fills, etc. only work on a kernel-linear frame
buffer. And (for example) there is not enough linear stolen memory on
some Intel video for a 1080p console on HDMI, even though the hardware is
perfectly capable of using an HDTV as its monitor. Nor - on a 32-bit box -
is there enough space to vremap it.

Alan

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:58               ` Alan Cox
  (?)
  (?)
@ 2011-09-15 19:18               ` Florian Tobias Schandinat
  2011-09-15 19:28                   ` Alan Cox
  2011-09-15 19:45                   ` Alex Deucher
  -1 siblings, 2 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-15 19:18 UTC (permalink / raw)
  To: Alan Cox
  Cc: Alex Deucher, Keith Packard, linux-fbdev, linaro-dev,
	linux-kernel, dri-devel, Archit Taneja, Clark, Rob

On 09/15/2011 06:58 PM, Alan Cox wrote:
>> Well, I rather think that the fb API is more user centric to allow every program
>> to use it directly in contrast to the KMS/DRM API which aims to support every
>> feature the hardware has. For this the fb API should not change much, but I
>> understand some additions were needed for some special users, probably limited
>> to X and wayland.
> 
> Wayland needs vblank frame buffer switching and the like. Likewise given
> you want to composite buffers really any serious accelerated device ends
> up needing a full memory manager and that ends up needing a buffer
> manager. Wayland needs clients to be doing their own rendering into
> objects which means authorisation and management of the render engine
> which ends up looking much like DRM.

As you have DRM now, and as I'm not interested in Wayland, I won't discuss this,
but I guess it might be a good starting point for Geert's question of what would
be needed to use it on dumb framebuffers.

>> One of my biggest problems with KMS is that it has (naturally) a lot more
>> complexity than the fb API which leads to instability. Basically it's very
> 
> It shouldn't do - and a sample of one (your machine) is not a
> statistically valid set. Fb is pretty much unusable in contrast on my
> main box, but that's not a statistically valid sample either.
> 
> I'm not that convinced by the complexity either. For a simple video card
> setup such as those that the fb layer can kind of cope with (ie linear
> buffer, simple mode changes, no client rendering, no vblank flipping,
> limited mode management, no serious multi-head) a DRM driver is also
> pretty tiny and simple.

Yes, if you limit DRM to the functionality of the fb API I guess you could reach
the same stability level. But where can I do this? Where is an option to forbid
all acceleration, or at least limit it to the acceleration that can be done without
any risk?

>> Well, I think it's too late to really fix this thing. We now have 3 APIs in the
>> kernel that have to be kept. Probably the best we can do now is figure out how
>> we can reduce code duplication and do extensions to those APIs in a way that
>> they are compatible with each other or completely independent and can be used
>> across the APIs.
> 
> I think it comes down to 'when nobody is using the old fb drivers they can
> drop into staging and oblivion'. Right now the fb layer is essentially
> compatibility glue on most modern x86 platforms.

That's a really difficult question. Determining the users is difficult, and there
are people who use their hardware for a very long time; for example, we are about
to get a new driver for the i740. For the framebuffer infrastructure I guess you
have to at least wait for my death.


Regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
@ 2011-09-15 19:28                   ` Alan Cox
  0 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-15 19:28 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Alex Deucher, Keith Packard, linux-fbdev, linaro-dev,
	linux-kernel, dri-devel, Archit Taneja, Clark, Rob

> As you have DRM now, and as I'm not interested in Wayland, I won't discuss this,
> but I guess it might be a good starting point for Geert's question of what would
> be needed to use it on dumb framebuffers.

GMA500 is basically a 2D or dumb frame buffer setup but with a lot of
rather complicated output and memory management required due to the
hardware. With the latest changes to GEM (private objects) it's basically
trivial to use the frame buffer management interfaces.

> Yes, if you limit DRM to the functionality of the fb API I guess you could reach
> the same stability level. But where can I do this? Where is an option to forbid
> all acceleration, or at least limit it to the acceleration that can be done without
> any risk?

A driver can provide such module options as it wants.
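
For illustration, a minimal sketch of such a module option; the "noaccel" name
is made up here and no particular driver is implied:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical per-driver knob to run in modesetting-only mode. */
static bool noaccel;
module_param(noaccel, bool, 0444);
MODULE_PARM_DESC(noaccel, "Disable all acceleration, modesetting only");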

> That's a really difficult question. Determining the users is difficult, and there
> are people who use their hardware for a very long time; for example, we are about
> to get a new driver for the i740. For the framebuffer infrastructure I guess you
> have to at least wait for my death.

I doubt it'll be that long - but you are right it will take time and
there isn't really any need to push or force it. These things take care
of themselves and in time nobody will care about the old fb stuff, either
because DRM covers it all or equally possibly because it doesn't support
3D holographic projection 8)

Alan

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 19:18               ` Florian Tobias Schandinat
@ 2011-09-15 19:45                   ` Alex Deucher
  2011-09-15 19:45                   ` Alex Deucher
  1 sibling, 0 replies; 143+ messages in thread
From: Alex Deucher @ 2011-09-15 19:45 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Alan Cox, Keith Packard, linux-fbdev, linaro-dev, linux-kernel,
	dri-devel, Archit Taneja, Clark, Rob

On Thu, Sep 15, 2011 at 3:18 PM, Florian Tobias Schandinat
<FlorianSchandinat@gmx.de> wrote:
> On 09/15/2011 06:58 PM, Alan Cox wrote:
>>> Well, I rather think that the fb API is more user centric to allow every program
>>> to use it directly in contrast to the KMS/DRM API which aims to support every
>>> feature the hardware has. For this the fb API should not change much, but I
>>> understand some additions were needed for some special users, probably limited
>>> to X and wayland.
>>
>> Wayland needs vblank frame buffer switching and the like. Likewise given
>> you want to composite buffers really any serious accelerated device ends
>> up needing a full memory manager and that ends up needing a buffer
>> manager. Wayland needs clients to be doing their own rendering into
>> objects which means authorisation and management of the render engine
>> which ends up looking much like DRM.
>
> As you have DRM now and as I'm not interested in wayland I won't discuss this,
> but I guess it might be a good start for Geert's question what would be needed
> to use it on dumb framebuffers.
>
>>> One of my biggest problems with KMS is that it has (naturally) a lot more
>>> complexity than the fb API which leads to instability. Basically it's very
>>
>> It shouldn't do - and a sample of one (your machine) is not a
>> statistically valid set. Fb is pretty much unusable in contrast on my
>> main box, but that's not a statistically valid sample either.
>>
>> I'm not that convinced by the complexity either. For a simple video card
>> setup such as those that the fb layer can kind of cope with (ie linear
>> buffer, simple mode changes, no client rendering, no vblank flipping,
>> limited mode management, no serious multi-head) a DRM driver is also
>> pretty tiny and simple.
>
> Yes, if you limit DRM to the functionality of the fb API I guess you could reach
> the same stability level. But where can I do this? Where is an option to forbid
> all acceleration, or at least limit it to the acceleration that can be done without
> any risk?

Right now most of the KMS DRM drivers do not support accelerated fb,
so as long as you don't run accelerated X or a 3D app, it should be
just as stable as an fb driver.  You may run into modesetting failures in
certain cases due to wonky hardware or driver bugs, but you will hit
that with an fb driver as well.

Alex

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 19:05               ` Alan Cox
@ 2011-09-15 19:46                 ` Florian Tobias Schandinat
  -1 siblings, 0 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-15 19:46 UTC (permalink / raw)
  To: Alan Cox
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

On 09/15/2011 07:05 PM, Alan Cox wrote:
>> What is your problem with discontiguous framebuffers? (I assume discontiguous
>> refers to the pages the framebuffer is composed of)
>> Sounds to me like you should implement your own fb_mmap and either map it
>> contiguously to screen_base or implement your own fb_read/write.
>> In theory you could even have each pixel at a completely different memory
>> location although some userspace wouldn't be happy when it could no longer mmap
>> the framebuffer.
> 
> The mmap side is trivial; the problem is that the fb layer's default
> implementations of blits, fills, etc. only work on a kernel-linear frame
> buffer. And (for example) there is not enough linear stolen memory on
> some Intel video for a 1080p console on HDMI, even though the hardware is
> perfectly capable of using an HDTV as its monitor. Nor - on a 32-bit box -
> is there enough space to vremap it.

Okay, I see your problem. It's a bit strange you don't have acceleration. I
guess you need either your own implementation of those, or to add function
pointers like fb_read/write (just without the __user) and use those instead of
direct memory access in the cfb* implementations when screen_base is NULL. Does not
sound like a big problem to me, but pretty inefficient, so probably copying the
existing ones and adjusting it to your needs would be preferred (just like the
sys* implementations exist).
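
A hedged sketch of that approach for a framebuffer that is not kernel-linear;
the gma_* names are placeholders invented here, only the fb_ops hooks
themselves are real:

#include <linux/fb.h>
#include <linux/module.h>

/*
 * Placeholder sketch: a driver whose framebuffer is not contiguous in
 * kernel virtual memory supplies its own drawing ops instead of the
 * cfb_* helpers, which write through info->screen_base.
 */
static void gma_fillrect(struct fb_info *info, const struct fb_fillrect *rect)
{
	/* walk the framebuffer page by page via driver-private mappings */
}

static void gma_copyarea(struct fb_info *info, const struct fb_copyarea *area)
{
	/* likewise, a page-aware copy */
}

static void gma_imageblit(struct fb_info *info, const struct fb_image *image)
{
	/* likewise, a page-aware blit */
}

static struct fb_ops gma_fb_ops = {
	.owner		= THIS_MODULE,
	.fb_fillrect	= gma_fillrect,
	.fb_copyarea	= gma_copyarea,
	.fb_imageblit	= gma_imageblit,
	/* .fb_mmap, .fb_read and .fb_write would need the same treatment */
};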


Best regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 19:46                 ` Florian Tobias Schandinat
  (?)
@ 2011-09-15 21:31                   ` Alan Cox
  -1 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-15 21:31 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

> Okay, I see your problem. It's a bit strange you don't have acceleration. I

The hardware has 3D acceleration, but it's not open, so we can't support it.
There is no 2D acceleration - which seems to be increasingly common.

At some point I'll add hardware scrolling, however, by using the GTT to
implement scroll wrapping.

> sound like a big problem to me, but pretty inefficient, so probably copying the
> existing ones and adjusting it to your needs would be preferred (just like the
> sys* implementations exist).

I did have a look at the current ones, but fixing them up given that scan lines
can span page boundaries looked pretty horrible, so I deferred it until I
feel inspired.

Alan

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:21         ` Tomi Valkeinen
  (?)
@ 2011-09-16  0:55           ` Keith Packard
  -1 siblings, 0 replies; 143+ messages in thread
From: Keith Packard @ 2011-09-16  0:55 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja

On Thu, 15 Sep 2011 20:21:15 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

> 2) panel drivers, handles panel specific things. Each panel may support
> custom commands and features, for which we need a dedicated driver. And
> this driver is not platform specific, but should work with any platform
> which has the output used with the panel.

Right, we've got DDC ports (which are just i2c) and DisplayPort aux
channel stuff.

The DDC stuff is abstracted out and shared across the drivers, but the
DisplayPort aux channel code is not -- it's duplicated in every output
driver. 

> DSI bus is a half-duplex serial bus, and while it's designed for
> displays you could use it easily for any communication between the SoC
> and the peripheral.

Yeah, HDMI uses DDC for all kinds of crazy stuff in the CE world.

> The point is that we cannot have standard "MIPI DSI command mode panel
> driver" which would work for all DSI cmd mode panels, but we need (in
> the worst case) separate driver for each panel.

It sounds like we do want to share code for those bits, much like we
have DDC split out now. And, we should do something about the
DisplayPort aux channel stuff to avoid duplicating it everywhere.

I'm not sure a common interface to all of these different
channels makes sense, but surely a DSI library and an aux channel
library would fit nicely alongside the existing DDC library.

I suspect helper functions would be a good model to follow, rather than
trying to create a whole new device infrastructure; some of the
communication paths aren't easily separable from the underlying output
devices.

Oh, I think you're also trying to get at how we expose some of these
controls outside of the display driver -- right now, they're mostly
exposed as properties on the output device. Things like backlight
brightness, a million analog TV output values, dithering control and
other more esoteric controls.

DRM properties include booleans, enumerations, integer ranges and chunks
of binary data. Some are read-only (like EDID data), some are writable
(like overscan values).
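
For illustration, attaching such a control as a KMS property looks roughly
like the sketch below; the "backlight" name and the 0..255 range are made up,
and the calls only approximate the drm_property_create() /
drm_connector_attach_property() pattern used by KMS drivers of this era rather
than documenting it:

#include <drm/drmP.h>
#include <drm/drm_crtc.h>

/* Rough sketch only: expose a made-up "backlight" range property on a
 * connector.  Values and names are illustrative. */
static void example_attach_backlight_prop(struct drm_device *dev,
					  struct drm_connector *connector)
{
	struct drm_property *prop;

	prop = drm_property_create(dev, DRM_MODE_PROP_RANGE, "backlight", 2);
	if (!prop)
		return;

	prop->values[0] = 0;	/* minimum */
	prop->values[1] = 255;	/* maximum */

	/* attach with an initial value; the driver's set_property()
	 * hook would then push changes out to the panel */
	drm_connector_attach_property(connector, prop, 255);
}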

-- 
keith.packard@intel.com

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:39           ` Florian Tobias Schandinat
  2011-09-15 18:58               ` Alan Cox
@ 2011-09-16  4:53               ` Keith Packard
  2011-09-17 23:12               ` Laurent Pinchart
  2 siblings, 0 replies; 143+ messages in thread
From: Keith Packard @ 2011-09-16  4:53 UTC (permalink / raw)
  To: Florian Tobias Schandinat, Alex Deucher
  Cc: linux-fbdev, linaro-dev, linux-kernel, dri-devel, Archit Taneja,
	Clark, Rob

On Thu, 15 Sep 2011 18:39:21 +0000, Florian Tobias Schandinat <FlorianSchandinat@gmx.de> wrote:

> Well, I'm not against sharing the code and not against taking DRM's current
> implementation as a base but the steps required to make it generally acceptable
> would be to split it off, probably as a standalone module and strip all DRM
> specific things off. Then all things that require EDID can use it, DRM can add
> DRM-specific things on top and fb can add fb-specific things.

The rendering portions of the DRM drivers are all device-specific. The
core DRM ioctls are largely about providing some sharing control over
the device, mapping memory around and mode setting.

> One of my biggest problems with KMS is that it has (naturally) a lot more
> complexity than the fb API which leads to instability.

The mode setting portions are of necessity the same. The KMS API exposes
more functionality for mode setting, but doesn't actually require any
additional hardware-specific knowledge. You still have to be able to
bring the hardware up from power on and light up every connected
monitor.

However, if you want acceleration, you're going to run into bugs that
crash the machine. It's a sad reality that graphics hardware just isn't
able to recover cleanly in all cases from programmer errors, and that
includes errors that come from user mode.

Hardware is improving in this area, and reset is getting more reliable
than it used to be. But, until we can context switch the graphics
hardware at arbitrary points during execution, we're kinda stuck with
using the really big reset hammer when programs go awry.

-- 
keith.packard@intel.com

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-16  0:55           ` Keith Packard
@ 2011-09-16  6:38             ` Tomi Valkeinen
  -1 siblings, 0 replies; 143+ messages in thread
From: Tomi Valkeinen @ 2011-09-16  6:38 UTC (permalink / raw)
  To: Keith Packard
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja

On Thu, 2011-09-15 at 19:55 -0500, Keith Packard wrote:
> On Thu, 15 Sep 2011 20:21:15 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
> > 2) panel drivers, handles panel specific things. Each panel may support
> > custom commands and features, for which we need a dedicated driver. And
> > this driver is not platform specific, but should work with any platform
> > which has the output used with the panel.
> 
> Right, we've got DDC ports (which are just i2c) and DisplayPort aux
> channel stuff.
> 
> The DDC stuff is abstracted out and shared across the drivers, but the
> DisplayPort aux channel code is not -- it's duplicated in every output
> driver. 

I feel that you are still talking about the output driver, not the
panel. DDC and DP aux are part of the connector-entity in DRM, right?
But there's no separate display-entity behind the connector, which would
handle the peculiarities for a particular panel/display, say DSI panel
model L33T from AcmeCorp.

So, as I see it, DDC and DP aux are on the output driver, and the panel
driver uses those to do whatever is needed for a particular panel.

> > DSI bus is a half-duplex serial bus, and while it's designed for
> > displays you could use it easily for any communication between the SoC
> > and the peripheral.
> 
> Yeah, HDMI uses DDC for all kinds of crazy stuff in the CE world.

But that is still more or less standard HDMI stuff, isn't it? So you
implement it once for HDMI, and then it works with all HDMI monitors?

Or is there some way to implement custom behavior for one particular
HDMI monitor? Is this custom behavior in a kernel driver or handled in
userspace?

> > The point is that we cannot have standard "MIPI DSI command mode panel
> > driver" which would work for all DSI cmd mode panels, but we need (in
> > the worst case) separate driver for each panel.
> 
> It sounds like we do want to share code for those bits, much like we
> have DDC split out now. And, we should do something about the
> DisplayPort aux channel stuff to avoid duplicating it everywhere.

Yep. What I had in mind for DSI with my low-level fmwk would be a
mipi_dsi component that offers services to use the DSI bus. Each
platform which supports DSI would implement the DSI support for their
HW. Then the DSI panel driver could do things like:

dsi->write(dev, virtual_channel_id, buf, len);

dsi->set_max_return_packet_size(dev, 10);
dsi->read(dev, virtual_channel_id, read_cmd, recv_buf, len);

An example DSI command mode panel driver can be found in
drivers/video/omap2/displays/panel-taal.c, which uses omapdss' dsi
functions directly but could quite easily use a common DSI interface and
thus be platform independent.
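
Going by the calls above, a common interface could be something like the
following sketch; the struct and member names are invented here and are not an
existing kernel API:

#include <linux/device.h>
#include <linux/types.h>

/* Hypothetical platform-independent DSI bus interface, inferred from
 * the calls sketched above.  Names are illustrative only. */
struct dsi_bus_ops {
	int (*write)(struct device *dev, int vc, const u8 *buf, size_t len);
	int (*read)(struct device *dev, int vc, u8 cmd, u8 *buf, size_t len);
	int (*set_max_return_packet_size)(struct device *dev, u16 size);
	/* bus-level knobs each panel may need, e.g. LP vs HS mode */
	int (*set_lp_mode)(struct device *dev, bool enable);
};

/*
 * A panel driver would get a struct dsi_bus_ops * from whatever
 * implements the bus (the SoC's DSI block, or a hub/buffer chip) and
 * use it the same way regardless of platform, e.g.:
 *
 *	dsi->set_max_return_packet_size(dev, 10);
 *	dsi->read(dev, vc, read_cmd, recv_buf, len);
 */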

> I'm not sure a common interface to all of these different
> channels makes sense, but surely a DSI library and an aux channel
> library would fit nicely alongside the existing DDC library.

What do you mean by "channel"? Any video or command bus going to the
display? Yes, I think they are quite different and I don't see a point
in trying to make a common interface for them. 

DSI is in many ways a real bus. You can connect multiple peripherals to
one DSI bus (but it needs a DSI hub), and communicate with them by using
their virtual channel ID. And quite often there are DSI chips that
transform the DSI packets to some other form. Some real example
configurations:

Plain DSI panel:

[SoC] ---DSI--- [DSI panel]

DSI-2-DisplayPort converter chip:

[SoC] ---DSI--- [DSI chip] ---DP--- [DP monitor]

DSI buffer chip supporting two DSI panels:

[SoC] ---DSI--- [DSI chip] +--DSI--- [DSI panel 1]
                           |--DSI--- [DSI panel 2]

It would be nice to be able to model this somehow neatly with device
drivers. For example, the DSI panel from the first example could be used
in the two-panel configuration, and if (and when) the panel requires
custom configuration, the same panel driver could be used in both cases.
In the first case the panel driver would use the DSI support from the
SoC; in the third case the panel driver would use the DSI support from
the DSI chip (which would, in turn, use the DSI support from the SoC).

 Tomi




^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-16  0:55           ` Keith Packard
@ 2011-09-16 14:17             ` Daniel Vetter
  -1 siblings, 0 replies; 143+ messages in thread
From: Daniel Vetter @ 2011-09-16 14:17 UTC (permalink / raw)
  To: Keith Packard
  Cc: Tomi Valkeinen, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, Clark, Rob

On Thu, Sep 15, 2011 at 07:55:37PM -0500, Keith Packard wrote:
> I suspect helper functions would be a good model to follow, rather than
> trying to create a whole new device infrastructure; some of the
> communication paths aren't easily separable from the underlying output
> devices.

This. Helper functions make the driver writer's life so much easier - if
your hw doesn't quite fit the model, you can usually extend functionality
with much less fuss than if there's a full framework. This is also pretty
much the reason why I don't like ttm.
-Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-16  0:55           ` Keith Packard
  (?)
@ 2011-09-16 16:53             ` Alan Cox
  -1 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-16 16:53 UTC (permalink / raw)
  To: Keith Packard
  Cc: Tomi Valkeinen, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

> I'm not sure a common interface to all of these different
> channels makes sense, but surely a DSI library and an aux channel
> library would fit nicely alongside the existing DDC library.

DSI and the various other MIPI bits tend to be horribly panel and device
specific. In one sense yes, it's a standard with standard commands,
processes, queries etc; on the other, a lot of stuff is oriented around
the 'it's a fixed configuration unit, we don't need to have queries' view.
There also tends to be a lot of vendor magic initialisation logic, both
chipset and device dependent, and often 'plumbing dependent' on SoC
systems. This is doubly ugly with the I²C abstractions for DDC because
SoC systems are not above putting the DDC on a standard I²C port that is
shared with other functionality.

> Oh, I think you're also trying to get at how we expose some of these
> controls outside of the display driver -- right now, they're mostly
> exposed as properties on the output device. Things like backlight
> brightness, a million analog TV output values, dithering control and
> other more esoteric controls.

This is how the MIPI handling in the GMA500 driver works, although the
existing code needs to be taken out and shot, which should be happening
soon. There is a lot, like panel initialisation, that is not really
going to fit a properties model.

Alan

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:58               ` Alan Cox
@ 2011-09-17 14:44                 ` Felipe Contreras
  -1 siblings, 0 replies; 143+ messages in thread
From: Felipe Contreras @ 2011-09-17 14:44 UTC (permalink / raw)
  To: Alan Cox
  Cc: Florian Tobias Schandinat, Alex Deucher, Keith Packard,
	linux-fbdev, linaro-dev, linux-kernel, dri-devel, Archit Taneja,
	Clark, Rob

On Thu, Sep 15, 2011 at 9:58 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>> One of my biggest problems with KMS is that it has (naturally) a lot more
>> complexity than the fb API which leads to instability. Basically it's very
>
> It shouldn't do - and a sample of one (your machine) is not a
> statistically valid set. Fb is pretty much ununsable in contrast on my
> main box, but that's not a statistically valid sample either.
>
> I'm not that convinced by the complexity either. For a simple video card
> setup such as those that the fb layer can kind of cope with (ie linear
> buffer, simple mode changes, no client rendering, no vblank flipping,
> limited mode management, no serious multi-head) a DRM driver is also
> pretty tiny and simple.

That's not true; many drivers work around the lack of features in the
fb API by providing custom interfaces. For example, in omapfb it's
possible to use the overlays from user-space, configure a YUV format,
do vsync, and use multiple pages just fine:

https://github.com/felipec/gst-omapfb/blob/master/omapfb.c

It's perfect to render video clips. Of course, it would be even better
if those custom interfaces were merged into the fb API.
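
For what it's worth, the plain multi-page part can be done with nothing
but the standard fbdev ioctls; it's the YUV/overlay setup that needs the
omapfb-specific ones. A rough sketch of double-buffering via panning,
error handling omitted:

	#include <fcntl.h>
	#include <linux/fb.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	int main(void)
	{
		struct fb_var_screeninfo var;
		int fd = open("/dev/fb0", O_RDWR);

		ioctl(fd, FBIOGET_VSCREENINFO, &var);

		/* ask for two pages worth of virtual screen */
		var.yres_virtual = var.yres * 2;
		ioctl(fd, FBIOPUT_VSCREENINFO, &var);

		/* ... draw the next frame into the second page ... */

		/* flip: scan out from the second page */
		var.yoffset = var.yres;
		ioctl(fd, FBIOPAN_DISPLAY, &var);

		close(fd);
		return 0;
	}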

-- 
Felipe Contreras

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 14:44                 ` Felipe Contreras
  (?)
@ 2011-09-17 15:16                   ` Rob Clark
  -1 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-17 15:16 UTC (permalink / raw)
  To: Felipe Contreras
  Cc: Alan Cox, linaro-dev, Florian Tobias Schandinat, linux-kernel,
	dri-devel, Archit Taneja, linux-fbdev

On Sat, Sep 17, 2011 at 9:44 AM, Felipe Contreras
<felipe.contreras@gmail.com> wrote:
> On Thu, Sep 15, 2011 at 9:58 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>>> One of my biggest problems with KMS is that it has (naturally) a lot more
>>> complexity than the fb API which leads to instability. Basically it's very
>>
>> It shouldn't do - and a sample of one (your machine) is not a
>> statistically valid set. Fb is pretty much ununsable in contrast on my
>> main box, but that's not a statistically valid sample either.
>>
>> I'm not that convinced by the complexity either. For a simple video card
>> setup such as those that the fb layer can kind of cope with (ie linear
>> buffer, simple mode changes, no client rendering, no vblank flipping,
>> limited mode management, no serious multi-head) a DRM driver is also
>> pretty tiny and simple.
>
> That's not true, many drivers work around the lack of features in the
> fb API by providing custom interfaces. For example, in omapfb it's
> possible to use the overlays from user-space, configure some YUV
> format, do vsink, and multipages just fine:
>
> https://github.com/felipec/gst-omapfb/blob/master/omapfb.c
>
> It's perfect to render video clips. Of course, it would be even better
> if those custom interfaces were merged into the fb API.

fwiw, as was mentioned earlier in the thread, there is already an
effort underway for a standardized overlay interface for KMS:

http://lists.freedesktop.org/archives/dri-devel/2011-April/010559.html

Anyways, it is also possible to extend DRM drivers w/ custom API.. and
even possible to extend the fbdev on top of DRM/KMS with custom
interfaces if you *really* wanted to.  I have some patches somewhere
that add support for a portion of the omapfb ioctls to the fbdev layer
in the omapdrm driver, for the benefit of some legacy display test app.
If someone really wanted to, I guess there is no reason that you
couldn't support all of the omapfb custom ioctls.

From userspace perspective, fbdev doesn't go away.  It is just a
legacy interface provided on top of DRM/KMS driver mostly via helper
functions.  With this approach, you get the richer KMS API (and all
the related plumbing for hotplug, EDID parsing, multi-head support,
flipping, etc) for userspace stuff that needs that, but can keep the
fbdev userspace interface for legacy apps.  It is the best of both
worlds.  There isn't really any good reason to propagate standalone
fbdev driver anymore.
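
To give a feel for what that emulation amounts to, here's a simplified
sketch of an fb_ops whose entry points just call into the driver's KMS
paths. The my_kms_* functions are made-up placeholders; in practice the
drm_fb_helper code in the kernel does this wiring for you:

	#include <linux/fb.h>
	#include <linux/module.h>

	/* placeholders for the driver's internal KMS paths */
	extern int my_kms_set_mode(struct fb_info *info);
	extern int my_kms_pan(struct fb_info *info, u32 xoffset, u32 yoffset);

	static int my_fb_set_par(struct fb_info *info)
	{
		/* an fbdev mode change becomes a KMS modeset on the emulated head */
		return my_kms_set_mode(info);
	}

	static int my_fb_pan_display(struct fb_var_screeninfo *var,
				     struct fb_info *info)
	{
		/* fbdev panning becomes a scanout base address update */
		return my_kms_pan(info, var->xoffset, var->yoffset);
	}

	static struct fb_ops my_drm_fbdev_ops = {
		.owner		= THIS_MODULE,
		.fb_set_par	= my_fb_set_par,
		.fb_pan_display	= my_fb_pan_display,
		/* drawing is plain software rendering into the KMS framebuffer */
		.fb_fillrect	= cfb_fillrect,
		.fb_copyarea	= cfb_copyarea,
		.fb_imageblit	= cfb_imageblit,
	};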

BR,
-R

> --
> Felipe Contreras

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 15:16                   ` Rob Clark
@ 2011-09-17 16:11                     ` Florian Tobias Schandinat
  -1 siblings, 0 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-17 16:11 UTC (permalink / raw)
  To: Rob Clark
  Cc: Felipe Contreras, Alan Cox, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, linux-fbdev

On 09/17/2011 03:16 PM, Rob Clark wrote:
> From userspace perspective, fbdev doesn't go away.  It is just a
> legacy interface provided on top of DRM/KMS driver mostly via helper
> functions.  With this approach, you get the richer KMS API (and all
> the related plumbing for hotplug, EDID parsing, multi-head support,
> flipping, etc) for userspace stuff that needs that, but can keep the
> fbdev userspace interface for legacy apps.  It is the best of both
> worlds.  There isn't really any good reason to propagate standalone
> fbdev driver anymore.

I disagree. This depends on the functionality the hardware has, the desired
userspace and the manpower one has to do it. And of course, if you just want
fb, having fb via DRM/KMS has some overhead/bloat. It's perfectly okay to have
just an fb driver for devices that can't do more anyway.
And fb is not a legacy interface but actively developed, just with different
goals than DRM/KMS: it aims for stability and to provide a direct interface,
not needing any X or wayland crap.


Best regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 16:11                     ` Florian Tobias Schandinat
  (?)
@ 2011-09-17 16:47                       ` Dave Airlie
  -1 siblings, 0 replies; 143+ messages in thread
From: Dave Airlie @ 2011-09-17 16:47 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Rob Clark, Felipe Contreras, Alan Cox, linaro-dev, linux-kernel,
	dri-devel, Archit Taneja, linux-fbdev

>
> I disagree. This depends on the functionality the hardware has, the desired
> userspace and the manpower one has to do it. And of course if you just want fb
> having fb via DRM/KMS has some overhead/bloat. It's perfectly okay to have just
> an fb driver for devices that can't do more anyway.
> And fb is no legacy interface but actively developed, just with other goals than
> DRM/KMS is, it aims for stability and to provide a direct interface, not needing
> any X or wayland crap.

Stability is a total misnomer, and what's worse is you know it. If you just
want to software render your whole GUI, whether you use KMS or fbdev
doesn't matter. Instability is only to do with GPU hardware
acceleration, and whether fb or kms expose accel doesn't matter. So less
attitude please.

fbdev is totally uninteresting for any modern multi-output hardware
with an acceleration engine; you can't even memory manage the GPU
memory in any useful way. Try resizing the fb console dynamically when
you've allocated the memory immediately following it in VRAM: you
can't, as userspace has it direct mapped, with no way to remove the
mappings or repage them. Even now I'm still thinking we should do
kmscon without exposing the fbdev interface to userspace, because the
whole mmap semantics are totally broken; look at the recent fb
handover race fixes.

Dave.

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 16:11                     ` Florian Tobias Schandinat
@ 2011-09-17 16:50                       ` Rob Clark
  -1 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-17 16:50 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: linux-fbdev, linaro-dev, linux-kernel, dri-devel, Archit Taneja

On Sat, Sep 17, 2011 at 11:11 AM, Florian Tobias Schandinat
<FlorianSchandinat@gmx.de> wrote:
> On 09/17/2011 03:16 PM, Rob Clark wrote:
>> From userspace perspective, fbdev doesn't go away.  It is just a
>> legacy interface provided on top of DRM/KMS driver mostly via helper
>> functions.  With this approach, you get the richer KMS API (and all
>> the related plumbing for hotplug, EDID parsing, multi-head support,
>> flipping, etc) for userspace stuff that needs that, but can keep the
>> fbdev userspace interface for legacy apps.  It is the best of both
>> worlds.  There isn't really any good reason to propagate standalone
>> fbdev driver anymore.
>
> I disagree. This depends on the functionality the hardware has, the desired
> userspace and the manpower one has to do it. And of course if you just want fb
> having fb via DRM/KMS has some overhead/bloat. It's perfectly okay to have just
> an fb driver for devices that can't do more anyway.
> And fb is no legacy interface but actively developed, just with other goals than
> DRM/KMS is, it aims for stability and to provide a direct interface, not needing
> any X or wayland crap.

Hmm, for simple enough devices, maybe fb is fine.. but if you are
covering a range of devices which include stuff with more
sophisticated userspace (X/wayland), then just doing DRM/KMS and using
the DRM fbdev helpers, vs doing both DRM/KMS and standalone fbdev..
well that seems like a no-brainer.

I still think, if you are starting a new driver, you should just go
ahead and use DRM/KMS.. a simple DRM/KMS driver that doesn't support
all the features is not so complex, and going this route future-proofs
you better when future generations of hardware gain more capabilities
and sw gains more requirements.

BR,
-R

>
> Best regards,
>
> Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 16:47                       ` Dave Airlie
  (?)
  (?)
@ 2011-09-17 18:15                       ` Florian Tobias Schandinat
  2011-09-17 18:23                           ` Dave Airlie
  -1 siblings, 1 reply; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-17 18:15 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rob Clark, Felipe Contreras, Alan Cox, linaro-dev, linux-kernel,
	dri-devel, Archit Taneja, linux-fbdev

On 09/17/2011 04:47 PM, Dave Airlie wrote:
>>
>> I disagree. This depends on the functionality the hardware has, the desired
>> userspace and the manpower one has to do it. And of course if you just want fb
>> having fb via DRM/KMS has some overhead/bloat. It's perfectly okay to have just
>> an fb driver for devices that can't do more anyway.
>> And fb is no legacy interface but actively developed, just with other goals than
>> DRM/KMS is, it aims for stability and to provide a direct interface, not needing
>> any X or wayland crap.
> 
> Stability is a total misnomer, whats worse is you know it. If you just
> want to do software render your whole GUI whether you use KMS or fbdev
> doesn't matter. Instability is only to do with GPU hardware
> acceleration, whether fb or kms expose accel doesn't matter. So less
> attitude please.

Is it? Well, okay, I don't want to use any acceleration that can crash my
machine; where can I select that, preferably as a compile-time option? I didn't
find such a thing for Intel or Radeon. Don't say I should rely on userspace
here or use fbdev for this.
The thing is that the core fbdev API does not expose any acceleration to
userspace. Maybe some drivers do via IOCTLs, but I hope those are only things
that can be done in a sane way; otherwise I'd consider it a bug. The story is
different for DRM/KMS, as I understand it, as it was primarily for acceleration
and only recently got modesetting capabilities.

> fbdev is totally uninteresting for any modern multi-output hardware
> with an acceleration engine, you can't even memory manage the GPU
> memory in any useful way, try resizing the fb console dynamically when
> you've allocated the memory immediately following it in VRAM, you
> can't as userspace has it direct mapped, with no way to remove the
> mappings or repage them. Even now I'm still thinking we should do
> kmscon without exposing the fbdev interface to userspace because the
> whole mmap semantics are totally broken, look at the recent fb
> handover race fixes.

It's true that mmap can be a PITA, but I don't see any real alternative given
that you want to directly map video memory, especially on low-end systems. And
there are ways around it: you can forbid mapping (though probably most
userspace wouldn't like it, I guess) or use any other solution like defio.
If you'd stop exposing the fbdev userspace interface it'd just harden my
opinion that KMS is a piece of trash and that I should avoid hardware that does
not have a native framebuffer driver. I think you shouldn't do this, as it's
just a disadvantage for your end users, but I personally do not really care.


Regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 18:15                       ` Florian Tobias Schandinat
  2011-09-17 18:23                           ` Dave Airlie
@ 2011-09-17 18:23                           ` Dave Airlie
  0 siblings, 0 replies; 143+ messages in thread
From: Dave Airlie @ 2011-09-17 18:23 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Rob Clark, Felipe Contreras, Alan Cox, linaro-dev, linux-kernel,
	dri-devel, Archit Taneja, linux-fbdev

>
> Is it? Well, okay, I don't want to use any acceleration that can crash my
> machine, where can I select it, preferably as compile time option? I didn't find
> such a thing for Intel or Radeon. Don't say, I should rely on userspace here or
> use fbdev for this.

Just tell the X driver not to use acceleration, and you won't get
any acceleration used; then you get complete stability. If a driver
writer wants to turn off all accel in the kernel driver, they can; it's
not an option we've bothered with for intel or radeon since it really
makes no sense. To put it simply, you don't really seem to understand
the driver model around KMS. If no userspace app uses acceleration
then no acceleration features will magically happen. If you want to
write a simple app against the KMS API, like plymouth, you can now use
the dumb ioctls to create and map a buffer that can be made into a
framebuffer. Also you get hw cursors + modesetting.
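
For example, the allocate-and-map path with the dumb ioctls looks
roughly like this (error handling and the final DRM_IOCTL_MODE_SETCRTC
call left out; include paths depend on how libdrm is installed, e.g.
build with -I/usr/include/libdrm):

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>
	#include <drm.h>
	#include <drm_mode.h>

	int main(void)
	{
		struct drm_mode_create_dumb creq = {
			.width = 1024, .height = 768, .bpp = 32,
		};
		struct drm_mode_map_dumb mreq = { 0 };
		struct drm_mode_fb_cmd fbcmd = { 0 };
		int fd = open("/dev/dri/card0", O_RDWR);
		void *map;

		/* allocate a linear, driver-managed buffer */
		ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

		/* wrap it in a KMS framebuffer object so a crtc can scan it out */
		fbcmd.width  = creq.width;
		fbcmd.height = creq.height;
		fbcmd.pitch  = creq.pitch;
		fbcmd.bpp    = 32;
		fbcmd.depth  = 24;
		fbcmd.handle = creq.handle;
		ioctl(fd, DRM_IOCTL_MODE_ADDFB, &fbcmd);

		/* get a mmap offset for the buffer and map it for sw rendering */
		mreq.handle = creq.handle;
		ioctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);
		map = mmap(NULL, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED,
			   fd, mreq.offset);
		memset(map, 0, creq.size);	/* clear to black */

		/* DRM_IOCTL_MODE_SETCRTC with fbcmd.fb_id then puts it on screen */
		return 0;
	}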

> The thing is that the core fbdev API does not expose any acceleration to
> userspace, maybe some drivers do via IOCTLs, but I hope that are only things
> that can be done in a sane way, otherwise I'd consider it a bug. The story is
> different for DRM/KMS, as I understand, as this was primarily for acceleration
> and only recently got modesetting capabilities.

The core drm/kms ioctls don't expose acceleration to userspace either,
again misinformation seems to drive most of your logic. You can't do
generic useful acceleration from the kernel. A lot of modern GPU
hardware doesn't even have bitblt engines.


> It's true that mmap can be PITA, but I don't see any real alternative given that
> you want directly map video memory, especially on low end systems. And there are
> ways around it, you can forbid mapping (though probably most userspace wouldn't
> like it, I guess) or use any other solution like defio.
> If you'd stop exposing the fbdev userspace interface it'd just harden my opinion
> that KMS is a piece of trash and that I should avoid hardware that does not have
> a native framebuffer driver. I think you shouldn't do this, as it's just a
> disadvantage for your end users, but I personally do not really care.

We've fixed this in KMS, we don't pass direct mappings to userspace
that we can't tear down and refault. We only provide objects via
handles. The only place it's a problem is where we expose fbdev legacy
emulation, since we have to fix the pages.

Dave.

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 18:23                           ` Dave Airlie
  (?)
  (?)
@ 2011-09-17 19:06                           ` Florian Tobias Schandinat
  2011-09-17 19:25                               ` Corbin Simpson
  2011-09-17 21:25                               ` Alex Deucher
  -1 siblings, 2 replies; 143+ messages in thread
From: Florian Tobias Schandinat @ 2011-09-17 19:06 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Rob Clark, Felipe Contreras, Alan Cox, linaro-dev, linux-kernel,
	dri-devel, Archit Taneja, linux-fbdev

On 09/17/2011 06:23 PM, Dave Airlie wrote:
>>
>> Is it? Well, okay, I don't want to use any acceleration that can crash my
>> machine, where can I select it, preferably as compile time option? I didn't find
>> such a thing for Intel or Radeon. Don't say, I should rely on userspace here or
>> use fbdev for this.
> 
> Just tell the X driver to not use acceleration, and it you won't get
> any acceleration used, then you get complete stability. If a driver
> writer wants to turn off all accel in the kernel driver, it can, its
> not an option we've bothered with for intel or radeon since it really
> makes no sense. To put it simply you don't really seem to understand
> the driver model around KMS. If no userspace app uses acceleration
> then no acceleration features will magically happen. If you want to
> write a simple app against the KMS API like plymouth you can now use
> the dumb ioctls to create and map a buffer that can be made into a
> framebuffer. Also you get hw cursors + modesetting.

Again, you seem to not understand my reasoning. The "if" is the problem; it's
the kernel's job to ensure stability. Allowing userspace to decide whether it
crashes my machine is not acceptable to me.
I do not claim that it is impossible to write a KMS driver in a way that it
does not crash, but it seems more difficult than writing an fbdev driver.

>> The thing is that the core fbdev API does not expose any acceleration to
>> userspace, maybe some drivers do via IOCTLs, but I hope that are only things
>> that can be done in a sane way, otherwise I'd consider it a bug. The story is
>> different for DRM/KMS, as I understand, as this was primarily for acceleration
>> and only recently got modesetting capabilities.
> 
> The core drm/kms ioctls don't expose acceleration to userspace either,
> again misinformation seems to drive most of your logic. You can't do
> generic useful acceleration from the kernel. A lot of modern GPU
> hardware doesn't even have bitblt engines.

I did not say that it is used directly for acceleration, but wasn't the point of
DRM to allow acceleration in the first place?

>> It's true that mmap can be PITA, but I don't see any real alternative given that
>> you want directly map video memory, especially on low end systems. And there are
>> ways around it, you can forbid mapping (though probably most userspace wouldn't
>> like it, I guess) or use any other solution like defio.
>> If you'd stop exposing the fbdev userspace interface it'd just harden my opinion
>> that KMS is a piece of trash and that I should avoid hardware that does not have
>> a native framebuffer driver. I think you shouldn't do this, as it's just a
>> disadvantage for your end users, but I personally do not really care.
> 
> We've fixed this in KMS, we don't pass direct mappings to userspace
> that we can't tear down and refault. We only provide objects via
> handles. The only place its a problem is where we expose fbdev legacy
> emulation, since we have to fix the pages.

I guess we could do the same in fbdev. It's probably just that nobody is
interested in it as we do not really care about memory management.


Best regards,

Florian Tobias Schandinat

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 19:06                           ` Florian Tobias Schandinat
@ 2011-09-17 19:25                               ` Corbin Simpson
  2011-09-17 21:25                               ` Alex Deucher
  1 sibling, 0 replies; 143+ messages in thread
From: Corbin Simpson @ 2011-09-17 19:25 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Dave Airlie, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, Rob Clark

On Sat, Sep 17, 2011 at 12:06 PM, Florian Tobias Schandinat
<FlorianSchandinat@gmx.de> wrote:
> Again, you seem to not understand my reasoning. The "if" is the problem, it's
> the kernels job to ensure stability. Allowing the userspace to decide whether it
> crashes my machine is not acceptable to me.
> I do not claim that it is impossible to write a KMS driver in a way that it does
> not crash, but it seems more difficult than writing an fbdev driver.

It is a non-trivial problem, which I would argue is impossible, to
permit acceleration without also permitting the possibility of a GPU
lockup. It is completely legal, in every graphics API, to submit
requests which take multiple seconds to render but are totally valid.
Differentiating between long-running rendering and GPU lockup is
difficult. In addition, determining whether or not a permutation of
register writes will lock up a GPU is a pretty hard problem,
computationally, not to mention the walls of code that would be
required to make this happen. At that point, you might as well run
unaccelerated.

>> The core drm/kms ioctls don't expose acceleration to userspace either,
>> again misinformation seems to drive most of your logic. You can't do
>> generic useful acceleration from the kernel. A lot of modern GPU
>> hardware doesn't even have bitblt engines.
>
> I did not say that it is used directly for acceleration, but wasn't the point of
> DRM to allow acceleration in the first place?

In the Voodoo era, sure. DRM really means that userspace can talk
directly to a card, in a card-specific way; beyond the basic DRM
ioctls for gathering card info, nearly every ioctl is device-specific.
You can't command an nV card with Radeon ioctls. It just so happens,
fortunately, that the only things which differ from card to card and
cannot be abstracted from kernel to userspace are accelerated
rendering commands.

You're conflating DRM and KMS. It's possible to provide KMS without
DRM: Modesetting without acceleration.

-- 
When the facts change, I change my mind. What do you do, sir? ~ Keynes

Corbin Simpson
<MostAwesomeDude@gmail.com>


* Re: Proposal for a low-level Linux display framework
  2011-09-17 18:23                           ` Dave Airlie
  (?)
@ 2011-09-17 20:25                             ` Alan Cox
  -1 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-17 20:25 UTC (permalink / raw)
  To: Dave Airlie
  Cc: Florian Tobias Schandinat, Rob Clark, Felipe Contreras,
	linaro-dev, linux-kernel, dri-devel, Archit Taneja, linux-fbdev

> Just tell the X driver to not use acceleration, and you won't get
> any acceleration used; then you get complete stability. If a driver
> writer wants to turn off all accel in the kernel driver, it can, its

In fact, one thing we really need is a "dumb" KMS X server to replace
the fbdev X server that unaccelerated setups depend on, and which as a
result can't do proper mode handling, multi-head or resizing. A generic
dumb-fb request for a back-to-front copy might also be useful for
shadowfb, or at least some indication of the cache behaviour so that
the X server can pick the right policy.

> We've fixed this in KMS, we don't pass direct mappings to userspace
> that we can't tear down and refault. We only provide objects via
> handles. The only place its a problem is where we expose fbdev legacy
> emulation, since we have to fix the pages.

Which is doable. Horrible but doable. The usb framebuffer code has to
play games like this with the virtual framebuffer in order to track
changes by faulting.

There are still some architectural screwups however. DRM continues the
fbdev worldview that outputs, memory and accelerators are tied together
in lumps we call video cards. That isn't really true for all cases and
with capture/overlay it gets even less true.

Alan


* Re: Proposal for a low-level Linux display framework
  2011-09-17 19:06                           ` Florian Tobias Schandinat
  2011-09-17 19:25                               ` Corbin Simpson
@ 2011-09-17 21:25                               ` Alex Deucher
  1 sibling, 0 replies; 143+ messages in thread
From: Alex Deucher @ 2011-09-17 21:25 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Dave Airlie, linux-fbdev, linaro-dev, linux-kernel, dri-devel,
	Archit Taneja, Rob Clark

On Sat, Sep 17, 2011 at 3:06 PM, Florian Tobias Schandinat
<FlorianSchandinat@gmx.de> wrote:
> On 09/17/2011 06:23 PM, Dave Airlie wrote:
>>>
>>> Is it? Well, okay, I don't want to use any acceleration that can crash my
>>> machine; where can I select it, preferably as a compile-time option? I didn't find
>>> such a thing for Intel or Radeon. Don't say I should rely on userspace here or
>>> use fbdev for this.
>>
>> Just tell the X driver to not use acceleration, and you won't get
>> any acceleration used; then you get complete stability. If a driver
>> writer wants to turn off all accel in the kernel driver, it can; it's
>> not an option we've bothered with for intel or radeon since it really
>> makes no sense. To put it simply, you don't really seem to understand
>> the driver model around KMS. If no userspace app uses acceleration
>> then no acceleration features will magically happen. If you want to
>> write a simple app against the KMS API, like plymouth, you can now use
>> the dumb ioctls to create and map a buffer that can be made into a
>> framebuffer. Also you get hw cursors + modesetting.
>
> Again, you seem to not understand my reasoning. The "if" is the problem; it's
> the kernel's job to ensure stability. Allowing userspace to decide whether it
> crashes my machine is not acceptable to me.
> I do not claim that it is impossible to write a KMS driver in a way that it does
> not crash, but it seems more difficult than writing an fbdev driver.
>

It's perfectly valid to write a KMS DRM driver that doesn't support
acceleration, in which case it will be just as "stable" as an fbdev
driver.  In fact, on modern hardware it's probably easier to write a
KMS DRM driver than an fbdev driver because the API and internal
abstractions match the hardware better.  If you have hardware with 4
display controllers, 2 DACs, a TMDS encoder, and a DP encoder, how do
you decide which combination of components and modes to light up at
boot using fbdev?

Alternatively, if you wanted to support acceleration as well, you can
add a module option to force acceleration off at the kernel level
rather than from userspace.  It's trivial.
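
A sketch of the kind of option described here (hypothetical mydrv_* names;
this is not an option that exists in the intel or radeon drivers):

#include <linux/module.h>
#include <linux/errno.h>

static bool noaccel;
module_param(noaccel, bool, 0444);
MODULE_PARM_DESC(noaccel, "Disable acceleration; keep modesetting only");

/* Somewhere in the driver's command-submission path: */
static int mydrv_submit_commands(void *cmds, size_t len)
{
	if (noaccel)
		return -EOPNOTSUPP;	/* modesetting keeps working, the ring is never touched */

	/* ... validate and queue cmds to the hardware ... */
	return 0;
}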

Alex


* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:05         ` Alan Cox
  (?)
@ 2011-09-17 21:36           ` Laurent Pinchart
  -1 siblings, 0 replies; 143+ messages in thread
From: Laurent Pinchart @ 2011-09-17 21:36 UTC (permalink / raw)
  To: Alan Cox
  Cc: Keith Packard, Tomi Valkeinen, linux-fbdev, linux-kernel,
	dri-devel, linaro-dev, Clark, Rob, Archit Taneja

On Thursday 15 September 2011 19:05:00 Alan Cox wrote:
> On Thu, 15 Sep 2011 10:50:32 -0500
> Keith Packard wrote:
> > On Thu, 15 Sep 2011 18:29:54 +0300, Tomi Valkeinen wrote:
> > > 1) It's part of DRM, so it doesn't help fb or v4l2 drivers. Except if
> > > the plan is to make DRM the core Linux display framework, upon which
> > > everything else is built, and fb and v4l2 are changed to use DRM.
> > 
> > I'd like to think we could make DRM the underlying display framework;
> > it already exposes an fb interface, and with overlays, a bit more of the
> > v4l2 stuff is done as well. Certainly eliminating three copies of mode
> > setting infrastructure would be nice...
> 
> V4L2 needs to interface with the DRM anyway. Lots of current hardware
> wants things like shared 1080i/p camera buffers with video in order to do
> preview on video and the like.

Buffer sharing is a hot topic that was discussed at Linaro Connect in
August 2011. Even though the discussions were aimed at solving ARM-related
embedded issues, the solution we're working on is not limited to the ARM
platform and will allow applications to pass buffers around between device
drivers from different subsystems.

> In my semi-perfect world vision fb would be a legacy layer on top of DRM.
> DRM would get the silly recovery fail cases fixed, and a kernel console
> would be attachable to a GEM object of your choice.

-- 
Regards,

Laurent Pinchart


* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:39           ` Florian Tobias Schandinat
  2011-09-15 18:58               ` Alan Cox
@ 2011-09-17 23:12               ` Laurent Pinchart
  2011-09-17 23:12               ` Laurent Pinchart
  2 siblings, 0 replies; 143+ messages in thread
From: Laurent Pinchart @ 2011-09-17 23:12 UTC (permalink / raw)
  To: Florian Tobias Schandinat
  Cc: Alex Deucher, Keith Packard, linux-fbdev, linaro-dev,
	linux-kernel, dri-devel, Archit Taneja, Clark, Rob

Hi everybody,

On Thursday 15 September 2011 20:39:21 Florian Tobias Schandinat wrote:
> On 09/15/2011 05:52 PM, Alex Deucher wrote:
> >
> > Please don't claim that the DRM developers do not want to cooperate.
> > I realize that people have strong opinions about existing APIs, put
> > there has been just as much, if not more obstinacy from the v4l and fb
> > people.
> 
> Well, I think it's too late to really fix this thing. We now have 3 APIs in
> the kernel that have to be kept. Probably the best we can do now is figure
> out how we can reduce code duplication and do extensions to those APIs in
> a way that they are compatible with each other or completely independent
> and can be used across the APIs.

Sorry for jumping late into the discussion. Let me try to shed some new light 
on this.

I've been thinking about the DRM/KMS/FB/V4L APIs overlap for quite some time 
now. All of them have their share of issues, historical nonsense and unique 
features. I don't think we can pick one of those APIs today and decide to drop 
the others, but we certainly need to make DRM, KMS, FB and V4L interoperable 
at various levels. The alternative is to keep ignoring each other and let the 
market decide. Thinking that the market could pick something like OpenMAX 
scares me, so I'd rather find a good compromise and move forward.

Disclaimer: My DRM/KMS knowledge isn't as good as my FB and V4L knowledge, so 
please feel free to correct my mistakes.

All our video-related APIs started as solutions to different problems. They 
all share an important feature: they assume that the devices they control is 
more or less monolithic. For that reason they expose a single device to 
userspace, and mix device configuration and data transfer on the same device 
node.

This shortcoming became painful in V4L a couple of years ago. When I started 
working on the OMAP3 ISP (camera) driver I realized that trying to configure a 
complex hardware pipeline without exposing its internals to userspace 
applications wouldn't be possible. DRM, KMS and FB ran into the exact same 
problem, just more recently, as shown by various RFCs ([1], [2]).

To fix this issue, the V4L community developed a new API called the Media 
Controller [3]. In a nutshell, the MC aims at

- exposing the device topology to userspace as an oriented graph of entities 
connected with links through pads

- controlling the device topology from userspace by enabling/disabling links

- giving userspace access to per-entity controls

- configuring formats at individual points in the pipeline from userspace.

The MC API solves the first two problems. The last two require help from V4L 
(which has been extended with new MC-aware ioctls), as MC is media-agnostic 
and thus can't configure video formats.
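
For readers who haven't seen it, a hedged sketch of what driving the MC
API from userspace looks like (the entity and pad numbers are made up for
illustration, and error handling is omitted):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	int fd = open("/dev/media0", O_RDWR);
	struct media_entity_desc entity;
	struct media_link_desc link;

	/* Walk the entity graph: ID_FLAG_NEXT asks for the next entity. */
	memset(&entity, 0, sizeof(entity));
	entity.id = MEDIA_ENT_ID_FLAG_NEXT;
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
		printf("entity %u: %s\n", entity.id, entity.name);
		entity.id |= MEDIA_ENT_ID_FLAG_NEXT;
	}

	/* Disable the link from (entity 16, pad 0) to (entity 5, pad 0). */
	memset(&link, 0, sizeof(link));
	link.source.entity = 16;
	link.source.index = 0;
	link.sink.entity = 5;
	link.sink.index = 0;
	link.flags = 0;			/* MEDIA_LNK_FL_ENABLED cleared */
	return ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
}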

To support this, the V4L subsystem exposes an in-kernel API based around the 
concept of sub-devices. A single high-level hardware device is handled by 
multiple sub-devices, possibly controlled by different drivers. For instance, 
in the OMAP3-based N900 digital camera, the OMAP3 ISP is made of 8 sub-devices 
(all controlled by the OMAP3 ISP driver), and the two sensors, flash 
controller and lens controller all have their own sub-device, each of them 
controlled by its own driver.

All this infrastructure exposes the device as the graph shown in [4] to 
applications, and the V4L sub-device API can be used to set formats at 
individual pads. This allows controlling scaling, cropping, composing and 
other video-related operations on the pipeline.
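
And a similarly hedged sketch of the pad-level format configuration just
mentioned (illustrative values; the constant names are those found in
current kernel headers):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <linux/v4l2-subdev.h>

/* Ask a sub-device to produce 1280x720 UYVY on its pad 0. */
int set_pad_format(const char *subdev_node)
{
	int fd = open(subdev_node, O_RDWR);
	struct v4l2_subdev_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.pad = 0;
	fmt.format.width = 1280;
	fmt.format.height = 720;
	fmt.format.code = MEDIA_BUS_FMT_UYVY8_2X8;	/* media-bus pixel code */
	fmt.format.field = V4L2_FIELD_NONE;

	return ioctl(fd, VIDIOC_SUBDEV_S_FMT, &fmt);
}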

With the introduction of the media controller architecture, I now see V4L as 
being made of three parts.

1. The V4L video nodes streaming API, used to manage video buffer memory, map 
it to userspace, and control video streaming (and data transfers).

2. The V4L sub-devices API, used to control parameters on individual entities 
in the graph and configure formats.

3. The V4L video nodes formats and control API, used to perform the same tasks 
as the V4L sub-devices API for drivers that don't support the media controller 
API, or to provide support for pure V4L applications with drivers that support 
the media controller API.

V4L is made of those three parts, but I believe it helps to think about them 
individually. With today's (and tomorrow's) devices, DRM, KMS and FB are in a 
situation similar to what V4L experienced a couple of years ago. They need to 
give control of complex pipelines to userspace, and I believe this should be 
done by (logically) splitting DRM, KMS and FB into a pipeline control part and 
a data flow part, as we did with V4L.

Keeping the monolithic device model and handling pipeline control without 
exposing the pipeline topology would in my opinion be a mistake. Even if this 
could support today's hardware, I don't think it would be future-proof. I 
would rather see the DRM, KMS and FB topologies being exposed to applications 
by implementing the MC API in DRM, KMS and FB drivers. I'm working on a proof 
of concept for the FB sh_mobile_lcdc driver and will post patches soon. 
Something similar can be done for DRM and KMS.

This would leave us with the issue of controlling formats and other parameters 
on the pipelines. We could keep separate DRM, KMS, FB and V4L APIs for that, 
but would it really make sense? I don't think so. Obviously I would be happy 
to use the V4L API, as we already have a working solution :-) I don't see that 
as being realistic though, we will probably need to create a central graphics-
related API here (possibly close to what we already have in V4L if it can 
fulfil everybody's needs).

To paraphrase Alan, in my semi-perfect world vision the MC API would be used 
to expose hardware pipelines to userspace, a common graphics API would be used 
to control parameters on the pipeline shared by DRM, KMS, FB and V4L, the 
individual APIs would control subsystem-specific parameters and DRM, KMS, FB 
and V4L would be implemented on top of this to manage memory, command queues 
and data transfers.

Am I looking too far in the future?

[1] http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg04421.html
[2] http://www.mail-archive.com/linux-samsung-soc@vger.kernel.org/msg06292.html
[3] http://linuxtv.org/downloads/v4l-dvb-apis/media_common.html
[4] http://www.ideasonboard.org/media/omap3isp.ps

-- 
Regards,

Laurent Pinchart


* Re: Proposal for a low-level Linux display framework
  2011-09-17 23:12               ` Laurent Pinchart
  (?)
@ 2011-09-18 16:14                 ` Rob Clark
  -1 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-18 16:14 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Florian Tobias Schandinat, linaro-dev, linux-fbdev, dri-devel,
	linux-kernel, Archit Taneja

On Sat, Sep 17, 2011 at 6:12 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi everybody,
>
> On Thursday 15 September 2011 20:39:21 Florian Tobias Schandinat wrote:
>> On 09/15/2011 05:52 PM, Alex Deucher wrote:
>> >
>> > Please don't claim that the DRM developers do not want to cooperate.
>> > I realize that people have strong opinions about existing APIs, put
>> > there has been just as much, if not more obstinacy from the v4l and fb
>> > people.
>>
>> Well, I think it's too late to really fix this thing. We now have 3 APIs in
>> the kernel that have to be kept. Probably the best we can do now is figure
>> out how we can reduce code duplication and do extensions to those APIs in
>> a way that they are compatible with each other or completely independent
>> and can be used across the APIs.
>
> Sorry for jumping late into the discussion. Let me try to shed some new light
> on this.
>
> I've been thinking about the DRM/KMS/FB/V4L APIs overlap for quite some time
> now. All of them have their share of issues, historical nonsense and unique
> features. I don't think we can pick one of those APIs today and decide to drop
> the others, but we certainly need to make DRM, KMS, FB and V4L interoperable
> at various levels. The alternative is to keep ignoring each other and let the
> market decide.

I think we need to differentiate between V4L camera and display.

MC and subdev stuff clearly seem to be the way to go for complex
camera / imaging subsystems.  But that is a very different problem
domain from GPU+display.  We need to stop blurring the two topics.

> Thinking that the market could pick something like OpenMAX
> scares me, so I'd rather find a good compromise and move forward.
>
> Disclaimer: My DRM/KMS knowledge isn't as good as my FB and V4L knowledge, so
> please feel free to correct my mistakes.
>
> All our video-related APIs started as solutions to different problems. They
> all share an important feature: they assume that the devices they control is
> more or less monolithic. For that reason they expose a single device to
> userspace, and mix device configuration and data transfer on the same device
> node.
>
> This shortcoming became painful in V4L a couple of years ago. When I started
> working on the OMAP3 ISP (camera) driver I realized that trying to configure a
> complex hardware pipeline without exposing its internals to userspace
> applications wouldn't be possible. DRM, KMS and FB ran into the exact same
> problem, just more recently, as shown by various RFCs ([1], [2]).

But I do think that overlays need to be part of the DRM/KMS interface,
simply because flipping still needs to be synchronized w/ the GPU.  I
have some experience using V4L for display, and this is one (of
several) broken aspects of that.
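
For context, the KMS flip model referred to here looks roughly like this
from userspace (a hedged sketch; crtc_id and fb_id come from earlier
setup, and error handling is omitted):

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static void flip_done(int fd, unsigned int seq, unsigned int tv_sec,
		      unsigned int tv_usec, void *data)
{
	*(int *)data = 1;	/* previous scanout buffer may now be reused */
}

int flip_and_wait(int fd, uint32_t crtc_id, uint32_t fb_id)
{
	drmEventContext ev = {
		.version = DRM_EVENT_CONTEXT_VERSION,
		.page_flip_handler = flip_done,
	};
	int done = 0;

	if (drmModePageFlip(fd, crtc_id, fb_id,
			    DRM_MODE_PAGE_FLIP_EVENT, &done))
		return -1;
	while (!done)
		drmHandleEvent(fd, &ev);	/* blocks reading the DRM fd */
	return 0;
}

The kernel queues the flip for the next vblank, and the completion event
tells userspace when the previously scanned-out buffer can be reused.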

> To fix this issue, the V4L community developed a new API called the Media
> Controller [3]. In a nutshell, the MC aims at
>
> - exposing the device topology to userspace as an oriented graph of entities
> connected with links through pads
>
> - controlling the device topology from userspace by enabling/disabling links
>
> - giving userspace access to per-entity controls
>
> - configuring formats at individual points in the pipeline from userspace.
>
> The MC API solves the first two problems. The last two require help from V4L
> (which has been extended with new MC-aware ioctls), as MC is media-agnostic
> and thus can't configure video formats.
>
> To support this, the V4L subsystem exposes an in-kernel API based around the
> concept of sub-devices. A single high-level hardware device is handled by
> multiple sub-devices, possibly controlled by different drivers. For instance,
> in the OMAP3-based N900 digital camera, the OMAP3 ISP is made of 8 sub-devices
> (all controlled by the OMAP3 ISP driver), and the two sensors, flash
> controller and lens controller all have their own sub-device, each of them
> controlled by its own driver.
>
> All this infrastructure exposes the device as the graph shown in [4] to
> applications, and the V4L sub-device API can be used to set formats at
> individual pads. This allows controlling scaling, cropping, composing and
> other video-related operations on the pipeline.
>
> With the introduction of the media controller architecture, I now see V4L as
> being made of three parts.
>
> 1. The V4L video nodes streaming API, used to manage video buffers memory, map
> it to userspace, and control video streaming (and data transfers).
>
> 2. The V4L sub-devices API, used to control parameters on individual entities
> in the graph and configure formats.
>
> 3. The V4L video nodes formats and control API, used to perform the same tasks
> as the V4L sub-devices API for drivers that don't support the media controller
> API, or to provide support for pure V4L applications with drivers that support
> the media controller API.
>
> V4L is made of those three parts, but I believe it helps to think about them
> individually. With today's (and tomorrow's) devices, DRM, KMS and FB are in a
> situation similar to what V4L experienced a couple of years ago. They need to
> give control of complex pipelines to userspace, and I believe this should be
> done by (logically) splitting DRM, KMS and FB into a pipeline control part and
> a data flow part, as we did with V4L.
>
> Keeping the monolithic device model and handling pipeline control without
> exposing the pipeline topology would in my opinion be a mistake. Even if this
> could support today's hardware, I don't think it would be future-proof. I
> would rather see the DRM, KMS and FB topologies being exposed to applications
> by implementing the MC API in DRM, KMS and FB drivers. I'm working on a proof
> of concept for the FB sh_mobile_lcdc driver and will post patches soon.
> Something similar can be done for DRM and KMS.
>
> This would leave us with the issue of controlling formats and other parameters
> on the pipelines. We could keep separate DRM, KMS, FB and V4L APIs for that,
> but would it really make sense? I don't think so. Obviously I would be happy
> to use the V4L API, as we already have a working solution :-) I don't see that
> as being realistic though, we will probably need to create a central graphics-
> related API here (possibly close to what we already have in V4L if it can
> fulfil everybody's needs).
>
> To paraphrase Alan, in my semi-perfect world vision the MC API would be used
> to expose hardware pipelines to userspace, a common graphics API would be used
> to control parameters on the pipeline shared by DRM, KMS, FB and V4L, the
> individual APIs would control subsystem-specific parameters and DRM, KMS, FB
> and V4L would be implemented on top of this to manage memory, command queues
> and data transfers.

I guess in theory it would be possible to let MC iterate the
plane->crtc->encoder->connector topology.. I'm not entirely sure what
benefit that would bring, other than change for the sake of change.
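
For comparison, this is roughly what iterating that topology already looks
like through libdrm (planes were only just being added to the KMS API at
the time, so this sketch stops at connectors; error handling omitted):

#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

void dump_kms_topology(int fd)
{
	drmModeRes *res = drmModeGetResources(fd);
	int i;

	for (i = 0; i < res->count_connectors; i++) {
		drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
		drmModeEncoder *enc = conn->encoder_id ?
			drmModeGetEncoder(fd, conn->encoder_id) : NULL;

		printf("connector %u -> encoder %u -> crtc %u (%s)\n",
		       conn->connector_id,
		       enc ? enc->encoder_id : 0,
		       enc ? enc->crtc_id : 0,
		       conn->connection == DRM_MODE_CONNECTED ? "connected"
							      : "disconnected");
		if (enc)
			drmModeFreeEncoder(enc);
		drmModeFreeConnector(conn);
	}
	drmModeFreeResources(res);
}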

V4L and DRM are very different APIs designed to solve very different
problems.  The KMS / mode-setting part may look somewhat similar to
something you can express w/ a camera-like graph of nodes.  But the
memory management is very different.  And display updates (like page
flipping) need to be synchronized w/ GPU rendering, etc.  Trying to
fit V4L here just seems like trying to force a square peg into a round
hole.  You'd end up morphing V4L so much that in the end it looks
like DRM.  And that might not be the right thing for cameras.

So V4L for camera, DRM for gpu/display.  Those are the two APIs we need.

BR,
-R

> Am I looking too far in the future?
>
> [1] http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg04421.html
> [2] http://www.mail-archive.com/linux-samsung-soc@vger.kernel.org/msg06292.html
> [3] http://linuxtv.org/downloads/v4l-dvb-apis/media_common.html
> [4] http://www.ideasonboard.org/media/omap3isp.ps
>
> --
> Regards,
>
> Laurent Pinchart

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
@ 2011-09-18 16:14                 ` Rob Clark
  0 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-18 16:14 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: linux-fbdev, linaro-dev, Florian Tobias Schandinat, linux-kernel,
	dri-devel, Archit Taneja

On Sat, Sep 17, 2011 at 6:12 PM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hi everybody,
>
> On Thursday 15 September 2011 20:39:21 Florian Tobias Schandinat wrote:
>> On 09/15/2011 05:52 PM, Alex Deucher wrote:
>> >
>> > Please don't claim that the DRM developers do not want to cooperate.
>> > I realize that people have strong opinions about existing APIs, put
>> > there has been just as much, if not more obstinacy from the v4l and fb
>> > people.
>>
>> Well, I think it's too late to really fix this thing. We now have 3 APIs in
>> the kernel that have to be kept. Probably the best we can do now is figure
>> out how we can reduce code duplication and do extensions to those APIs in
>> a way that they are compatible with each other or completely independent
>> and can be used across the APIs.
>
> Sorry for jumping late into the discussion. Let me try to shed some new light
> on this.
>
> I've been thinking about the DRM/KMS/FB/V4L APIs overlap for quite some time
> now. All of them have their share of issues, historical nonsense and unique
> features. I don't think we can pick one of those APIs today and decide to drop
> the others, but we certainly need to make DRM, KMS, FB and V4L interoperable
> at various levels. The alternative is to keep ignoring each other and let the
> market decice.

I think we need to differentiate between V4L camera, and display..

MC and subdev stuff clearly seem to be the way to go for complex
camera / imaging subsystems.  But that is a very different problem
domain from GPU+display.  We need to stop blurring the two topics.

> Thinking that the market could pick something like OpenMAX
> scares me, so I'd rather find a good compromise and move forward.
>
> Disclaimer: My DRM/KMS knowledge isn't as good as my FB and V4L knowledge, so
> please feel free to correct my mistakes.
>
> All our video-related APIs started as solutions to different problems. They
> all share an important feature: they assume that the devices they control is
> more or less monolithic. For that reason they expose a single device to
> userspace, and mix device configuration and data transfer on the same device
> node.
>
> This shortcoming became painful in V4L a couple of years ago. When I started
> working on the OMAP3 ISP (camera) driver I realized that trying to configure a
> complex hardware pipeline without exposing its internals to userspace
> applications wouldn't be possible. DRM, KMS and FB ran into the exact same
> problem, just more recently, as showed by various RFCs ([1], [2]).

But I do think that overlays need to be part of the DRM/KMS interface,
simply because flipping still needs to be synchronized w/ the GPU.  I
have some experience using V4L for display, and this is one (of
several) broken aspects of that.

> To fix this issue, the V4L community developed a new API called the Media
> Controller [3]. In a nutshell, the MC aims at
>
> - exposing the device topology to userspace as an oriented graph of entities
> connected with links through pads
>
> - controlling the device topology from userspace by enabling/disabling links
>
> - giving userspace access to per-entity controls
>
> - configuring formats at individual points in the pipeline from userspace.
>
> The MC API solves the first two problems. The last two require help from V4L
> (which has been extended with new MC-aware ioctls), as MC is media-agnostic
> and can't thus configure video formats.
>
> To support this, the V4L subsystem exposes an in-kernel API based around the
> concept of sub-devices. A single high-level hardware device is handled by
> multiple sub-devices, possibly controlled by different drivers. For instance,
> in the OMAP3-based N900 digital camera, the OMAP3 ISP is made of 8 sub-devices
> (all controlled by the OMAP3 ISP driver), and the two sensors, flash
> controller and lens controller all have their own sub-device, each of them
> controlled by its own driver.
>
> All this infrastructure exposes the devices a the graph showed in [4] to
> applications, and the V4L sub-device API can be used to set formats at
> individual pads. This allows controlling scaling, cropping, composing and
> other video-related operations on the pipeline.
>
> With the introduction of the media controller architecture, I now see V4L as
> being made of three parts.
>
> 1. The V4L video nodes streaming API, used to manage video buffers memory, map
> it to userspace, and control video streaming (and data transfers).
>
> 2. The V4L sub-devices API, used to control parameters on individual entities
> in the graph and configure formats.
>
> 3. The V4L video nodes formats and control API, used to perform the same tasks
> as the V4L sub-devices API for drivers that don't support the media controller
> API, or to provide support for pure V4L applications with drivers that support
> the media controller API.
>
> V4L is made of those three parts, but I believe it helps to think about them
> individually. With today's (and tomorrow's) devices, DRM, KMS and FB are in a
> situation similar to what V4L experienced a couple of years ago. They need to
> give control of complex pipelines to userspace, and I believe this should be
> done by (logically) splitting DRM, KMS and FB into a pipeline control part and
> a data flow part, as we did with V4L.
>
> Keeping the monolithic device model and handling pipeline control without
> exposing the pipeline topology would in my opinion be a mistake. Even if this
> could support today's hardware, I don't think it would be future-proof. I
> would rather see the DRM, KMS and FB topologies being exposed to applications
> by implementing the MC API in DRM, KMS and FB drivers. I'm working on a proof
> of concept for the FB sh_mobile_lcdc driver and will post patches soon.
> Something similar can be done for DRM and KMS.
>
> This would leave us with the issue of controlling formats and other parameters
> on the pipelines. We could keep separate DRM, KMS, FB and V4L APIs for that,
> but would it really make sense ? I don't think so. Obviously I would be happy
> to use the V4L API, as we already have a working solution :-) I don't see that
> as being realistic though, we will probably need to create a central graphics-
> related API here (possibly close to what we already have in V4L if it can
> fulfil everybody's needs).
>
> To paraphrase Alan, in my semi-perfect world vision the MC API would be used
> to expose hardware pipelines to userspace, a common graphics API would be used
> to control parameters on the pipeline shared by DRM, KMS, FB and V4L, the
> individual APIs would control subsystem-specific parameters and DRM, KMS, FB
> and V4L would be implemented on top of this to manage memory, command queues
> and data transfers.

I guess in theory it would be possible to let MC iterate the
plane->crtc->encoder->connector topology.. I'm not entirely sure what
benefit that would bring, other than change for the sake of change.
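
(For reference, the KMS ioctls already let userspace walk the
connector/encoder/CRTC part of that topology; the libdrm sketch below
shows the idea. The device node and missing error handling are
placeholders, and plane objects were still being proposed at the time,
so the sketch stops at connectors, encoders and CRTCs.)

/* Minimal sketch: enumerate the KMS objects describing the display
 * topology. Error handling omitted, /dev/dri/card0 is a placeholder. */
#include <stdio.h>
#include <fcntl.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	drmModeRes *res = drmModeGetResources(fd);
	int i;

	for (i = 0; i < res->count_connectors; i++) {
		drmModeConnector *conn =
			drmModeGetConnector(fd, res->connectors[i]);

		printf("connector %u -> encoder %u\n",
		       conn->connector_id, conn->encoder_id);
		drmModeFreeConnector(conn);
	}
	for (i = 0; i < res->count_encoders; i++) {
		drmModeEncoder *enc = drmModeGetEncoder(fd, res->encoders[i]);

		printf("encoder %u -> crtc %u\n",
		       enc->encoder_id, enc->crtc_id);
		drmModeFreeEncoder(enc);
	}
	drmModeFreeResources(res);
	return 0;
}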

V4L and DRM are very different APIs designed to solve very different
problems.  The KMS / mode-setting part may look somewhat similar to
something you can express w/ a camera-like graph of nodes.  But the
memory management is very different.  And display updates (like page
flipping) need to be synchronized w/ GPU rendering.. etc.  Trying to
fit V4L here, just seems like trying to force a square peg in a round
hole.  You'd have to end up morphing V4L so much that in the end it
looks like DRM.  And that might not be the right thing for cameras.

So V4L for camera, DRM for gpu/display.  Those are the two APIs we need.

BR,
-R

> Am I looking too far in the future ?
>
> [1] http://www.mail-archive.com/intel-gfx@lists.freedesktop.org/msg04421.html
> [2] http://www.mail-archive.com/linux-samsung-soc@vger.kernel.org/msg06292.html
> [3] http://linuxtv.org/downloads/v4l-dvb-apis/media_common.html
> [4] http://www.ideasonboard.org/media/omap3isp.ps
>
> --
> Regards,
>
> Laurent Pinchart
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-18 16:14                 ` Rob Clark
@ 2011-09-18 21:55                   ` Laurent Pinchart
  -1 siblings, 0 replies; 143+ messages in thread
From: Laurent Pinchart @ 2011-09-18 21:55 UTC (permalink / raw)
  To: Rob Clark
  Cc: Florian Tobias Schandinat, linaro-dev, linux-fbdev, dri-devel,
	linux-kernel, Archit Taneja, Sakari Ailus, linux-media

Hi Rob,

(CC'ing linux-media, as I believe this is very on-topic)

On Sunday 18 September 2011 18:14:26 Rob Clark wrote:
> On Sat, Sep 17, 2011 at 6:12 PM, Laurent Pinchart wrote:
> > On Thursday 15 September 2011 20:39:21 Florian Tobias Schandinat wrote:
> >> On 09/15/2011 05:52 PM, Alex Deucher wrote:
> >> > Please don't claim that the DRM developers do not want to cooperate.
> >> > I realize that people have strong opinions about existing APIs, but
> >> > there has been just as much, if not more obstinacy from the v4l and fb
> >> > people.
> >> 
> >> Well, I think it's too late to really fix this thing. We now have 3 APIs
> >> in the kernel that have to be kept. Probably the best we can do now is
> >> figure out how we can reduce code duplication and do extensions to
> >> those APIs in a way that they are compatible with each other or
> >> completely independent and can be used across the APIs.
> > 
> > Sorry for jumping late into the discussion. Let me try to shed some new
> > light on this.
> > 
> > I've been thinking about the DRM/KMS/FB/V4L APIs overlap for quite some
> > time now. All of them have their share of issues, historical nonsense
> > and unique features. I don't think we can pick one of those APIs today
> > and decide to drop the others, but we certainly need to make DRM, KMS,
> > FB and V4L interoperable at various levels. The alternative is to keep
> > ignoring each other and let the market decide.
> 
> I think we need to differentiate between V4L camera, and display..
> 
> MC and subdev stuff clearly seem to be the way to go for complex
> camera / imaging subsystems.  But that is a very different problem
> domain from GPU+display.  We need to stop blurring the two topics.

I would agree with you if we were only talking about GPU, but display is 
broader than that. Much of the hardware available today has complex display 
pipelines with "deep tunneling" between other IP blocks (such as the camera 
subsystem) and the display. Configuration of such pipelines isn't specific to 
DRM/KMS.

> > Thinking that the market could pick something like OpenMAX
> > scares me, so I'd rather find a good compromise and move forward.
> > 
> > Disclaimer: My DRM/KMS knowledge isn't as good as my FB and V4L
> > knowledge, so please feel free to correct my mistakes.
> > 
> > All our video-related APIs started as solutions to different problems.
> > They all share an important feature: they assume that the devices they
> > control are more or less monolithic. For that reason they expose a single
> > device to userspace, and mix device configuration and data transfer on
> > the same device node.
> > 
> > This shortcoming became painful in V4L a couple of years ago. When I
> > started working on the OMAP3 ISP (camera) driver I realized that trying
> > to configure a complex hardware pipeline without exposing its internals
> > to userspace applications wouldn't be possible. DRM, KMS and FB ran into
> > the exact same problem, just more recently, as shown by various RFCs
> > ([1], [2]).
> 
> But I do think that overlays need to be part of the DRM/KMS interface,
> simply because flipping still needs to be synchronized w/ the GPU.  I
> have some experience using V4L for display, and this is one (of
> several) broken aspects of that.

I agree that DRM/KMS must be used to address needs specific to display 
hardware, but I don't think *all* display needs are specific to the display.

> > To fix this issue, the V4L community developed a new API called the Media
> > Controller [3]. In a nutshell, the MC aims at
> > 
> > - exposing the device topology to userspace as an oriented graph of
> > entities connected with links through pads
> > 
> > - controlling the device topology from userspace by enabling/disabling
> > links
> > 
> > - giving userspace access to per-entity controls
> > 
> > - configuring formats at individual points in the pipeline from
> > userspace.
> > 
> > The MC API solves the first two problems. The last two require help from
> > V4L (which has been extended with new MC-aware ioctls), as MC is
> > media-agnostic and can't thus configure video formats.
> > 
> > To support this, the V4L subsystem exposes an in-kernel API based around
> > the concept of sub-devices. A single high-level hardware device is
> > handled by multiple sub-devices, possibly controlled by different
> > drivers. For instance, in the OMAP3-based N900 digital camera, the OMAP3
> > ISP is made of 8 sub-devices (all controlled by the OMAP3 ISP driver),
> > and the two sensors, flash controller and lens controller all have their
> > own sub-device, each of them controlled by its own driver.
> > 
> > All this infrastructure exposes the devices as the graph shown in [4] to
> > applications, and the V4L sub-device API can be used to set formats at
> > individual pads. This allows controlling scaling, cropping, composing and
> > other video-related operations on the pipeline.
> > 
> > With the introduction of the media controller architecture, I now see V4L
> > as being made of three parts.
> > 
> > 1. The V4L video nodes streaming API, used to manage video buffers
> > memory, map it to userspace, and control video streaming (and data
> > transfers).
> > 
> > 2. The V4L sub-devices API, used to control parameters on individual
> > entities in the graph and configure formats.
> > 
> > 3. The V4L video nodes formats and control API, used to perform the same
> > tasks as the V4L sub-devices API for drivers that don't support the
> > media controller API, or to provide support for pure V4L applications
> > with drivers that support the media controller API.
> > 
> > V4L is made of those three parts, but I believe it helps to think about
> > them individually. With today's (and tomorrow's) devices, DRM, KMS and
> > FB are in a situation similar to what V4L experienced a couple of years
> > ago. They need to give control of complex pipelines to userspace, and I
> > believe this should be done by (logically) splitting DRM, KMS and FB
> > into a pipeline control part and a data flow part, as we did with V4L.
> > 
> > Keeping the monolithic device model and handling pipeline control without
> > exposing the pipeline topology would in my opinion be a mistake. Even if
> > this could support today's hardware, I don't think it would be
> > future-proof. I would rather see the DRM, KMS and FB topologies being
> > exposed to applications by implementing the MC API in DRM, KMS and FB
> > drivers. I'm working on a proof of concept for the FB sh_mobile_lcdc
> > driver and will post patches soon. Something similar can be done for DRM
> > and KMS.
> > 
> > This would leave us with the issue of controlling formats and other
> > parameters on the pipelines. We could keep separate DRM, KMS, FB and V4L
> > APIs for that, but would it really make sense ? I don't think so.
> > Obviously I would be happy to use the V4L API, as we already have a
> > working solution :-) I don't see that as being realistic though, we will
> > probably need to create a central graphics- related API here (possibly
> > close to what we already have in V4L if it can fulfil everybody's
> > needs).
> > 
> > To paraphrase Alan, in my semi-perfect world vision the MC API would be
> > used to expose hardware pipelines to userspace, a common graphics API
> > would be used to control parameters on the pipeline shared by DRM, KMS,
> > FB and V4L, the individual APIs would control subsystem-specific
> > parameters and DRM, KMS, FB and V4L would be implemented on top of this
> > to manage memory, command queues and data transfers.
> 
> I guess in theory it would be possible to let MC iterate the
> plane->crtc->encoder->connector topology.. I'm not entirely sure what
> benefit that would bring, other than change for the sake of change.

The MC API has been designed to expose pipeline topologies to userspace. In 
the plane->crtc->encoder->connector case, DRM/KMS is probably enough. However, 
many pipelines can't be described so simply. Reinventing the wheel doesn't 
look like the best solution to me.
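
For readers who haven't used it, this is roughly what graph discovery
looks like from userspace with the MC API; the device node and the lack
of error handling are placeholders:

/* Walk the media graph: each MEDIA_IOC_ENUM_ENTITIES call returns one
 * entity (node) of the oriented graph; per-entity pads and links can
 * then be enumerated with MEDIA_IOC_ENUM_LINKS. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	int fd = open("/dev/media0", O_RDONLY);
	struct media_entity_desc entity;

	memset(&entity, 0, sizeof(entity));
	entity.id = MEDIA_ENT_ID_FLAG_NEXT;		/* start with the first entity */

	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
		printf("entity %u: %s (%u pads, %u links)\n",
		       entity.id, entity.name,
		       (unsigned)entity.pads, (unsigned)entity.links);
		entity.id |= MEDIA_ENT_ID_FLAG_NEXT;	/* ask for the next one */
	}
	return 0;
}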

> V4L and DRM are very different APIs designed to solve very different
> problems.  The KMS / mode-setting part may look somewhat similar to
> something you can express w/ a camera-like graph of nodes.  But the
> memory management is very different.  And display updates (like page
> flipping) need to be synchronized w/ GPU rendering.. etc.  Trying to
> fit V4L here, just seems like trying to force a square peg in a round
> hole.  You'd have to end up morphing V4L so much that in the end it
> looks like DRM.  And that might not be the right thing for cameras.
> 
> So V4L for camera, DRM for gpu/display.  Those are the two APIs we need.

That's why I'm not advocating replacing DRM with V4L :-)

As explained in my previous mail, V4L and DRM started as monolithic APIs to 
solve different needs. We now realize that they're actually made (or should be 
made) of several sub-APIs. In the V4L case, that's pipeline discovery, 
pipeline setup, format configuration (at the pad level in the pipeline, 
including cropping, scaling and composing), controls (at the entity and/or pad 
level in the pipeline), memory management and stream control (there are a 
couple of other tasks we can add, but that's the basic idea). Some of those 
tasks need to be performed for display hardware as well, and I believe we 
should standardize a cross-subsystem (DRM, FB and V4L) API there. All the 
display-specific needs that DRM has been designed to handle should continue to 
be handled by DRM, I have no doubt about that.

To summarize my point, I don't want to fit V4L in DRM, but I would like to 
find out which needs are common between V4L and DRM, and see if we can share 
an API there.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
@ 2011-09-18 22:23                 ` Alan Cox
  0 siblings, 0 replies; 143+ messages in thread
From: Alan Cox @ 2011-09-18 22:23 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Florian Tobias Schandinat, Alex Deucher, Keith Packard,
	linux-fbdev, linaro-dev, linux-kernel, dri-devel, Archit Taneja,
	Clark, Rob

> This would leave us with the issue of controlling formats and other parameters 
> on the pipelines. We could keep separate DRM, KMS, FB and V4L APIs for that, 

There are some other differences that matter. The exact state and
behaviour of memory, sequencing of accesses, cache control and management
are a critical part of DRM for most GPUs, as is the ability to have them
in swap-backed objects and to do memory management of them. Fences and
the like are a big part of the logic of many renderers and the same
fencing has to be applied between capture and GPU, and also in some cases
between playback accelerators (eg MP4 playback) and GPU.

To glue them together I think you'd need to support the use of GEM objects
(maybe extended) in V4L. That may actually make life cleaner and simpler
in some respects because GEM objects are refcounted nicely and have
handles.
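
For context, that handle model looks roughly like this from userspace: a
GEM object is private to a DRM file descriptor, FLINK gives it a global
name, and OPEN on another descriptor takes a new reference to the same
object. The V4L-side glue suggested above is hypothetical and not shown;
error handling is omitted.

#include <sys/ioctl.h>
#include <drm/drm.h>

/* Export: give an existing per-fd GEM handle a global name. */
static unsigned int gem_export_name(int drm_fd, unsigned int handle)
{
	struct drm_gem_flink flink = { .handle = handle };

	ioctl(drm_fd, DRM_IOCTL_GEM_FLINK, &flink);
	return flink.name;
}

/* Import: look the name up on another fd; the kernel refcounts the object. */
static unsigned int gem_import_name(int drm_fd, unsigned int name)
{
	struct drm_gem_open op = { .name = name };

	ioctl(drm_fd, DRM_IOCTL_GEM_OPEN, &op);
	return op.handle;
}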

DRM and KMS abstract out stuff into what is akin to V4L subdevices for
the various objects the video card has that matter for display - from
scanout buffers to the various video outputs, timings and the like.

I don't know what it's like with OMAP, but for some of the x86 stuff,
particularly the low-speed/low-power stuff, the capture devices, GPU and
overlays tend to be fairly incestuous in order to do things like 1080i/p
preview while recording from the camera.

GPU is also a bit weird in some ways because while it's normally
nonsensical to do things like use the capture facility of one card to drive
part of another, it's actually rather useful (although not really supported
by DRM) to do exactly that with GPUs. A simple example is a dual-headed box
with a dumb frame buffer and an accelerated output, both of which are using
memory that can be hit by the accelerated card. A classic example is a USB
plug-in monitor.

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-18 22:23                 ` Alan Cox
@ 2011-09-19  0:09                   ` Rob Clark
  -1 siblings, 0 replies; 143+ messages in thread
From: Rob Clark @ 2011-09-19  0:09 UTC (permalink / raw)
  To: Alan Cox
  Cc: Laurent Pinchart, Keith Packard, linaro-dev,
	Florian Tobias Schandinat, linux-kernel, dri-devel,
	Archit Taneja, linux-fbdev, Alex Deucher

On Sun, Sep 18, 2011 at 5:23 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
>> This would leave us with the issue of controlling formats and other parameters
>> on the pipelines. We could keep separate DRM, KMS, FB and V4L APIs for that,
>
> There are some other differences that matter. The exact state and
> behaviour of memory, sequencing of accesses, cache control and management
> are a critical part of DRM for most GPUs, as is the ability to have them
> in swap backed objects and to do memory management of them. Fences and
> the like are a big part of the logic of many renderers and the same
> fencing has to be applied between capture and GPU, and also in some cases
> between playback accelerators (eg MP4 playback) and GPU.
>
> To glue them together I think you'd need to support the use of GEM objects
> (maybe extended) in V4L. That may actually make life cleaner and simpler
> in some respects because GEM objects are refcounted nicely and have
> handles.

fwiw, I think the dmabuf proposal that linaro GWG is working on should
be sufficient for V4L to capture directly into a GEM buffer that can
be scanned out (overlay) or composited by GPU, etc, in cases where the
different dma initiators can all access some common memory:

http://lists.linaro.org/pipermail/linaro-mm-sig/2011-September/000616.html

The idea is that you could allocate a GEM buffer, export a dmabuf
handle for that buffer that could be passed to a v4l2 camera device (i.e.
V4L2_MEMORY_DMABUF), a video encoder, etc.  The importing device should
bracket DMA to/from the buffer w/ get/put_scatterlist() so an unused
buffer could be unpinned if needed.
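
A rough sketch of that flow, written against the interfaces as they later
settled (DRM dumb buffers, PRIME export, V4L2_MEMORY_DMABUF); at the time
of this mail these were still RFC material, and the device nodes, the
fixed format and the missing error handling and format negotiation are
all placeholders:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <drm/drm.h>
#include <linux/videodev2.h>

int main(void)
{
	int drm_fd = open("/dev/dri/card0", O_RDWR);
	int v4l_fd = open("/dev/video0", O_RDWR);

	/* 1. Allocate a scanout-capable dumb buffer on the DRM side. */
	struct drm_mode_create_dumb create = {
		.width = 1280, .height = 720, .bpp = 32,
	};
	ioctl(drm_fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

	/* 2. Export the GEM handle as a dmabuf file descriptor (PRIME). */
	struct drm_prime_handle prime = { .handle = create.handle };
	ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &prime);

	/* 3. Hand the same memory to the capture device; VIDIOC_S_FMT and
	 * the actual streaming calls are omitted here. */
	struct v4l2_requestbuffers reqbufs = {
		.count	= 1,
		.type	= V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory	= V4L2_MEMORY_DMABUF,
	};
	ioctl(v4l_fd, VIDIOC_REQBUFS, &reqbufs);

	struct v4l2_buffer buf = {
		.index	= 0,
		.type	= V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory	= V4L2_MEMORY_DMABUF,
	};
	buf.m.fd = prime.fd;
	ioctl(v4l_fd, VIDIOC_QBUF, &buf);	/* camera now fills the GEM buffer */

	return 0;
}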

> DRM and KMS abstract out stuff into what is akin to V4L subdevices for
> the various objects the video card has that matter for display - from
> scanout buffers to the various video outputs, timings and the like.
>
> I don't know what it's like with OMAP but for some of the x86 stuff
> particularly low speed/power stuff the capture devices, GPU and overlays
> tend to be fairly incestuous in order to do things like 1080i/p preview
> while recording from the camera.

We don't like extra memcpy's, but something like dmabuf fits us
nicely.. and I expect it would work well in any sort of UMA system
where camera, encoder, GPU, overlay, etc all can share the same memory
and formats.  I suspect the situation is similar in the x86 SoC
world.. but would be good to get some feedback on the proposal.  (I
guess next version of the RFC would go out to more mailing lists for
broader review.)

BR,
-R

> GPU is also a bit weird in some ways because while it's normally
> nonsensical to do things like use the capture facility of one card to drive
> part of another, it's actually rather useful (although not really supported
> by DRM) to do exactly that with GPUs. A simple example is a dual-headed box
> with a dumb frame buffer and an accelerated output, both of which are using
> memory that can be hit by the accelerated card. A classic example is a USB
> plug-in monitor.
>
> _______________________________________________
> linaro-dev mailing list
> linaro-dev@lists.linaro.org
> http://lists.linaro.org/mailman/listinfo/linaro-dev
>

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-16 16:53             ` Alan Cox
@ 2011-09-19  6:33               ` Tomi Valkeinen
  -1 siblings, 0 replies; 143+ messages in thread
From: Tomi Valkeinen @ 2011-09-19  6:33 UTC (permalink / raw)
  To: Alan Cox
  Cc: Keith Packard, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

On Fri, 2011-09-16 at 17:53 +0100, Alan Cox wrote:
> > I'm not sure a common interface to all of these different
> > channels makes sense, but surely a DSI library and an aux channel
> > library would fit nicely alongside the existing DDC library.
> 
> DSI and the various other MIPI bits tend to be horribly panel and device
> specific. In one sense yes its a standard with standard commands,
> processes, queries etc, on the other a lot of stuff is oriented around
> the 'its a fixed configuration unit we don't need to have queries' view.

I think it's a bit more complex than that. True, there are MIPI
standards: for the video there are DPI, DBI and DSI, and for the commands
there is DCS. And, as you mentioned, many panels need custom
initialization, support only parts of the DCS, or have other quirks.

However, I think the biggest thing to realize here is that DSI
peripherals can be anything. It's not always the case that you have a DSI bus
and a single panel connected to it. You can have hubs, buffer chips,
etc.

We need DSI peripheral device drivers for those, a single "DSI output"
driver cannot work. And this is also the most important thing in my
original proposition.

> There also tends to be a lot of vendor magic initialisation logic, both
> chipset and device dependent, and often 'plumbing dependent' on SoC

While I have DSI experience with only one SoC, I do believe it'd be
possible to create a DSI API that would allow us to have platform
independent DSI peripheral drivers.
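
Purely as an illustration of the shape such an API could take (every
name below is hypothetical, modelled on how i2c splits bus adapters
from client drivers):

/* hypothetical DSI peripheral bus: the SoC side registers a host, and
 * panel/hub/bridge drivers bind to the peripherals sitting on it, much
 * like i2c client drivers bind to devices on an i2c adapter */
struct dsi_peripheral {
	struct device dev;
	int channel;			/* DSI virtual channel ID */
	struct dsi_host *host;		/* the SoC-side DSI output it sits on */
};

struct dsi_peripheral_driver {
	struct device_driver driver;
	int (*probe)(struct dsi_peripheral *p);
	int (*remove)(struct dsi_peripheral *p);
	int (*enable)(struct dsi_peripheral *p);
	void (*disable)(struct dsi_peripheral *p);
};

/* transfer ops implemented by the host and called by peripheral drivers */
int dsi_dcs_write(struct dsi_peripheral *p, const u8 *buf, size_t len);
int dsi_dcs_read(struct dsi_peripheral *p, u8 cmd, u8 *buf, size_t len);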

Can you share any SoC side DSI peculiarities on Intel's SoCs?

 Tomi



^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-19  6:33               ` Tomi Valkeinen
  (?)
@ 2011-09-19  6:53                 ` Keith Packard
  -1 siblings, 0 replies; 143+ messages in thread
From: Keith Packard @ 2011-09-19  6:53 UTC (permalink / raw)
  To: Tomi Valkeinen, Alan Cox
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja

[-- Attachment #1: Type: text/plain, Size: 848 bytes --]

On Mon, 19 Sep 2011 09:33:34 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

> I think it's a bit more complex than that. True, there are MIPI
> standards, for the video there are DPI, DBI, DSI, and for the commands
> there is DCS. And, as you mentioned, many panels need custom
> initialization, or support only parts of the DCS, or have other
> quirks.

So DSI is more like i2c than the DisplayPort aux channel or DDC. That
seems fine; you can create a DSI infrastructure like the i2c
infrastructure and then just have your display drivers use it to talk to
the panel. We might eventually end up with some shared DRM code to deal
with common DSI functions for display devices, like the EDID code
today, but that doesn't need to happen before you can write your first
DSI-using display driver.

-- 
keith.packard@intel.com

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-19  6:53                 ` Keith Packard
@ 2011-09-19  7:29                   ` Tomi Valkeinen
  -1 siblings, 0 replies; 143+ messages in thread
From: Tomi Valkeinen @ 2011-09-19  7:29 UTC (permalink / raw)
  To: Keith Packard
  Cc: Alan Cox, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

On Sun, 2011-09-18 at 23:53 -0700, Keith Packard wrote:
> On Mon, 19 Sep 2011 09:33:34 +0300, Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:
> 
> > I think it's a bit more complex than that. True, there are MIPI
> > standards, for the video there are DPI, DBI, DSI, and for the commands
> > there is DCS. And, as you mentioned, many panels need custom
> > initialization, or support only parts of the DCS, or have other
> > quirks.
> 
> So DSI is more like i2c than the DisplayPort aux channel or DDC. That

Well, not quite. DSI is like DisplayPort and DisplayPort aux combined;
there's only one bus, DSI, which is used to transfer video data and
commands.

For DSI video mode, the transfer is somewhat like with traditional
displays: video data is sent as a constant stream, according to a pixel
clock. However, before the video stream is enabled the bus can be used for
bi-directional communication, and even when the video stream is enabled,
there can be other communication in the blanking periods.

For DSI command mode the transfer is a bit like high-speed i2c: messages
are sent when needed (when userspace gives the command to update),
without any strict timing. In practice this means that the peripheral
needs to have framebuffer memory of its own, which it uses to refresh
the actual panel (or to send the pixels forward to another peripheral).

As the use patterns of these two types of displays are quite different,
we have the terms auto-update and manual-update displays for them.
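
As a purely illustrative sketch of what a manual-update refresh looks
like (the dsi_* calls and the panel struct are made-up names; only the
DCS opcodes 0x2a/0x2b/0x2c come from the DCS spec):

/* pick the update window in the peripheral's own framebuffer, then push
 * the pixels for it; no fixed pixel clock is involved */
static int panel_update_window(struct my_panel *panel,
			       u16 x, u16 y, u16 w, u16 h)
{
	/* DCS set_column_address (0x2a) and set_page_address (0x2b) */
	u8 cols[]  = { 0x2a, x >> 8, x & 0xff,
		       (x + w - 1) >> 8, (x + w - 1) & 0xff };
	u8 pages[] = { 0x2b, y >> 8, y & 0xff,
		       (y + h - 1) >> 8, (y + h - 1) & 0xff };

	dsi_dcs_write(panel->dsi, cols, sizeof(cols));
	dsi_dcs_write(panel->dsi, pages, sizeof(pages));

	/* DCS write_memory_start (0x2c): the pixel data for the window
	 * follows, sent whenever userspace asks for an update */
	return dsi_write_pixels(panel->dsi, 0x2c, panel->fb_vaddr,
				w * h * panel->bytes_per_pixel);
}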

> seems fine; you can create a DSI infrastructure like the i2c
> infrastructure and then just have your display drivers use it to talk to
> the panel. We might eventually end up with some shared DRM code to deal
> with common DSI functions for display devices, like the EDID code
> today, but that doesn't need to happen before you can write your first
> DSI-using display driver.

One difference between i2c and DSI is that i2c is independent of the video
path, so it's easy to keep that separate from DRM. But for DSI the data
is produced by the video hardware using the overlays, encoders, etc. I
don't quite see how we could have an i2c-like separate DSI API that
wasn't part of DRM.

And even in the simpler MIPI DPI case, which is a traditional parallel RGB
interface, a panel may need custom configuration via, say, i2c or spi.
We can, of course, create an i2c device driver for the panel, but how is
that then connected to DRM? The i2c driver may need to know things like
when the display is enabled/disabled, about backlight changes, or any
other display-related event.

Is there a way for the i2c driver to get these events, and to add new
properties to DRM (say, if the panel has a feature configured via
i2c, but we'd like it to be visible to userspace via the DRM
driver)?

 Tomi



^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-19  7:29                   ` Tomi Valkeinen
  (?)
@ 2011-09-20  8:29                     ` Patrik Jakobsson
  -1 siblings, 0 replies; 143+ messages in thread
From: Patrik Jakobsson @ 2011-09-20  8:29 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: Keith Packard, Alan Cox, linux-fbdev, linux-kernel, dri-devel,
	linaro-dev, Clark, Rob, Archit Taneja

On Mon, Sep 19, 2011 at 9:29 AM, Tomi Valkeinen wrote:
>> So DSI is more like i2c than the DisplayPort aux channel or DDC. That
>
> Well, not quite. DSI is like DisplayPort and DisplayPort aux combined;
> there's only one bus, DSI, which is used to transfer video data and
> commands.
>
> For DSI video mode, the transfer is somewhat like traditional displays,
> and video data is send according to a pixel clock as a constant stream.
> However, before the video stream is enabled the bus can be used in
> bi-directional communication. And even when the video stream is enabled,
> there can be other communication in the blanking periods.

This sounds a lot like SDVO. You communicate with the SDVO chip through
i2c and then do a bus switch to get to the DDC. You also have the GMBus
with interrupt support that can help you do the i2c transfers.

SDVO supports many connectors and can have multiple in and out channels
so some setups are a bit complicated.

It would be nice to have a model that fits both DSI and SDVO, and the option
to configure some of it from userspace.

I thought the purpose of drm_encoder was to abstract hardware like this?

-Patrik

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-20  8:29                     ` Patrik Jakobsson
  (?)
@ 2011-09-20 15:55                       ` Keith Packard
  -1 siblings, 0 replies; 143+ messages in thread
From: Keith Packard @ 2011-09-20 15:55 UTC (permalink / raw)
  To: Patrik Jakobsson, Tomi Valkeinen
  Cc: Alan Cox, linux-fbdev, linux-kernel, dri-devel, linaro-dev,
	Clark, Rob, Archit Taneja

[-- Attachment #1: Type: text/plain, Size: 697 bytes --]

On Tue, 20 Sep 2011 10:29:23 +0200, Patrik Jakobsson <patrik.r.jakobsson@gmail.com> wrote:

> It would be nice to have a model that fits both DSI and SDVO, and the option
> to configure some of it from userspace.

> I thought the purpose of drm_encoder was to abstract hardware like this?

SDVO is entirely hidden by the drm_encoder interface; some of the
controls (like TV encoder parameters) are exposed through DRM
properties, others are used in the basic configuration of the device.
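
A minimal sketch of that pattern, using DRM's generic property helpers
(helper names as in later mainline kernels; the property name and range
here are arbitrary):

/* in the KMS driver's output init path */
struct drm_property *prop;

prop = drm_property_create_range(dev, 0, "saturation", 0, 255);
if (prop)
	drm_object_attach_property(&connector->base, prop, 128);

/* userspace reads/writes "saturation" through the standard property
 * ioctls; the driver turns value changes into SDVO/TV-encoder commands
 * in its property-update hook */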

I'm not sure we need a new abstraction that subsumes both DSI and SDVO,
but we may need a DSI library that can be used by both DRM and other
parts of the kernel.

-- 
keith.packard@intel.com

[-- Attachment #2: Type: application/pgp-signature, Size: 189 bytes --]

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-20 15:55                       ` Keith Packard
@ 2011-09-20 21:20                         ` Patrik Jakobsson
  -1 siblings, 0 replies; 143+ messages in thread
From: Patrik Jakobsson @ 2011-09-20 21:20 UTC (permalink / raw)
  To: Keith Packard
  Cc: Tomi Valkeinen, Alan Cox, linux-fbdev, linux-kernel, dri-devel,
	linaro-dev, Clark, Rob, Archit Taneja

On Tue, Sep 20, 2011 at 5:55 PM, Keith Packard wrote:
> I'm not sure we need a new abstraction that subsumes both DSI and SDVO,

Ok. SDVO fits within the current abstraction, but I guess what I'm fishing
for is more code sharing of encoders. For instance, the SDVO code in GMA500
could be shared with i915. I'm currently working on copying the i915 SDVO code
over to GMA500, but sharing it would be even better.

> but we may need a DSI library that can be used by both DRM and other
> parts of the kernel.

Ok, not sure I understand the complexity of DSI. Can overlay composition
occur after/at the DSI stage (through MCS perhaps)? Or is it a matter of
panels requiring special scanout buffer formats that for instance V4L needs
to know about in order to overlay stuff properly? Or am I getting it all wrong?

Thanks
-Patrik

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-19  0:09                   ` Rob Clark
@ 2011-09-20 23:32                     ` Laurent Pinchart
  -1 siblings, 0 replies; 143+ messages in thread
From: Laurent Pinchart @ 2011-09-20 23:32 UTC (permalink / raw)
  To: Rob Clark
  Cc: Alan Cox, Keith Packard, linaro-dev, Florian Tobias Schandinat,
	linux-kernel, dri-devel, Archit Taneja, linux-fbdev,
	Alex Deucher, linux-media

Hi Alan and Rob,

On Monday 19 September 2011 02:09:36 Rob Clark wrote:
> On Sun, Sep 18, 2011 at 5:23 PM, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:
> >> This would leave us with the issue of controlling formats and other
> >> parameters on the pipelines. We could keep separate DRM, KMS, FB and
> >> V4L APIs for that,
> > 
> > There are some other differences that matter. The exact state and
> > behaviour of memory, sequencing of accesses, cache control and management
> > are a critical part of DRM for most GPUs, as is the ability to have them
> > in swap backed objects and to do memory management of them. Fences and
> > the like are a big part of the logic of many renderers and the same
> > fencing has to be applied between capture and GPU, and also in some cases
> > between playback accelerators (eg MP4 playback) and GPU.

That's why I believe the DRM API is our best solution to address all those 
issues.

I'm not advocating merging the DRM, FB and V4L APIs for memory management. 
What I would like to investigate is whether we can use a common API for the 
common needs, which are (in my opinion):

- reporting the entities that make up the graphics pipeline (such as planes, 
overlays, compositors, transmitters,  connectors, ...), especially when 
pipelines get more complex than the plane->crtc->encoder->connector DRM model

- configuring data routing in those complex pipelines

- and possibly configuring formats (pixel format, frame size, crop rectangle, 
composition rectangle, ...) on those entities
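
The first item is essentially what the media controller API already
provides on the V4L2 side; a rough userspace sketch of that enumeration
(linux/media.h, error handling omitted):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

static void dump_entities(int media_fd)
{
	struct media_entity_desc e;

	memset(&e, 0, sizeof(e));
	e.id = MEDIA_ENT_ID_FLAG_NEXT;		/* "give me the first entity" */

	while (ioctl(media_fd, MEDIA_IOC_ENUM_ENTITIES, &e) == 0) {
		printf("entity %u: %s (%u pads, %u links)\n",
		       e.id, e.name, (unsigned)e.pads, (unsigned)e.links);
		e.id |= MEDIA_ENT_ID_FLAG_NEXT;	/* ...and the next one */
	}
}

Link routing between those entities is then configured with
MEDIA_IOC_SETUP_LINK, which maps fairly directly onto the second item.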

> > To glue them together I think you'd need to support the use of GEM
> > objects (maybe extended) in V4L. That may actually make life cleaner and
> > simpler in some respects because GEM objects are refcounted nicely and
> > have handles.
> 
> fwiw, I think the dmabuf proposal that linaro GWG is working on should
> be sufficient for V4L to capture directly into a GEM buffer that can
> be scanned out (overlay) or composited by GPU, etc, in cases where the
> different dma initiators can all access some common memory:
> 
> http://lists.linaro.org/pipermail/linaro-mm-sig/2011-September/000616.html
> 
> The idea is that you could allocate a GEM buffer, export a dmabuf
> handle for that buffer that could be passed to v4l2 camera device (ie.
> V4L2_MEMORY_DMABUF), video encoder, etc..  the importing device should
> bracket DMA to/from the buffer w/ get/put_scatterlist() so an unused
> buffer could be unpinned if needed.

I second Rob here, I think that API should be enough to solve our memory 
sharing problems between different devices. This is a bit out of scope though, 
as neither the low-level Linux display framework proposal nor my comments 
target that, but it's an important topic worth mentioning.

> > DRM and KMS abstract out stuff into what is akin to V4L subdevices for
> > the various objects the video card has that matter for display - from
> > scanout buffers to the various video outputs, timings and the like.
> > 
> > I don't know what it's like with OMAP but for some of the x86 stuff
> > particularly low speed/power stuff the capture devices, GPU and overlays
> > tend to be fairly incestuous in order to do things like 1080i/p preview
> > while recording from the camera.
> 
> We don't like extra memcpy's, but something like dmabuf fits us
> nicely.. and I expect it would work well in any sort of UMA system
> where camera, encoder, GPU, overlay, etc all can share the same memory
> and formats.  I suspect the situation is similar in the x86 SoC
> world.. but would be good to get some feedback on the proposal.  (I
> guess next version of the RFC would go out to more mailing lists for
> broader review.)
> 
> > GPU is also a bit weird in some ways because while it's normally
> > nonsensical to do things like use the capture facility of one card to drive
> > part of another, it's actually rather useful (although not really supported
> > by DRM) to do exactly that with GPUs. A simple example is a dual-
> > headed box with a dumb frame buffer and an accelerated output, both of
> > which are using memory that can be hit by the accelerated card. Classic
> > example being a USB plug-in monitor.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-20 21:20                         ` Patrik Jakobsson
@ 2011-09-21  6:01                           ` Tomi Valkeinen
  -1 siblings, 0 replies; 143+ messages in thread
From: Tomi Valkeinen @ 2011-09-21  6:01 UTC (permalink / raw)
  To: Patrik Jakobsson
  Cc: Keith Packard, Alan Cox, linux-fbdev, linux-kernel, dri-devel,
	linaro-dev, Clark, Rob, Archit Taneja

On Tue, 2011-09-20 at 23:20 +0200, Patrik Jakobsson wrote:

> Ok, not sure I understand the complexity of DSI. Can overlay composition
> occur after/at the DSI stage (through MCS perhaps)? Or is it a matter of
> panels requiring special scanout buffer formats that for instance V4L needs
> to know about in order to overlay stuff properly? Or am I getting it all wrong?

I don't know what MCS is. But DSI is just a bi-directional transfer
protocol between the SoC and the peripheral; you can send arbitrary data
over it.

Normally the SoC composes a pixel stream using overlays and whatnot,
which goes to the DSI hardware, which then serializes the data and sends
it to the peripheral.

But you can as well send any data, like commands, and the peripheral can
respond with any relevant data.

 Tomi



^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 12:07 ` Tomi Valkeinen
@ 2011-09-21 13:26   ` Heiko Stübner
  -1 siblings, 0 replies; 143+ messages in thread
From: Heiko Stübner @ 2011-09-21 13:26 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: linux-fbdev, linux-kernel, dri-devel, linaro-dev, Clark, Rob,
	Archit Taneja, Nishant Kamat

Hi,

Am Donnerstag, 15. September 2011, 14:07:05 schrieb Tomi Valkeinen:
> Now, I'm quite sure the above framework could work quite well with any
> OMAP like hardware, with unified memory (i.e. the video buffers are in
> SDRAM) and 3D chips and similar components are separate. But what I'm
> not sure is how desktop world's gfx cards change things. Most probably
> all the above components can be found from there also in some form, but
> are there some interdependencies between 3D/buffer management/something
> else and the video output side?
If I have read your proposal right, it could also help in better supporting
e-paper displays.

The current drivers each (re-)implement the deferred I/O logic needed to drive
such systems (catching and combining updates to the framebuffer) and combine
this with the actual display driver, which issues specific commands to the
display hardware. Also, a board-specific driver is needed to implement the
actual transport to the display, which seems to be done via the i80 command
protocol over GPIOs or the LCD controllers of SoCs.

If one were to split this it could be realised like

----------------   ---------------   -------------
| deferredIOFb | - | DisplayCtrl | - | Transport |
----------------   ---------------   -------------

An interesting tidbit is that on the e-readers I'm working on, the LCD
controller should be able to do some sort of pseudo-DMA operation.
The normal way to transmit data to e-paper displays is:
- issue an update command with the dimensions of the regions to update
- read the relevant pixel data from the fb and write it to the i80 in a loop
- issue a stop-update command

On the S3C2416 it could go like this:
- issue the update command
- describe the source memory region to the LCD controller
- let the LCD controller transmit exactly one frame
- issue the stop-update command
So the transport would need to get the memory addresses of the region to
update from the framebuffer driver.

Also this system could possibly use some of the drawing routines from the 
S3C2416 SoC.

Essentially what's needed is an s3c-fb with a deferred I/O add-on and the LCD
controller separated from it. I'm still not sure how this would all fit
together, but I would guess that separating the framebuffer and display
control would help.
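
The deferred I/O piece that each driver currently re-implements is, in
fbdev terms, roughly this much boilerplate (a sketch; the callback is
where the dirtied pages would be turned into i80 transfers):

/* pages dirtied by userspace are collected and handed to this callback
 * after a short delay, instead of every write hitting the display */
static void epaper_deferred_io(struct fb_info *info,
			       struct list_head *pagelist)
{
	/* walk 'pagelist', work out the dirty region, then run the
	 * update-command / pixel-write / stop-command sequence over i80 */
}

static struct fb_deferred_io epaper_defio = {
	.delay		= HZ / 4,		/* batch updates for ~250 ms */
	.deferred_io	= epaper_deferred_io,
};

/* in the probe path, before register_framebuffer(): */
info->fbdefio = &epaper_defio;
fb_deferred_io_init(info);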


Heiko 

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-21  6:01                           ` Tomi Valkeinen
  (?)
@ 2011-09-21 18:07                             ` Patrik Jakobsson
  -1 siblings, 0 replies; 143+ messages in thread
From: Patrik Jakobsson @ 2011-09-21 18:07 UTC (permalink / raw)
  To: Tomi Valkeinen
  Cc: Keith Packard, Alan Cox, linux-fbdev, linux-kernel, dri-devel,
	linaro-dev, Clark, Rob, Archit Taneja

On Wed, Sep 21, 2011 at 8:01 AM, Tomi Valkeinen wrote:
> I don't know what MCS is.

MCS means manufacturer-specific commands (the Manufacturer Command Set).

> But DSI is just a bi-directional transfer
> protocol between the SoC and the peripheral, you can send arbitrary data
> over it.
>
> Normally the SoC composes a pixel stream using overlays and whatnot,
> which goes to the DSI hardware, which then serializes the data and sends
> it to the peripheral.
>
> But you can as well send any data, like commands, and the peripheral can
> respond with any relevant data.

Ok, it makes sense now, and I see how it can get more complicated than SDVO.
A DSI lib is a good idea, as are interfaces for stuff like backlight and
mechanisms for queuing commands that need to go in during blanking.

Thanks for clearing it up
-Patrik

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 12:07 ` Tomi Valkeinen
                   ` (4 preceding siblings ...)
  (?)
@ 2011-10-01 16:52 ` Enrico Weigelt
  -1 siblings, 0 replies; 143+ messages in thread
From: Enrico Weigelt @ 2011-10-01 16:52 UTC (permalink / raw)
  To: linux-kernel

* Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

Hi folks,


just some *naive* thoughts (actually, I don't really understand much
of the hardware and the APIs ;-o).

> So we may have something like this, when all overlays read pixels from
> separate areas in the memory, and all overlays are on LCD display:
> 
>  .-----.         .------.           .------.
>  | mem |-------->| ovl0 |-----.---->| LCD  |
>  '-----'         '------'     |     '------'
>  .-----.         .------.     |
>  | mem |-------->| ovl1 |-----|
>  '-----'         '------'     |
>  .-----.         .------.     |     .------.
>  | mem |-------->| ovl2 |-----'     |  TV  |
>  '-----'         '------'           '------'

This somehow reminds me of the sprite concept on the C64 ;-)

Wouldn't this call for a model and API which understands the
concept of windows or sprites?

Let's say we have some (virtual?) image (what the user ultimately sees
in front of them) that's made up of certain visual objects.
These objects all have their data source (which might be some in-memory
framebuffer or even some external input). Something in the middle,
let's call it a compositor, somehow attached to these objects, puts
them together and generates the output for the physical output
device (at the end of the pipeline, we'll have something like an analog
VGA or digital HDMI signal, or whatever).

Ah, of course, the connections between these (sub)devices are not
arbitrary; they might be switchable between a certain set of endpoints
and might support different formats or transformation types
(just like an amplifier in a HiFi setup can switch between the several
inputs that other devices are cabled onto, but it wouldn't be able to
connect directly to a video recorder that's just plugged into the
TV by SCART cabling).

In the end, this IMHO looks like it's all about routing and transformation.
Not as simple as IP routing, more like carrier or phone networks
with different media types, encodings, in- vs. outbound signaling, etc.
(hmm, welcome to the world of traffic engineering ;-))

Things like power management could go in the opposite direction via a
dependency graph: switching off the monitor could tell the output
hardware that it's not needed now, which in turn could tell its inputs
that they're also currently not needed, and so on. All of
them could tell their power supplies that they're not needed, and if
no active users are left in a subgraph, the power is cut.


No idea if this all makes sense in this area, so feel free to
correct me if I'm wrong ;-O


cu
-- 
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weigelt@metux.de
 mobile: +49 151 27565287  icq:   210169427         skype: nekrad666
----------------------------------------------------------------------
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 18:12           ` Keith Packard
  (?)
  (?)
@ 2011-10-01 17:30           ` Enrico Weigelt
  -1 siblings, 0 replies; 143+ messages in thread
From: Enrico Weigelt @ 2011-10-01 17:30 UTC (permalink / raw)
  To: linux-kernel

* Keith Packard <keithp@keithp.com> wrote:

> Jesse's design for the DRM overlay code will expose the pixel formats as
> FOURCC codes so that DRM and v4l2 can interoperate -- we've got a lot of
> hardware that has both video decode and 3D acceleration, so those are
> going to get integrated somehow. And, we have to figure out how to share
> buffers between these APIs to avoid copying data with the CPU.

Really naive question: where do these buffers live?
I guess somewhere in the device, not in main memory (otherwise
it wouldn't be a problem, right?). So the question to answer is:
which of the subdevices can see whose buffers, and how do they map
their local addresses.

Am I correct?


cu
-- 
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weigelt@metux.de
 mobile: +49 151 27565287  icq:   210169427         skype: nekrad666
----------------------------------------------------------------------
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-15 17:21         ` Tomi Valkeinen
                           ` (2 preceding siblings ...)
  (?)
@ 2011-10-01 17:34         ` Enrico Weigelt
  -1 siblings, 0 replies; 143+ messages in thread
From: Enrico Weigelt @ 2011-10-01 17:34 UTC (permalink / raw)
  To: linux-kernel

* Tomi Valkeinen <tomi.valkeinen@ti.com> wrote:

> The point is that we cannot have standard "MIPI DSI command mode panel
> driver" which would work for all DSI cmd mode panels, but we need (in
> the worst case) separate driver for each panel.

Is it possible to create some abstract interface (high-level
commands), where just the individual driver needs to know the
details (e.g. whether LP or HP needs to be used right now)?


cu
-- 
----------------------------------------------------------------------
 Enrico Weigelt, metux IT service -- http://www.metux.de/

 phone:  +49 36207 519931  email: weigelt@metux.de
 mobile: +49 151 27565287  icq:   210169427         skype: nekrad666
----------------------------------------------------------------------
 Embedded-Linux / Portierung / Opensource-QM / Verteilte Systeme
----------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 143+ messages in thread

* Re: Proposal for a low-level Linux display framework
  2011-09-17 20:25                             ` Alan Cox
@ 2011-10-31 20:24                               ` Jesse Barnes
  -1 siblings, 0 replies; 143+ messages in thread
From: Jesse Barnes @ 2011-10-31 20:24 UTC (permalink / raw)
  To: Alan Cox
  Cc: Dave Airlie, linux-fbdev, linaro-dev, Florian Tobias Schandinat,
	linux-kernel, dri-devel, Archit Taneja, Rob Clark

[-- Attachment #1: Type: text/plain, Size: 3068 bytes --]

On Sat, 17 Sep 2011 21:25:29 +0100
Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

> > Just tell the X driver to not use acceleration, and you won't get
> > any acceleration used, then you get complete stability. If a driver
> > writer wants to turn off all accel in the kernel driver, it can, its
> 
> In fact one thing we actually need really is a "dumb" KMS X server to
> replace the fbdev X server that unaccel stuff depends upon and which
> can't do proper mode handling, multi-head or resizing as a result. A dumb
> fb generic request for a back to front copy might also be useful for
> shadowfb, or at least indicators so you know what the cache behaviour is
> so the X server can pick the right policy.
> 
> > We've fixed this in KMS, we don't pass direct mappings to userspace
> > that we can't tear down and refault. We only provide objects via
> > handles. The only place its a problem is where we expose fbdev legacy
> > emulation, since we have to fix the pages.
> 
> Which is doable. Horrible but doable. The usb framebuffer code has to
> play games like this with the virtual framebuffer in order to track
> changes by faulting.
> 
> There are still some architectural screwups however. DRM continues the
> fbdev worldview that outputs, memory and accelerators are tied together
> in lumps we call video cards. That isn't really true for all cases and
> with capture/overlay it gets even less true.

Sorry for re-opening this ancient thread; I'm catching up from the past
2 months of travel & misc.

I definitely agree about the PC card centric architecture of DRM KMS
(and before it, X).  But we have a path out of it now, and lots of
interest from vendors and developers, so I don't think it's an
insurmountable problem by any means.

I definitely understand Florian's worries about DRM vs fb.  If nothing
else, there's certainly a perception that fb is simpler and easier to
get right.  But really, as others have pointed out, it's solving a
different set of problems than the DRM layer.  The latter is actually
trying to expose the features of contemporary hardware in a way that's
as portable as possible.  That portability comes at a cost though: the
APIs we add need to get lots of review, and there's no doubt we'll need
to add more as newer, weirder hardware comes along.

Really, I see no reason why fb and DRM can't continue to live side by
side.  If a vendor really only needs the features provided by the fb
layer, they're free to stick with a simple fb driver.  However, I
expect most vendors making phones, tablets, notebooks, etc will need
and want an architecture that looks a lot like the DRM layer, with
authentication for rendering clients, an command submission ioctl for
acceleration, and memory management, so I expect most of the driver
growth to be in DRM in the near future.

And I totally agree with Dave about having a kmscon.  I really wish
someone would implement it so I could have my VTs spinning on a cube.

-- 
Jesse Barnes, Intel Open Source Technology Center

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 143+ messages in thread

end of thread, other threads:[~2011-10-31 20:24 UTC | newest]

Thread overview: 143+ messages
2011-09-15 12:07 Proposal for a low-level Linux display framework Tomi Valkeinen
2011-09-15 12:07 ` Tomi Valkeinen
2011-09-15 12:07 ` Tomi Valkeinen
2011-09-15 14:59 ` Keith Packard
2011-09-15 14:59   ` Keith Packard
2011-09-15 14:59   ` Keith Packard
2011-09-15 15:29   ` Tomi Valkeinen
2011-09-15 15:29     ` Tomi Valkeinen
2011-09-15 15:29     ` Tomi Valkeinen
2011-09-15 15:50     ` Keith Packard
2011-09-15 15:50       ` Keith Packard
2011-09-15 15:50       ` Keith Packard
2011-09-15 17:05       ` Alan Cox
2011-09-15 17:05         ` Alan Cox
2011-09-15 17:05         ` Alan Cox
2011-09-17 21:36         ` Laurent Pinchart
2011-09-17 21:36           ` Laurent Pinchart
2011-09-17 21:36           ` Laurent Pinchart
2011-09-15 17:12       ` Florian Tobias Schandinat
2011-09-15 17:12         ` Florian Tobias Schandinat
2011-09-15 17:18         ` Alan Cox
2011-09-15 17:18           ` Alan Cox
2011-09-15 17:18           ` Alan Cox
2011-09-15 17:47           ` Florian Tobias Schandinat
2011-09-15 17:47             ` Florian Tobias Schandinat
2011-09-15 19:05             ` Alan Cox
2011-09-15 19:05               ` Alan Cox
2011-09-15 19:05               ` Alan Cox
2011-09-15 19:46               ` Florian Tobias Schandinat
2011-09-15 19:46                 ` Florian Tobias Schandinat
2011-09-15 21:31                 ` Alan Cox
2011-09-15 21:31                   ` Alan Cox
2011-09-15 21:31                   ` Alan Cox
2011-09-15 17:52         ` Alex Deucher
2011-09-15 17:52           ` Alex Deucher
2011-09-15 17:56           ` Geert Uytterhoeven
2011-09-15 17:56             ` Geert Uytterhoeven
2011-09-15 18:04             ` Alex Deucher
2011-09-15 18:04               ` Alex Deucher
2011-09-15 18:04               ` Alex Deucher
2011-09-15 18:07               ` Corbin Simpson
2011-09-15 18:07               ` Corbin Simpson
2011-09-15 18:39           ` Florian Tobias Schandinat
2011-09-15 18:58             ` Alan Cox
2011-09-15 18:58               ` Alan Cox
2011-09-15 18:58               ` Alan Cox
2011-09-15 19:18               ` Florian Tobias Schandinat
2011-09-15 19:28                 ` Alan Cox
2011-09-15 19:28                   ` Alan Cox
2011-09-15 19:28                   ` Alan Cox
2011-09-15 19:45                 ` Alex Deucher
2011-09-15 19:45                   ` Alex Deucher
2011-09-17 14:44               ` Felipe Contreras
2011-09-17 14:44                 ` Felipe Contreras
2011-09-17 15:16                 ` Rob Clark
2011-09-17 15:16                   ` Rob Clark
2011-09-17 15:16                   ` Rob Clark
2011-09-17 16:11                   ` Florian Tobias Schandinat
2011-09-17 16:11                     ` Florian Tobias Schandinat
2011-09-17 16:47                     ` Dave Airlie
2011-09-17 16:47                       ` Dave Airlie
2011-09-17 16:47                       ` Dave Airlie
2011-09-17 18:15                       ` Florian Tobias Schandinat
2011-09-17 18:23                         ` Dave Airlie
2011-09-17 18:23                           ` Dave Airlie
2011-09-17 18:23                           ` Dave Airlie
2011-09-17 19:06                           ` Florian Tobias Schandinat
2011-09-17 19:25                             ` Corbin Simpson
2011-09-17 19:25                               ` Corbin Simpson
2011-09-17 21:25                             ` Alex Deucher
2011-09-17 21:25                               ` Alex Deucher
2011-09-17 21:25                               ` Alex Deucher
2011-09-17 20:25                           ` Alan Cox
2011-09-17 20:25                             ` Alan Cox
2011-09-17 20:25                             ` Alan Cox
2011-10-31 20:24                             ` Jesse Barnes
2011-10-31 20:24                               ` Jesse Barnes
2011-09-17 16:50                     ` Rob Clark
2011-09-17 16:50                       ` Rob Clark
2011-09-16  4:53             ` Keith Packard
2011-09-16  4:53               ` Keith Packard
2011-09-16  4:53               ` Keith Packard
2011-09-17 23:12             ` Laurent Pinchart
2011-09-17 23:12               ` Laurent Pinchart
2011-09-17 23:12               ` Laurent Pinchart
2011-09-18 16:14               ` Rob Clark
2011-09-18 16:14                 ` Rob Clark
2011-09-18 16:14                 ` Rob Clark
2011-09-18 21:55                 ` Laurent Pinchart
2011-09-18 21:55                   ` Laurent Pinchart
2011-09-18 21:55                   ` Laurent Pinchart
2011-09-18 22:23               ` Alan Cox
2011-09-18 22:23                 ` Alan Cox
2011-09-18 22:23                 ` Alan Cox
2011-09-19  0:09                 ` Rob Clark
2011-09-19  0:09                   ` Rob Clark
2011-09-20 23:32                   ` Laurent Pinchart
2011-09-20 23:32                     ` Laurent Pinchart
2011-09-15 18:12         ` Keith Packard
2011-09-15 18:12           ` Keith Packard
2011-09-15 18:12           ` Keith Packard
2011-10-01 17:30           ` Enrico Weigelt
2011-09-15 17:21       ` Tomi Valkeinen
2011-09-15 17:21         ` Tomi Valkeinen
2011-09-15 18:32         ` Rob Clark
2011-09-15 18:32           ` Rob Clark
2011-09-15 18:32           ` Rob Clark
2011-09-16  0:55         ` Keith Packard
2011-09-16  0:55           ` Keith Packard
2011-09-16  0:55           ` Keith Packard
2011-09-16  6:38           ` Tomi Valkeinen
2011-09-16  6:38             ` Tomi Valkeinen
2011-09-16 14:17           ` Daniel Vetter
2011-09-16 14:17             ` Daniel Vetter
2011-09-16 16:53           ` Alan Cox
2011-09-16 16:53             ` Alan Cox
2011-09-16 16:53             ` Alan Cox
2011-09-19  6:33             ` Tomi Valkeinen
2011-09-19  6:33               ` Tomi Valkeinen
2011-09-19  6:53               ` Keith Packard
2011-09-19  6:53                 ` Keith Packard
2011-09-19  6:53                 ` Keith Packard
2011-09-19  7:29                 ` Tomi Valkeinen
2011-09-19  7:29                   ` Tomi Valkeinen
2011-09-20  8:29                   ` Patrik Jakobsson
2011-09-20  8:29                     ` Patrik Jakobsson
2011-09-20  8:29                     ` Patrik Jakobsson
2011-09-20 15:55                     ` Keith Packard
2011-09-20 15:55                       ` Keith Packard
2011-09-20 15:55                       ` Keith Packard
2011-09-20 21:20                       ` Patrik Jakobsson
2011-09-20 21:20                         ` Patrik Jakobsson
2011-09-21  6:01                         ` Tomi Valkeinen
2011-09-21  6:01                           ` Tomi Valkeinen
2011-09-21 18:07                           ` Patrik Jakobsson
2011-09-21 18:07                             ` Patrik Jakobsson
2011-09-21 18:07                             ` Patrik Jakobsson
2011-10-01 17:34         ` Enrico Weigelt
2011-09-15 15:03 ` Kyungmin Park
2011-09-15 15:03   ` Kyungmin Park
2011-09-21 13:26 ` Heiko Stübner
2011-09-21 13:26   ` Heiko Stübner
2011-10-01 16:52 ` Enrico Weigelt
