* Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
From: Laurent Pinchart @ 2012-02-16 23:25 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sumit Semwal, Jesse Barker, Jesse Barnes,
	Rob Clark, Pawel Osciak, Marek Szyprowski, Tomasz Stanislawski,
	Magnus Damm, Marcus Lorentzon, Alexander Deucher, linux-media,
	dri-devel, linux-fbdev

Hello everybody,

First of all, I would like to thank all the attendees for their participation 
in the mini-summit that helped make the meeting a success.

Here are my consolidated notes that cover both the Linaro Connect meeting and 
the ELC meeting. They're also available at 
http://www.ideasonboard.org/media/meetings/.


Kernel Display and Video API Consolidation mini-summit at ELC 2012
------------------------------------------------------------------

***  Video Transmitter Drivers ***

  Goal: Sharing video transmitter drivers (i2c or 'IP block') between V4L and
  DRM.

  This is mostly useful for embedded systems. Transmitters can be used by both
  GPUs and direct video outputs. Having a single driver for external (or even
  internal) HDMI transmitters is desired to avoid code duplication between
  subsystems.

  - DRM/KMS implements support for transmitters as DRM encoders. Code is
    embedded in the main DRM/KMS driver.
  - V4L2 implements transmitter drivers as generic kernel objects called
    v4l2_subdev that implement a set of abstract operations.

  v4l2_subdev already solves (most of ?) the problem, but is specific to V4L2.
  The proposed approach is to create a media_video_entity (exact name to be
  defined) object similar to v4l2_subdev.

  Using that API should not be mandatory, especially on hardware where the
  transmitter and the display controller are deeply cross-dependent.

  How to instantiate the transmitter device and how to handle probe order
  need to be solved, especially when DT bindings come into play. The problem
  already exists to some extent in V4L2.

  The subdev API takes mbus formats as arguments, which is not handled by
  DRM/KMS. V4L2 media bus codes will then need to be shared.

  Action points:
  - Initially, mostly FBDEV (HDMI-on-DSI, Renesas) and V4L2 (TI hardware,
    Cisco). Hans + Laurent to work on a proposal.

***  Display Panel Drivers ***

  Goal: Sharing display panel drivers between display controllers from
  different vendors.

  Panels are connected to the display controller through a standard bus with a
  control channel (DSI and eDP are two major such buses). Various vendors have
  created proprietary interfaces for panel drivers:

  - TI on OMAP (drivers/video/omap2/displays).
  - Samsung on Exynos (drivers/video/exynos).
  - ST-Ericsson on MCDE (http://www.igloocommunity.org/gitweb/?p=kernel/igloo-kernel.git;a=tree;f=drivers/video/mcde)
  - Renesas is working on a similar interface for SH Mobile.

  HDMI-on-DSI transmitters, while not panels per se, can be supported through
  the same API.

  A Low level Linux Display Framework (https://lkml.org/lkml/2011/9/15/107)
  has been proposed and overlaps with this topic.

  For DSI, a possible abstraction level would be a DCS (Display Command Set)
  bus. Panels and/or HDMI-on-DSI transmitter drivers would be implemented as
  DCS drivers.

  Action points:
  - Marcus to work on a proposal for DCS-based panels (with Tomi Valkeinen and
    Morimoto-san).

***  Common video mode data structure and EDID parser ***

  Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.

  The DRM EDID parser is currently the most advanced implementation and will
  be taken as a starting point.

  Different subsystems use different data structures to describe video
  mode/timing information:

  - struct drm_mode_modeinfo in DRM/KMS
  - struct fb_videomode in FBDEV
  - struct v4l2_bt_timings in V4L2

  A new common video mode/timing data structure (struct media_video_mode_info,
  exact name is to be defined), not tied to any specific subsystem, is
  required to share the EDID parser. That structure won't be exported to
  userspace.

  Helper functions will be implemented in the subsystems to convert between
  that generic structure and the various subsystem-specific structures.

  The mode list is stored in the DRM connector in the EDID parser. A new mode
  list data structure can be added, or a callback function can be used by the
  parser to give modes one at a time to the caller.

  3D needs to be taken into account (this is similar to interlacing).

  Action points:
  - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.
  
***  Shared in-kernel image/framebuffer object type ***

  Goal: Describe dma-buf content in a generic cross-subsystem way.

  Most formats can be described by a 4CC, width, height and pitch. Strange
  hardware-specific formats might not be possible to describe completely
  generically.

  However, format negotiation goes through userspace anyway, so there should
  be no need for passing format information directly between drivers.

  Action points:
  - None, this task will not be worked on.

***  Central 4CC Documentation ***

  Goal: Define and document 4CCs in a central location to make sure 4CCs won't
  overlap or have different meanings for different subsystems.

  DRM and V4L2 define their own 4CCs:

  - include/drm/drm_fourcc.h
  - include/linux/videodev2.h

  A new header file will centralize the definitions, with a new
  cross-subsystem prefix. DRM and V4L2 4CCs will be redefined as aliases for
  the new centralized 4CCs.

  Colorspace (including both color matrix and Y/U/V ranges) should be shared
  as well. VDPAU (and VAAPI ?) pass the color matrix to userspace. The kernel
  API should not be more restrictive, but we just need a couple of presets in
  most cases. We can define a list of common presets, with a way to upload a
  custom matrix.

  Action points:
  - Start with the V4L2 documentation, create a shared header file. Sakari to
    work on a proposal.

***  Split KMS and GPU Drivers ***

  Goal: Split KMS and GPU drivers with an in-kernel API in between.
 
  In most (all ?) SoCs, the GPU and the display controller are separate
  devices. Splitting them into separate drivers would allow reusing the GPU
  driver with different devices (e.g. using a single common PowerVR kernel
  module with different display controller drivers). The same approach can be
  used on the desktop for the multi-GPU case and the USB display case.

  - OMAP already separates the GPU and DSS drivers, but the GPU driver is some
  kind of DSS plugin. This isn't a long-term approach.
  - Exynos also separates the GPU and FIMD drivers. It's hard to merge the
  GPU driver into the display subsystem since the GPU has its own memory
  management code (UMP).

  One of the biggest challenges would be to get GPU vendors to use this new
  model. ARM could help here, by making the Mali kernel driver split from the
  display controller drivers. Once one vendor jumps onboard, others could have
  a bigger incentive to follow.

  Action points:
  - Rob is planning to work on a reference implementation, as part of the
    sync object case. This is a pretty long-term plan.
  - Jesse will handle the coordination with ARM for Mali.
  
***  HDMI CEC Support ***

  Goal: Support HDMI CEC and offer a userspace API for applications.

  A new kernel API is needed and must be usable by KMS, V4L2 and possibly
  LIRC. There's ongoing effort from Cisco to implement HDMI CEC support. Given
  their background, V4L2 is their initial target. A proposal is available at
  http://www.mail-archive.com/linux-media@vger.kernel.org/msg29241.html with a
  sample implementation at
  http://git.linuxtv.org/hverkuil/cisco.git/shortlog/refs/heads/cobalt-mainline
  (drivers/media/video/adv7604.c and ad9389b.c).

  In order to avoid API duplication, a new CEC subsystem is probably needed.
  CEC could be modeled as a bus, or as a network device. With the network
  device approach, we could have both kernel and userspace protocol handlers.

  CEC devices need to be associated with HDMI connectors. The Media Controller
  API is a good candidate.

  Action points:
  - Hans is planning to push CEC support to mainline this year. Marcus can
    provide contact information for Per Persson (ST-Ericsson).

***  Hardware Pipeline Configuration ***

  Goal: Create a central shared API to configure hardware pipelines on any
  display- or video-related device.

  Hardware pipeline configuration includes both link and format configuration.
  To handle complex pipelines, V4L2 created a userspace V4L2 subdev API that
  works in cooperation with the Media Controller API. Such an approach could
  be generalized to support DRM/KMS, FB and V4L2 devices with a single
  pipeline configuration API.

  However, DRM/KMS can configure a complete display pipeline directly, without
  any need for userspace to access formats on specific pads directly. There is
  thus no direct need (at least for today's hardware) to expose low-level
  pipeline configuration to userspace.

  For display devices, DRM/KMS is going to support configuration of multiple
  overlays/planes. fbdev support will be available "for free" on top of
  DRM/KMS for legacy applications and for fbcon. fbdev should probably not be
  extended to support multiple overlays/planes explicitly. Drivers and
  applications should implement and use KMS instead, and no new FB or V4L2
  frontend should be implemented in new display drivers.

  Implementing the Media Controller API in DRM/KMS is still useful to
  associate connectors with HDMI audio/CEC devices. More than that would
  probably be overkill.

  Action points:
  - Laurent to check the DRM/KMS API to make sure it fulfills all the V4L2
    needs. dma-buf importer role in DRM/KMS is one of the required features to
    implement use cases currently supported by V4L2 USERPTR.
  - Implement a proof-of-concept media controller API in DRM to expose the 
    pipeline topology to userspace. Sumit is working on dma-buf, could maybe
    help on this. Laurent will coordinate the effort.

***  Synchronous pipeline changes ***

  Goal: Create an API to apply complex changes to a video pipeline atomically.

  Needed for complex camera use cases. On the DRM/KMS side, the approach is to
  use one big ioctl to configure the whole pipeline.

  One solution is a commit ioctl, through the media controller device, that
  would be dispatched to entities internally with a prepare step and a commit
  step.

  Parameters to be committed need to be stored in a context. We can either use
  one cross-device context, or per-device contexts that would then need to be
  associated with the commit operation.

  Action points:
  - Sakari will provide a proof-of-concept and/or proposal if needed.

***  Sync objects ***

  Goal: Implement in-kernel support for buffer swapping, a dependency system,
  sync objects, and a queue/dequeue userspace API (think EGLstream & EGLsync).

  This can be implemented in kernel-space (with direct communication between
  drivers to schedule buffers around), user-space (with ioctls to
  queue/dequeue buffers), or a mix of both. SoCs with direct sync object
  support at the hardware level between different IP blocks can be foreseen in
  the (near ?) future. A kernel-space API would then be needed.

  Sharing sync objects between subsystems could result in the creation of a
  cross-subsystem queue/dequeue API. Integration with dma_buf would make
  sense, a dma_buf_pool object would then likely be needed.

  If the SoC supports direct signaling between IP blocks, this could be
  considered (and implemented) as a pipeline configurable through the Media
  Controller API. However, applications will then need to be link-aware. Using
  sync/stream objects would offer a single API to userspace, regardless of
  whether the synchronization is handled by the CPU in kernel space or by the
  IP blocks directly.

  Sync objects are not always tied to buffers, so they need to be implemented as
  stand-alone objects on the kernel side. However, when exposing the sync
  object to userspace in order to share it between devices, all current use
  cases involve dma-buf. The first implementation will thus not expose the
  sync objects explicitly to userspace, but associate them with a dma-buf. If
  sync objects with no associated dma-buf are later needed, an explicit
  userspace API can be added.

  eventfd is a possible candidate for sync object implementation.

  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/linux/eventfd.h
  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/cgroups.txt
  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/memory.txt
  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/virtual/kvm/api.txt

  Action points:
  - TBD, will be on the agenda for the Graphics Summit @ELC2012.

***  dma-buf Implementation in V4L2 ***

  Goal: Implement the dma-buf API in V4L2.

  Sumit Semwal has submitted patches to implement the dma-buf importer role in
  videobuf2. Tomasz Stanislawski has then submitted incremental patches to add
  exporter role support.

  Action points:
  - Create a git branch to host all the latest patches. Sumit will provide
    that.



Other points that have been briefly discussed
---------------------------------------------

***  Device Tree Aware 'Subdevice' Instantiation ***

  Goal: Design standard device tree bindings to instantiate "subdevices"
  (including the generic subdev-like video transmitters).

  U-Boot developers are working on automatic configuration using the device
  tree and want to support LCDs. Proposed patches are available at
  http://thread.gmane.org/gmane.comp.boot-loaders.u-boot/122864/focus=122881

  On a related note, proposed DT binding patches for kernel PWM-based
  backlight control are available at
  http://www.spinics.net/lists/linux-tegra/msg03988.html.

*** HDMI audio output ***

  Goal: Give applications a way to associate an HDMI connector with an ALSA
  HDMI audio device.

  This should be implemented with the Media Controller API.
 
*** 2D Kernel APIs ***

  Goal: Expose a 2D acceleration API to userspace for devices that support
  hardware-accelerated 2D rendering.

  If the hardware is based on a command stream, a userspace library is needed
  anyway to build the command stream, so a 2D kernel API would not be very
  useful. This could be implemented as a DRM device without a KMS interface.

-- 
Regards,

Laurent Pinchart


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
From: Daniel Vetter @ 2012-02-17  9:55 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

On Fri, Feb 17, 2012 at 12:25:51AM +0100, Laurent Pinchart wrote:
> Hello everybody,
> 
> First of all, I would like to thank all the attendees for their participation 
> in the mini-summit that helped make the meeting a success.
> 
> Here are my consolidated notes that cover both the Linaro Connect meeting and 
> the ELC meeting. They're also available at 
> http://www.ideasonboard.org/media/meetings/.

Looks like you've been all really busy ;-) A few quick comments below.

> Kernel Display and Video API Consolidation mini-summit at ELC 2012
> ------------------------------------------------------------------

[snip]

> ***  Common video mode data structure and EDID parser ***
> 
>   Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.
> 
>   The DRM EDID parser is currently the most advanced implementation and will
>   be taken as a starting point.
> 
>   Different subsystems use different data structures to describe video
>   mode/timing information:
> 
>   - struct drm_mode_modeinfo in DRM/KMS
>   - struct fb_videomode in FBDEV
>   - struct v4l2_bt_timings in V4L2
> 
>   A new common video mode/timing data structure (struct media_video_mode_info,
>   exact name is to be defined), not tied to any specific subsystem, is
>   required to share the EDID parser. That structure won't be exported to
>   userspace.
> 
>   Helper functions will be implemented in the subsystems to convert between
>   that generic structure and the various subsystem-specific structures.
> 
>   The mode list is stored in the DRM connector in the EDID parser. A new mode
>   list data structure can be added, or a callback function can be used by the
>   parser to give modes one at a time to the caller.
> 
>   3D needs to be taken into account (this is similar to interlacing).
> 
>   Action points:
>   - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.

I think we should include kernel cmdline video mode parsing here, afaik
kms and fbdev are rather similar (won't work if they're too different,
obviously).

[snip]

> ***  Central 4CC Documentation ***
> 
>   Goal: Define and document 4CCs in a central location to make sure 4CCs won't
>   overlap or have different meanings for different subsystems.
> 
>   DRM and V4L2 define their own 4CCs:
> 
>   - include/drm/drm_fourcc.h
>   - include/linux/videodev2.h
> 
>   A new header file will centralize the definitions, with a new
>   cross-subsystem prefix. DRM and V4L2 4CCs will be redefined as aliases for
>   the new centralized 4CCs.
> 
>   Colorspace (including both color matrix and Y/U/V ranges) should be shared
>   as well. VDPAU (and VAAPI ?) pass the color matrix to userspace. The kernel
>   API should not be more restrictive, but we just need a couple of presets in
>   most cases. We can define a list of common presets, with a way to upload a
>   custom matrix.
> 
>   Action points:
>   - Start with the V4L2 documentation, create a shared header file. Sakari to
>     work on a proposal.
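For illustration, the aliasing scheme could look like the following. The MEDIA_ prefix and macro names are placeholders (the notes only say "a new cross-subsystem prefix"); the packing itself is the little-endian four-character encoding both subsystems already use, so the numeric values stay ABI-compatible.

```c
#include <stdint.h>
#include <assert.h>

/* Cross-subsystem fourcc constructor: four ASCII characters packed
 * little-endian, same encoding DRM and V4L2 already use. */
#define MEDIA_FOURCC(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Centralized definition... */
#define MEDIA_FMT_NV12		MEDIA_FOURCC('N', 'V', '1', '2')

/* ...which the existing V4L2 and DRM headers would then redefine as
 * aliases, keeping the values userspace sees today unchanged. */
#define V4L2_PIX_FMT_NV12	MEDIA_FMT_NV12
#define DRM_FORMAT_NV12		MEDIA_FMT_NV12
```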

I'm looking forward to the bikeshed discussion here ;-)
</snide-remark>

> ***  Split KMS and GPU Drivers ***
> 
>   Goal: Split KMS and GPU drivers with an in-kernel API in between.
>  
>   In most (all ?) SoCs, the GPU and the display controller are separate
>   devices. Splitting them into separate drivers would allow reusing the GPU
>   driver with different devices (e.g. using a single common PowerVR kernel
>   module with different display controller drivers). The same approach can be
>   used on the desktop for the multi-GPU case and the USB display case.
> 
>   - OMAP already separates the GPU and DSS drivers, but the GPU driver is some
>   kind of DSS plugin. This isn't a long-term approach.
>   - Exynos also separates the GPU and FIMD drivers. Merging the GPU driver
>   into the display subsystem would be difficult, as the GPU has its own
>   memory management code (UMP).
> 
>   One of the biggest challenges would be to get GPU vendors to use this new
>   model. ARM could help here, by making the Mali kernel driver split from the
>   display controller drivers. Once one vendor jumps onboard, others could have
>   a bigger incentive to follow.
> 
>   Action points:
>   - Rob planning to work on a reference implementation, as part of the sync
>     object case. This is a pretty long term plan. 
>   - Jesse will handle the coordination with ARM for Mali.

Imo splitting up SoC drm drivers into separate drivers for the different
blocks makes tons of sense. The one controlling the display subsystem
would then also support kms, all the others would just support gem and
share buffers with dma_buf (and maybe synchronize with some new-fangled
sync objects). Otoh it doesn't make much sense to push this if we don't
have at least one of the SoC IP block vendors on board. We can dream ...

[snip]

> ***  Sync objects ***
> 
>   Goal: Implement in-kernel support for buffer swapping, dependency system,
>   sync objects, queue/dequeue userspace API (think EGLstream & EGLsync)
> 
>   This can be implemented in kernel-space (with direct communication between
>   drivers to schedule buffers around), user-space (with ioctls to
>   queue/dequeue buffers), or a mix of both. SoCs with direct sync object
>   support at the hardware level between different IP blocks can be foreseen in
>   the (near ?) future. A kernel-space API would then be needed.
> 
>   Sharing sync objects between subsystems could result in the creation of a
>   cross-subsystem queue/dequeue API. Integration with dma_buf would make
>   sense, a dma_buf_pool object would then likely be needed.
> 
>   If the SoC supports direct signaling between IP blocks, this could be
>   considered (and implemented) as a pipeline configurable through the Media
>   Controller API. However, applications will then need to be link-aware. Using
>   sync/stream objects would offer a single API to userspace, regardless of
>   whether the synchronization is handled by the CPU in kernel space or by the
>   IP blocks directly.
> 
>   Sync objects are not always tied to buffers, they need to be implemented as
>   stand-alone objects on the kernel side. However, when exposing the sync
>   object to userspace in order to share it between devices, all current use
>   cases involve dma-buf. The first implementation will thus not expose the
>   sync objects explicitly to userspace, but associate them with a dma-buf. If
>   sync objects with no associated dma-buf are later needed, an explicit
>   userspace API can be added.
> 
>   eventfd is a possible candidate for sync object implementation.
> 
>   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/linux/eventfd.h
>   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/cgroups.txt
>   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/memory.txt
>   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/virtual/kvm/api.txt
> 
>   Action points:
>   - TBD, will be on the agenda for the Graphics Summit @ELC2012.
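As background on the eventfd suggestion above: its userspace model is a 64-bit counter with blocking read/write semantics, which is what makes it a plausible signalling primitive. A minimal userspace round trip looks like this (this only sketches the existing eventfd API, not any proposed kernel-internal sync API):

```c
#include <stdint.h>
#include <unistd.h>
#include <assert.h>
#include <sys/eventfd.h>

/* Signal a completion on one side and wait for it on the other.
 * Returns the counter value seen by the waiter (1 here), 0 on error. */
static uint64_t eventfd_roundtrip(void)
{
	uint64_t val = 1, got = 0;
	int efd = eventfd(0, 0);	/* counter starts at 0 */

	if (efd < 0)
		return 0;

	/* Producer side (think: driver finishing a buffer): add 1. */
	if (write(efd, &val, sizeof(val)) != sizeof(val) ||
	    /* Consumer side: read() blocks until the counter is non-zero,
	     * then returns its value and resets it to 0. */
	    read(efd, &got, sizeof(got)) != sizeof(got))
		got = 0;

	close(efd);
	return got;
}
```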

I've already started discussing this a bit with Rob. I'm not sure whether
implicitly associating a sync object with a dma_buf makes sense, afaik
sync objects can be used rather independently from buffers. But this is a
long-term feature, so we still have plenty of time to discuss this.

[snip]

> *** 2D Kernel APIs ***
> 
>   Goal: Expose a 2D acceleration API to userspace for devices that support
>   hardware-accelerated 2D rendering.
> 
>   If the hardware is based on a command stream, a userspace library is needed
>   anyway to build the command stream. A 2D kernel API would then not be very
>   useful. This could be split out into a DRM device without a KMS interface.

Imo we should ditch this - fb accel doesn't belong into the kernel. Even
on hw that still has a blitter for easy 2d accel without a complete 3d
state setup necessary, it's not worth it. Chris Wilson from our team once
played around with implementing fb accel in the kernel (i915 hw still has
a blitter engine in the latest generations). He quickly noticed that to
have decent speed, competitive with s/w rendering by the CPU, he needs the
entire batch and buffer management stuff from userspace. And to really
beat the cpu, you need even more magic.

If you want fast 2d accel, use something like cairo.

Cheers, Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-16 23:25     ` Laurent Pinchart
@ 2012-02-17 11:19       ` Semwal, Sumit
  -1 siblings, 0 replies; 49+ messages in thread
From: Semwal, Sumit @ 2012-02-17 11:07 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Jesse Barker, Jesse Barnes, Rob Clark,
	Pawel Osciak, Marek Szyprowski, Tomasz Stanislawski, Magnus Damm,
	Marcus Lorentzon, Alexander Deucher, linux-media, dri-devel,
	linux-fbdev

Hello Laurent, Everyone:


On Fri, Feb 17, 2012 at 4:55 AM, Laurent Pinchart
<laurent.pinchart@ideasonboard.com> wrote:
> Hello everybody,
>
> First of all, I would like to thank all the attendees for their participation
> in the mini-summit that helped make the meeting a success.
>
<snip>
> ***  dma-buf Implementation in V4L2 ***
>
>  Goal: Implement the dma-buf API in V4L2.
>
>  Sumit Semwal has submitted patches to implement the dma-buf importer role in
>  videobuf2. Tomasz Stanislawski has then submitted incremental patches to add
>  exporter role support.
>
>  Action points:
>  - Create a git branch to host all the latest patches. Sumit will provide
>    that.
>
>
Against my action item: I have created the following branch on my
GitHub (obviously, it is an RFC branch only):

tree: git://github.com/sumitsemwal/kernel-omap4.git
branch: 3.3rc3-v4l2-dmabuf-RFCv1

As the name partially suggests, it is based on:
3.3-rc3 +
dmav6 [1] +
some minor dma-buf updates [2] +
my v4l2-as-importer RFC [3] +
Tomasz' RFC for v4l2-as-exporter (and related patches) [4]

Since Tomasz' RFC had a patch pair which first removed and then re-added
the drivers/media/video/videobuf2-dma-contig.c file, I 'combined' these
into one - but since the patch pair heavily refactored the file, I
cannot vouch for its completeness / correctness.

[1]: http://git.infradead.org/users/kmpark/linux-samsung/shortlog/refs/heads/3.3-rc2-dma-v6
[2]: git://git.linaro.org/people/sumitsemwal/linux-3.x.git 'dev' branch
[3]: http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/42966/focus=42968
[4]: http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/43793

Best regards,
~Sumit.


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-17  9:55       ` Daniel Vetter
@ 2012-02-17 18:46         ` Laurent Pinchart
  -1 siblings, 0 replies; 49+ messages in thread
From: Laurent Pinchart @ 2012-02-17 18:46 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

Hi Daniel,

On Friday 17 February 2012 10:55:54 Daniel Vetter wrote:
> On Fri, Feb 17, 2012 at 12:25:51AM +0100, Laurent Pinchart wrote:
> > Hello everybody,
> > 
> > First of all, I would like to thank all the attendees for their
> > participation in the mini-summit that helped make the meeting a success.
> > 
> > Here are my consolidated notes that cover both the Linaro Connect meeting
> > and the ELC meeting. They're also available at
> > http://www.ideasonboard.org/media/meetings/.
> 
> Looks like you've all been really busy ;-)

I like to think so :-)

> A few quick comments below.

Thanks for your feedback.
 
> > Kernel Display and Video API Consolidation mini-summit at ELC 2012
> > ------------------------------------------------------------------
> 
> [snip]
> 
> > ***  Common video mode data structure and EDID parser ***
> > 
> >   Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.
> >   
> >   The DRM EDID parser is currently the most advanced implementation and
> >   will be taken as a starting point.
> >   
> >   Different subsystems use different data structures to describe video
> >   mode/timing information:
> >   
> >   - struct drm_mode_modeinfo in DRM/KMS
> >   - struct fb_videomode in FBDEV
> >   - struct v4l2_bt_timings in V4L2
> >   
> >   A new common video mode/timing data structure (struct
> >   media_video_mode_info, exact name is to be defined), not tied to any
> >   specific subsystem, is required to share the EDID parser. That
> >   structure won't be exported to userspace.
> >   
> >   Helper functions will be implemented in the subsystems to convert between
> >   that generic structure and the various subsystem-specific structures.
> >   
> >   The mode list is stored in the DRM connector in the EDID parser. A new
> >   mode list data structure can be added, or a callback function can be used
> >   by the parser to give modes one at a time to the caller.
> >   
> >   3D needs to be taken into account (this is similar to interlacing).
> >   
> >   Action points:
> >   - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.
> 
> I think we should include kernel cmdline video mode parsing here, afaik
> kms and fbdev are rather similar (won't work if they're too different,
> obviously).

Good point. I'll add that to the notes and will look into it.

> [snip]
> 
> > ***  Central 4CC Documentation ***
> > 
> >   Goal: Define and document 4CCs in a central location to make sure 4CCs
> >   won't overlap or have different meanings for different subsystems.
> >   
> >   DRM and V4L2 define their own 4CCs:
> >   
> >   - include/drm/drm_fourcc.h
> >   - include/linux/videodev2.h
> >   
> >   A new header file will centralize the definitions, with a new
> >   cross-subsystem prefix. DRM and V4L2 4CCs will be redefined as aliases for
> >   the new centralized 4CCs.
> >   
> >   Colorspace (including both color matrix and Y/U/V ranges) should be
> >   shared as well. VDPAU (and VAAPI ?) pass the color matrix to userspace.
> >   The kernel API should not be more restrictive, but we just need a couple
> >   of presets in most cases. We can define a list of common presets, with a
> >   way to upload a custom matrix.
> >   
> >   Action points:
> >   - Start with the V4L2 documentation, create a shared header file. Sakari
> >   to work on a proposal.
> 
> I'm looking forward to the bikeshed discussion here ;-)
> </snide-remark>

I'm certainly going to NACK if we try to paint 4CCs in the wrong colorspace 
;-)
 
> > ***  Split KMS and GPU Drivers ***
> > 
> >   Goal: Split KMS and GPU drivers with an in-kernel API in between.
> >   
> >   In most (all ?) SoCs, the GPU and the display controller are separate
> >   devices. Splitting them into separate drivers would allow reusing the
> >   GPU driver with different devices (e.g. using a single common PowerVR
> >   kernel module with different display controller drivers). The same
> >   approach can be used on the desktop for the multi-GPU case and the USB
> >   display case.
> >   
> >   - OMAP already separates the GPU and DSS drivers, but the GPU driver is
> >   some kind of DSS plugin. This isn't a long-term approach.
> >   - Exynos also separates the GPU and FIMD drivers. Merging the GPU driver
> >   into the display subsystem would be difficult, as the GPU has its own
> >   memory management code (UMP).
> >   
> >   One of the biggest challenges would be to get GPU vendors to use this
> >   new model. ARM could help here, by making the Mali kernel driver split
> >   from the display controller drivers. Once one vendor jumps onboard,
> >   others could have a bigger incentive to follow.
> >   
> >   Action points:
> >   - Rob planning to work on a reference implementation, as part of the
> >   sync object case. This is a pretty long term plan.
> >   
> >   - Jesse will handle the coordination with ARM for Mali.
> 
> Imo splitting up SoC drm drivers into separate drivers for the different
> blocks makes tons of sense. The one controlling the display subsystem
> would then also support kms, all the others would just support gem and
> share buffers with dma_buf (and maybe synchronize with some new-fangled
> sync objects). Otoh it doesn't make much sense to push this if we don't
> have at least one of the SoC IP block vendors on board. We can dream ...

That's the conclusion we came up to. ARM might be able to help here with Mali, 
and we could then try to sell the idea to other vendors when a reference 
implementation will be complete (or at least usable enough).

> [snip]
> 
> > ***  Sync objects ***
> > 
> >   Goal: Implement in-kernel support for buffer swapping, dependency
> >   system, sync objects, queue/dequeue userspace API (think EGLstream &
> >   EGLsync)
> >   
> >   This can be implemented in kernel-space (with direct communication
> >   between drivers to schedule buffers around), user-space (with ioctls to
> >   queue/dequeue buffers), or a mix of both. SoCs with direct sync object
> >   support at the hardware level between different IP blocks can be
> >   foreseen in the (near ?) future. A kernel-space API would then be
> >   needed.
> >   
> >   Sharing sync objects between subsystems could result in the creation of
> >   a cross-subsystem queue/dequeue API. Integration with dma_buf would make
> >   sense, a dma_buf_pool object would then likely be needed.
> >   
> >   If the SoC supports direct signaling between IP blocks, this could be
> >   considered (and implemented) as a pipeline configurable through the
> >   Media Controller API. However, applications will then need to be link-
> >   aware. Using sync/stream objects would offer a single API to userspace,
> >   regardless of whether the synchronization is handled by the CPU in
> >   kernel space or by the IP blocks directly.
> >   
> >   Sync objects are not always tied to buffers, they need to be implemented
> >   as stand-alone objects on the kernel side. However, when exposing the
> >   sync object to userspace in order to share it between devices, all
> >   current use cases involve dma-buf. The first implementation will thus not
> >   expose the sync objects explicitly to userspace, but associate them with
> >   a dma-buf. If sync objects with no associated dma-buf are later needed,
> >   an explicit userspace API can be added.
> >   
> >   eventfd is a possible candidate for sync object implementation.
> >   
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/linux/eventfd.h
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/cgroups.txt
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/memory.txt
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/virtual/kvm/api.txt
> >   
> >   Action points:
> >   - TBD, will be on the agenda for the Graphics Summit @ELC2012.
> 
> I've already started discussing this a bit with Rob. I'm not sure whether
> implicitly associating a sync object with a dma_buf makes sense, afaik
> sync objects can be used rather independently from buffers. But this is a
> long-term feature, so we still have plenty of time to discuss this.

I think we need to work on this one step at a time. Associating a sync object 
with a dma_buf is probably not going to solve all our problems in the long 
term, but it might provide us with a first solution that doesn't require 
exposing the sync object API to userspace. I'm not saying we need to do this, 
but it could be a reasonable first step that would buy us some time while we 
sort out the sync object userspace API.

> [snip]
> 
> > *** 2D Kernel APIs ***
> > 
> >   Goal: Expose a 2D acceleration API to userspace for devices that support
> >   hardware-accelerated 2D rendering.
> >   
> >   If the hardware is based on a command stream, a userspace library is
> >   needed anyway to build the command stream. A 2D kernel API would then
> >   not be very useful. This could be split out into a DRM device without a KMS
> >   interface.
> 
> Imo we should ditch this - fb accel doesn't belong into the kernel. Even
> on hw that still has a blitter for easy 2d accel without a complete 3d
> state setup necessary, it's not worth it. Chris Wilson from our team once
> played around with implementing fb accel in the kernel (i915 hw still has
> a blitter engine in the latest generations). He quickly noticed that to
> have decent speed, competitive with s/w rendering by the CPU, he needs the
> entire batch and buffer management stuff from userspace. And to really
> beat the cpu, you need even more magic.
> 
> If you want fast 2d accel, use something like cairo.

Our conclusion on this is that we should not expose an explicit 2D 
acceleration API at the kernel level. If really needed, hardware 2D 
acceleration could be implemented as a DRM device to handle memory management, 
command ring setup, synchronization, ... but I'm not even sure if that's
worth it. I might not have conveyed it well in my notes.

-- 
Regards,

Laurent Pinchart


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
@ 2012-02-17 18:46         ` Laurent Pinchart
  0 siblings, 0 replies; 49+ messages in thread
From: Laurent Pinchart @ 2012-02-17 18:46 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

Hi Daniel,

On Friday 17 February 2012 10:55:54 Daniel Vetter wrote:
> On Fri, Feb 17, 2012 at 12:25:51AM +0100, Laurent Pinchart wrote:
> > Hello everybody,
> > 
> > First of all, I would like to thank all the attendees for their
> > participation in the mini-summit that helped make the meeting a success.
> > 
> > Here are my consolidated notes that cover both the Linaro Connect meeting
> > and the ELC meeting. They're also available at
> > http://www.ideasonboard.org/media/meetings/.
> 
> Looks like you've been all really busy ;-)

I like to think so :-)

> A few quick comments below.

Thanks for your feedback.
 
> > Kernel Display and Video API Consolidation mini-summit at ELC 2012
> > ------------------------------------------------------------------
> 
> [snip]
> 
> > ***  Common video mode data structure and EDID parser ***
> > 
> >   Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.
> >   
> >   The DRM EDID parser is currently the most advanced implementation and
> >   will
> >   be taken as a starting point.
> >   
> >   Different subsystems use different data structures to describe video
> >   mode/timing information:
> >   
> >   - struct drm_mode_modeinfo in DRM/KMS
> >   - struct fb_videomode in FBDEV
> >   - struct v4l2_bt_timings in V4L2
> >   
> >   A new common video mode/timing data structure (struct
> >   media_video_mode_info, exact name is to be defined), not tied to any
> >   specific subsystem, is required to share the EDID parser. That
> >   structure won't be exported to userspace.
> >   
> >   Helper functions will be implemented in the subsystems to convert
> >   between
> >   that generic structure and the various subsystem-specific structures.
> >   
> >   The mode list is stored in the DRM connector in the EDID parser. A new
> >   mode
> >   list data structure can be added, or a callback function can be used by
> >   the
> >   parser to give modes one at a time to the caller.
> >   
> >   3D needs to be taken into account (this is similar to interlacing).
> >   
> >   Action points:
> >   - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.
> 
> I think we should include kernel cmdline video mode parsing here, afaik
> kms and fbdev are rather similar (won't work if they're too different,
> obviously).

Good point. I'll add that to the notes and will look into it.
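To make the shape of this concrete, here is a rough sketch of what such a 
generic mode structure and a per-subsystem conversion helper could look like. 
All names and fields below are hypothetical, chosen for illustration only; the 
fbdev-style struct simply mirrors fields of struct fb_videomode rather than 
reusing the real kernel type.

```c
#include <stdint.h>

/* Hypothetical cross-subsystem mode description; field names are
 * illustrative, not from any merged kernel API. */
struct media_video_mode_info {
	uint32_t pixel_clock;			/* kHz */
	uint32_t hactive, hfp, hsync, hbp;	/* horizontal timings, pixels */
	uint32_t vactive, vfp, vsync, vbp;	/* vertical timings, lines */
	uint32_t flags;				/* polarity, interlace, 3D layout, ... */
};

/* Example fbdev-flavoured target; field names follow struct fb_videomode. */
struct fb_videomode_like {
	uint32_t xres, yres;
	uint32_t left_margin, right_margin, hsync_len;
	uint32_t upper_margin, lower_margin, vsync_len;
};

/* One of the proposed per-subsystem helpers: map the generic timings
 * onto the fbdev naming (front porch = right/lower margin, back porch =
 * left/upper margin). */
static void media_mode_to_fb(const struct media_video_mode_info *m,
			     struct fb_videomode_like *fb)
{
	fb->xres = m->hactive;
	fb->yres = m->vactive;
	fb->right_margin = m->hfp;	/* horizontal front porch */
	fb->hsync_len = m->hsync;
	fb->left_margin = m->hbp;	/* horizontal back porch */
	fb->lower_margin = m->vfp;	/* vertical front porch */
	fb->vsync_len = m->vsync;
	fb->upper_margin = m->vbp;	/* vertical back porch */
}
```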

> [snip]
> 
> > ***  Central 4CC Documentation ***
> > 
> >   Goal: Define and document 4CCs in a central location to make sure 4CCs
> >   won't overlap or have different meanings for different subsystems.
> >   
> >   DRM and V4L2 define their own 4CCs:
> >   
> >   - include/drm/drm-fourccs.h
> >   - include/linux/videodev2.h
> >   
> >   A new header file will centralize the definitions, with a new
> >   cross-subsystem prefix. DRM and V4L2 4CCs will be redefined as aliases
> >   for
> >   the new centralized 4CCs.
> >   
> >   Colorspace (including both color matrix and Y/U/V ranges) should be
> >   shared as well. VDPAU (and VAAPI ?) pass the color matrix to userspace.
> >   The kernel API should not be more restrictive, but we just need a couple
> >   of presets in most cases. We can define a list of common presets, with a
> >   way to upload a custom matrix.
> >   
> >   Action points:
> >   - Start with the V4L2 documentation, create a shared header file. Sakari
> >   to work on a proposal.
> 
> I'm looking forward to the bikeshed discussion here ;-)
> </snide-remark>

I'm certainly going to NACK if we try to paint 4CCs in the wrong colorspace 
;-)
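As a rough illustration of the centralized-header idea: the MEDIA_FOURCC 
macro name and the alias names below are hypothetical, but the packing itself 
mirrors the existing v4l2_fourcc()/fourcc_code() definitions (little-endian 
byte order, first character in the low byte).

```c
#include <stdint.h>

/* Hypothetical shared 4CC construction macro, packing four characters
 * into a u32 the same way v4l2_fourcc() and fourcc_code() already do. */
#define MEDIA_FOURCC(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* Centralized definition... */
#define MEDIA_FMT_NV12	MEDIA_FOURCC('N', 'V', '1', '2')

/* ...which DRM and V4L2 would then alias instead of redefining
 * (alias names here are made up for the example). */
#define V4L2_PIX_FMT_NV12_ALIAS	MEDIA_FMT_NV12
#define DRM_FORMAT_NV12_ALIAS	MEDIA_FMT_NV12
```

Since both subsystems already use this byte layout, the aliases would be 
numerically identical to the current definitions, keeping the existing ABIs 
intact.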
 
> > ***  Split KMS and GPU Drivers ***
> > 
> >   Goal: Split KMS and GPU drivers with an in-kernel API in between.
> >   
> >   In most (all ?) SoCs, the GPU and the display controller are separate
> >   devices. Splitting them into separate drivers would allow reusing the
> >   GPU driver with different devices (e.g. using a single common PowerVR
> >   kernel module with different display controller drivers). The same
> >   approach can be used on the desktop for the multi-GPU case and the USB
> >   display case.
> >   
> >   - OMAP already separates the GPU and DSS drivers, but the GPU driver is
> >   some kind of DSS plugin. This isn't a long-term approach.
> >   - Exynos also separates the GPU and FIMD drivers. It's hard to merge the
> >   GPU driver into the display subsystem since, with UMP, the GPU has its
> >   own memory management code.
> >   
> >   One of the biggest challenges would be to get GPU vendors to use this
> >   new model. ARM could help here, by making the Mali kernel driver split
> >   from the display controller drivers. Once one vendor jumps onboard,
> >   others could have a bigger incentive to follow.
> >   
> >   Action points:
> >   - Rob planning to work on a reference implementation, as part of the
> >   sync object case. This is a pretty long term plan.
> >   
> >   - Jesse will handle the coordination with ARM for Mali.
> 
> Imo splitting up SoC drm drivers into separate drivers for the different
> blocks makes tons of sense. The one controlling the display subsystem
> would then also support kms, all the others would just support gem and
> share buffers with dma_buf (and maybe synchronize with some new-fangled
> sync objects). Otoh it doesn't make much sense to push this if we don't
have at least one of the SoC IP block vendors on board. We can dream ...

That's the conclusion we came to. ARM might be able to help here with Mali, 
and we could then try to sell the idea to other vendors once a reference 
implementation is complete (or at least usable enough).

> [snip]
> 
> > ***  Sync objects ***
> > 
> >   Goal: Implement in-kernel support for buffer swapping, dependency
> >   system, sync objects, queue/dequeue userspace API (think EGLstream &
> >   EGLsync)
> >   
> >   This can be implemented in kernel-space (with direct communication
> >   between drivers to schedule buffers around), user-space (with ioctls to
> >   queue/dequeue buffers), or a mix of both. SoCs with direct sync object
> >   support at the hardware level between different IP blocks can be
> >   foreseen in the (near ?) future. A kernel-space API would then be
> >   needed.
> >   
> >   Sharing sync objects between subsystems could result in the creation of
> >   a cross-subsystem queue/dequeue API. Integration with dma_buf would make
> >   sense, a dma_buf_pool object would then likely be needed.
> >   
> >   If the SoC supports direct signaling between IP blocks, this could be
> >   considered (and implemented) as a pipeline configurable through the
> >   Media Controller API. However, applications will then need to be link-
> >   aware. Using sync/stream objects would offer a single API to userspace,
> >   regardless of whether the synchronization is handled by the CPU in
> >   kernel space or by the IP blocks directly.
> >   
> >   Sync objects are not always tied to buffers, they need to be implemented
> >   as stand-alone objects on the kernel side. However, when exposing the
> >   sync object to userspace in order to share it between devices, all
> >   current use cases involve dma-buf. The first implementation will thus not
> >   expose the sync objects explicitly to userspace, but associate them with
> >   a dma-buf. If sync objects with no associated dma-buf are later needed,
> >   an explicit userspace API can be added.
> >   
> >   eventfd is a possible candidate for sync object implementation.
> >   
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=include/linux/eventfd.h
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/cgroups.txt
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/cgroups/memory.txt
> >   http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/virtual/kvm/api.txt
> >   
> >   Action points:
> >   - TBD, will be on the agenda for the Graphics Summit @ELC2012.
> 
> I've already started discussing this a bit with Rob. I'm not sure whether
> implicitly associating a sync object with a dma_buf makes sense, afaik
> sync objects can be used rather independently from buffers. But this is a
> long-term feature, so we still have plenty of time to discuss this.

I think we need to work on this one step at a time. Associating a sync object 
with a dma_buf is probably not going to solve all our problems in the long 
term, but it might provide us with a first solution that doesn't require 
exposing the sync object API to userspace. I'm not saying we need to do this, 
but it could be a reasonable first step that would buy us some time while we 
sort out the sync object userspace API.
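For reference, this is roughly how eventfd behaves as a userspace signalling 
primitive, which is what makes it a candidate here: writes add to a kernel 
counter, a read collects and resets it. This is a plain userspace sketch of 
the existing eventfd semantics; no sync-object API is assumed.

```c
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Add 'count' to the eventfd counter (the "signal"), then read the
 * accumulated value back, which resets the counter in default mode
 * (the "collect"). Returns the collected count, or 0 on error. */
static uint64_t efd_signal_and_collect(int efd, uint64_t count)
{
	uint64_t val = 0;

	if (write(efd, &count, sizeof(count)) != sizeof(count))
		return 0;
	if (read(efd, &val, sizeof(val)) != sizeof(val))
		return 0;
	return val;
}
```

In a real producer/consumer split the write and the (blocking) read would of 
course live in different contexts; this just shows the counter semantics.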

> [snip]
> 
> > *** 2D Kernel APIs ***
> > 
> >   Goal: Expose a 2D acceleration API to userspace for devices that support
> >   hardware-accelerated 2D rendering.
> >   
> >   If the hardware is based on a command stream, a userspace library is
> >   needed anyway to build the command stream. A 2D kernel API would then
> >   not be very useful. This could be split to a DRM device without a KMS
> >   interface.
> 
> Imo we should ditch this - fb accel doesn't belong into the kernel. Even
> on hw that still has a blitter for easy 2d accel without a complete 3d
> state setup necessary, it's not worth it. Chris Wilson from our team once
> played around with implementing fb accel in the kernel (i915 hw still has
> a blitter engine in the latest generations). He quickly noticed that to
> have decent speed, competitive with s/w rendering by the cpu he needs the
> entire batch and buffer management stuff from userspace. And to really
> beat the cpu, you need even more magic.
> 
> If you want fast 2d accel, use something like cairo.

Our conclusion on this is that we should not expose an explicit 2D 
acceleration API at the kernel level. If really needed, hardware 2D 
acceleration could be implemented as a DRM device to handle memory management, 
command ring setup, synchronization, etc., but I'm not even sure that's 
worth it. I might not have conveyed this well in my notes.

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-17 11:19       ` Semwal, Sumit
@ 2012-02-17 18:49         ` Laurent Pinchart
  -1 siblings, 0 replies; 49+ messages in thread
From: Laurent Pinchart @ 2012-02-17 18:49 UTC (permalink / raw)
  To: Semwal, Sumit
  Cc: Sakari Ailus, Jesse Barker, Jesse Barnes, Rob Clark,
	Pawel Osciak, Marek Szyprowski, Tomasz Stanislawski, Magnus Damm,
	Marcus Lorentzon, Alexander Deucher, linux-media, dri-devel,
	linux-fbdev

Hi Sumit,

On Friday 17 February 2012 16:37:35 Semwal, Sumit wrote:
> On Fri, Feb 17, 2012 at 4:55 AM, Laurent Pinchart wrote: 
> > Hello everybody,
> > 
> > First of all, I would like to thank all the attendees for their
> > participation in the mini-summit that helped make the meeting a success.
> 
> <snip>
> 
> > ***  dma-buf Implementation in V4L2 ***
> > 
> >  Goal: Implement the dma-buf API in V4L2.
> > 
> >  Sumit Semwal has submitted patches to implement the dma-buf importer role
> > in videobuf2. Tomasz Stanislawski has then submitted incremental patches
> > to add exporter role support.
> > 
> >  Action points:
> >  - Create a git branch to host all the latest patches. Sumit will provide
> >    that.
> 
> Against my Action Item: I have created the following branch at my
> github (obviously, it is an RFC branch only)

That was very fast :-) Thank you for your work on this.

> tree: git://github.com/sumitsemwal/kernel-omap4.git
> branch: 3.3rc3-v4l2-dmabuf-RFCv1
> 
> As the name partially suggests, it is based out of:
> 3.3-rc3 +
> dmav6 [1] +
> some minor dma-buf updates [2] +
> my v4l2-as-importer RFC [3] +
> Tomasz' RFC for v4l2-as-exporter (and related patches) [4]
> 
> Since Tomasz' RFC had a patch-pair which first removed and then added
> drivers/media/video/videobuf2-dma-contig.c file, I 'combined' these
> into one - but since the patch-pair heavily refactored the file, I am
> not able to take responsibility for the completeness / correctness of
> the result.

No worries. The branch's main purpose is to provide people with a starting 
point to use dma-buf, patch review will go through mailing lists anyway.

> [1]:
> http://git.infradead.org/users/kmpark/linux-samsung/shortlog/refs/heads/3.3
> -rc2-dma-v6 [2]: git://git.linaro.org/people/sumitsemwal/linux-3.x.git 'dev'
> branch [3]:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/4296
> 6/focus=42968 [4]:
> http://thread.gmane.org/gmane.linux.drivers.video-input-infrastructure/4379
> 3

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-16 23:25     ` Laurent Pinchart
@ 2012-02-17 19:42       ` Adam Jackson
  -1 siblings, 0 replies; 49+ messages in thread
From: Adam Jackson @ 2012-02-17 19:42 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

On 2/16/12 6:25 PM, Laurent Pinchart wrote:

> ***  Common video mode data structure and EDID parser ***
>
>    Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.
>
>    The DRM EDID parser is currently the most advanced implementation and will
>    be taken as a starting point.
>
>    Different subsystems use different data structures to describe video
>    mode/timing information:
>
>    - struct drm_mode_modeinfo in DRM/KMS
>    - struct fb_videomode in FBDEV
>    - struct v4l2_bt_timings in V4L2
>
>    A new common video mode/timing data structure (struct media_video_mode_info,
>    exact name is to be defined), not tied to any specific subsystem, is
>    required to share the EDID parser. That structure won't be exported to
>    userspace.
>
>    Helper functions will be implemented in the subsystems to convert between
>    that generic structure and the various subsystem-specific structures.

I guess.  I don't really see a reason not to unify the structs too, but 
then I don't have binary blobs to pretend to be ABI-compatible with.

>    The mode list is stored in the DRM connector in the EDID parser. A new mode
>    list data structure can be added, or a callback function can be used by the
>    parser to give modes one at a time to the caller.
>
>    3D needs to be taken into account (this is similar to interlacing).

It would also be pleasant if the new mode structure had a reasonable way of 
representing borders; we copied that mistake from xserver and have been 
regretting it.

>    Action points:
>    - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.

I'm totally in favor of this.  I've long loathed fbdev having such a 
broken parser, I just never got around to fixing it since we don't use 
fbdev in any real way.

The existing drm_edid.c needs a little detangling, DDC fetch and EDID 
parse should be better split.  Shouldn't be too terrible though.
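For context on that split: once the DDC fetch is separated out, the parse 
side only needs a raw 128-byte block, and its entry check is fixed by the 
EDID specification (8-byte header 00 FF FF FF FF FF FF 00, all block bytes 
summing to zero mod 256). The function name and shape below are illustrative, 
not drm_edid.c code.

```c
#include <stddef.h>
#include <stdint.h>

#define EDID_BLOCK_SIZE 128

/* Validate a base EDID block independently of how it was fetched:
 * check the fixed 8-byte header, then the mod-256 checksum over the
 * whole block. */
static int edid_block_valid(const uint8_t *block)
{
	static const uint8_t header[8] =
		{ 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
	uint8_t sum = 0;
	size_t i;

	for (i = 0; i < 8; i++)
		if (block[i] != header[i])
			return 0;
	for (i = 0; i < EDID_BLOCK_SIZE; i++)
		sum += block[i];	/* uint8_t arithmetic is mod 256 */
	return sum == 0;
}
```

A DDC/I2C fetch helper, an fbdev probe, or a V4L2 transmitter driver could 
all feed blocks into the same parser behind a check like this.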

Has the embedded world seen any adoption of DisplayID?  I wrote a fair 
bit of a parser for it at one point [1] but I've yet to find a machine 
that's required it.

> ***  Split KMS and GPU Drivers ***
>
>    Goal: Split KMS and GPU drivers with an in-kernel API in between.
>
>    In most (all ?) SoCs, the GPU and the display controller are separate
>    devices. Splitting them into separate drivers would allow reusing the GPU
>    driver with different devices (e.g. using a single common PowerVR kernel
>    module with different display controller drivers). The same approach can be
>    used on the desktop for the multi-GPU case and the USB display case.
>
>    - OMAP already separates the GPU and DSS drivers, but the GPU driver is some
>    kind of DSS plugin. This isn't a long-term approach.
>    - Exynos also separates the GPU and FIMD drivers. It's hard to merge the
>    GPU driver into the display subsystem since, with UMP, the GPU has its
>    own memory management code.
>
>    One of the biggest challenges would be to get GPU vendors to use this new
>    model. ARM could help here, by making the Mali kernel driver split from the
>    display controller drivers. Once one vendor jumps onboard, others could have
>    a bigger incentive to follow.

Honestly I want this for Intel already, given how identical Poulsbo's 
display block is to gen3.

> ***  HDMI CEC Support ***
>
>    Goal: Support HDMI CEC and offer a userspace API for applications.
>
>    A new kernel API is needed and must be usable by KMS, V4L2 and possibly
>    LIRC. There's ongoing effort from Cisco to implement HDMI CEC support. Given
>    their background, V4L2 is their initial target. A proposal is available at
>    http://www.mail-archive.com/linux-media@vger.kernel.org/msg29241.html with a
>    sample implementation at
>    http://git.linuxtv.org/hverkuil/cisco.git/shortlog/refs/heads/cobalt-mainline
>    (drivers/media/video/adv7604.c and ad9389b.c).
>
>    In order to avoid API duplication, a new CEC subsystem is probably needed.
>    CEC could be modeled as a bus, or as a network device. With the network
>    device approach, we could have both kernel and userspace protocol handlers.

I'm not a huge fan of userspace protocol for this.  Seems like it'd just 
give people more license to do their own subtly-incompatible things that 
only work between devices of the same vendor.  Interoperability is the 
_whole_ point of CEC.  (Yes I know every vendor tries to spin it as 
their own magical branded thing, but I'd appreciate it if they grew up.)

[1] - 
http://cgit.freedesktop.org/xorg/xserver/tree/hw/xfree86/modes/xf86DisplayIDModes.c

- ajax

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-16 23:25     ` Laurent Pinchart
  (?)
@ 2012-02-18  0:56       ` Keith Packard
  -1 siblings, 0 replies; 49+ messages in thread
From: Keith Packard @ 2012-02-18  0:56 UTC (permalink / raw)
  To: Laurent Pinchart, Laurent Pinchart
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

On Fri, 17 Feb 2012 00:25:51 +0100, Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> ***  Synchronous pipeline changes ***
> 
>   Goal: Create an API to apply complex changes to a video pipeline atomically.
> 
>   Needed for complex camera use cases. On the DRM/KMS side, the approach is to
>   use one big ioctl to configure the whole pipeline.

This is the only credible approach for most desktop chips -- you must
have the whole configuration available before you can make any
commitment to supporting the requested modes.

>   One solution is a commit ioctl, through the media controller device, that
>   would be dispatched to entities internally with a prepare step and a commit
>   step.

The current plan for the i915 KMS code is to use a single ioctl -- the
application sends a buffer full of configuration commands down to the
kernel which can then figure out whether it can be supported or not.

The kernel will have to store the intermediate data until the commit
arrives anyways, and you still need a central authority in the kernel
controlling the final commit decision.
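As an illustration of the buffer-of-commands model described above, a commit 
payload could be laid out along these lines. All names here are hypothetical 
sketches, not a proposed ABI; the test-only flag just reflects the 
validate-before-commit idea.

```c
#include <stdint.h>

/* One variable-sized record per entity being reconfigured; the records
 * are packed back to back in the buffer the application submits. */
struct pipeline_config_entry {
	uint32_t entity_id;	/* which block this record configures */
	uint32_t size;		/* size of the payload that follows, bytes */
	/* entity-specific payload follows */
};

/* The single ioctl argument: a header pointing at the record buffer.
 * The kernel validates the whole set, and either applies it atomically
 * or rejects it without touching the hardware. */
struct pipeline_commit {
	uint32_t flags;
#define PIPELINE_COMMIT_TEST_ONLY	(1u << 0)	/* validate, don't apply */
	uint32_t num_entries;
	uint64_t entries_ptr;	/* userspace pointer to the records */
};
```

Keeping the pointer as a u64 keeps the layout identical for 32-bit and 64-bit 
userspace, which is the usual convention for new kernel ABIs.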

-- 
keith.packard@intel.com

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-17 19:42       ` Adam Jackson
@ 2012-02-18 17:53         ` Clark, Rob
  -1 siblings, 0 replies; 49+ messages in thread
From: Clark, Rob @ 2012-02-18 17:53 UTC (permalink / raw)
  To: Adam Jackson
  Cc: Laurent Pinchart, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Pawel Osciak, Magnus Damm, Marcus Lorentzon, dri-devel,
	Alexander Deucher, linux-media, Marek Szyprowski

On Fri, Feb 17, 2012 at 1:42 PM, Adam Jackson <ajax@redhat.com> wrote:
> On 2/16/12 6:25 PM, Laurent Pinchart wrote:
>
>>   Helper functions will be implemented in the subsystems to convert
>> between
>>   that generic structure and the various subsystem-specific structures.
>
>
> I guess.  I don't really see a reason not to unify the structs too, but then
> I don't have binary blobs to pretend to be ABI-compatible with.
>

This is just for where the timing struct is exposed to userspace.

BR,
-R

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-17  9:55       ` Daniel Vetter
@ 2012-02-20 16:09         ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 49+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-20 16:09 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Laurent Pinchart, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Pawel Osciak, Magnus Damm, Marcus Lorentzon, dri-devel,
	Alexander Deucher, Rob Clark, linux-media, Marek Szyprowski

On Fri, 17 Feb 2012, Daniel Vetter wrote:

> On Fri, Feb 17, 2012 at 12:25:51AM +0100, Laurent Pinchart wrote:
> > Hello everybody,
> > 
> > First of all, I would like to thank all the attendees for their participation 
> > in the mini-summit that helped make the meeting a success.
> > 
> > Here are my consolidated notes that cover both the Linaro Connect meeting and 
> > the ELC meeting. They're also available at 
> > http://www.ideasonboard.org/media/meetings/.
> 
> Looks like you've been all really busy ;-) A few quick comments below.
> 
> > Kernel Display and Video API Consolidation mini-summit at ELC 2012
> > ------------------------------------------------------------------
> 
> [snip]
> 
> > ***  Common video mode data structure and EDID parser ***
> > 
> >   Goal: Sharing an EDID parser between DRM/KMS, FBDEV and V4L2.
> > 
> >   The DRM EDID parser is currently the most advanced implementation and will
> >   be taken as a starting point.

I'm certainly absolutely in favour of creating a common EDID parser, and 
the DRM/KMS implementation might indeed be the most complete / advanced 
one, but at least back in 2010, as I was working on the sh-mobile HDMI 
driver, some functionality was still missing there, which I had to add to 
fbdev independently. Unless those features have been added to DRM/KMS 
since then, you might want to use the fbdev version. See

http://thread.gmane.org/gmane.linux.ports.arm.omap/55193/focus=55337

as well as possibly some other discussions from that period

http://marc.info/?l=linux-fbdev&r=1&b=201010&w=4

> >   Different subsystems use different data structures to describe video
> >   mode/timing information:
> > 
> >   - struct drm_mode_modeinfo in DRM/KMS
> >   - struct fb_videomode in FBDEV
> >   - struct v4l2_bt_timings in V4L2
> > 
> >   A new common video mode/timing data structure (struct media_video_mode_info,
> >   exact name is to be defined), not tied to any specific subsystem, is
> >   required to share the EDID parser. That structure won't be exported to
> >   userspace.
> > 
> >   Helper functions will be implemented in the subsystems to convert between
> >   that generic structure and the various subsystem-specific structures.
> > 
> >   The mode list is stored in the DRM connector in the EDID parser. A new mode
> >   list data structure can be added, or a callback function can be used by the
> >   parser to give modes one at a time to the caller.
> > 
> >   3D needs to be taken into account (this is similar to interlacing).
> > 
> >   Action points:
> >   - Laurent to work on a proposal. The DRM/KMS EDID parser will be reused.
> 
> I think we should include kernel cmdline video mode parsing here, afaik
> kms and fbdev are rather similar (won't work if they're too different,
> obviously).

This has been a pretty hot discussion topic wrt sh-mobile LCDC / HDMI 
too :-) The goal was to (1) take into account the driver's capabilities: not 
all standard HDMI modes were working properly, (2) use EDID data, and (3) 
give the user a chance to select a specific mode. Here too a generic 
solution would be very welcome, without breaking existing configurations, 
of course :)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-20 16:09         ` Guennadi Liakhovetski
@ 2012-02-20 16:19           ` David Airlie
  -1 siblings, 0 replies; 49+ messages in thread
From: David Airlie @ 2012-02-20 16:19 UTC (permalink / raw)
  To: Guennadi Liakhovetski
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Marcus Lorentzon,
	Pawel Osciak, Magnus Damm, dri-devel, Alexander Deucher,
	Rob Clark, Marek Szyprowski, linux-media, Daniel Vetter


> 
> I'm certainly absolutely in favour of creating a common EDID parser,
> and
> the DRM/KMS implementation might indeed be the most complete /
> advanced
> one, but at least back in 2010 as I was working on the sh-mobile HDMI
> driver, some functinality was still missing there, which I had to add
> to
> fbdev independently. Unless those features have been added to DRM /
> KMS
> since then you might want to use the fbdev version. See
> 
> http://thread.gmane.org/gmane.linux.ports.arm.omap/55193/focus=55337
> 
> as well as possibly some other discussions from that period
> 
> http://marc.info/?l=linux-fbdev&r=1&b=201010&w=4

One feature missing from the drm EDID parser doesn't mean the fbdev one is better in all cases.

Whoever takes over the merging process will have to check for missing bits anyway, to avoid regressions.

> > 
> > I think we should include kernel cmdline video mode parsing here,
> > afaik
> > kms and fbdev are rather similar (won't work if they're too
> > different,
> > obviously).
> 
> This has been a pretty hot discussion topic wrt sh-mobile LCDC / HDMI
> too:-) The goal was to (1) take into account driver's capabilities:
> not
> all standard HDMI modes were working properly, (2) use EDID data, (3)
> give
> the user a chance to select a specific mode. Also here a generic
> solution
> would be very welcome, without breaking existing configurations, of
> course:)

The reason the drm has a more enhanced command line parser is to allow
for multiple devices; otherwise it should parse mostly the same. I believe
I based the drm one directly on the fbdev one, plus connector specifiers.

Dave.


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-16 23:25     ` Laurent Pinchart
@ 2012-02-20 16:40       ` Guennadi Liakhovetski
  -1 siblings, 0 replies; 49+ messages in thread
From: Guennadi Liakhovetski @ 2012-02-20 16:40 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sumit Semwal, Jesse Barker, Jesse Barnes,
	Rob Clark, Pawel Osciak, Marek Szyprowski, Tomasz Stanislawski,
	Magnus Damm, Marcus Lorentzon, Alexander Deucher, linux-media,
	dri-devel, linux-fbdev

On Fri, 17 Feb 2012, Laurent Pinchart wrote:

[snip]

> ***  Synchronous pipeline changes ***
> 
>   Goal: Create an API to apply complex changes to a video pipeline atomically.
> 
>   Needed for complex camera use cases. On the DRM/KMS side, the approach is to
>   use one big ioctl to configure the whole pipeline.
> 
>   One solution is a commit ioctl, through the media controller device, that
>   would be dispatched to entities internally with a prepare step and a commit
>   step.
> 
>   Parameters to be committed need to be stored in a context. We can either use
>   one cross-device context, or per-device contexts that would then need to be
>   associated with the commit operation.
> 
>   Action points:
>   - Sakari will provide a proof-of-concept and/or proposal if needed.

I actually have been toying with a related idea, namely replacing the 
current ACTIVE / TRY configuration pair, which is not sufficiently clearly 
defined and too restrictive, with a more flexible concept of an arbitrary 
number of configuration contexts. The idea was to allow the user to create 
such contexts and use atomic commands to instruct the pipeline to switch 
between them. However, as I started writing down an RFC, this was exactly 
the point where I stopped: what defines a configuration, and in which order 
shall configuration commands be executed when switching between them?

In short, my idea was to allow contexts to contain any configuration 
options: not only geometry and pixel format, as TRY uses at the moment, but 
also any controls. The API would add commands like

	handle = context_alloc(mode);
	/*
	 * mode can be DEFAULT to start a new configuration, based on
	 * driver defaults, or CLONE to start with the currently active
	 * configuration
	 */
	context_configure(handle);
	/*
	 * all configuration commands from now on happen in the background
	 * and only affect the specified context
	 */
	/* perform any configuration */
	context_switch(handle);
	/* activate one of pre-configured contexts */

The problem, however, is how to store contexts and how to perform the 
switch. We would probably have to define a notion of a "complete 
configuration", which would consist of some generic parameters and, 
optionally, driver-specific ones. Then the drivers (in the downstream 
order?) would just be instructed to switch to a specific configuration, 
and each of them would then decide in which order they have to commit 
specific parameters. This assumes that, regardless of what state a 
device is currently in, switching to context X always produces the same 
result.

Alternative approaches, like storing each context as a sequence of 
user-provided configuration commands and playing them back when switching, 
would produce unpredictable results depending on the state before the 
switch, especially when using the CLONE context-creation mode.

Anyway, my tuppence to consider.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-17 18:46         ` Laurent Pinchart
@ 2012-02-22 16:03           ` James Simmons
  -1 siblings, 0 replies; 49+ messages in thread
From: James Simmons @ 2012-02-22 16:03 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Daniel Vetter, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Pawel Osciak, Magnus Damm, Marcus Lorentzon, dri-devel,
	Alexander Deucher, Rob Clark, linux-media, Marek Szyprowski


> > Imo we should ditch this - fb accel doesn't belong into the kernel. Even
> > on hw that still has a blitter for easy 2d accel without a complete 3d
> > state setup necessary, it's not worth it. Chris Wilson from our team once
> > played around with implementing fb accel in the kernel (i915 hw still has
> > a blitter engine in the latest generations). He quickly noticed that to
> > have decent speed, competitive with s/w rendering by the cpu he needs the
> > entire batch and buffer management stuff from userspace. And to really
> > beat the cpu, you need even more magic.
> > 
> > If you want fast 2d accel, use something like cairo.
> 
> Our conclusion on this is that we should not expose an explicit 2D 
> acceleration API at the kernel level. If really needed, hardware 2D 
> acceleration could be implemented as a DRM device to handle memory management, 
> commands ring setup, synchronization, ... but I'm not even sure if that's 
> worth it. I might not have conveyed it well in my notes.

Fbcon scrolling can be painful at HD or better modes. Fbcon needs 3 
possible accels: copyarea, imageblit, and fillrect. The first two could be 
hooked from the TTM layer. It's something I plan to experiment with to see 
if it's worth it.


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:03           ` James Simmons
@ 2012-02-22 16:24             ` Daniel Vetter
  -1 siblings, 0 replies; 49+ messages in thread
From: Daniel Vetter @ 2012-02-22 16:24 UTC (permalink / raw)
  To: James Simmons
  Cc: Laurent Pinchart, Daniel Vetter, Tomasz Stanislawski,
	linux-fbdev, Sakari Ailus, Pawel Osciak, Magnus Damm,
	Marcus Lorentzon, dri-devel, Alexander Deucher, Rob Clark,
	linux-media, Marek Szyprowski

On Wed, Feb 22, 2012 at 04:03:21PM +0000, James Simmons wrote:
> 
> > > Imo we should ditch this - fb accel doesn't belong into the kernel. Even
> > > on hw that still has a blitter for easy 2d accel without a complete 3d
> > > state setup necessary, it's not worth it. Chris Wilson from our team once
> > > played around with implementing fb accel in the kernel (i915 hw still has
> > > a blitter engine in the latest generations). He quickly noticed that to
> > > have decent speed, competitive with s/w rendering by the cpu he needs the
> > > entire batch and buffer management stuff from userspace. And to really
> > > beat the cpu, you need even more magic.
> > > 
> > > If you want fast 2d accel, use something like cairo.
> > 
> > Our conclusion on this is that we should not expose an explicit 2D 
> > acceleration API at the kernel level. If really needed, hardware 2D 
> > acceleration could be implemented as a DRM device to handle memory management, 
> > commands ring setup, synchronization, ... but I'm not even sure if that's 
> > worth it. I might not have conveyed it well in my notes.
> 
> Fbcon scrolling at be painful at HD or better modes. Fbcon needs 3 
> possible accels; copyarea, imageblit, and fillrect. The first two could be 
> hooked from the TTM layer. Its something I plan to experiment to see if 
> its worth it.

Let's bite into this ;-) I know that fbcon scrolling totally sucks on big
screens, but I also think it's a total waste of time to fix this. Imo
fbcon has 2 use-cases:
- display an oops.
- allow me to run fsck (or any other disaster-recovery stuff).

It can do that quite fine already.

Flamy yours, Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:24             ` Daniel Vetter
@ 2012-02-22 16:28               ` Rob Clark
  -1 siblings, 0 replies; 49+ messages in thread
From: Rob Clark @ 2012-02-22 16:28 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: James Simmons, Laurent Pinchart, Tomasz Stanislawski,
	linux-fbdev, Sakari Ailus, Pawel Osciak, Magnus Damm,
	Marcus Lorentzon, dri-devel, Alexander Deucher, linux-media,
	Marek Szyprowski

On Wed, Feb 22, 2012 at 10:24 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Wed, Feb 22, 2012 at 04:03:21PM +0000, James Simmons wrote:
>>
>> > > Imo we should ditch this - fb accel doesn't belong into the kernel. Even
>> > > on hw that still has a blitter for easy 2d accel without a complete 3d
>> > > state setup necessary, it's not worth it. Chris Wilson from our team once
>> > > played around with implementing fb accel in the kernel (i915 hw still has
>> > > a blitter engine in the latest generations). He quickly noticed that to
>> > > have decent speed, competitive with s/w rendering by the cpu he needs the
>> > > entire batch and buffer management stuff from userspace. And to really
>> > > beat the cpu, you need even more magic.
>> > >
>> > > If you want fast 2d accel, use something like cairo.
>> >
>> > Our conclusion on this is that we should not expose an explicit 2D
>> > acceleration API at the kernel level. If really needed, hardware 2D
>> > acceleration could be implemented as a DRM device to handle memory management,
>> > commands ring setup, synchronization, ... but I'm not even sure if that's
>> > worth it. I might not have conveyed it well in my notes.
>>
>> Fbcon scrolling at be painful at HD or better modes. Fbcon needs 3
>> possible accels; copyarea, imageblit, and fillrect. The first two could be
>> hooked from the TTM layer. Its something I plan to experiment to see if
>> its worth it.
>
> Let's bite into this ;-) I know that fbcon scrolling totally sucks on big
> screens, but I also think it's a total waste of time to fix this. Imo
> fbcon has 2 use-cases:
> - display an OOSP.
> - allow me to run fsck (or any other desaster-recovery stuff).
>
> It can do that quite fine already.

and for just fbcon scrolling, if you really wanted to you could
implement it by just shuffling pages around in a GART..

(although, someone, *please* re-write fbcon)

BR,
-R

> Flamy yours, Daniel
> --
> Daniel Vetter
> Mail: daniel@ffwll.ch
> Mobile: +41 (0)79 365 57 48
> --
> To unsubscribe from this list: send the line "unsubscribe linux-media" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:24             ` Daniel Vetter
  (?)
@ 2012-02-22 16:36               ` Chris Wilson
  -1 siblings, 0 replies; 49+ messages in thread
From: Chris Wilson @ 2012-02-22 16:36 UTC (permalink / raw)
  To: Daniel Vetter, James Simmons
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Marcus Lorentzon,
	Pawel Osciak, Magnus Damm, dri-devel, Alexander Deucher,
	Rob Clark, Marek Szyprowski, linux-media

On Wed, 22 Feb 2012 17:24:24 +0100, Daniel Vetter <daniel@ffwll.ch> wrote:
> On Wed, Feb 22, 2012 at 04:03:21PM +0000, James Simmons wrote:
> > Fbcon scrolling can be painful at HD or better modes. Fbcon needs 3
> > possible accels: copyarea, imageblit, and fillrect. The first two could be
> > hooked from the TTM layer. It's something I plan to experiment with to see
> > if it's worth it.
> 
> Let's bite into this ;-) I know that fbcon scrolling totally sucks on big
> screens, but I also think it's a total waste of time to fix this. Imo
> fbcon has 2 use-cases:
> - display an oops.
> - allow me to run fsck (or any other disaster-recovery stuff).
3. Show panics.

Ensuring that nothing prevents the switch to fbcon and displaying the
panic message is the reason why we haven't felt inclined to accelerate
fbcon - it just gets messy for no real gain.

For example: https://bugs.freedesktop.org/attachment.cgi?id=48933
which doesn't handle flushing of pending updates via the GPU when
writing with the CPU during interrupts (i.e. a panic).
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:36               ` Chris Wilson
@ 2012-02-22 16:40                 ` Clark, Rob
  -1 siblings, 0 replies; 49+ messages in thread
From: Clark, Rob @ 2012-02-22 16:40 UTC (permalink / raw)
  To: Chris Wilson
  Cc: Daniel Vetter, James Simmons, Tomasz Stanislawski, linux-fbdev,
	Sakari Ailus, Marcus Lorentzon, Pawel Osciak, Magnus Damm,
	dri-devel, Alexander Deucher, Marek Szyprowski, linux-media

On Wed, Feb 22, 2012 at 10:36 AM, Chris Wilson <chris@chris-wilson.co.uk> wrote:
> On Wed, 22 Feb 2012 17:24:24 +0100, Daniel Vetter <daniel@ffwll.ch> wrote:
>> On Wed, Feb 22, 2012 at 04:03:21PM +0000, James Simmons wrote:
>> > Fbcon scrolling can be painful at HD or better modes. Fbcon needs 3
>> > possible accels: copyarea, imageblit, and fillrect. The first two could be
>> > hooked from the TTM layer. It's something I plan to experiment with to see
>> > if it's worth it.
>>
>> Let's bite into this ;-) I know that fbcon scrolling totally sucks on big
>> screens, but I also think it's a total waste of time to fix this. Imo
>> fbcon has 2 use-cases:
>> - display an oops.
>> - allow me to run fsck (or any other disaster-recovery stuff).
> 3. Show panics.
>
> Ensuring that nothing prevents the switch to fbcon and displaying the
> panic message is the reason why we haven't felt inclined to accelerate
> fbcon - it just gets messy for no real gain.

and when doing 2d accel on a 3d core..  it basically amounts to
putting a shader compiler in the kernel.   Wheeee!

> For example: https://bugs.freedesktop.org/attachment.cgi?id=48933
> which doesn't handle flushing of pending updates via the GPU when
> writing with the CPU during interrupts (i.e. a panic).
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:03           ` James Simmons
@ 2012-02-22 17:00             ` Adam Jackson
  -1 siblings, 0 replies; 49+ messages in thread
From: Adam Jackson @ 2012-02-22 17:00 UTC (permalink / raw)
  To: James Simmons
  Cc: Laurent Pinchart, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Marcus Lorentzon, Pawel Osciak, Magnus Damm, dri-devel,
	Alexander Deucher, Rob Clark, Marek Szyprowski, linux-media

[-- Attachment #1: Type: text/plain, Size: 562 bytes --]

On Wed, 2012-02-22 at 16:03 +0000, James Simmons wrote:

> Fbcon scrolling can be painful at HD or better modes. Fbcon needs 3
> possible accels: copyarea, imageblit, and fillrect. The first two could be
> hooked from the TTM layer. It's something I plan to experiment with to see
> if it's worth it.

In my ideal world, the "framebuffer console" is something like vte under
wayland, which avoids this entirely.  Scrolling on early boot until you
manage to get into initramfs would still be slow, but whatever, that's a
debugging feature anyway.

- ajax

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:40                 ` Clark, Rob
@ 2012-02-22 17:26                   ` James Simmons
  -1 siblings, 0 replies; 49+ messages in thread
From: James Simmons @ 2012-02-22 17:26 UTC (permalink / raw)
  To: Clark, Rob
  Cc: Chris Wilson, Daniel Vetter, Tomasz Stanislawski, linux-fbdev,
	Sakari Ailus, Marcus Lorentzon, Pawel Osciak, Magnus Damm,
	dri-devel, Alexander Deucher, Marek Szyprowski, linux-media


> > Ensuring that nothing prevents the switch to fbcon and displaying the
> > panic message is the reason why we haven't felt inclined to accelerate
> > fbcon - it just gets messy for no real gain.
> 
> and when doing 2d accel on a 3d core..  it basically amounts to
> putting a shader compiler in the kernel.   Wheeee!

Yikes. I'm not suggesting that. In fact I doubt accelerating the imageblit
would be worth it due to the small size of the images being pushed. The
real cost is the copyarea, which is used for scrolling when no panning is
available.
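For scale, here is a rough user-space model of what an unaccelerated copyarea scroll costs (hypothetical sketch, not fbdev code; the real software path goes through cfb_copyarea): nearly the whole framebuffer is moved through the CPU on every scroll.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Unaccelerated scroll: shift everything below the first text row up
 * by one row with memmove, as fbcon's software copyarea effectively
 * does.  At 1920x1080, 32 bpp, that is roughly 8 MB per scroll. */
void sw_scroll(uint8_t *fb, size_t stride, size_t height, size_t row_px)
{
    memmove(fb, fb + row_px * stride, (height - row_px) * stride);
}
```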

> > For example: https://bugs.freedesktop.org/attachment.cgi?id=48933
> > which doesn't handle flushing of pending updates via the GPU when
> > writing with the CPU during interrupts (i.e. a panic).
> > -Chris
> >
> > --
> > Chris Wilson, Intel Open Source Technology Centre
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fbdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:40                 ` Clark, Rob
  (?)
  (?)
@ 2012-02-23  0:15                 ` Alan Cox
  -1 siblings, 0 replies; 49+ messages in thread
From: Alan Cox @ 2012-02-23  0:15 UTC (permalink / raw)
  To: Clark, Rob
  Cc: Chris Wilson, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Pawel Osciak, Marcus Lorentzon, Magnus Damm, dri-devel,
	Alexander Deucher, linux-media, Marek Szyprowski

> and when doing 2d accel on a 3d core..  it basically amounts to
> putting a shader compiler in the kernel.   Wheeee!

What I did for the GMA500 is to use the GTT to do scrolling by rewriting
the framebuffer GTT tables so they work as a circular buffer and doing a
bit of alignment of buffers.

The end result is faster than most accelerated 2D scrolls unsurprisingly.

Even faster would be to map enough of the start of the object on the end
of the range in repeat and just roll the frame buffer base. That would
get it down to a couple of 32bit I/O writes..
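That last variant reduces to modular arithmetic; a sketch under stated assumptions (function and parameter names invented for illustration, not GMA500 code):

```c
#include <stdint.h>

/* With the start of the scanout object also mapped after its end,
 * scrolling by `rows` scanlines is just advancing the scanout base
 * modulo the buffer size -- a single register write per scroll. */
uint32_t roll_base(uint32_t base, uint32_t rows, uint32_t stride,
                   uint32_t fb_size)
{
    return (base + rows * stride) % fb_size;
}
```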

Alan

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-22 16:28               ` Rob Clark
  (?)
@ 2012-02-23  7:34                 ` Michel Dänzer
  -1 siblings, 0 replies; 49+ messages in thread
From: Michel Dänzer @ 2012-02-23  7:34 UTC (permalink / raw)
  To: Rob Clark
  Cc: Daniel Vetter, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Marcus Lorentzon, Pawel Osciak, Magnus Damm, dri-devel,
	Alexander Deucher, Marek Szyprowski, linux-media

On Mit, 2012-02-22 at 10:28 -0600, Rob Clark wrote: 
> On Wed, Feb 22, 2012 at 10:24 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > On Wed, Feb 22, 2012 at 04:03:21PM +0000, James Simmons wrote:
> >>
> >> > > Imo we should ditch this - fb accel doesn't belong into the kernel. Even
> >> > > on hw that still has a blitter for easy 2d accel without a complete 3d
> >> > > state setup necessary, it's not worth it. Chris Wilson from our team once
> >> > > played around with implementing fb accel in the kernel (i915 hw still has
> >> > > a blitter engine in the latest generations). He quickly noticed that to
> >> > > have decent speed, competitive with s/w rendering by the cpu he needs the
> >> > > entire batch and buffer management stuff from userspace. And to really
> >> > > beat the cpu, you need even more magic.
> >> > >
> >> > > If you want fast 2d accel, use something like cairo.
> >> >
> >> > Our conclusion on this is that we should not expose an explicit 2D
> >> > acceleration API at the kernel level. If really needed, hardware 2D
> >> > acceleration could be implemented as a DRM device to handle memory management,
> >> > commands ring setup, synchronization, ... but I'm not even sure if that's
> >> > worth it. I might not have conveyed it well in my notes.
> >>
> >> Fbcon scrolling can be painful at HD or better modes. Fbcon needs 3
> >> possible accels: copyarea, imageblit, and fillrect. The first two could be
> >> hooked from the TTM layer. It's something I plan to experiment with to see
> >> if it's worth it.
> >
> > Let's bite into this ;-) I know that fbcon scrolling totally sucks on big
> > screens, but I also think it's a total waste of time to fix this. Imo
> > fbcon has 2 use-cases:
> > - display an oops.
> > - allow me to run fsck (or any other disaster-recovery stuff).
> >
> > It can do that quite fine already.
> 
> and for just fbcon scrolling, if you really wanted to you could
> implement it by just shuffling pages around in a GART..

Keep in mind there are still discrete GPUs :), where scanning out from
anything but VRAM may not be feasible, and direct CPU access to
(especially reads from) VRAM tends to be very slow.

However, for fbcon that can be addressed in each driver (as is done e.g.
in nouveau), and has nothing to do with any userspace interface.


-- 
Earthling Michel Dänzer           |                   http://www.amd.com
Libre software enthusiast         |          Debian, X and DRI developer

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-16 23:25     ` Laurent Pinchart
@ 2012-03-02 14:23       ` Heiko Stübner
  -1 siblings, 0 replies; 49+ messages in thread
From: Heiko Stübner @ 2012-03-02 14:23 UTC (permalink / raw)
  To: Laurent Pinchart
  Cc: Sakari Ailus, Sumit Semwal, Jesse Barker, Jesse Barnes,
	Rob Clark, Pawel Osciak, Marek Szyprowski, Tomasz Stanislawski,
	Magnus Damm, Marcus Lorentzon, Alexander Deucher, linux-media,
	dri-devel, linux-fbdev

Am Freitag, 17. Februar 2012, 00:25:51 schrieb Laurent Pinchart:
> Hello everybody,
> 
> First of all, I would like to thank all the attendees for their
> participation in the mini-summit that helped make the meeting a success.
> 
> Here are my consolidated notes that cover both the Linaro Connect meeting
> and the ELC meeting. They're also available at
> http://www.ideasonboard.org/media/meetings/.
> 
> 
> Kernel Display and Video API Consolidation mini-summit at ELC 2012
> ------------------------------------------------------------------
> 
[...]
> ***  Display Panel Drivers ***
> 
>   Goal: Sharing display panel drivers between display controllers from
>   different vendors.
> 
>   Panels are connected to the display controller through a standard bus
> with a control channel (DSI and eDP are two major such buses). Various
> vendors have created proprietary interfaces for panel drivers:
> 
>   - TI on OMAP (drivers/video/omap2/displays).
>   - Samsung on Exynos (drivers/video/exynos).
>   - ST-Ericsson on MCDE
> (http://www.igloocommunity.org/gitweb/?p=kernel/igloo-
> kernel.git;a=tree;f=drivers/video/mcde)
>   - Renesas is working on a similar interface for SH Mobile.
> 
>   HDMI-on-DSI transmitters, while not panels per-se, can be supported
> through the same API.
> 
>   A Low level Linux Display Framework (https://lkml.org/lkml/2011/9/15/107)
>   has been proposed and overlaps with this topic.
> 
>   For DSI, a possible abstraction level would be a DCS (Display Command
> Set) bus. Panels and/or HDMI-on-DSI transmitter drivers would be
> implemented as DCS drivers.
> 
>   Action points:
>   - Marcus to work on a proposal for DCS-based panels (with Tomi Valkeinen
> and Morimoto-san).

It would also be interesting to see something similar for MIPI-DBI (type B for
me) [aka rfbi on OMAP], as most e-paper displays use this to transmit data and
currently use half-baked interfaces.

So hopefully I'll be able to follow the discussion and can then try to apply
your findings to the DBI case.

Heiko

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-03-02 14:23       ` Heiko Stübner
  (?)
@ 2012-03-02 15:56       ` Marcus Lorentzon
  -1 siblings, 0 replies; 49+ messages in thread
From: Marcus Lorentzon @ 2012-03-02 15:56 UTC (permalink / raw)
  To: Heiko Stübner
  Cc: Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	dri-devel, Magnus Damm, Alexander Deucher, Rob Clark,
	linux-media, Marek Szyprowski


[-- Attachment #1.1: Type: text/plain, Size: 1219 bytes --]

On 2 March 2012 15:23, Heiko Stübner <heiko@sntech.de> wrote:

> Am Freitag, 17. Februar 2012, 00:25:51 schrieb Laurent Pinchart:
> [...]
> >   For DSI, a possible abstraction level would be a DCS (Display Command
> > Set) bus. Panels and/or HDMI-on-DSI transmitter drivers would be
> > implemented as DCS drivers.
> >
> >   Action points:
> >   - Marcus to work on a proposal for DCS-based panels (with Tomi
> Valkeinen
> > and Morimoto-san).
>
> It would also be interesting to see something similar for MIPI-DBI (type B
> for
> me) [aka rfbi on omap], as most epaper displays use this to transmit data
> and
> currently use half-baked interfaces.
>
> So hopefully I'll be able to follow the discussion and can then try to
> convert
> your findings to the dbi case.
>
> Heiko
>

The idea is to abstract the DSI/DBI-2 type of transport and focus on the
DCS command set level. The driver should be able to select between DBI and
DSI by providing some "struct dcs_device { ifc = DSI/DBI2, ... }" data to
the DCS bus.

Something like a stripped version of
www.igloocommunity.org/gitweb/?p=kernel/igloo-kernel.git;a=blob;f=include/video/mcde.h#l113,
but for DBI/DSI only.
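As a minimal sketch of what such a DCS bus device could look like (all names here are hypothetical; the recording transport stands in for real DSI/DBI-2 hardware):

```c
#include <stddef.h>

enum dcs_ifc { DCS_IFC_DSI, DCS_IFC_DBI2 };

/* Hypothetical DCS bus device: the panel driver fills in `ifc` and
 * talks DCS; the bus maps writes onto the chosen transport. */
struct dcs_device {
    enum dcs_ifc ifc;
    unsigned char last_cmd; /* illustration only: records commands */
};

/* Illustration transport: record the command instead of driving hw. */
int dcs_write(struct dcs_device *dev, unsigned char cmd,
              const unsigned char *data, size_t len)
{
    (void)data; (void)len;
    dev->last_cmd = cmd;
    return 0;
}

/* set_display_on is opcode 0x29 in the MIPI DCS command set. */
int dcs_set_display_on(struct dcs_device *dev)
{
    return dcs_write(dev, 0x29, NULL, 0);
}
```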

/BR
/Marcus

[-- Attachment #1.2: Type: text/html, Size: 1701 bytes --]

[-- Attachment #2: Type: text/plain, Size: 159 bytes --]

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-02-20 16:09         ` Guennadi Liakhovetski
@ 2012-05-17  2:46           ` Jun Nie
  -1 siblings, 0 replies; 49+ messages in thread
From: Jun Nie @ 2012-05-17  2:46 UTC (permalink / raw)
  To: Guennadi Liakhovetski, Laurent Pinchart
  Cc: Daniel Vetter, Tomasz Stanislawski, linux-fbdev, Sakari Ailus,
	Pawel Osciak, Magnus Damm, Marcus Lorentzon, dri-devel,
	Alexander Deucher, Rob Clark, linux-media, Marek Szyprowski

    Was there any discussion of HDCP at the summit? HDCP is tightly
coupled with HDMI and DVI and should be managed together with the
transmitter, but there is no code to handle HDCP in DRM/FB/V4L in the
latest kernel. Any thoughts on HDCP? Or do you think it is too risky
to support it in the kernel? Thanks for your comments!

Jun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes
  2012-05-17  2:46           ` Jun Nie
@ 2012-05-17  7:53             ` Hans Verkuil
  -1 siblings, 0 replies; 49+ messages in thread
From: Hans Verkuil @ 2012-05-17  7:53 UTC (permalink / raw)
  To: Jun Nie
  Cc: Guennadi Liakhovetski, Laurent Pinchart, Daniel Vetter,
	Tomasz Stanislawski, linux-fbdev, Sakari Ailus, Pawel Osciak,
	Magnus Damm, Marcus Lorentzon, dri-devel, Alexander Deucher,
	Rob Clark, linux-media, Marek Szyprowski

On Thu May 17 2012 04:46:37 Jun Nie wrote:
>     Was there any discussion of HDCP at the summit? HDCP is tightly
> coupled with HDMI and DVI and should be managed together with the
> transmitter, but there is no code to handle HDCP in DRM/FB/V4L in the
> latest kernel. Any thoughts on HDCP? Or do you think it is too risky
> to support it in the kernel? Thanks for your comments!

There is no risk in supporting it in the kernel; the risk is all on the
implementer (usually by having to lock down the system, preventing access
to the box). You'd better read the HDCP license very carefully before
deciding to use HDCP under Linux!

I'm working on V4L HDMI receivers and transmitters myself, but not on HDCP.
But I'd be happy to review/comment on proposals for adding HDCP support.

Note that there is very little work to be done to add this for simple
receivers and transmitters. The hard part will be supporting repeaters.

For simple receivers, all you need in V4L2 is a flag telling you that the
received video was encrypted; for a transmitter, I think you just need a
control to turn encryption on or off (AFAIK; I'd have to verify that
statement regarding the transmitter to be 100% certain). All the actual
encryption and decryption is handled by the receiver/transmitter hardware,
at least on the hardware that I have seen.

Repeaters are a lot harder as you have to handle key exchanges. I don't know
off-hand what that would involve API-wise in V4L2.

Regards,

	Hans

^ permalink raw reply	[flat|nested] 49+ messages in thread


end of thread, other threads:[~2012-05-17  7:53 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <201201171126.42675.laurent.pinchart@ideasonboard.com>
     [not found] ` <1654816.MX2JJ87BEo@avalon>
2012-02-16 23:25   ` Kernel Display and Video API Consolidation mini-summit at ELC 2012 - Notes Laurent Pinchart
2012-02-16 23:25     ` Laurent Pinchart
2012-02-17  9:55     ` Daniel Vetter
2012-02-17  9:55       ` Daniel Vetter
2012-02-17 18:46       ` Laurent Pinchart
2012-02-17 18:46         ` Laurent Pinchart
2012-02-22 16:03         ` James Simmons
2012-02-22 16:03           ` James Simmons
2012-02-22 16:24           ` Daniel Vetter
2012-02-22 16:24             ` Daniel Vetter
2012-02-22 16:28             ` Rob Clark
2012-02-22 16:28               ` Rob Clark
2012-02-23  7:34               ` Michel Dänzer
2012-02-23  7:34                 ` Michel Dänzer
2012-02-23  7:34                 ` Michel Dänzer
2012-02-22 16:36             ` Chris Wilson
2012-02-22 16:36               ` Chris Wilson
2012-02-22 16:36               ` Chris Wilson
2012-02-22 16:40               ` Clark, Rob
2012-02-22 16:40                 ` Clark, Rob
2012-02-22 17:26                 ` James Simmons
2012-02-22 17:26                   ` James Simmons
2012-02-23  0:15                 ` Alan Cox
2012-02-22 17:00           ` Adam Jackson
2012-02-22 17:00             ` Adam Jackson
2012-02-20 16:09       ` Guennadi Liakhovetski
2012-02-20 16:09         ` Guennadi Liakhovetski
2012-02-20 16:19         ` David Airlie
2012-02-20 16:19           ` David Airlie
2012-05-17  2:46         ` Jun Nie
2012-05-17  2:46           ` Jun Nie
2012-05-17  7:53           ` Hans Verkuil
2012-05-17  7:53             ` Hans Verkuil
2012-02-17 11:07     ` Semwal, Sumit
2012-02-17 11:19       ` Semwal, Sumit
2012-02-17 18:49       ` Laurent Pinchart
2012-02-17 18:49         ` Laurent Pinchart
2012-02-17 19:42     ` Adam Jackson
2012-02-17 19:42       ` Adam Jackson
2012-02-18 17:53       ` Clark, Rob
2012-02-18 17:53         ` Clark, Rob
2012-02-18  0:56     ` Keith Packard
2012-02-18  0:56       ` Keith Packard
2012-02-18  0:56       ` Keith Packard
2012-02-20 16:40     ` Guennadi Liakhovetski
2012-02-20 16:40       ` Guennadi Liakhovetski
2012-03-02 14:23     ` Heiko Stübner
2012-03-02 14:23       ` Heiko Stübner
2012-03-02 15:56       ` Marcus Lorentzon
