From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Vetter
Subject: Re: [PATCH] drm: add FOURCC formats for compute dma_buf interop.
Date: Wed, 19 Mar 2014 11:31:10 +0100
References: <1394819961-21537-1-git-send-email-gwenole.beauchesne@intel.com>
 <20140314215244.GE30571@phenom.ffwll.local>
 <20140315112843.GG30571@phenom.ffwll.local>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Gwenole Beauchesne, Tom Cooksey, Jesse Barker, "Clark, Rob"
Cc: "mesa-dev@lists.freedesktop.org", dri-devel
List-Id: dri-devel@lists.freedesktop.org

On Wed, Mar 19, 2014 at 7:30 AM, Gwenole Beauchesne wrote:
> 2014-03-15 12:28 GMT+01:00 Daniel Vetter:
>> On Sat, Mar 15, 2014 at 05:41:05AM +0100, Gwenole Beauchesne wrote:
>>> Hi,
>>>
>>> 2014-03-14 22:52 GMT+01:00 Daniel Vetter:
>>> > On Fri, Mar 14, 2014 at 06:59:21PM +0100, Gwenole Beauchesne wrote:
>>> >> This is a follow-up to:
>>> >> http://lists.freedesktop.org/archives/mesa-dev/2014-March/055742.html
>>> >>
>>> >> Add formats meant to convey a "compute" dimension when a DRM fourcc
>>> >> format is needed for dma_buf interop (EGL, OpenCL).
>>> >>
>>> >> Intended FOURCC format: ('T', <n>, <type>, <size>).
>>> >> - <n>: number of elements in the tuple. Range: [0..3].
>>> >> - <type>: type of each element. Values: {'_','I','U','F'},
>>> >>   respectively for normalized to [0..1] range, signed integers,
>>> >>   unsigned integers, floating-point.
>>> >> - <size>: size of each element in bytes. Values: {1, 2, 4, 8}.
>>> >>
>>> >> All entities are represented in native-endian byte-order in memory.
>>> >> For example, the 'T2F4' format would represent the (float, float)
>>> >> tuple where elements are stored in little-endian byte-order on x86.
>>> >>
>>> >> Signed-off-by: Gwenole Beauchesne
>>> >
>>> > So this fourcc stuff started out as pixel formats for the ADDFB2 ioctl,
>>> > i.e. stuff that can be displayed. Hence my first question: How can we
>>> > display these bastards here?
>>>
>>> It's not meant to be displayed. The idea was maybe we could detect the
>>> fourcc (e.g. (fourcc & 0xff) == 'T') and reject the buffer accordingly
>>> when submitted to one of the display ioctls?
>>
>> Well we already have explicit lists for all that, so no need to add a pile
>> of formats just to be able to reject them ;-)
>>
>>> > So given all this: Why do we need this in the kernel's header? It sounds
>>> > like purely a userspace business with no need at all to enshrine this
>>> > into kernel ABI headers. Note that e.g. the mesa import/export interfaces
>>> > have their own fourcc #defines for exactly this reason.
>>>
>>> I could have stuffed everything in gbm.h for instance, or another new
>>> header, but the EXT_dma_buf_import extension actually mentions
>>> drm_fourcc.h. So, that's why I finally moved the definitions there.
>>> :)
>>
>> That's a bit unfortunate ... But the spec also clearly states "as used by
>> the drm_mode_fb_cmd2 ioctl". And imo we can't make a case that this is
>> true.
>>
>>> What would be the better place? Can we make the userspace libdrm
>>> version of the file live totally unsynchronized from the kernel
>>> headers then?
>>
>> I think the right approach would be to reference the specification and also
>> allow other fourcc codes for non-display-related buffer layouts, maybe in
>> a different namespace. This entire fourcc stuff was only done since it was
>> a somewhat ill-defined "standard" for all kinds of YUV buffers. We've
>> already had to massively pimp it to make it work properly with RGB.
>>
>> Pimping this even further to support all kinds of random compute/gl
>> buffers sounds ill-advised.
>> I think we need to steal a new namespace from
>> some existing standard for all this, maybe ogl or ocl has something? Ofc
>> that means creating a new dma_buf_import attribute for eglCreateImageKHR
>> which could be used instead of DRM_FOURCC_EXT.
>
> Thinking further about it, the patch is totally useless. It should be
> enough to amend the EXT_image_dma_buf_import spec to allow for using
> the GL format/type enums instead... Now, what are the chances to have
> the required changes appear in a new revision of the spec? :)
>
> I'd suggest the following formalism:
> - If the EGL_LINUX_DRM_FOURCC_EXT attribute is 0 (zero), then a GL
>   format/type pair needs to be supplied to eglCreateImageKHR instead.
> - Accepted as attributes to eglCreateImageKHR would be EGL_GL_FORMAT_EXT
>   / EGL_GL_TYPE_EXT, where the EGL_GL_FORMAT_EXT attribute value would be
>   GL_{RED,RG,RGB,RGBA} as usual, and the EGL_GL_TYPE_EXT value would be
>   GL_{{UNSIGNED_,}{BYTE,SHORT,INT},{HALF_,}FLOAT}.
>
> An alternative would be to provide a unique "internal format", but this
> would complicate implementations' sanity checks (too-large switch cases).
>
> WDYT?

Sounds imo sane, but my useful knowledge sharply drops off a steep cliff
as soon as we leave the kernel. Pulling in the original authors of the
spec. Maybe we can just add this as a revision/clarification ... Iirc
Jesse Barker isn't at Linaro any more, but I don't have his latest mail.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch