* Re: New "xf86-video-armsoc" DDX driver
       [not found] <4fba0034.e1d9440a.7f33.0bfeSMTPIN_ADDED@mx.google.com>
@ 2012-05-21  8:55 ` Dave Airlie
  2012-05-21  9:03   ` [Linaro-mm-sig] " Daniel Vetter
  0 siblings, 1 reply; 4+ messages in thread
From: Dave Airlie @ 2012-05-21  8:55 UTC (permalink / raw)
  To: Tom Cooksey; +Cc: linaro-mm-sig, xorg-devel, dri-devel

>
> For the last few months we (ARM MPD... "The Mali guys") have been working on
> getting X.Org up and running with Mali T6xx (ARM's next-generation GPU IP).
> The approach is very similar (well identical I think) to how things work on
> OMAP: We use a DRM driver to manage the display controller via KMS. The KMS
> driver also allocates both scan-out and pixmap/back buffers via the
> DRM_IOCTL_MODE_CREATE_DUMB ioctl which is internally implemented with GEM.
> When returning buffers to DRI clients, the x-server uses flink to get a
> global handle to a buffer which it passes back to the DRI client (in our
> case the Mali-T600 X11 EGL winsys). The client then uses the new PRIME
> ioctls to export the GEM buffer it received from the x-server to a dma_buf
> fd. This fd is then passed into the T6xx kernel driver via our own job
> dispatch user/kernel API (we're not using DRM for driving the GPU, only the
> display controller).

So using dumb buffers in this way is probably a bit of an abuse, since dumb is
defined to provide buffers that are not to be used with acceleration hw. When we
allocate dumb buffers, we can't know what special hw layouts (tiling etc.) are
required for optimal accel performance, and the logic to work that out is rarely generic.

>
> http://git.linaro.org/gitweb?p=arm/xorg/driver/xf86-video-armsoc.git;a=summary
>
> Note: When we originally spoke to Rob Clark about this, he suggested we take
> the already-generic xf86-video-modesetting and just add the dri2 code to it.
> This is indeed how we started out, however as we progressed it became clear
> that the majority of the code we wanted was in the omap driver and were
> having to work fairly hard to keep some of the original modesetting code.
> This is why we've now changed tactic and just forked the OMAP driver,
> something Rob is more than happy for us to do.

It does seem like porting to -modesetting, and maybe cleaning modesetting up
if it needs it, would be preferable. The modesetting driver is pretty much
just a make-it-work port of the radeon/nouveau/intel "shared" code.

> One thing the DDX driver isn't doing yet is making use of 2D hw blocks. In
> the short-term, we will simply create a branch off of the "generic" master
> for each SoC and add 2D hardware support there. We do however want a more
> permanent solution which doesn't need a separate branch per SoC. Some of the
> suggested solutions are:
>
> * Add a new generic DRM ioctl API for larger 2D operations (I would imagine
> small blits/blends would be done in SW).

Not going to happen: again, the hw isn't generic in this area. Some hw requires
3D engines to do 2D ops, some hw has limitations around overlapping blits etc.,
and finally it breaks the rule about no generic ioctls for acceleration operations.

> * Use SW rendering for everything other than solid blits and use v4l2's
> blitting API for those (importing/exporting buffers to be blitted using
> dma_buf). The theory here is that most UIs are rendered with GLES and so you
> only need 2D hardware for blits. I think we'll prototype this approach on
> Exynos.

Seems a bit over the top.

> * Define a new x-server sub-module interface to allow a separate .so 2D
> driver to be loaded (this is the approach the current OMAP DDX uses).

This seems the sanest.

I haven't time this week to review the code, but I'll try and take a look when
time permits.

Dave.


* Re: [Linaro-mm-sig] New "xf86-video-armsoc" DDX driver
  2012-05-21  8:55 ` New "xf86-video-armsoc" DDX driver Dave Airlie
@ 2012-05-21  9:03   ` Daniel Vetter
  2012-05-24 16:21     ` Tom Cooksey
       [not found]     ` <20120521090328.GA4970-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
  0 siblings, 2 replies; 4+ messages in thread
From: Daniel Vetter @ 2012-05-21  9:03 UTC (permalink / raw)
  To: Dave Airlie; +Cc: linaro-mm-sig, xorg-devel, dri-devel

On Mon, May 21, 2012 at 09:55:06AM +0100, Dave Airlie wrote:
> > * Define a new x-server sub-module interface to allow a separate .so 2D
> > driver to be loaded (this is the approach the current OMAP DDX uses).
> 
> This seems the sanest.

Or go the Intel glamor route and stitch together somewhat generic 2D
accel code on top of GL. That should give you reasonable (albeit likely
not stellar) X render performance.
-Daniel
-- 
Daniel Vetter
Mail: daniel@ffwll.ch
Mobile: +41 (0)79 365 57 48



* RE: [Linaro-mm-sig] New "xf86-video-armsoc" DDX driver
  2012-05-21  9:03   ` [Linaro-mm-sig] " Daniel Vetter
@ 2012-05-24 16:21     ` Tom Cooksey
       [not found]     ` <20120521090328.GA4970-dv86pmgwkMBes7Z6vYuT8azUEOm+Xw19@public.gmane.org>
  1 sibling, 0 replies; 4+ messages in thread
From: Tom Cooksey @ 2012-05-24 16:21 UTC (permalink / raw)
  To: 'Daniel Vetter', Dave Airlie; +Cc: linaro-mm-sig, xorg-devel, dri-devel



> -----Original Message-----
> From: Daniel Vetter [mailto:daniel.vetter@ffwll.ch] On Behalf Of Daniel
> Vetter
> Sent: 21 May 2012 10:04
> To: Dave Airlie
> Cc: Tom Cooksey; linaro-mm-sig@lists.linaro.org; xorg-
> devel@lists.x.org; dri-devel@lists.freedesktop.org
> Subject: Re: [Linaro-mm-sig] New "xf86-video-armsoc" DDX driver
> 
> On Mon, May 21, 2012 at 09:55:06AM +0100, Dave Airlie wrote:
> > > * Define a new x-server sub-module interface to allow a separate
> > > .so 2D driver to be loaded (this is the approach the current
> > > OMAP DDX uses).
> >
> > This seems the sanest.
> 
> Or go the intel glamour route and stitch together a somewhat generic 2d
> accel code on top of GL. That should give you reasonable (albeit likely
> not stellar) X render performance.
> -Daniel

I'm not sure that would perform well on a tile-based deferred renderer
like Mali. To perform well, we need to gather an entire frame's worth
of rendering/draw-calls before passing them to the GPU to render. I
believe this is not the typical use-case of EXA? How much of the
framebuffer is re-drawn between flushes?


Cheers,

Tom

