From: Thomas Hellstrom
Subject: Re: Fence, timeline and android sync points
Date: Fri, 15 Aug 2014 08:54:38 +0200
Message-ID: <53EDAEAE.5040300@shipmail.org>
In-Reply-To: <20140814191505.GH2000@gmail.com>
To: Jerome Glisse
Cc: dri-devel, Ben Skeggs

On 08/14/2014 09:15 PM, Jerome Glisse wrote:
> On Thu, Aug 14, 2014 at 08:47:16PM +0200, Daniel Vetter wrote:
>> On Thu, Aug 14, 2014 at 8:18 PM, Jerome Glisse wrote:
>>> Sucks because you cannot do weird synchronization like the one I depicted
>>> in another mail in this thread, and as long as the cmdbuf_ioctl does not
>>> give you a fence|syncpt you cannot do such things cleanly, in a
>>> non-hackish way.
>> Actually i915 will soon be able to do that.
> So you will return a fence|syncpoint with each cmdbuf_ioctl?
>
>>> Sucks because you have a fence object per buffer object and thus overhead
>>> grows with the number of objects. Not even mentioning fence lifetime
>>> issues.
>>>
>>> Sucks because sub-buffer allocation is just one of many tricks that cannot
>>> be achieved properly and cleanly with implicit sync.
>>>
>>> ...
>> Well, I've heard all those reasons and I'm well aware of them. The problem
>> is that with current hardware the kernel needs to know, for each buffer,
>> how long it needs to be kept around, since the hw just can't do page
>> faulting. Yeah, you can pin them, but for a UMA design that doesn't go
>> down well with folks.
> I am not thinking with fancy hw in mind; on the contrary, I thought about
> all this with the crappiest hw I could think of in mind.
>
> Yes, you can get rid of fences and not have to pin memory with current hw.
> What matters for unpinning is to know that all hw blocks are done using the
> memory. This is easily achievable with your beloved seqno. Have one seqno
> per driver (one driver can have different blocks: 3d, video decoding, crtc,
> ...). Each time a buffer is used as part of a command on one block,
> increment the common seqno and tag the buffer with that number. Have each
> hw block write the latest seqno that is done to a per-block location. Now,
> to determine if a buffer is done, compare the buffer seqno with the max of
> all the signaled seqnos of all blocks.
>
> Costs one uint32 per buffer and a simple, lockless comparison to check the
> status of a buffer.

Hmm? The trivial and first use of fence objects in the Linux DRM was
triggered by the fact that a 32-bit seqno wraps pretty quickly and a 32-bit
solution just can't be made robust. Now, a 64-bit seqno will probably be
robust for the foreseeable future, but when it comes to implementing that on
32-bit hardware and comparing it to a simple fence object approach,

/Thomas
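
For concreteness, the per-block seqno scheme Jerome describes above might
look roughly like the sketch below. This is a minimal illustration under the
mail's own assumptions, not code from any real driver; the names
(gpu_driver, gpu_buffer, buffer_mark_used, buffer_is_idle, NUM_BLOCKS) are
made up for the example.

#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 3	/* e.g. 3d, video decoding, crtc */

struct gpu_driver {
	uint32_t seqno;			/* common, driver-wide counter */
	uint32_t signaled[NUM_BLOCKS];	/* latest seqno each block completed,
					 * written by the block itself */
};

struct gpu_buffer {
	uint32_t seqno;		/* tag: seqno of the last command using it */
};

/* On submission: bump the common seqno and tag the buffer with it. */
static void buffer_mark_used(struct gpu_driver *drv, struct gpu_buffer *buf)
{
	buf->seqno = ++drv->seqno;
}

/*
 * Lockless idle check as described in the mail: the buffer is done once
 * the max of the per-block signaled seqnos has reached the buffer's tag.
 */
static bool buffer_is_idle(const struct gpu_driver *drv,
			   const struct gpu_buffer *buf)
{
	uint32_t max = 0;

	for (int i = 0; i < NUM_BLOCKS; i++)
		if (drv->signaled[i] > max)
			max = drv->signaled[i];

	return max >= buf->seqno;
}

Note that the plain unsigned ">=" in buffer_is_idle is exactly where the
wraparound concern in Thomas's reply bites: once the 32-bit counter wraps,
the comparison starts giving wrong answers.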
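The classic wrap-tolerant workaround is the signed-difference idiom sketched
below (seqno_passed is a hypothetical name for it). It is only correct while
the two counters stay less than 2^31 apart, which is the limit behind the
point that a 32-bit scheme can't be made fully robust.

#include <stdbool.h>
#include <stdint.h>

/*
 * Wrap-tolerant "has 'signaled' reached 'wanted'?" test. The signed
 * difference is correct only while the two values are less than 2^31
 * apart; beyond that window a 32-bit counter cannot be compared
 * reliably at all.
 */
static bool seqno_passed(uint32_t signaled, uint32_t wanted)
{
	return (int32_t)(signaled - wanted) >= 0;
}

For example, a buffer tagged long ago will compare as "not yet passed" again
once the signaled counter has advanced past its tag by more than 2^31, so a
driver has to retire buffers promptly or fall back to something like a fence
object.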