Date: Thu, 28 Apr 2016 16:38:51 +0200
From: Daniel Vetter
To: Gustavo Padovan, Daniel Stone, Greg Hackmann, Ville Syrjälä,
	Riley Andrews, dri-devel, Linux Kernel Mailing List,
	Arve Hjønnevåg, John Harrison
Subject: Re: [RFC v2 5/8] drm/fence: add in-fences support
Message-ID: <20160428143851.GC5784@phenom.ffwll.local>
In-Reply-To: <20160428143644.GA3496@joana>

On Thu, Apr 28, 2016 at 11:36:44AM -0300, Gustavo Padovan wrote:
> 2016-04-27 Daniel Stone:
> 
> > Hi,
> > 
> > On 26 April 2016 at 21:48, Greg Hackmann wrote:
> > > On 04/26/2016 01:05 PM, Daniel Vetter wrote:
> > >> On Tue, Apr 26, 2016 at 09:55:06PM +0300, Ville Syrjälä wrote:
> > >>> What are they doing that they can't stuff the fences into an
> > >>> array instead of props?
> > >>
> > >> The hw composer interface is one in-fence per plane. That's
> > >> really the major reason why the kernel interface is built to
> > >> match. And I really don't think we should diverge just because
> > >> we have a slightly different color preference ;-)
> > >
> > > The relationship between layers and fences is only fuzzy and
> > > indirect though. The relationship is really between the buffer
> > > you're displaying on that layer, and the fence representing the
> > > work done to render into that buffer. SurfaceFlinger just happens
> > > to bundle them together inside the same struct hwc_layer_1 as an
> > > API convenience.
> > 
> > Right, and when using implicit fencing, this comes as a plane
> > property, by virtue of plane -> fb -> buffer -> fence.
> > 
> > > Which is kind of splitting hairs as long as you have a 1-to-1
> > > relationship between layers and DRM planes. But that's not always
> > > the case.
> > 
> > Can you please elaborate?
> > 
> > > A (per-CRTC?) array of fences would be more flexible. And even in
> > > the cases where you could make a 1-to-1 mapping between planes
> > > and fences, it's not that much more work for userspace to
> > > assemble those fences into an array anyway.
> > 
> > As Ville says, I don't want to go down the path of scheduling CRTC
> > updates separately, because that breaks MST pretty badly. If you
> > don't want your updates to display atomically, then don't schedule
> > them atomically ... ? That's the only reason I can see for making
> > fencing per-CRTC, rather than just a pile of unassociated fences
> > appended to the request. Per-CRTC fences also force userspace to
> > merge fences before submission when using multiple planes per CRTC,
> > which is pretty punitive.
> > 
> > I think having it semantically attached to the plane is a little
> > bit nicer for tracing (why was this request delayed? -> a fence ->
> > which buffer was that fence for?) at a glance. Also the 'pile of
> > appended fences' model is a bit awkward for more generic userspace,
> > which creates a libdrm request and builds it (add a plane, try it
> > out, wind back) incrementally. Using properties makes that really
> > easy, but without properties, we'd have to add separate codepaths -
> > and thus separate ABI, which complicates distribution - to libdrm
> > to account for a separate plane array which shares a cursor with
> > the properties. So for that reason if none other, I'd really prefer
> > not to go down that route.
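For illustration, the incremental build-and-rollback flow described
above maps onto libdrm's atomic API roughly like this, with the RFC's
FENCE_FD treated as just another plane property. This is only a
sketch: the property IDs passed in are hypothetical and would
normally be discovered via drmModeObjectGetProperties().

	#include <errno.h>
	#include <stdint.h>
	#include <xf86drm.h>
	#include <xf86drmMode.h>

	/*
	 * Build a plane update incrementally and wind back on a failed
	 * trial commit. prop_fb_id / prop_fence_fd are the (looked-up)
	 * IDs of the plane's FB_ID and FENCE_FD properties.
	 */
	static int commit_plane_with_fence(int drm_fd, uint32_t plane_id,
					   uint32_t prop_fb_id,
					   uint32_t prop_fence_fd,
					   uint32_t fb_id,
					   int acquire_fence_fd)
	{
		drmModeAtomicReqPtr req = drmModeAtomicAlloc();
		int cursor, ret;

		if (!req)
			return -ENOMEM;

		/* Remember the request state so the trial can be undone. */
		cursor = drmModeAtomicGetCursor(req);

		/* The in-fence rides along as a plain plane property ... */
		drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
		drmModeAtomicAddProperty(req, plane_id, prop_fence_fd,
					 acquire_fence_fd);

		/* ... so the TEST_ONLY trial and the wind-back need no
		 * fence-specific special case. */
		ret = drmModeAtomicCommit(drm_fd, req,
					  DRM_MODE_ATOMIC_TEST_ONLY, NULL);
		if (ret)
			drmModeAtomicSetCursor(req, cursor); /* wind back */
		else
			ret = drmModeAtomicCommit(drm_fd, req, 0, NULL);

		drmModeAtomicFree(req);
		return ret;
	}

A separate fence array would need its own cursor handling kept in
sync with exactly this kind of wind-back, which is the extra ABI the
property-based approach avoids.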
> I also agree with having it as a FENCE_FD prop on the plane.
> Summarizing the arguments on this thread, they are:
> 
> - implicit fencing also implies one fence per plane/fb, so it is
>   good to match that.
> - requires userspace to always merge fences

"does not require" I presume?

> - can use standard plane properties, making kernel and userspace
>   life easier; an array means more work to build the atomic request,
>   plus extra checks in the kernel.
> - no changes to drivers needed.
> - better for tracing, can identify the buffer/fence promptly.

- Fits in well with the libdrm atomic rollback support - no need to
  manage fences separately when incrementally building an atomic
  commit.

> 
> Gustavo
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch