Date: Sun, 8 Dec 2019 18:03:49 +0200
From: Laurent Pinchart
To: Hyun Kwon
Cc: Vinod Koul, dmaengine@vger.kernel.org, Michal Simek, Hyun Kwon,
	Tejas Upadhyay, Satish Kumar Nagireddy
Subject: Re: [PATCH v2 2/4] dma: xilinx: dpdma: Add the Xilinx DisplayPort DMA engine driver
Message-ID: <20191208160349.GD14311@pendragon.ideasonboard.com>
In-Reply-To: <20191205202746.GA26880@smtp.xilinx.com>

Hi Hyun and Vinod,

On Thu, Dec 05, 2019 at 12:27:47PM -0800, Hyun Kwon wrote:
> On Thu, 2019-12-05 at 08:39:09 -0800, Vinod Koul wrote:
> > On 05-12-19, 17:04, Laurent Pinchart wrote:
> > > > > +/*
> > > > > + * DPDMA descriptor placement
> > > > > + * --------------------------
> > > > > + * The DPDMA descriptor life time is described with the following placements:
> > > > > + *
> > > > > + * allocated_desc -> submitted_desc -> pending_desc -> active_desc -> done_list
> > > > > + *
> > > > > + * Transitions are triggered as follows:
> > > > > + *
> > > > > + * -> allocated_desc : a descriptor allocation
> > > > > + * allocated_desc -> submitted_desc: a descriptor submission
> > > > > + * submitted_desc -> pending_desc: a request to issue a pending descriptor
> > > > > + * pending_desc -> active_desc: VSYNC intr when a desc is scheduled to DPDMA
> > > > > + * active_desc -> done_list: VSYNC intr when DPDMA switches to a new desc
> > > >
> > > > Well, this tells me the driver is not using the vchan infrastructure.
> > > > drivers/dma/virt-dma.c is common infrastructure that does pretty decent
> > > > list management, and drivers do not need to open-code this.
> > > >
> > > > Please convert the driver to use virt-dma.
> > >
> > > As noted in the cover letter,
> > >
> > > "There is one review comment that is still pending: switching to
> > > virt-dma. I started investigating this, and it quickly appeared that
> > > this would result in an almost complete rewrite of the driver's logic.
> > > While the end result may be better, I wonder if this is worth it, given
> > > that the DPDMA is tied to the DisplayPort subsystem and can't be used
> > > with other DMA slaves. The DPDMA is thus used with very specific usage
> > > patterns, which don't need the genericity of descriptor handling
> > > provided by virt-dma. Vinod, what's your opinion on this? Is virt-dma
> > > usage a blocker to merge this driver, could we switch to it later, or is
> > > it just overkill in this case?"
> > >
> > > I'd like to ask an additional question: is the dmaengine API the best
> > > solution for this? The DPDMA is a separate IP core, but it is tied to
> > > the DP subsystem. I'm tempted to just fold it into the display driver.
> > > The only reason why I'm hesitant about this is that the DPDMA also
> > > handles audio channels, which are also part of the DP subsystem, but
> > > that could be handled by a separate ALSA driver. Still, handling
> > > display, audio and DMA in drivers that we pretend are independent and
> > > generic would be a bit of a lie.
> >
> > Yeah, if it is _only_ going to be used by the display and no other
> > client is using it, then I really do not see any advantage of this being
> > a dmaengine driver. That is pretty much what we have been telling folks
> > over the years.
>
> In the development cycles, the IP blocks came in pieces. The DP tx driver
> was developed first as one driver, with a dmaengine driver other than
> DPDMA. Then the ZynqMP block was added along with this DPDMA driver.
> Hence, the reverse is possible, meaning someone could decide to take a
> part of it and harden it with other blocks in a next-generation SoC. So
> there was, and will be, a benefit to keeping the drivers modular at the
> block level in my opinion, and I'm not sure it needs to be turned into a
> monolithic driver when it's already modular.

OK, in light of this information I'll keep the two drivers separate and
will switch to vchan as requested by Vinod.

> > Btw, since this is Xilinx and I guess everything is an IP, how difficult
> > would it be to put this on a non-display core :)
> >
> > If you decide to use dmaengine, I would prefer it to use virt-dma. That
> > means a rewrite, yes, but it helps you in the long term.
>
> I made changes using vchan [1], but it was a while ago, so they might
> have become outdated, and the details are vague in my head. I'm not sure
> they were at a fully functional stage. Still, just in case, they may be
> helpful.
>
> [1] https://github.com/starhwk/linux-xlnx/commits/hyunk/upstreaming?after=0b0002113e7381d8a5f3119d064676af4d0953f4+34

Thank you, I will use that as a starting point.

-- 
Regards,

Laurent Pinchart
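
P.S. For completeness, here is a rough, untested sketch of the direction such
a vchan conversion could take. The xilinx_dpdma_* structures, fields and the
VSYNC handler below are placeholders rather than the actual code from this
patch; only the vchan_*() helpers (vchan_init(), vchan_tx_prep(),
vchan_issue_pending(), vchan_next_desc() and vchan_cookie_complete()) are the
real API from drivers/dma/virt-dma.h.

/*
 * Rough sketch only: the xilinx_dpdma_* names are placeholders, not the
 * structures from this patch. Error handling and the actual register
 * programming are omitted.
 */
#include <linux/dmaengine.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

#include "../virt-dma.h"

struct xilinx_dpdma_tx_desc {
	struct virt_dma_desc vdesc;	/* replaces the open-coded desc lists */
	/* hardware descriptor chain, DMA addresses, ... */
};

struct xilinx_dpdma_chan {
	struct virt_dma_chan vchan;
	struct xilinx_dpdma_tx_desc *active;	/* desc currently in hardware */
	/* register base, video group information, ... */
};

static inline struct xilinx_dpdma_chan *to_dpdma_chan(struct dma_chan *chan)
{
	return container_of(chan, struct xilinx_dpdma_chan, vchan.chan);
}

/*
 * Assigned to chan->vchan.desc_free before calling
 * vchan_init(&chan->vchan, ddev) at probe time.
 */
static void xilinx_dpdma_desc_free(struct virt_dma_desc *vdesc)
{
	kfree(container_of(vdesc, struct xilinx_dpdma_tx_desc, vdesc));
}

/* .device_prep_interleaved_dma(): vchan_tx_prep() handles cookie setup. */
static struct dma_async_tx_descriptor *
xilinx_dpdma_prep_interleaved(struct dma_chan *dchan,
			      struct dma_interleaved_template *xt,
			      unsigned long flags)
{
	struct xilinx_dpdma_chan *chan = to_dpdma_chan(dchan);
	struct xilinx_dpdma_tx_desc *desc;

	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
	if (!desc)
		return NULL;

	/* ... build the hardware descriptor chain from @xt ... */

	return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags);
}

/* .device_issue_pending(): move submitted descriptors to the issued list. */
static void xilinx_dpdma_issue_pending(struct dma_chan *dchan)
{
	struct xilinx_dpdma_chan *chan = to_dpdma_chan(dchan);
	unsigned long flags;

	spin_lock_irqsave(&chan->vchan.lock, flags);
	if (vchan_issue_pending(&chan->vchan) && !chan->active) {
		/* ... program the first issued descriptor into the HW ... */
	}
	spin_unlock_irqrestore(&chan->vchan.lock, flags);
}

/* VSYNC interrupt: the hardware switched to the next descriptor. */
static void xilinx_dpdma_chan_vsync(struct xilinx_dpdma_chan *chan)
{
	struct virt_dma_desc *vdesc;
	unsigned long flags;

	spin_lock_irqsave(&chan->vchan.lock, flags);

	/* Complete the descriptor the hardware just retired. */
	if (chan->active)
		vchan_cookie_complete(&chan->active->vdesc);
	chan->active = NULL;

	/* Pick up the next issued descriptor, if any, and program it. */
	vdesc = vchan_next_desc(&chan->vchan);
	if (vdesc) {
		list_del(&vdesc->node);
		chan->active = container_of(vdesc,
					    struct xilinx_dpdma_tx_desc, vdesc);
		/* ... write the new descriptor address to the HW ... */
	}

	spin_unlock_irqrestore(&chan->vchan.lock, flags);
}

The descriptor states described in the comment quoted above map onto the
desc_allocated, desc_submitted, desc_issued and desc_completed lists that
virt-dma already maintains in struct virt_dma_chan, so only the tracking of
the descriptor currently programmed into the hardware remains
driver-specific.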