Date: Sat, 14 Nov 2020 17:24:06 +0100
From: Christoph Hellwig 
To: Jonathan Marek 
Cc: freedreno@lists.freedesktop.org, hch@lst.de, Rob Clark , Sean Paul ,
	David Airlie , Daniel Vetter ,
	"open list:DRM DRIVER FOR MSM ADRENO GPU" ,
	"open list:DRM DRIVER FOR MSM ADRENO GPU" , open list 
Subject: Re: [RESEND PATCH v2 4/5] drm/msm: add DRM_MSM_GEM_SYNC_CACHE for non-coherent cache maintenance
Message-ID: <20201114162406.GC24411@lst.de>
References: <20201114151717.5369-1-jonathan@marek.ca> <20201114151717.5369-5-jonathan@marek.ca>
In-Reply-To: <20201114151717.5369-5-jonathan@marek.ca>

On Sat, Nov 14, 2020 at 10:17:12AM -0500, Jonathan Marek wrote:
> +void msm_gem_sync_cache(struct drm_gem_object *obj, uint32_t flags,
> +		size_t range_start, size_t range_end)
> +{
> +	struct msm_gem_object *msm_obj = to_msm_bo(obj);
> +	struct device *dev = msm_obj->base.dev->dev;
> +
> +	/* exit early if get_pages() hasn't been called yet */
> +	if (!msm_obj->pages)
> +		return;
> +
> +	/* TODO: sync only the specified range */
> +
> +	if (flags & MSM_GEM_SYNC_FOR_DEVICE) {
> +		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
> +			msm_obj->sgt->nents, DMA_TO_DEVICE);
> +	}
> +
> +	if (flags & MSM_GEM_SYNC_FOR_CPU) {
> +		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
> +			msm_obj->sgt->nents, DMA_FROM_DEVICE);
> +	}

Splitting this helper from its only caller is rather strange, especially
with the two unused arguments.  And I think the way this is specified to
take a range but then ignores it is actively dangerous: user space will
rely on it syncing everything sooner or later, and then you are stuck.

So just define a sync-all primitive for now, and if you really need a
range sync and have actually implemented it, add a new ioctl for that.
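
Something along these lines would do, based on the code quoted above
(just a rough sketch, not tested; the msm_gem_sync_cache_all name is
only for illustration):

	/* Sync the whole object; no range arguments to mis-specify. */
	void msm_gem_sync_cache_all(struct drm_gem_object *obj, uint32_t flags)
	{
		struct msm_gem_object *msm_obj = to_msm_bo(obj);
		struct device *dev = msm_obj->base.dev->dev;

		/* exit early if get_pages() hasn't been called yet */
		if (!msm_obj->pages)
			return;

		if (flags & MSM_GEM_SYNC_FOR_DEVICE)
			dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
					msm_obj->sgt->nents, DMA_TO_DEVICE);

		if (flags & MSM_GEM_SYNC_FOR_CPU)
			dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
					msm_obj->sgt->nents, DMA_FROM_DEVICE);
	}

That way the ioctl promises no more than what it actually does, and a
range-based variant can be added later once it really exists.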