From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 11 Feb 2015 12:57:58 -0800
From: Tony Lindgren
To: Ohad Ben-Cohen
Cc: Suman Anna, Kevin Hilman, Dave Gerlach, Robert Tivy,
	linux-kernel@vger.kernel.org, linux-omap@vger.kernel.org,
	linux-arm
Subject: Re: [PATCH v3 2/2] remoteproc: add support to handle internal memories
Message-ID: <20150211205757.GI2531@atomide.com>
References: <1420838519-15669-1-git-send-email-s-anna@ti.com>
 <1420838519-15669-3-git-send-email-s-anna@ti.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.23 (2014-03-12)
X-Mailing-List: linux-kernel@vger.kernel.org

* Ohad Ben-Cohen [150210 02:14]:
> Hi Suman,
>
> On Fri, Jan 9, 2015 at 11:21 PM, Suman Anna wrote:
> > A remote processor may need to load certain firmware sections into
> > internal memories (eg: RAM at L1 or L2 levels) for performance or
> > other reasons. Introduce a new resource type (RSC_INTMEM) and add
> > an associated handler function to handle such memories. The handler
> > creates a kernel mapping for the resource's 'pa' (physical address).
> ...
> > + * rproc_handle_intmem() - handle internal memory resource entry
> > + * @rproc: rproc handle
> > + * @rsc: the intmem resource entry
> > + * @offset: offset of the resource data in resource table
> > + * @avail: size of available data (for image validation)
> > + *
> > + * This function will handle firmware requests for mapping a memory region
> > + * internal to a remote processor into kernel. It neither allocates any
> > + * physical pages, nor performs any iommu mapping, as this resource entry
> > + * is primarily used for representing physical internal memories. If the
> > + * internal memory region can only be accessed through an iommu, please
> > + * use a devmem resource entry.
> > + *
> > + * These resource entries should be grouped near the carveout entries in
> > + * the firmware's resource table, as other firmware entries might request
> > + * placing other data objects inside these memory regions (e.g. data/code
> > + * segments, trace resource entries, ...).
> > + */
> > +static int rproc_handle_intmem(struct rproc *rproc, struct fw_rsc_intmem *rsc,
> > +			       int offset, int avail)
> > +{
> ...
> > +	va = (__force void *)ioremap_nocache(rsc->pa, rsc->len);
>
> Back in the days when we developed remoteproc there was a tremendous
> effort to move away from ioremap when not strictly needed.

The use of ioremap in general is just fine for drivers as long as they
access an area dedicated to the specific device. Accessing random
registers and memory in the SoC is what I'm worried about.
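
Something along these lines is what I'd call a dedicated area, just as
a rough sketch. None of this is from the patch, and the base, size and
function name below are made-up placeholders:

#include <linux/errno.h>
#include <linux/io.h>
#include <linux/sizes.h>
#include <linux/string.h>
#include <linux/types.h>

/* Placeholder window owned only by the remote processor */
#define EXAMPLE_INTMEM_PA	0x55020000UL
#define EXAMPLE_INTMEM_LEN	SZ_64K

static int example_load_to_intmem(const void *seg, size_t len)
{
	void __iomem *va;

	if (len > EXAMPLE_INTMEM_LEN)
		return -EINVAL;

	/* Uncached mapping of the dedicated internal RAM only */
	va = ioremap_nocache(EXAMPLE_INTMEM_PA, EXAMPLE_INTMEM_LEN);
	if (!va)
		return -ENOMEM;

	/* Keep the __iomem annotation instead of casting it away */
	memcpy_toio(va, seg, len);
	iounmap(va);

	return 0;
}

Nothing there touches registers or memory outside that one window,
which is the case I have no problem with.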
> I'd be happy if someone intimate with the related hardware could ack
> that in this specific case ioremap is indeed needed. No need to review
> the entire patch, or anything remoteproc, just make sure that
> generally ioremap is how we want to access this internal memory.
>
> Tony or Kevin any chance you could take a look and ack?
>
> If ioremap is indeed the way to go, I'd also expect that we wouldn't
> have to use __force here, but that's probably a minor patch cleanup.

Hmm, sounds like this memory should be dedicated to the accelerator?
In that case it should use memblock to reserve that area early so the
kernel won't be accessing it at all (rough sketch below).

If it needs to be shared between the kernel and the accelerator, then
remoteproc or the mailbox somehow needs to coordinate the shared
access to this memory. I think those cases should be handled
separately and not with a single interface.

Regards,

Tony
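
For the dedicated case, the early reservation I mean is roughly the
following sketch. The hook name, base and size are placeholders rather
than real values for this hardware; on ARM this would run from the
machine descriptor's .reserve callback (or come from a /reserved-memory
node in the DT):

#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/printk.h>
#include <linux/sizes.h>

/* Placeholder location of the accelerator's dedicated RAM */
#define EXAMPLE_ACCEL_RAM_PA	0x9c000000UL
#define EXAMPLE_ACCEL_RAM_SIZE	SZ_1M

static void __init example_accel_reserve(void)
{
	/* Keep the page allocator away from the accelerator's RAM */
	if (memblock_reserve(EXAMPLE_ACCEL_RAM_PA, EXAMPLE_ACCEL_RAM_SIZE))
		pr_err("example: could not reserve accelerator RAM\n");
}

If the area must not even end up in the kernel's linear mapping,
memblock_remove() would be the stronger variant.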