From: Suman Anna <s-anna@ti.com>
To: Tony Lindgren <tony@atomide.com>
Cc: Ohad Ben-Cohen <ohad@wizery.com>,
	Kevin Hilman <khilman@linaro.org>,
	Dave Gerlach <d-gerlach@ti.com>, Robert Tivy <rtivy@ti.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-omap@vger.kernel.org" <linux-omap@vger.kernel.org>,
	linux-arm <linux-arm-kernel@lists.infradead.org>
Subject: Re: [PATCH v3 2/2] remoteproc: add support to handle internal memories
Date: Wed, 11 Feb 2015 19:07:30 -0600	[thread overview]
Message-ID: <54DBFCD2.7070500@ti.com> (raw)
In-Reply-To: <20150212001857.GC25954@atomide.com>

On 02/11/2015 06:18 PM, Tony Lindgren wrote:
> * Suman Anna <s-anna@ti.com> [150211 16:05]:
>> On 02/11/2015 04:48 PM, Tony Lindgren wrote:
>>> * Suman Anna <s-anna@ti.com> [150211 14:32]:
>>>> On 02/11/2015 02:57 PM, Tony Lindgren wrote:
>>>>> * Ohad Ben-Cohen <ohad@wizery.com> [150210 02:14]:
>>>>>> Hi Suman,
>>>>>>
>>>>>> On Fri, Jan 9, 2015 at 11:21 PM, Suman Anna <s-anna@ti.com> wrote:
>>>>>>> A remote processor may need to load certain firmware sections into
>>>>>>> internal memories (eg: RAM at L1 or L2 levels) for performance or
>>>>>>> other reasons. Introduce a new resource type (RSC_INTMEM) and add
>>>>>>> an associated handler function to handle such memories. The handler
>>>>>>> creates a kernel mapping for the resource's 'pa' (physical address).
>>>>>> ...
>>>>>>> + * rproc_handle_intmem() - handle internal memory resource entry
>>>>>>> + * @rproc: rproc handle
>>>>>>> + * @rsc: the intmem resource entry
>>>>>>> + * @offset: offset of the resource data in resource table
>>>>>>> + * @avail: size of available data (for image validation)
>>>>>>> + *
>>>>>>> + * This function will handle firmware requests for mapping a memory region
>>>>>>> + * internal to a remote processor into the kernel. It neither allocates any
>>>>>>> + * physical pages, nor performs any iommu mapping, as this resource entry
>>>>>>> + * is primarily used for representing physical internal memories. If the
>>>>>>> + * internal memory region can only be accessed through an iommu, please
>>>>>>> + * use a devmem resource entry.
>>>>>>> + *
>>>>>>> + * These resource entries should be grouped near the carveout entries in
>>>>>>> + * the firmware's resource table, as other firmware entries might request
>>>>>>> + * placing other data objects inside these memory regions (e.g. data/code
>>>>>>> + * segments, trace resource entries, ...).
>>>>>>> + */
>>>>>>> +static int rproc_handle_intmem(struct rproc *rproc, struct fw_rsc_intmem *rsc,
>>>>>>> +                              int offset, int avail)
>>>>>>> +{
>>>>>> ...
>>>>>>> +       va = (__force void *)ioremap_nocache(rsc->pa, rsc->len);
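For orientation, the quoted line presumably sits in a handler body roughly
shaped like the sketch below. This is an editorial illustration only: the
actual patch body is elided above ("..."), and everything other than the
signature and the 'pa'/'len' fields named in the kernel-doc is an assumption.

    #include <linux/errno.h>
    #include <linux/io.h>
    #include <linux/remoteproc.h>
    #include <linux/types.h>

    /* sketch only: 'da' and the overall layout are assumed, not from the patch */
    struct fw_rsc_intmem {
            u32 da;         /* device address (assumed field) */
            u32 pa;         /* physical address, per the kernel-doc */
            u32 len;        /* length of the region, per the kernel-doc */
    };

    static int rproc_handle_intmem(struct rproc *rproc,
                                   struct fw_rsc_intmem *rsc,
                                   int offset, int avail)
    {
            void *va;

            if (sizeof(*rsc) > avail)
                    return -EINVAL;         /* truncated resource table */

            /* map the internal RAM; no pages allocated, no iommu programmed */
            va = (__force void *)ioremap_nocache(rsc->pa, rsc->len);
            if (!va)
                    return -ENOMEM;

            /* record the va/da pair here so the ELF loader can use the region */
            return 0;
    }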
>>>>>>
>>>>>> Back in the days when we developed remoteproc, there was a
>>>>>> tremendous effort to move away from ioremap when not strictly
>>>>>> needed.
>>>>>
>>>>> The use of ioremap in general is just fine for drivers as long
>>>>> as they access an area dedicated to the specific device. Accessing
>>>>> random registers and memory in the SoC is what I'm worried about.
>>>>>  
>>>>>> I'd be happy if someone intimate with the related hardware could ack
>>>>>> that in this specific case ioremap is indeed needed. No need to review
>>>>>> the entire patch, or anything remoteproc, just make sure that
>>>>>> generally ioremap is how we want to access this internal memory.
>>>>>>
>>>>>> Tony or Kevin any chance you could take a look and ack?
>>>>>>
>>>>>> If ioremap is indeed the way to go, I'd also expect that we wouldn't
>>>>>> have to use __force here, but that's probably a minor patch cleanup.
>>>>>
>>>>> Hmm sounds like this memory should be dedicated to the accelerator?
>>>>>
>>>>> In that case it should use memblock to reserve that area early so
>>>>> the kernel won't be accessing it at all.
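If the region were regular RAM dedicated to the accelerator, the early
reservation Tony describes could be done with memblock from the machine's
.reserve hook. A minimal sketch, with made-up base/size values:

    #include <linux/init.h>
    #include <linux/memblock.h>
    #include <linux/sizes.h>

    /* hypothetical carveout; the base and size are illustrative only */
    #define DSP_CARVEOUT_BASE       0x9c000000
    #define DSP_CARVEOUT_SIZE       SZ_8M

    static void __init dsp_reserve_memory(void)
    {
            /* taken out before the page allocator is populated, so the
             * kernel never touches the accelerator's memory */
            memblock_reserve(DSP_CARVEOUT_BASE, DSP_CARVEOUT_SIZE);
    }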
>>>>
>>>> The usage here is not really on regular memory, but on internal
>>>> device memory (e.g., L2RAM within the DSP, which is accessible by
>>>> the MPU through the L3 bus). For the regular shared memory for
>>>> vrings and vring buffers, the remoteproc core does rely on CMA
>>>> pools.
>>>
>>> OK sounds like Linux needs to access it initially to load the DSP boot
>>> code to L2RAM to get the DSP booted.
>>>
>>> Maybe it can be done with the API provided by drivers/misc/sram.c?
>>>
>>> You could set up the L2RAM as compatible = "mmio-sram" and then
>>> parse the optional phandle for that in the remoteproc code, then
>>> allocate some memory from it to load the DSP boot code and free
>>> it.
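On the Linux side, that suggestion would look roughly like the sketch
below; the "sram" phandle name and the helper itself are illustrative,
not existing remoteproc code:

    #include <linux/errno.h>
    #include <linux/genalloc.h>
    #include <linux/of.h>
    #include <linux/string.h>

    static int dsp_load_bootcode(struct device_node *np,
                                 const void *image, size_t len)
    {
            struct gen_pool *pool;
            unsigned long va;

            pool = of_gen_pool_get(np, "sram", 0);  /* assumed phandle name */
            if (!pool)
                    return -ENODEV;

            va = gen_pool_alloc(pool, len);
            if (!va)
                    return -ENOMEM;

            memcpy((void *)va, image, len);         /* load the DSP boot code */
            /* boot the DSP here, then give the window back */
            gen_pool_free(pool, va, len);
            return 0;
    }

Note that gen_pool hands back whichever free chunk fits, so on its own it
cannot place a section at the exact address the remote linker file
expects, which is the objection raised next.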
>>
>> Not quite the same usage; there are no implicit assumptions about
>> managing this memory. Isn't the SRAM driver better suited for
>> allocating memory through the gen_pool API? Here it is just regular
>> code being placed into RAM, and the linker file on the remoteproc
>> side dictates which portion we are using, so the section can be
>> anywhere based on the ELF parsing. Further, the same RAM space can
>> be partitioned into Cache and/or RAM, which is usually controlled
>> through internal processor subsystem register programming.
> 
> It still sounds like you need an API like gen_pool to allocate and
> load the DSP code, though? So from that point of view it's best to
> use a generic Linux API.
> 
> Just guessing, but the process here is probably something like:
> request_firmware, configure hardware, allocate memory area,
> copy firmware to memory, free the memory, boot m3 :)
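As a sketch of that sequence (the firmware name and both helpers are
hypothetical):

    #include <linux/device.h>
    #include <linux/firmware.h>

    static int dsp_boot(struct device *dev)
    {
            const struct firmware *fw;
            int ret;

            ret = request_firmware(&fw, "dsp-boot.bin", dev); /* assumed name */
            if (ret)
                    return ret;

            ret = dsp_copy_image(fw->data, fw->size);       /* hypothetical */
            release_firmware(fw);   /* the staging copy can go away now */
            if (ret)
                    return ret;

            return dsp_start(dev);  /* hypothetical: take the core out of reset */
    }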

Yeah, at least for the processors with MMUs, it's usually: allocate
memory, program the IOMMU, copy the firmware, boot the rproc. Memory is
freed when unloading the processor and loading a different firmware. For
the cases with internal memory, I either need an ioremap of the region
for copying in the firmware sections, or, as you said, allocate, copy
and free. That almost always means I have to allocate the entire region,
since I usually need to copy the data to a specific location based on
the ELF program header data. The sram driver also does an ioremap
internally, so I guess it can be done, with probably a bit more code for
management within the rproc core.
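Concretely, the ioremap path for a single ELF segment might look like the
sketch below; da_to_pa() is a hypothetical device-address translation
helper, and error handling is trimmed:

    #include <linux/elf.h>
    #include <linux/errno.h>
    #include <linux/io.h>

    static int load_segment(const struct elf32_phdr *phdr, const u8 *fw_data)
    {
            phys_addr_t pa = da_to_pa(phdr->p_paddr);       /* hypothetical */
            void __iomem *va;

            va = ioremap_nocache(pa, phdr->p_memsz);
            if (!va)
                    return -ENOMEM;

            /* place the section exactly where the remote linker file put it */
            memcpy_toio(va, fw_data + phdr->p_offset, phdr->p_filesz);
            /* zero the bss tail of the segment, if any */
            memset_io(va + phdr->p_filesz, 0, phdr->p_memsz - phdr->p_filesz);

            iounmap(va);
            return 0;
    }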

regards
Suman


