From: Suman Anna <s-anna@ti.com>
To: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: loic pallardy <loic.pallardy@st.com>,
	ohad@wizery.com, lee.jones@linaro.org,
	linux-remoteproc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] remoteproc: core: Add fixed memory region support
Date: Wed, 31 Aug 2016 11:55:18 -0500	[thread overview]
Message-ID: <b45a4b39-a4a5-ad46-cbbb-112727996560@ti.com> (raw)
In-Reply-To: <20160831163747.GT15161@tuxbot>

On 08/31/2016 11:37 AM, Bjorn Andersson wrote:
> On Tue 30 Aug 16:13 PDT 2016, Suman Anna wrote:
> 
>>>>> +    if (rsc->vring[i].da != 0 && rsc->vring[i].da != FW_RSC_ADDR_ANY) {
> [..]
>>>> @Suman, do you have any input on this?
>>
> 
> Thanks Suman
> 
>> I was thinking about this as well, and the way I actually envisioned
>> this is to add additional rproc_ops with the default behavior falling
>> back to the dma_alloc API. I had two use-cases in mind for that - one
>> is the same as what Loic is trying to resolve here, and the other is
>> a case where I want to allocate these memories not through the DMA
>> API, but, say, from a remote processor's internal RAM or an on-chip
>> internal memory.
> 
> Are these cases a matter of mapping the chunks with ioremap, or are
> there more fancy setups that would affect how we load the data into them
> as well?

The loading can be handled automatically, since we already provide the
.da_to_va() ops so that individual platform drivers can supply a
translation mechanism. The default ELF loader only needs the kernel va
to copy the data over. In the case of fixed addresses, it is just a
matter of ioremap; when using the mmio-sram driver, the platform
drivers are responsible for providing the va and dma.
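
To illustrate (a rough sketch only, not code from this series; the
driver context and field names are made up for the example), a platform
driver with a fixed, bus-accessible region could back .da_to_va() with
little more than a bounds check into a window it ioremap'ed at probe:

  #include <linux/remoteproc.h>

  struct my_rproc {
          void __iomem *mem_va;   /* ioremap'ed at probe time */
          u64 mem_da;             /* device address of the region */
          size_t mem_size;
  };

  /* translate a device address into a kernel va for the ELF loader */
  static void *my_rproc_da_to_va(struct rproc *rproc, u64 da, int len)
  {
          struct my_rproc *priv = rproc->priv;

          if (da < priv->mem_da ||
              da + len > priv->mem_da + priv->mem_size)
                  return NULL;    /* not ours; let the core try carveouts */

          return (__force void *)(priv->mem_va + (da - priv->mem_da));
  }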

> 
> Also, in the case of you mapping vrings in on-chip memory, would you use
> the resource table to communicate these addresses or are they simply
> hard coded in the loaded firmware?

It really depends on how the on-chip memory gets used. Unless there is
a limitation requiring a fixed address location, the normal usage would
be to use the mmio-sram driver and the gen_pool API to allocate on-chip
memory. We do fill in the resource table to communicate these addresses
to the loaded firmware.
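
As a rough sketch of that usage (the "sram" phandle name and the
surrounding variables are assumptions for the example, not code from
this series):

  #include <linux/genalloc.h>
  #include <linux/of.h>

  /* look up the gen_pool exported by the mmio-sram driver */
  pool = of_gen_pool_get(dev->of_node, "sram", 0);
  if (!pool)
          return -EPROBE_DEFER;

  /* allocate from on-chip RAM; dma receives the bus address */
  va = gen_pool_dma_alloc(pool, size, &dma);
  if (!va)
          return -ENOMEM;

  /* publish the address to the firmware via the resource table */
  rsc->vring[i].da = dma;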

> 
>> This is the case at least for vrings and vring buffers.
>> I think these decisions are best made in the individual platform drivers
>> as the integration can definitely vary from one SoC to another.
>>
> 
> This touches upon the discussion related to how to support fixed
> position vring buffers.

Indeed, in one of the usage patterns.

> 
>> The other assumption this series makes is that with a fixed da, the
>> device is not behind an MMU, and whatever da is pointing to is a
>> bus-accessible address.
> 
> But doesn't the current code do the same?
> Isn't the "dma" that we assign "da" the physical address of the memory?

I meant that this series is making that assumption. Previously, we were
ignoring the da and overwriting the field with the allocated physical
address, right.
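
For reference, that is roughly what rproc_alloc_vring() in the current
core does (paraphrased excerpt, trimmed to the relevant lines):

  /* allocate non-cacheable memory for the vring */
  va = dma_alloc_coherent(dev->parent, size, &dma, GFP_KERNEL);
  if (!va)
          return -ENOMEM;

  /* whatever da the firmware requested is ignored; the allocated
   * bus address is written back into the resource table instead */
  rsc->vring[i].da = dma;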

> 
>> We have traditionally meant the da as "device address", so it
>> translates to a bus address on devices that are not behind an MMU,
>> or to an actual virtual address as seen by the device if it is
>> behind an MMU.
> 
> I like the idea of making this the uniform design among the various
> resource types.
> 
>> On some devices on TI SoCs, we do have an MMU, so we have a
>> non-(-1) da, but it is not valid for memremapping.
>> At the same time, we would also need any allocated address to be filled in.
> 
> Right, so analogous to the carveout case we need to allocate memory
> and potentially map the memory in the iommu.
> 
> As this case then repeats itself for the vring (rpmsg) buffers I think
> we should strive for representing and handling all these memory
> allocations in a more uniform way.

Yes, agreed. Though there are currently some gaps w.r.t. the vrings and
the vring buffers mapping, as the current code doesn't have associated
iommu_map calls around the allocation. It might be that the remote
processor would require these to be allocated/mapped in a specific
region to properly configure the cacheability around them. We are using
a work-around for the moment to get around this.
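
For instance, something along these lines is what is missing around the
vring allocation (a sketch only; the placement and the prot flags are
assumptions on my part):

  #include <linux/iommu.h>

  va = dma_alloc_coherent(dev->parent, size, &dma, GFP_KERNEL);
  if (!va)
          return -ENOMEM;

  /* map the allocation at the da the firmware expects */
  ret = iommu_map(rproc->domain, rsc->vring[i].da, dma, size,
                  IOMMU_READ | IOMMU_WRITE);
  if (ret) {
          dma_free_coherent(dev->parent, size, va, dma);
          return ret;
  }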

regards
Suman
