From: Suman Anna <s-anna@ti.com>
To: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>,
	Loic Pallardy <loic.pallardy@st.com>,
	Arnaud Pouliquen <arnaud.pouliquen@st.com>,
	Tero Kristo <t-kristo@ti.com>,
	linux-remoteproc <linux-remoteproc@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 1/2] remoteproc: fall back to using parent memory pool if no dedicated available
Date: Tue, 7 Apr 2020 14:36:36 -0500
Message-ID: <31e64312-c80c-a1a8-a8cb-c87e54233d4f@ti.com>
In-Reply-To: <CANLsYkzU79LDVWO=wtoOY-=iW0a4EUf5sruwWicyj+2EAFZ4rg@mail.gmail.com>

Hi Mathieu,

On 3/27/20 4:09 PM, Mathieu Poirier wrote:
> On Wed, 25 Mar 2020 at 17:39, Suman Anna <s-anna@ti.com> wrote:
>>
>> Hi Mathieu,
>>
>> On 3/25/20 3:38 PM, Mathieu Poirier wrote:
>>> On Thu, Mar 19, 2020 at 11:23:20AM -0500, Suman Anna wrote:
>>>> From: Tero Kristo <t-kristo@ti.com>
>>>>
>>>> In some cases, like with OMAP remoteproc, we are not creating a
>>>> dedicated memory pool for the virtio device. Instead, we use the same
>>>> memory pool for all shared memories. The current virtio memory pool
>>>> handling forces a split between these two, as a separate device is
>>>> created for it, causing memory to be allocated from a bad location if
>>>> the dedicated pool is not available. Fix this by falling back to using
>>>> the parent device's memory pool if a dedicated one is not available.
>>>>
>>>> Fixes: 086d08725d34 ("remoteproc: create vdev subdevice with specific dma memory pool")
>>>> Signed-off-by: Tero Kristo <t-kristo@ti.com>
>>>> Signed-off-by: Suman Anna <s-anna@ti.com>
>>>> ---
>>>> v2:
>>>>  - Address Arnaud's concerns about hard-coded memory-region index 0
>>>>  - Update the comment around the new code addition
>>>> v1: https://patchwork.kernel.org/patch/11422721/
>>>>
>>>>  drivers/remoteproc/remoteproc_virtio.c | 15 +++++++++++++++
>>>>  include/linux/remoteproc.h             |  2 ++
>>>>  2 files changed, 17 insertions(+)
>>>>
>>>> diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
>>>> index eb817132bc5f..b687715cdf4b 100644
>>>> --- a/drivers/remoteproc/remoteproc_virtio.c
>>>> +++ b/drivers/remoteproc/remoteproc_virtio.c
>>>> @@ -369,6 +369,21 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
>>>>                              goto out;
>>>>                      }
>>>>              }
>>>> +    } else {
>>>> +            struct device_node *np = rproc->dev.parent->of_node;
>>>> +
>>>> +            /*
>>>> +             * If we don't have a dedicated buffer, just attempt to
>>>> +             * re-assign the reserved memory from our parent. The
>>>> +             * memory-region at index 0 from the parent's memory-regions
>>>> +             * is assigned by default for the rvdev dev to allocate from,
>>>> +             * and this can be customized by updating vdevbuf_mem_id in
>>>> +             * platform drivers if desired. Failure is non-critical and
>>>> +             * the allocations will fall back to global pools, so don't
>>>> +             * check the return value either.
>>>
>>> I'm perplexed...  The changelog indicates that if a memory pool is not
>>> dedicated, allocation happens from a bad location, but here failing to
>>> get hold of a dedicated memory pool is not critical.
>>
>> So, the comment here is a generic one, while the bad-location part in the
>> commit description is actually from the OMAP remoteproc usage perspective
>> (if you remember the dev_warn messages we added to the memory-region
>> parse logic in the driver).
> 
> I can't tell... Are you referring to the comment lines after
> of_reserved_mem_device_init() in omap_rproc_probe()?

Yes indeed, the dev_warn traces after of_reserved_mem_device_init().
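
For reference, the probe-time code in question is roughly of this shape
(an illustrative sketch only; the exact message wording and error handling
in omap_rproc_probe() may differ):

#include <linux/device.h>
#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>

/* hypothetical probe fragment, named for illustration only */
static int example_assign_pool(struct platform_device *pdev)
{
	int ret;

	/* assign memory-region index 0 as the device's default DMA pool */
	ret = of_reserved_mem_device_init(&pdev->dev);
	if (ret)
		/* non-fatal: allocations fall back to the global pools */
		dev_warn(&pdev->dev,
			 "device does not have a specific CMA pool: %d\n",
			 ret);

	return 0;
}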

> 
>>
>> Before the fixed-memory carveout support, all the DMA allocations in the
>> remoteproc core were made from the rproc platform device's DMA pool
>> (which can be NULL). That was lost after the fixed-memory support, and
>> allocations always came from the global DMA pools if no dedicated pools
>> were used. After this patch, that continues to be the case for drivers
>> that still do not use any dedicated pools, while it does restore the
>> usage of the platform device's DMA pool if a driver uses one (OMAP
>> remoteproc falls into the latter category).
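
To make the above concrete: the core's vring and buffer allocations
ultimately boil down to a dma_alloc_coherent() on the vdev device, so
where they land depends entirely on which pool, if any, has been assigned
to that device. A minimal sketch (the helper is hypothetical, not the
literal core code):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* hypothetical helper, for illustration only */
static void *vdev_buf_alloc(struct device *dev, size_t size, dma_addr_t *dma)
{
	/*
	 * Satisfied from the device's dedicated pool if one was declared,
	 * from the parent's memory-region with this patch applied, or
	 * from the global CMA/DMA pools if neither is present.
	 */
	return dma_alloc_coherent(dev, size, dma, GFP_KERNEL);
}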
>>
>>>
>>>> +             */
>>>> +            of_reserved_mem_device_init_by_idx(dev, np,
>>>> +                                               rproc->vdevbuf_mem_id);
>>>
>>> I wonder if using an index set up by platform code is really the best way
>>> forward when we already have the carveout mechanism available to us.  I
>>> see the platform code adding a carveout that would have the same name as
>>> rproc->name.  From there, in rproc_add_virtio_dev(), we could have
>>> something like:
>>>
>>>         mem = rproc_find_carveout_by_name(rproc, "%s", rproc->name);
>>>
>>>
>>> That would be very flexible: the location of the reserved memory within
>>> the memory-region could change without fear of breaking things, and there
>>> would be no need to add to struct rproc.
>>>
>>> Let me know what you think.
>>
>> I think that can work as well, but I feel it is a lot more cumbersome. It
>> does require every platform driver to add code for adding/registering
>> that carveout, parsing the reserved memory region, etc. At the end of the
>> day, we rely on the DMA API, and we just have to assign the region to the
>> newly created device. The DMA pool assignment for devices using
>> reserved-memory nodes has simply been the of_reserved_mem_device_init()
>> function.
> 
> Given all the things happening in the platform drivers, adding and
> registering a single carveout doesn't seem that onerous to me.  I
> also expect setting rproc->vdevbuf_mem_id would involve some form of
> parsing.

So, no additional parsing is needed other than knowing which id to use if
you have multiple regions. A device can only be assigned one default
DMA/CMA pool to use with the DMA API. One would need to add the assignment
statement only if region 0 is not being used as the device's DMA-API pool.
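
In other words, the driver-side change is a one-liner, and only for the
non-default layout. A hypothetical example (the node names are made up):

	/*
	 * Hypothetical device tree with two regions:
	 *   memory-region = <&dsp_memory>,    (index 0: general DMA pool)
	 *                   <&vdev_buffers>;  (index 1: vdev buffers)
	 */
	rproc->vdevbuf_mem_id = 1;	/* only needed when not using index 0 */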

> Lastly, if a couple of platforms end up doing the same thing we
> might as well bring the code into the core, hence choosing a generic
> name such as rproc->name for the memory region.

That is actually a lot more code than the current approach. First you would
need to look up and parse the reserved memory to get the address and size
to initialize the rproc mem structure, and then use the filled-in values to
declare the DMA pool.
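
As a sketch of that sequence under your proposal (the helper name, the
region index, and the error handling are mine; rproc_mem_entry_init() and
rproc_add_carveout() are the existing interfaces, but I have not tested
this):

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/remoteproc.h>

/*
 * Hypothetical platform-driver helper: register a carveout named after
 * the rproc, backed by the driver's reserved memory region.
 */
static int example_register_vdev_carveout(struct rproc *rproc)
{
	struct device *dev = rproc->dev.parent;
	struct device_node *np;
	struct rproc_mem_entry *mem;
	struct resource res;
	int ret;

	np = of_parse_phandle(dev->of_node, "memory-region", 0);
	if (!np)
		return -EINVAL;

	ret = of_address_to_resource(np, 0, &res);
	of_node_put(np);
	if (ret)
		return ret;

	/* no kernel mapping yet, so va is NULL; da mirrors the address */
	mem = rproc_mem_entry_init(dev, NULL, (dma_addr_t)res.start,
				   resource_size(&res), (u32)res.start,
				   NULL, NULL, "%s", rproc->name);
	if (!mem)
		return -ENOMEM;

	rproc_add_carveout(rproc, mem);
	return 0;
}

And the core would still need to look the carveout up with
rproc_find_carveout_by_name(rproc, "%s", rproc->name) and declare the
coherent pool on the vdev device from the looked-up address and length,
versus the single of_reserved_mem_device_init_by_idx() call here.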

> 
> At the very least I would use of_reserved_mem_device_init_by_idx(dev,
> np, 0).  I agree it is not flexible, but I'll take that over adding a
> new field to struct rproc.

Yep, I indeed started out with exactly that code in v1, and only introduced
the new field to address Arnaud's comments. Even the new field is
implicitly initialized to 0, so the code is equivalent, and it also
supports cases where you need to use a reserved-memory region other than
the one at index 0.

regards
Suman


> 
> Thanks,
> Mathieu
> 
>>
>> regards
>> Suman
>>
>>>
>>> Thanks,
>>> Mathieu
>>>
>>>>      }
>>>>
>>>>      /* Allocate virtio device */
>>>> diff --git a/include/linux/remoteproc.h b/include/linux/remoteproc.h
>>>> index ed127b2d35ca..07bd73a6d72a 100644
>>>> --- a/include/linux/remoteproc.h
>>>> +++ b/include/linux/remoteproc.h
>>>> @@ -481,6 +481,7 @@ struct rproc_dump_segment {
>>>>   * @auto_boot: flag to indicate if remote processor should be auto-started
>>>>   * @dump_segments: list of segments in the firmware
>>>>   * @nb_vdev: number of vdev currently handled by rproc
>>>> + * @vdevbuf_mem_id: default memory-region index for allocating vdev buffers
>>>>   */
>>>>  struct rproc {
>>>>      struct list_head node;
>>>> @@ -514,6 +515,7 @@ struct rproc {
>>>>      bool auto_boot;
>>>>      struct list_head dump_segments;
>>>>      int nb_vdev;
>>>> +    u8 vdevbuf_mem_id;
>>>>      u8 elf_class;
>>>>  };
>>>>
>>>> --
>>>> 2.23.0
>>>>
>>

Thread overview: 22+ messages
2020-03-19 16:23 [PATCH v2 0/2] Misc. rproc fixes around fixed memory region support Suman Anna
2020-03-19 16:23 ` [PATCH v2 1/2] remoteproc: fall back to using parent memory pool if no dedicated available Suman Anna
2020-03-20  8:39   ` Arnaud POULIQUEN
2020-03-25 20:38   ` Mathieu Poirier
2020-03-25 23:39     ` Suman Anna
2020-03-27 21:09       ` Mathieu Poirier
2020-03-30 12:29         ` Arnaud POULIQUEN
2020-04-07 19:47           ` Suman Anna
2020-04-08  9:42             ` Arnaud POULIQUEN
2020-04-08 23:36               ` Suman Anna
2020-04-09  9:58                 ` Arnaud POULIQUEN
2020-04-09 13:20                   ` Suman Anna
2020-04-07 19:36         ` Suman Anna [this message]
2020-03-19 16:23 ` [PATCH v2 2/2] remoteproc: Fix and restore the parenting hierarchy for vdev Suman Anna
2020-04-14 18:43 ` [PATCH v2 0/2] Misc. rproc fixes around fixed memory region support Suman Anna