linux-arm-msm.vger.kernel.org archive mirror
From: Bjorn Andersson <bjorn.andersson@linaro.org>
To: Nikita Travkin <nikitos.tr@gmail.com>
Cc: Rob Herring <robh@kernel.org>,
	agross@kernel.org, linux-arm-msm@vger.kernel.org,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] dt-bindings: soc: qcom: Add bindings for Qualcomm Memshare service
Date: Tue, 13 Apr 2021 22:15:08 -0500	[thread overview]
Message-ID: <20210414031508.GZ1538589@yoga> (raw)
In-Reply-To: <bf20ff4b-1765-2bc8-d0de-bea675a1d090@gmail.com>

On Sat 10 Apr 03:05 CDT 2021, Nikita Travkin wrote:

> Hi, sorry for the late reply, I couldn't answer earlier.
> 
> 30.03.2021 19:40, Rob Herring ??????????:
> > On Fri, Mar 19, 2021 at 10:23:20PM +0500, nikitos.tr@gmail.com wrote:
> >> From: Nikita Travkin <nikitos.tr@gmail.com>
> >>
> >> Add DT bindings for memshare: QMI service that allocates
> >> memory per remote processor request.
> >>
> >> Signed-off-by: Nikita Travkin <nikitos.tr@gmail.com>
> >> ---
> >>  .../bindings/soc/qcom/qcom,memshare.yaml      | 109 ++++++++++++++++++
> >>  include/dt-bindings/soc/qcom,memshare.h       |  10 ++
> >>  2 files changed, 119 insertions(+)
> >>  create mode 100644 Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >>  create mode 100644 include/dt-bindings/soc/qcom,memshare.h
> >>
> >> diff --git a/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >> new file mode 100644
> >> index 000000000000..ebdf128b066c
> >> --- /dev/null
> >> +++ b/Documentation/devicetree/bindings/soc/qcom/qcom,memshare.yaml
> >> @@ -0,0 +1,109 @@
> >> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> >> +%YAML 1.2
> >> +---
> >> +$id: "http://devicetree.org/schemas/soc/qcom/qcom,memshare.yaml#"
> >> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> >> +
> >> +title: Qualcomm QMI Shared Memory Service
> > How many shared memory interfaces does Qcom have...
> >
> >> +
> >> +description: |
> >> +  This driver provides a QMI service that allows remote processors (like modem)
> >> +  to request additional memory. It is used for applications like GPS in modem.
> > If the memory region is defined in reserved-memory, how are you 
> > allocating additional memory? 
> 
> Initially the remoteproc is loaded into its own reserved-memory
> region, but qcom decided that they sometimes need more memory than
> that. The memshare driver in the msm8916 downstream tree seems to
> blindly allocate a DMA region for every request it gets. Additionally,
> for the clients described in the DT, it does the DMA allocation at
> boot time and never frees the region. They call this a "guaranteed"
> allocation.
> 
> On msm8916 only one "guaranteed" client seems to be used, so I decided
> to implement it with a reserved-memory node. Newer platforms seem to
> have more clients, but I think the driver can be easily extended to
> support dynamic allocation if someone really needs it.
> 

Is the "guaranteed" memory required to come from the reserved-memory
part of memory, or could it simply be allocated on demand as well (or
preallocated, but at a dynamic address)?

If these allocations always came from a reserved-memory region, then
adding a "qcom,memshare" compatible to the reserved-memory node itself
seems like a reasonable approach. But if dma_alloc is sufficient, and
there are cases with no "guaranteed" region, perhaps we should just
describe this as part of the remoteproc node (i.e. essentially
flipping the node/subnode in your current binding).


E.g. can we get away with simply adding an optional qcom,memshare-node
to the remoteproc binding and when that's present we make the Qualcomm
remoteproc drivers spawn the memshare handler and listen for requests
from that node?
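A rough sketch of the flipped layout suggested here, with all property
names and values purely illustrative rather than an accepted schema:

```dts
remoteproc-mpss {
	compatible = "qcom,msm8916-mss-pil";
	/* ... usual remoteproc properties ... */

	/* hypothetical: the mere presence of this subnode would make
	 * the remoteproc driver spawn the memshare handler and listen
	 * for requests from this remote */
	memshare {
		qcom,qrtr-node = <0>;

		gps {
			qcom,client-id = <0>;
			/* optional; absent would mean dynamic allocation */
			memory-region = <&gps_mem>;
		};
	};
};
```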

> I tried to explain that in the cover letter, but I think I made some
> mistake, as I don't see it in Patchwork.
> 
> >> +
> >> +maintainers:
> >> +  - Nikita Travkin <nikitos.tr@gmail.com>
> >> +
> >> +properties:
> >> +  compatible:
> >> +    const: qcom,memshare
> >> +
> >> +  qcom,legacy-client:
> >> +    $ref: /schemas/types.yaml#/definitions/phandle
> >> +    description: Phandle to a memshare client node used for legacy requests.
> >> +
> >> +  "#address-cells":
> >> +    const: 1
> >> +
> >> +  "#size-cells":
> >> +    const: 0
> >> +
> >> +patternProperties:
> >> +  "^.*@[0-9]+$":
> >> +    type: object
> >> +
> >> +    properties:
> >> +      reg:
> >> +        description: Proc-ID for clients in this node.
> > What's Proc-ID?
> 
> The requests from the remote nodes contain a client-id and a proc-id
> that are supposed to differentiate the clients. It's possible to
> find the values in the downstream DT or by observing what messages
> the memshare service receives (I left dev_dbg logging in
> the driver for that reason).
> 
> I think I should reword it to make this more apparent, maybe
> "Proc-ID that clients in this node send."?
> 

If this is a constant for each remote, and we make this a child of the
remoteproc node, perhaps encode the number in the remoteproc node
itself?

(We still need something in DT to state that we want a memshare for
a given platform/remoteproc)
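If the proc-id were folded into the remoteproc node, it could be a
simple property there (the property name below is purely hypothetical):

```dts
remoteproc-mpss {
	compatible = "qcom,msm8916-mss-pil";
	/* ... */
	/* hypothetical property; constant taken from
	 * dt-bindings/soc/qcom,memshare.h */
	qcom,memshare-proc-id = <MEMSHARE_PROC_MPSS_V01>;
};
```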

> >
> >> +
> >> +      qcom,qrtr-node:
> >> +        $ref: /schemas/types.yaml#/definitions/uint32
> >> +        description: Node from which the requests are expected.
> >> +
> >> +      "#address-cells":
> >> +        const: 1
> >> +
> >> +      "#size-cells":
> >> +        const: 0
> >> +
> >> +    patternProperties:
> >> +      "^.*@[0-9]+$":
> >> +        type: object
> >> +
> >> +        properties:
> >> +          reg:
> >> +            description: ID of this client.
> > How does one determine the ID?
> 
> As with proc-id, maybe reword to "ID that this client sends."?
> 
> I will change those in v2. I still expect comments on the driver
> itself, so I'll wait for those before submitting a version with just
> a couple of lines changed.
> 
> >
> >> +
> >> +          memory-region:
> >> +            $ref: /schemas/types.yaml#/definitions/phandle
> >> +            description: |
> >> +              Reserved memory region that should be used for allocation.
> >> +
> >> +        required:
> >> +          - reg
> >> +
> >> +    required:
> >> +      - reg
> >> +      - qcom,qrtr-node
> >> +
> >> +required:
> >> +  - compatible
> >> +
> >> +additionalProperties: false
> >> +
> >> +examples:
> >> +  - |
> >> +    #include <dt-bindings/soc/qcom,memshare.h>
> >> +
> >> +    reserved-memory {
> >> +
> >> +      #address-cells = <2>;
> >> +      #size-cells = <2>;
> >> +
> >> +      gps_mem: gps@93c00000 {
> >> +        reg = <0x0 0x93c00000 0x0 0x200000>;
> >> +        no-map;
> > We support 'compatible' in reserved-memory nodes, can you simplify the 
> > binding and put everything in here?
> 
> If I understand this correctly, each reserved-memory node will
> then load a new instance of memshare. Since the driver registers a
> QMI service that handles multiple clients, there should be only one
> instance.

You could work around this in the driver implementation by refcounting
a single instance shared among all the nodes.
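A minimal sketch of that refcounting workaround; every name here
(memshare_svc, memshare_get/put) is illustrative, not the actual driver
API. The first probe creates and would register the QMI service, later
probes only take a reference, and the last put tears it down:

```c
#include <assert.h>
#include <stdlib.h>

struct memshare_svc {
	int refcount;
	/* QMI handle and per-client state would live here */
};

static struct memshare_svc *svc;	/* the single shared instance */

/* First caller creates the service; later callers just take a
 * reference to the existing one. */
struct memshare_svc *memshare_get(void)
{
	if (!svc) {
		svc = calloc(1, sizeof(*svc));
		if (!svc)
			return NULL;
		/* qmi_handle_init() + service registration once, here */
	}
	svc->refcount++;
	return svc;
}

/* Dropping the last reference unregisters and frees the service. */
void memshare_put(struct memshare_svc *s)
{
	if (--s->refcount == 0) {
		/* qmi_handle_release() here */
		free(s);
		svc = NULL;
	}
}
```

Each reserved-memory instance's probe would then call memshare_get()
and register its own client region, while the QMI service itself
exists only once however many nodes carry the compatible.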

> Additionally, as I mentioned earlier, some clients may not
> need reserved-memory at all.
> 

This on the other hand, makes me feel like we shouldn't go that route.

Regards,
Bjorn

> >> +      };
> >> +    };
> >> +
> >> +    memshare {
> >> +      compatible = "qcom,memshare";
> >> +      qcom,legacy-client = <&memshare_gps>;
> >> +
> >> +      #address-cells = <1>;
> >> +      #size-cells = <0>;
> >> +
> >> +      mpss@0 {
> >> +        reg = <MEMSHARE_PROC_MPSS_V01>;
> >> +        qcom,qrtr-node = <0>;
> >> +
> >> +        #address-cells = <1>;
> >> +        #size-cells = <0>;
> >> +
> >> +        memshare_gps: gps@0 {
> >> +          reg = <0>;
> >> +          memory-region = <&gps_mem>;
> >> +        };
> >> +      };
> >> +    };
> >> +
> >> +...
> >> diff --git a/include/dt-bindings/soc/qcom,memshare.h b/include/dt-bindings/soc/qcom,memshare.h
> >> new file mode 100644
> >> index 000000000000..4cef1ef75d09
> >> --- /dev/null
> >> +++ b/include/dt-bindings/soc/qcom,memshare.h
> >> @@ -0,0 +1,10 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +
> >> +#ifndef __DT_QCOM_MEMSHARE_H__
> >> +#define __DT_QCOM_MEMSHARE_H__
> >> +
> >> +#define MEMSHARE_PROC_MPSS_V01 0
> >> +#define MEMSHARE_PROC_ADSP_V01 1
> >> +#define MEMSHARE_PROC_WCNSS_V01 2
> >> +
> >> +#endif /* __DT_QCOM_MEMSHARE_H__ */
> >> -- 
> >> 2.27.0
> >>
> 

Thread overview: 7+ messages
2021-03-19 17:23 [PATCH 0/2] soc: qcom: Add Qualcomm Memshare QMI service nikitos.tr
2021-03-19 17:23 ` [PATCH 1/2] dt-bindings: soc: qcom: Add bindings for Qualcomm Memshare service nikitos.tr
2021-03-30 14:40   ` Rob Herring
2021-04-10  8:05     ` Nikita Travkin
2021-04-14  3:15       ` Bjorn Andersson [this message]
2021-04-15 12:03         ` Nikita Travkin
2021-03-19 17:23 ` [PATCH 2/2] soc: qcom: Add Qualcomm Memshare QMI service nikitos.tr
