From: Ben Levinsky <BLEVINSK@xilinx.com>
To: Mathieu Poirier <mathieu.poirier@linaro.org>
Cc: "devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	"linux-remoteproc@vger.kernel.org"
	<linux-remoteproc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>,
	Michal Simek <michals@xilinx.com>,
	"Ed T. Mooring" <emooring@xilinx.com>
Subject: Re: [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver
Date: Mon, 15 Mar 2021 21:32:40 +0000
Message-ID: <09895E3D-AED4-4DBF-A48C-684271275D49@xilinx.com>
In-Reply-To: <20210315173724.GB1342614@xps15>



-----Original Message-----
From: Mathieu Poirier <mathieu.poirier@linaro.org>
Date: Monday, March 15, 2021 at 10:37 AM
To: Ben Levinsky <BLEVINSK@xilinx.com>
Cc: "devicetree@vger.kernel.org" <devicetree@vger.kernel.org>, "linux-remoteproc@vger.kernel.org" <linux-remoteproc@vger.kernel.org>, "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>, Michal Simek <michals@xilinx.com>, "Ed T. Mooring" <emooring@xilinx.com>
Subject: Re: [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver

    On Thu, Mar 11, 2021 at 11:49:13PM +0000, Ben Levinsky wrote:
    > Hi Mathieu
    > 
    > -----Original Message-----
    > From: Mathieu Poirier <mathieu.poirier@linaro.org>
    > Date: Tuesday, March 9, 2021 at 8:53 AM
    > To: Ben Levinsky <BLEVINSK@xilinx.com>
    > Cc: "devicetree@vger.kernel.org" <devicetree@vger.kernel.org>, "linux-remoteproc@vger.kernel.org" <linux-remoteproc@vger.kernel.org>, "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>, Michal Simek <michals@xilinx.com>
    > Subject: Re: [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver
    > 
    >     [...]
    > 
    >     > +
    >     > +/**
    >     > + * zynqmp_r5_probe - Probes ZynqMP R5 processor device node
    >     > + *		       this is called for each individual R5 core to
    >     > + *		       set up mailbox, Xilinx platform manager unique ID,
    >     > + *		       add to rproc core
    >     > + *
    >     > + * @pdev: domain platform device for current R5 core
    >     > + * @node: pointer of the device node for current R5 core
    >     > + * @rpu_mode: mode to configure RPU, split or lockstep
    >     > + *
    >     > + * Return: 0 for success, negative value for failure.
    >     > + */
    >     > +static struct zynqmp_r5_rproc *zynqmp_r5_probe(struct platform_device *pdev,
    >     > +					       struct device_node *node,
    >     > +					       enum rpu_oper_mode rpu_mode)
    >     > +{
    >     > +	int ret, num_banks;
    >     > +	struct device *dev = &pdev->dev;
    >     > +	struct rproc *rproc_ptr;
    >     > +	struct zynqmp_r5_rproc *z_rproc;
    >     > +	struct device_node *r5_node;
    >     > +
    >     > +	/* Allocate remoteproc instance */
    >     > +	rproc_ptr = devm_rproc_alloc(dev, dev_name(dev), &zynqmp_r5_rproc_ops,
    >     > +				     NULL, sizeof(struct zynqmp_r5_rproc));
    >     > +	if (!rproc_ptr) {
    >     > +		ret = -ENOMEM;
    >     > +		goto error;
    >     > +	}
    >     > +
    >     > +	rproc_ptr->auto_boot = false;
    >     > +	z_rproc = rproc_ptr->priv;
    >     > +	z_rproc->rproc = rproc_ptr;
    >     > +	r5_node = z_rproc->rproc->dev.parent->of_node;
    >     > +
    >     > +	/* Set up DMA mask */
    >     > +	ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
    >     > +	if (ret)
    >     > +		goto error;
    >     > +
    >     > +	/* Get R5 power domain node */
    >     > +	ret = of_property_read_u32(node, "power-domain", &z_rproc->pnode_id);
    >     > +	if (ret)
    >     > +		goto error;
    >     > +
    >     > +	ret = r5_set_mode(z_rproc, rpu_mode);
    >     > +	if (ret)
    >     > +		goto error;
    >     > +
    >     > +	if (of_property_read_bool(node, "mboxes")) {
    >     > +		ret = zynqmp_r5_setup_mbox(z_rproc, node);
    >     > +		if (ret)
    >     > +			goto error;
    >     > +	}
    >     > +
    >     > +	/* go through TCM banks for r5 node */
    >     > +	num_banks = of_count_phandle_with_args(r5_node, BANK_LIST_PROP, NULL);
    >     > +	if (num_banks <= 0) {
    >     > +		dev_err(dev, "need to specify TCM banks\n");
    >     > +		ret = -EINVAL;
    >     > +		goto error;
    >     > +	}
    >     > +
    >     > +	if (num_banks > NUM_SRAMS) {
    >     > +		dev_err(dev, "max number of srams is %d. given: %d \r\n",
    >     > +			NUM_SRAMS, num_banks);
    >     > +		ret = -EINVAL;
    >     > +		goto error;
    >     > +	}
    >     > +
    >     > +	/* construct collection of srams used by the current R5 core */
    >     > +	for (; num_banks; num_banks--) {
    >     > +		struct resource rsc;
    >     > +		struct device_node *dt_node;
    >     > +		resource_size_t size;
    >     > +		int i;
    >     > +
    >     > +		dt_node = of_parse_phandle(r5_node, BANK_LIST_PROP, i);
    > 
    >     Variable @i is not initialised but it is used as an index to retrieve a handle
    >     to the sram banks.  That code _should_ have failed frequently or at least have
    >     yielded abnormal results often enough to be noticed.  Why wasn't it the case?
    > 
    >     I will stop here for the moment.
    > 
    > [Ben]
    > Yes, this should be initialized. The reason this got through is that i defaults to 0
    > and the 0th bank housed the required data, so the case of the SRAMs that can be
    > written to (0xFFE20000 here, with split mode on R5-0) was not caught.
    > 

    Here @i is a variable allocated on the stack and as such it is guaranteed to be
    garbage on initialisation - it will do anything but default to 0.

Ok.
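
For the record, a quick standalone illustration of that point (a made-up example, not
driver code): an automatic variable that is read before being assigned holds whatever
happened to be on the stack, so it cannot be relied on to be 0, and compilers will
usually flag the direct use.

        #include <stdio.h>

        int main(void)
        {
                int i;  /* automatic storage: value is indeterminate, not 0 */

                /*
                 * Reading 'i' here is undefined behaviour; gcc or clang with
                 * -Wall will typically warn that 'i' is used uninitialized.
                 */
                printf("i = %d\n", i);
                return 0;
        }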

    > Instead of i, I will use
    > 
    >                 sram_node = of_parse_phandle(node, BANK_LIST_PROP,              
    >                                              num_banks - 1); 

    Do you have to start with the last bank?  If memory serves me well, that wasn't the
    case in the previous revisions.  Why not go back to the implementation you had
    in V25?

Makes sense. Will revert as suggested.
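
For clarity, roughly the shape I plan to go back to - this is only a sketch assuming the
V25-style forward iteration (r5_node, BANK_LIST_PROP, num_banks and the ret/goto error
path come from the snippet quoted above; i would be declared alongside ret and
num_banks, and the per-bank handling is elided):

        for (i = 0; i < num_banks; i++) {
                struct device_node *dt_node;

                dt_node = of_parse_phandle(r5_node, BANK_LIST_PROP, i);
                if (!dt_node) {
                        ret = -EINVAL;
                        goto error;
                }

                /*
                 * ... existing per-bank handling (resource lookup, mapping and
                 * bookkeeping for this core) stays as it is ...
                 */

                of_node_put(dt_node);
        }

With i as an explicitly initialized loop counter, the banks are walked in the order they
appear in the device tree.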



Thread overview: 19+ messages
2021-02-23 15:44 [PATCH v26 0/5] Add initial zynqmp R5 remoteproc driver Ben Levinsky
2021-02-23 15:44 ` [PATCH v26 1/5] firmware: xilinx: Add ZynqMP firmware ioctl enums for RPU configuration Ben Levinsky
2021-02-23 15:44 ` [PATCH v26 2/5] firmware: xilinx: Add shutdown/wakeup APIs Ben Levinsky
2021-02-23 15:44 ` [PATCH v26 3/5] firmware: xilinx: Add RPU configuration APIs Ben Levinsky
2021-02-23 15:44 ` [PATCH v26 4/5] dt-bindings: remoteproc: Add documentation for ZynqMP R5 rproc bindings Ben Levinsky
2021-02-23 15:44 ` [PATCH v26 5/5] remoteproc: Add initial zynqmp R5 remoteproc driver Ben Levinsky
2021-03-08 19:00   ` Mathieu Poirier
2021-03-11 23:47     ` Ben Levinsky
2021-03-15 17:25       ` Mathieu Poirier
2021-03-15 21:42         ` Ben Levinsky
2021-03-17 16:22           ` Mathieu Poirier
2021-03-09 16:53   ` Mathieu Poirier
2021-03-11 23:49     ` Ben Levinsky
2021-03-15 17:37       ` Mathieu Poirier
2021-03-15 21:32         ` Ben Levinsky [this message]
2021-03-17 16:27           ` Mathieu Poirier
2021-03-19 17:46             ` Ben Levinsky
2021-03-22  0:10             ` Ben Levinsky
2021-02-23 16:34 ` [PATCH v26 0/5] " Mathieu Poirier
