From: Bjorn Andersson <bjorn.andersson@linaro.org>
To: Siddharth Gupta <sidgup@codeaurora.org>
Cc: ohad@wizery.com, linux-remoteproc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, psodagud@codeaurora.org,
	stable@vger.kernel.org
Subject: Re: [PATCH 3/3] remoteproc: core: Cleanup device in case of failure
Date: Thu, 27 May 2021 22:50:55 -0500
Message-ID: <YLBon0bBnzrizpDi@builder.lan>
In-Reply-To: <1621284349-22752-4-git-send-email-sidgup@codeaurora.org>

On Mon 17 May 15:45 CDT 2021, Siddharth Gupta wrote:

> When a failure occurs in rproc_add() it returns an error, but does
> not cleanup after itself. This change adds the failure path in such
> cases.
>
> Signed-off-by: Siddharth Gupta <sidgup@codeaurora.org>
> ---
>  drivers/remoteproc/remoteproc_core.c | 15 ++++++++++++---
>  1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
> index 45d09bf..6f5fa81 100644
> --- a/drivers/remoteproc/remoteproc_core.c
> +++ b/drivers/remoteproc/remoteproc_core.c
> @@ -2326,8 +2326,10 @@ int rproc_add(struct rproc *rproc)
>  		return ret;
>
>  	ret = device_add(dev);
> -	if (ret < 0)
> -		return ret;
> +	if (ret < 0) {
> +		put_device(dev);
> +		goto rproc_remove_cdev;
> +	}
>
>  	dev_info(dev, "%s is available\n", rproc->name);
>
> @@ -2338,7 +2340,7 @@ int rproc_add(struct rproc *rproc)
>  	if (rproc->auto_boot) {
>  		ret = rproc_trigger_auto_boot(rproc);
>  		if (ret < 0)
> -			return ret;
> +			goto rproc_remove_dev;
>  	}
>
>  	/* expose to rproc_get_by_phandle users */
> @@ -2347,6 +2349,13 @@ int rproc_add(struct rproc *rproc)
>  	mutex_unlock(&rproc_list_mutex);
>
>  	return 0;
> +
> +rproc_remove_dev:
> +	rproc_delete_debug_dir(rproc);
> +	device_del(dev);
> +rproc_remove_cdev:
> +	rproc_char_device_remove(rproc);

I'm confused, shouldn't this function just do cdev_del()?
__unregister_chrdev() seems to do more than unroll what cdev_add() did...

Apart from this, I think the patch looks good. Really nice to see you
tidy this up!

Regards,
Bjorn

> +	return ret;
>  }
>  EXPORT_SYMBOL(rproc_add);
>
> --
> Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
>
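For context, the direction Bjorn is pointing at would make the removal helper
the exact inverse of cdev_add(). A minimal sketch of that idea, assuming
rproc->cdev is the struct cdev that rproc_char_device_add() registers (as in
drivers/remoteproc/remoteproc_cdev.c); this illustrates the suggestion rather
than the code as merged:

/*
 * Sketch only: unwind what cdev_add() did and nothing more.  Assumes
 * rproc->cdev is the character device set up by rproc_char_device_add().
 * cdev_del() removes that one cdev; it does not also release the char
 * device region reserved once at module init, which is the extra work
 * __unregister_chrdev() would do.
 */
static void rproc_char_device_remove(struct rproc *rproc)
{
	cdev_del(&rproc->cdev);
}

With a helper like that, the rproc_remove_cdev label in the patch above would
undo exactly what the earlier rproc_char_device_add() call set up, keeping the
error path symmetric.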
Thread overview: 7+ messages

2021-05-17 20:45 [PATCH 0/3] remoteproc: core: Fixes for rproc cdev and add Siddharth Gupta
2021-05-17 20:45 ` [PATCH 1/3] remoteproc: core: Move cdev add before device add Siddharth Gupta
2021-05-28  3:43   ` Bjorn Andersson
2021-05-17 20:45 ` [PATCH 2/3] remoteproc: core: Move validate " Siddharth Gupta
2021-05-28  3:44   ` Bjorn Andersson
2021-05-17 20:45 ` [PATCH 3/3] remoteproc: core: Cleanup device in case of failure Siddharth Gupta
2021-05-28  3:50   ` Bjorn Andersson [this message]