From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: unlisted-recipients:; (no To-header on input)
Cc: linuxarm@huawei.com, mauro.chehab@huawei.com,
	Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
	Andy Gross <agross@kernel.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Mauro Carvalho Chehab <mchehab@kernel.org>,
	Robert Foss <robert.foss@linaro.org>,
	Todor Tomov <todor.too@gmail.com>,
	linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-media@vger.kernel.org
Subject: [PATCH v5 12/25] media: camss: use pm_runtime_resume_and_get()
Date: Thu,  6 May 2021 17:25:50 +0200
Message-ID: <d4ebcafa976ee6ea8328ddcbb0f3627938a81253.1620314616.git.mchehab+huawei@kernel.org>
In-Reply-To: <cover.1620314616.git.mchehab+huawei@kernel.org>

Commit dd8088d5a896 ("PM: runtime: Add pm_runtime_resume_and_get to deal with usage counter")
added pm_runtime_resume_and_get(), which automatically decrements
dev->power.usage_count when the resume fails.

Use the new API in order to clean up the error-check logic.
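
For illustration, a minimal sketch of the conversion this patch
applies (example_resume() is a made-up helper used only here; the
real callers are the drivers' *_set_power() and vfe_get() paths
below):

	/* needs <linux/pm_runtime.h> */
	static int example_resume(struct device *dev)
	{
		int ret;

		/*
		 * Old pattern: pm_runtime_get_sync() leaves the usage
		 * count incremented even when the resume fails, so the
		 * error path had to drop it by hand:
		 *
		 *	ret = pm_runtime_get_sync(dev);
		 *	if (ret < 0) {
		 *		pm_runtime_put_sync(dev);
		 *		return ret;
		 *	}
		 */

		/* New pattern: the count is dropped on failure for us. */
		ret = pm_runtime_resume_and_get(dev);
		if (ret < 0)
			return ret;

		return 0;
	}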

Reviewed-by: Robert Foss <robert.foss@linaro.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
---
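A note on the camss-vfe.c hunk below: unlike the other three drivers,
vfe_get() must still power its PM domain off when the resume fails.
Because pm_runtime_resume_and_get() has already dropped the usage
count at that point, the goto now skips pm_runtime_put_sync() and
lands on a new error_domain_off label. A sketch of the resulting
unwind order (surrounding code elided):

	ret = pm_runtime_resume_and_get(vfe->camss->dev);
	if (ret < 0)
		goto error_domain_off;	/* usage count already dropped */
	/* ... later failures jump to error_pm_runtime_get ... */

error_pm_runtime_get:
	pm_runtime_put_sync(vfe->camss->dev);
error_domain_off:
	vfe->ops->pm_domain_off(vfe);
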
 drivers/media/platform/qcom/camss/camss-csid.c   | 6 ++----
 drivers/media/platform/qcom/camss/camss-csiphy.c | 6 ++----
 drivers/media/platform/qcom/camss/camss-ispif.c  | 6 ++----
 drivers/media/platform/qcom/camss/camss-vfe.c    | 5 +++--
 4 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/media/platform/qcom/camss/camss-csid.c b/drivers/media/platform/qcom/camss/camss-csid.c
index cc11fbfdae13..d2a7f2a64f26 100644
--- a/drivers/media/platform/qcom/camss/camss-csid.c
+++ b/drivers/media/platform/qcom/camss/camss-csid.c
@@ -156,11 +156,9 @@ static int csid_set_power(struct v4l2_subdev *sd, int on)
 	int ret;
 
 	if (on) {
-		ret = pm_runtime_get_sync(dev);
-		if (ret < 0) {
-			pm_runtime_put_sync(dev);
+		ret = pm_runtime_resume_and_get(dev);
+		if (ret < 0)
 			return ret;
-		}
 
 		ret = regulator_enable(csid->vdda);
 		if (ret < 0) {
diff --git a/drivers/media/platform/qcom/camss/camss-csiphy.c b/drivers/media/platform/qcom/camss/camss-csiphy.c
index b3c3bf19e522..8e18b8e668cf 100644
--- a/drivers/media/platform/qcom/camss/camss-csiphy.c
+++ b/drivers/media/platform/qcom/camss/camss-csiphy.c
@@ -197,11 +197,9 @@ static int csiphy_set_power(struct v4l2_subdev *sd, int on)
 	if (on) {
 		int ret;
 
-		ret = pm_runtime_get_sync(dev);
-		if (ret < 0) {
-			pm_runtime_put_sync(dev);
+		ret = pm_runtime_resume_and_get(dev);
+		if (ret < 0)
 			return ret;
-		}
 
 		ret = csiphy_set_clock_rates(csiphy);
 		if (ret < 0) {
diff --git a/drivers/media/platform/qcom/camss/camss-ispif.c b/drivers/media/platform/qcom/camss/camss-ispif.c
index 37611c8861da..d9907742ba79 100644
--- a/drivers/media/platform/qcom/camss/camss-ispif.c
+++ b/drivers/media/platform/qcom/camss/camss-ispif.c
@@ -372,11 +372,9 @@ static int ispif_set_power(struct v4l2_subdev *sd, int on)
 			goto exit;
 		}
 
-		ret = pm_runtime_get_sync(dev);
-		if (ret < 0) {
-			pm_runtime_put_sync(dev);
+		ret = pm_runtime_resume_and_get(dev);
+		if (ret < 0)
 			goto exit;
-		}
 
 		ret = camss_enable_clocks(ispif->nclocks, ispif->clock, dev);
 		if (ret < 0) {
diff --git a/drivers/media/platform/qcom/camss/camss-vfe.c b/drivers/media/platform/qcom/camss/camss-vfe.c
index 15695fd466c4..cf743e61f798 100644
--- a/drivers/media/platform/qcom/camss/camss-vfe.c
+++ b/drivers/media/platform/qcom/camss/camss-vfe.c
@@ -584,9 +584,9 @@ static int vfe_get(struct vfe_device *vfe)
 		if (ret < 0)
 			goto error_pm_domain;
 
-		ret = pm_runtime_get_sync(vfe->camss->dev);
+		ret = pm_runtime_resume_and_get(vfe->camss->dev);
 		if (ret < 0)
-			goto error_pm_runtime_get;
+			goto error_domain_off;
 
 		ret = vfe_set_clock_rates(vfe);
 		if (ret < 0)
@@ -620,6 +620,7 @@ static int vfe_get(struct vfe_device *vfe)
 
 error_pm_runtime_get:
 	pm_runtime_put_sync(vfe->camss->dev);
+error_domain_off:
 	vfe->ops->pm_domain_off(vfe);
 
 error_pm_domain:
-- 
2.30.2

