From: "Rafael J. Wysocki" <rjw@sisk.pl>
To: Ohad Ben-Cohen <ohad@wizery.com>,
Chuansheng Liu <chuansheng.liu@intel.com>
Cc: Li Fei <fei.li@intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 5/5] hwspinlock/core: call pm_runtime_put in pm_runtime_get_sync failed case
Date: Fri, 05 Apr 2013 13:39:58 +0200 [thread overview]
Message-ID: <1988602.WZduyKA6Mt@vostro.rjw.lan> (raw)
In-Reply-To: <CAK=WgbbDhfLa1uQn3v0Lu5iOLL5QgGxKuOVrq9AKD9fLtb42FQ@mail.gmail.com>
On Friday, April 05, 2013 09:27:40 AM Ohad Ben-Cohen wrote:
> Hi Li,
>
> On Thu, Feb 28, 2013 at 10:02 AM, Li Fei <fei.li@intel.com> wrote:
> >
> > Even in the failed case of pm_runtime_get_sync, the usage_count
> > is incremented. In order to keep the usage_count correct and
> > runtime power management behaving properly, call
> > pm_runtime_put(_sync) in such a case.
>
> Is it better, then, to call pm_runtime_put_noidle instead? This way
> we're sure to only take care of usage_count without ever calling any
> underlying pm handler.
Both would break code that does
pm_runtime_get_sync(dev);
<device access>
pm_runtime_put(dev);
without checking the result of pm_runtime_get_sync() - which BTW is completely
unnecessary in the majority of cases.
So no, it's not a good idea at all.
Thanks,
Rafael
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
Thread overview: 33+ messages
2013-02-28 7:37 [PATCH 1/5] regmap: irq: call pm_runtime_put in pm_runtime_get_sync failed case Li Fei
2013-02-28 7:44 ` [PATCH 2/5] mmc: core: call pm_runtime_put_sync " Li Fei
2013-02-28 7:51 ` [PATCH 3/5] wl1251: " Li Fei
2013-02-28 7:57 ` [PATCH 4/5] usb: " Li Fei
2013-02-28 8:02 ` [PATCH 5/5] hwspinlock/core: call pm_runtime_put " Li Fei
2013-04-05 6:27 ` Ohad Ben-Cohen
2013-04-05 11:39 ` Rafael J. Wysocki [this message]
2013-04-05 11:42 ` Rafael J. Wysocki
2013-04-05 13:13 ` Li, Fei
2013-04-05 13:20 ` [PATCH 5/5 V2] " Li Fei
2013-04-05 14:46 ` Ohad Ben-Cohen
2013-02-28 8:37 ` [PATCH 4/5] usb: call pm_runtime_put_sync " Lan Tianyu
2013-02-28 9:00 ` Li, Fei
2013-02-28 15:14 ` Alan Stern
2013-02-28 9:06 ` [PATCH 4/5 V2] " Li Fei
2013-02-28 15:17 ` Alan Stern
2013-03-01 0:38 ` Liu, Chuansheng
2013-03-01 0:50 ` Rafael J. Wysocki
2013-03-01 0:59 ` Liu, Chuansheng
2013-03-01 2:18 ` Rafael J. Wysocki
2013-03-01 2:07 ` Liu, Chuansheng
2013-03-01 2:22 ` Rafael J. Wysocki
2013-03-01 2:23 ` Liu, Chuansheng
2013-03-01 2:57 ` [PATCH 4/5 V3] usb: call pm_runtime_put_noidle " Li Fei
2013-03-01 2:59 ` Li Fei
2013-02-28 8:18 ` [PATCH 3/5] wl1251: call pm_runtime_put_sync " Luciano Coelho
2013-03-05 8:51 ` Luciano Coelho
2013-04-07 10:39 ` [PATCH 2/5] mmc: core: " Ohad Ben-Cohen
2013-04-08 1:36 ` Li, Fei
2013-04-08 1:36 ` [PATCH 2/5 V2] mmc: core: call pm_runtime_put_noidle " Li Fei
2013-04-08 12:48 ` Ohad Ben-Cohen
2013-04-12 18:15 ` Chris Ball
2013-03-01 6:55 ` [PATCH 1/5] regmap: irq: call pm_runtime_put " Mark Brown