linux-pm.vger.kernel.org archive mirror
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Stephan Gerhold <stephan@gerhold.net>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Kevin Hilman <khilman@kernel.org>,
	Ulf Hansson <ulf.hansson@linaro.org>, Nishanth Menon <nm@ti.com>,
	Stephen Boyd <sboyd@kernel.org>,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
	Niklas Cassel <nks@flawful.org>
Subject: Re: [PATCH v2] opp: Power on (virtual) power domains managed by the OPP core
Date: Fri, 28 Aug 2020 12:05:11 +0530	[thread overview]
Message-ID: <20200828063511.y47ofywtu5qo57bq@vireshk-i7> (raw)
In-Reply-To: <20200827114422.GA1784@gerhold.net>

On 27-08-20, 13:44, Stephan Gerhold wrote:
> Hmm. Actually I was using this parameter for initial testing, and forced
> on the power domains from the qcom-cpufreq-nvmem driver. For my v1 patch
> I wanted to enable the power domains in dev_pm_opp_set_rate(), so there
> using the virt_devs parameter was not possible.

Right, as we really do not want to enable it there; that should be
left for the real consumers to handle.

> On the other hand, creating the device links would be possible from the
> platform driver by using the parameter.

Right.

> > And so I think again if this patch should be picked instead of letting
> > the platform handle this ?
> 
> It seems like originally the motivation for the parameter was that
> cpufreq consumers do *not* need to power on the power domains:
> 
> Commit 17a8f868ae3e ("opp: Return genpd virtual devices from dev_pm_opp_attach_genpd()"):
>  "The cpufreq drivers don't need to do runtime PM operations on
>   the virtual devices returned by dev_pm_domain_attach_by_name() and so
>   the virtual devices weren't shared with the callers of
>   dev_pm_opp_attach_genpd() earlier.
> 
>   But the IO device drivers would want to do that. This patch updates
>   the prototype of dev_pm_opp_attach_genpd() to accept another argument
>   to return the pointer to the array of genpd virtual devices."

Not just that, I believe. There were also arguments that only the real
consumer knows how to handle multiple power domains. For example, for
a USB or camera module which can work in multiple modes, we may want
to enable only one power domain in, say, slow mode and another power
domain in fast mode. This kind of complex behavior/choice is better
left to the end consumer rather than trying to handle it generically
in the OPP core.
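To make that scenario concrete, a consumer-side sketch of what I mean
(the domain names "slow"/"fast" and the helpers are made up for
illustration, not from any real driver; error handling trimmed):

```c
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>

/* Hypothetical consumer that powers on only the power domain matching
 * its current operating mode. The OPP core cannot know which domain
 * the consumer wants active, so the choice stays in the driver.
 */
static const char *mode_pd_names[] = { "slow", "fast", NULL };
static struct device **virt_devs;

static int mode_pd_attach(struct device *dev)
{
	struct opp_table *opp_table;

	/* Attach both genpds; the virtual devices come back via virt_devs */
	opp_table = dev_pm_opp_attach_genpd(dev, mode_pd_names, &virt_devs);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	return 0;
}

static int mode_pd_select(bool fast)
{
	/* Power on only the domain for the selected mode */
	return pm_runtime_get_sync(fast ? virt_devs[1] : virt_devs[0]);
}
```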

> But the reason why I made this patch is that actually something *should*
> enable the power domains even for the cpufreq case.

Ulf, what do you think about this ? IIRC from our previous
discussions, someone asked me not to do so.

> If every user of dev_pm_opp_attach_genpd() ends up creating these device
> links we might as well manage those directly from the OPP core.

Sure, I am all for reducing code duplication, but ...

> I cannot think of any use case where you would not want to manage those
> power domains using device links. And if there is such a use case,
> chances are good that this use case is so special that using the OPP API
> to set the performance states would not work either. In either case,
> this seems like something that should be discussed once there is such a
> use case.

The example I gave earlier shows a common case where we need to handle
this at the end consumer, which still wants to use the OPP API.

> At the moment, there are only two users of dev_pm_opp_attach_genpd():
> 
>   - cpufreq (qcom-cpufreq-nvmem)
>   - I/O (venus, recently added in linux-next [1])
> 
> [1]: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=9a538b83612c8b5848bf840c2ddcd86dda1c8c76
> 
> In fact, venus adds the device link exactly the same way as in my patch.
> So we could move that to the OPP core, simplify the venus code and
> remove the virt_devs parameter. That would be my suggestion.
> 
> I can submit a v3 with that if you agree (or we take this patch as-is
> and remove the parameter separately - I just checked and creating a
> device link twice does not seem to cause any problems...)
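For reference, the venus-style device link mentioned above looks
roughly like this; whether the OPP core or the platform driver creates
it is exactly the open question here (a sketch with error handling
trimmed; "cx" is just an example domain name):

```c
#include <linux/device.h>
#include <linux/pm_opp.h>

/* Sketch: after attaching the genpd, tie the consumer's runtime PM to
 * the virtual genpd device with a stateless device link, so the power
 * domain is powered on whenever the consumer device is.
 */
static int attach_and_link(struct device *dev)
{
	const char *names[] = { "cx", NULL };
	struct device **virt_devs;
	struct opp_table *opp_table;
	struct device_link *dl;

	opp_table = dev_pm_opp_attach_genpd(dev, names, &virt_devs);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* consumer = dev, supplier = virtual genpd device */
	dl = device_link_add(dev, virt_devs[0],
			     DL_FLAG_RPM_ACTIVE | DL_FLAG_PM_RUNTIME |
			     DL_FLAG_STATELESS);
	if (!dl)
		return -ENODEV;

	return 0;
}
```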

I normally tend to agree with the logic that we should focus only on
what's upstream and not think of hypothetical cases which may never
happen. But I was told that this is too common a scenario, and so it
made sense to do it this way.

Maybe Ulf can again throw some light here :)

-- 
viresh


Thread overview: 12+ messages
2020-08-26  9:33 [PATCH v2] opp: Power on (virtual) power domains managed by the OPP core Stephan Gerhold
2020-08-27 10:01 ` Viresh Kumar
2020-08-27 11:44   ` Stephan Gerhold
2020-08-28  6:35     ` Viresh Kumar [this message]
2020-08-28  9:49       ` Ulf Hansson
2020-08-28  9:57       ` Stephan Gerhold
2020-09-11  8:34         ` Stephan Gerhold
2020-09-11  9:25           ` Viresh Kumar
2020-09-11 15:37             ` Stephan Gerhold
2020-08-31 12:14 ` Viresh Kumar
2020-08-31 15:49   ` Stephan Gerhold
2020-09-01  6:00     ` Viresh Kumar
