From: Ulf Hansson <ulf.hansson@linaro.org>
To: Viresh Kumar <viresh.kumar@linaro.org>,
	Daniel Baluta <daniel.baluta@oss.nxp.com>,
	Stephan Gerhold <stephan@gerhold.net>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Kevin Hilman <khilman@kernel.org>, Nishanth Menon <nm@ti.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Linux PM <linux-pm@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Niklas Cassel <nks@flawful.org>
Subject: Re: [PATCH v2] opp: Power on (virtual) power domains managed by the OPP core
Date: Fri, 28 Aug 2020 11:49:01 +0200
Message-ID: <CAPDyKFoJrTUEY3a1W+Xf=az_60L=iT_6OUtWmY-JJxLx5HTC6w@mail.gmail.com>
In-Reply-To: <20200828063511.y47ofywtu5qo57bq@vireshk-i7>

+ Daniel

> > Commit 17a8f868ae3e ("opp: Return genpd virtual devices from dev_pm_opp_attach_genpd()"):
> >  "The cpufreq drivers don't need to do runtime PM operations on
> >   the virtual devices returned by dev_pm_domain_attach_by_name() and so
> >   the virtual devices weren't shared with the callers of
> >   dev_pm_opp_attach_genpd() earlier.
> >
> >   But the IO device drivers would want to do that. This patch updates
> >   the prototype of dev_pm_opp_attach_genpd() to accept another argument
> >   to return the pointer to the array of genpd virtual devices."
>
> Not just that, I believe. There were also arguments that only the real
> consumer knows how to handle multiple power domains. For example, for a
> USB or camera module which can work in multiple modes, we may want to
> enable only one power domain in, say, slow mode and another power domain
> in fast mode. This kind of complex behavior/choice is better left to the
> end consumer rather than handled generically in the OPP core.
>
> > But the reason why I made this patch is that actually something *should*
> > enable the power domains even for the cpufreq case.
>
> Ulf, what do you think about this? IIRC from our previous discussions,
> someone asked me not to do so.

Yes, that's correct, I recall that now as well.

In some cases I have been told that, depending on the running use
case, one of the PM domains could stay off while the other needed to
be on. I was trying to find some thread in the archive, but I failed.
Sorry.
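
To illustrate what I mean with a rough sketch (made-up domain names, and
assuming the current dev_pm_opp_attach_genpd() prototype that hands back
the virtual devices), a consumer could do something like:

    /* Needs <linux/pm_opp.h> and <linux/pm_runtime.h>; domain names are made up. */
    static const char *pd_names[] = { "slow", "fast", NULL };
    struct device **virt_devs;
    struct opp_table *opp_table;

    opp_table = dev_pm_opp_attach_genpd(dev, pd_names, &virt_devs);
    if (IS_ERR(opp_table))
            return PTR_ERR(opp_table);

    /* "Slow" mode: keep only the first domain powered. */
    pm_runtime_get_sync(virt_devs[0]);

    /*
     * "Fast" mode would instead (or additionally) power the second one:
     * pm_runtime_get_sync(virt_devs[1]);
     */

If the OPP core powered on all of the domains unconditionally, this kind
of per-mode choice would no longer be possible.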

>
> > If every user of dev_pm_opp_attach_genpd() ends up creating these device
> > links we might as well manage those directly from the OPP core.
>
> Sure, I am all in for reducing code duplication, but ...
>
> > I cannot think of any use case where you would not want to manage those
> > power domains using device links. And if there is such a use case,
> > chances are good that this use case is so special that using the OPP API
> > to set the performance states would not work either. In either case,
> > this seems like something that should be discussed once there is such a
> > use case.
>
> The example I gave earlier shows a common case where we need to handle
> this at the end consumer, which still wants to use the OPP API.
>
> > At the moment, there are only two users of dev_pm_opp_attach_genpd():
> >
> >   - cpufreq (qcom-cpufreq-nvmem)
> >   - I/O (venus, recently added in linux-next [1])
> >
> > [1]: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=9a538b83612c8b5848bf840c2ddcd86dda1c8c76
> >
> > In fact, venus adds the device link exactly the same way as in my patch.
> > So we could move that to the OPP core, simplify the venus code and
> > remove the virt_devs parameter. That would be my suggestion.
> >
> > I can submit a v3 with that if you agree (or we take this patch as-is
> > and remove the parameter separately - I just checked and creating a
> > device link twice does not seem to cause any problems...)
>
> I normally tend to agree with the logic that we should only focus on
> what's upstream and not think of hypothetical cases which may never
> happen. But I was told that this is too common a scenario, and so it
> made sense to do it this way.
>
> Maybe Ulf can again throw some light here :)

There is another series that is being discussed [1], which could be
used by the consumer driver to help manage the device links. Maybe
that is the way we should go, to leave room for flexibility.

[1] [PATCH v3 0/2] Introduce multi PM domains helpers
    https://www.spinics.net/lists/kernel/msg3565672.html
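
For completeness, the device-link variant being discussed (what Stephan's
patch and the venus commit above do) looks roughly like this from a
consumer driver; this is only a sketch, and the exact flags and error
handling are illustrative:

    /* dev is the consumer; virt_devs[0] as returned by dev_pm_opp_attach_genpd(). */
    struct device_link *link;

    /* Power the domain whenever the consumer is runtime resumed. */
    link = device_link_add(dev, virt_devs[0],
                           DL_FLAG_RPM_ACTIVE | DL_FLAG_PM_RUNTIME |
                           DL_FLAG_STATELESS);
    if (!link)
            return -ENODEV;

Moving this into the OPP core removes the per-driver boilerplate, but it
also removes the per-domain choice discussed above; the helpers in [1]
could perhaps give us both.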
