linux-pm.vger.kernel.org archive mirror
From: Ulf Hansson <ulf.hansson@linaro.org>
To: Stephan Gerhold <stephan@gerhold.net>
Cc: Stephan Gerhold <stephan.gerhold@kernkonzept.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Andy Gross <agross@kernel.org>,
	Bjorn Andersson <andersson@kernel.org>,
	Konrad Dybcio <konrad.dybcio@linaro.org>,
	Ilia Lin <ilia.lin@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Rob Herring <robh+dt@kernel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
	Conor Dooley <conor+dt@kernel.org>,
	linux-pm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
	stable@vger.kernel.org
Subject: Re: [PATCH 1/4] cpufreq: qcom-nvmem: Enable virtual power domain devices
Date: Tue, 17 Oct 2023 23:50:21 +0200	[thread overview]
Message-ID: <CAPDyKFpwZdx=vyuAZSv1WGYCyiohfnt87LM1jw=fhKsF5Ks1Yw@mail.gmail.com> (raw)
In-Reply-To: <ZS70aZbP33fkf9dP@gerhold.net>

[...]

> >
> > *) The pm_runtime_resume_and_get() works for QCS404 as a fix. It also
> > works fine when there is only one RPMPD that manages the performance
> > scaling.
> >
>
> Agreed.
>
> > **) In cases where we have multiple PM domains to scale performance
> > for, using pm_runtime_resume_and_get() would work fine too. Possibly
> > we want to use device_link_add() to set up suppliers, to avoid calling
> > pm_runtime_resume_and_get() for each and every device.
> >
>
> Hm. What would you use as "supplied" device? The CPU device I guess?

The consumer would be the device that is used to probe the cpufreq
driver, and the supplier(s) would be the virtual devices returned by
genpd when attaching.
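
Roughly something along these lines (just an untested sketch to
illustrate the idea; "pdev" stands for the platform device that probes
the cpufreq driver and "virt_dev" for one of the virtual devices
returned when attaching, for example via dev_pm_domain_attach_by_name()):

	struct device_link *link;

	/*
	 * The probing device is the consumer and the virtual genpd
	 * device the supplier. DL_FLAG_PM_RUNTIME propagates runtime
	 * PM from the consumer to the supplier, and DL_FLAG_RPM_ACTIVE
	 * resumes the supplier and takes a reference right away.
	 */
	link = device_link_add(&pdev->dev, virt_dev,
			       DL_FLAG_RPM_ACTIVE | DL_FLAG_PM_RUNTIME |
			       DL_FLAG_STATELESS);
	if (!link)
		return -EINVAL;

	/* The link is stateless, so it must be dropped manually in remove(): */
	device_link_del(link);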

>
> I'm looking again at my old patch from 2020 where I implemented this
> with device links in the OPP core. Seems like you suggested this back
> then too :)
>
>   https://lore.kernel.org/linux-pm/20200826093328.88268-1-stephan@gerhold.net/
>
> However, for the special case of the CPU I think we don't gain any code
> simplification from using device links. There will just be a single
> resume of each virtual genpd device, as well as one put during remove().
> Exactly the same applies when using device links: we need to set up the
> device links once for each virtual genpd device, and clean them up again
> during remove().
>
> Or can you think of another advantage of using device links?

No, not at this point.

So, in this particular case it may not matter that much. But when the
number of PM domains starts to vary between platforms, device links
could be a nice way to abstract some of the logic. I guess starting
without device links and seeing how it evolves could be a way forward
too.
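
For reference, the variant without device links that we have been
discussing would look roughly like this in the cpufreq driver's probe
path (again only a sketch; "virt_devs" and "num_pds" are placeholder
names for however the driver stores the attached virtual genpd devices):

	int i, ret;

	/*
	 * Keep each virtual genpd device runtime resumed, so that the
	 * performance state votes made through it are not dropped.
	 */
	for (i = 0; i < num_pds; i++) {
		ret = pm_runtime_resume_and_get(virt_devs[i]);
		if (ret) {
			/* Drop the references taken so far. */
			while (--i >= 0)
				pm_runtime_put(virt_devs[i]);
			return ret;
		}
	}

	/* ...and the matching cleanup in remove(): */
	for (i = 0; i < num_pds; i++)
		pm_runtime_put(virt_devs[i]);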

>
> > ***) Due to the above, we don't need a new mechanism to avoid
> > "caching" performance states for genpd. At least for the time being.
> >
>
> Right. Given *) and **) I'll prepare a v2 of $subject patch with the
> remove() cleanup fixed and an improved commit description.
>
> I'll wait for a bit in case you have more thoughts about the device
> links.

One more thing that crossed my mind, though. In the rpmpd case, is
there anything we need to care about during system suspend/resume that
isn't already taken care of correctly?

Kind regards
Uffe

Thread overview: 23+ messages
2023-09-12  9:40 [PATCH 0/4] cpufreq: Add basic cpufreq scaling for Qualcomm MSM8909 Stephan Gerhold
2023-09-12  9:40 ` [PATCH 1/4] cpufreq: qcom-nvmem: Enable virtual power domain devices Stephan Gerhold
2023-09-13 10:56   ` Ulf Hansson
2023-09-13 12:26     ` Stephan Gerhold
2023-09-29 13:14       ` Ulf Hansson
2023-09-29 17:01         ` Stephan Gerhold
2023-10-12 11:33           ` Ulf Hansson
2023-10-12 18:43             ` Stephan Gerhold
2023-10-16 14:47               ` Ulf Hansson
2023-10-17 20:54                 ` Stephan Gerhold
2023-10-17 21:50                   ` Ulf Hansson [this message]
2023-10-18  7:13                     ` Stephan Gerhold
2023-11-28 12:41                 ` [PATCH 1/4] cpufreq: qcom-nvmem: Handling multiple power domains Stephan Gerhold
2023-12-04 10:59                   ` Ulf Hansson
2023-09-12  9:40 ` [PATCH 2/4] cpufreq: dt: platdev: Add MSM8909 to blocklist Stephan Gerhold
2023-09-12  9:53   ` Konrad Dybcio
2023-09-28  6:52   ` Viresh Kumar
2023-09-12  9:40 ` [PATCH 3/4] dt-bindings: cpufreq: qcom-nvmem: Document MSM8909 Stephan Gerhold
2023-09-12 18:29   ` Rob Herring
2023-09-28  6:54     ` Viresh Kumar
2023-09-12  9:40 ` [PATCH 4/4] cpufreq: qcom-nvmem: Add MSM8909 Stephan Gerhold
2023-09-12  9:59   ` Konrad Dybcio
2023-09-12 11:08     ` Stephan Gerhold
