linux-arm-msm.vger.kernel.org archive mirror
From: Stephan Gerhold <stephan.gerhold@kernkonzept.com>
To: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Andy Gross <agross@kernel.org>,
	Bjorn Andersson <andersson@kernel.org>,
	Konrad Dybcio <konrad.dybcio@linaro.org>,
	Ilia Lin <ilia.lin@kernel.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Rob Herring <robh+dt@kernel.org>,
	Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org>,
	Conor Dooley <conor+dt@kernel.org>,
	linux-pm@vger.kernel.org, linux-arm-msm@vger.kernel.org,
	linux-kernel@vger.kernel.org, devicetree@vger.kernel.org,
	Stephan Gerhold <stephan@gerhold.net>,
	stable@vger.kernel.org
Subject: Re: [PATCH v2 2/3] cpufreq: qcom-nvmem: Enable virtual power domain devices
Date: Tue, 24 Oct 2023 15:07:02 +0200	[thread overview]
Message-ID: <ZTfBZqBwqskhFydZ@kernkonzept.com> (raw)
In-Reply-To: <CAPDyKFrcV8iJnJ904j1jkx0E8PaOLmiTZ7CKk7EV8qQ71AZdbA@mail.gmail.com>

On Tue, Oct 24, 2023 at 02:49:32PM +0200, Ulf Hansson wrote:
> On Tue, 24 Oct 2023 at 14:03, Stephan Gerhold
> <stephan.gerhold@kernkonzept.com> wrote:
> >
> > On Thu, Oct 19, 2023 at 01:26:19PM +0200, Ulf Hansson wrote:
> > > On Thu, 19 Oct 2023 at 12:24, Ulf Hansson <ulf.hansson@linaro.org> wrote:
> > > >
> > > > On Wed, 18 Oct 2023 at 10:06, Stephan Gerhold
> > > > <stephan.gerhold@kernkonzept.com> wrote:
> > > > >
> > > > > The genpd core caches performance state votes from devices that are
> > > > > runtime suspended as of commit 3c5a272202c2 ("PM: domains: Improve
> > > > > runtime PM performance state handling"). They get applied once the
> > > > > device becomes active again.
> > > > >
> > > > > To attach the power domains needed by qcom-cpufreq-nvmem the OPP core
> > > > > calls genpd_dev_pm_attach_by_id(). This results in "virtual" dummy
> > > > > devices that use runtime PM only to control the enable and performance
> > > > > state for the attached power domain.
> > > > >
> > > > > However, at the moment nothing ever resumes the virtual devices created
> > > > > for qcom-cpufreq-nvmem. They remain permanently runtime suspended. This
> > > > > means that performance state votes made during cpufreq scaling are
> > > > > always cached and never applied to the hardware.
> > > > >
> > > > > Fix this by enabling the devices after attaching them and use
> > > > > dev_pm_syscore_device() to ensure the power domains also stay on when
> > > > > going to suspend. Since it supplies the CPU we can never turn it off
> > > > > from Linux. There are other mechanisms to turn it off when needed,
> > > > > usually in the RPM firmware (RPMPD) or the cpuidle path (CPR genpd).
> > > >
> > > > I believe we discussed using dev_pm_syscore_device() for the previous
> > > > version. It's not intended to be used for things like the above.
> > > >
> > > > Moreover, I was under the impression that it wasn't really needed. In
> > > > fact, I would think that this actually breaks things for system
> > > > suspend/resume, as in this case the cpr driver's genpd
> > > > ->power_on|off() callbacks are no longer getting called due to this,
> > > > which means that the cpr state machine isn't going to be restored
> > > > properly. Or did I get this wrong?
> > >
> > > BTW, if you really need something like the above, the proper way to do
> > > it would instead be to call device_set_awake_path() for the device.
> > >
> >
> > Unfortunately this does not work correctly. When I use
> > device_set_awake_path() it does set dev->power.wakeup_path = true.
> > However, this flag is cleared again in device_prepare() when entering
> > suspend. To me it looks a bit like wakeup_path is not supposed to be set
> > directly by drivers? Before and after your commit 8512220c5782 ("PM /
> > core: Assign the wakeup_path status flag in __device_prepare()") it
> > seems to be internally bound to device_may_wakeup().
> >
> > It works if I make device_may_wakeup() return true, with
> >
> >         device_set_wakeup_capable(dev, true);
> >         device_wakeup_enable(dev);
> >
> > but that also allows *disabling* the wakeup from sysfs which doesn't
> > really make sense for the CPU.
> >
> > Any ideas?
> 
> The device_set_awake_path() should be called from a system suspend
> callback. So you need to add that callback for the cpufreq driver.
> 
> Sorry, if that wasn't clear.
> 

Hmm, but at the moment I'm calling this on the virtual genpd devices.
How would it work for them? I don't have a suspend callback for them.

I guess I could loop over the virtual devices in the cpufreq driver
suspend callback, but is my driver suspend callback really guaranteed to
run before the device_prepare() that clears "wakeup_path" on the virtual
devices?

Or is this the point where we need device links to make that work?
A quick look suggests "wakeup_path" is only propagated to parents, not
across device links, so I don't think that would help, either.
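For illustration, the loop I have in mind would look roughly like this. This is only a sketch: "virt_devs" and "num_domains" are made-up names for wherever the driver ends up storing the virtual genpd devices returned by genpd_dev_pm_attach_by_id(), and whether this callback is guaranteed to run before device_prepare() clears "wakeup_path" is exactly the open question above.

```
/*
 * Sketch only: virt_devs/num_domains are hypothetical fields holding the
 * virtual devices from genpd_dev_pm_attach_by_id(). Called from the
 * cpufreq driver's system suspend callback.
 */
static int qcom_cpufreq_suspend(struct device *dev)
{
	struct qcom_cpufreq_drv *drv = dev_get_drvdata(dev);
	int i;

	/* Mark each virtual genpd device as part of the awake path so the
	 * genpd core keeps its power domain on across system suspend. */
	for (i = 0; i < drv->num_domains; i++)
		device_set_awake_path(drv->virt_devs[i]);

	return 0;
}
```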

Thanks,
-- 
Stephan Gerhold <stephan.gerhold@kernkonzept.com>
Kernkonzept GmbH at Dresden, Germany, HRB 31129, CEO Dr.-Ing. Michael Hohmuth


Thread overview: 28+ messages
2023-10-18  8:06 [PATCH v2 0/3] cpufreq: Add basic cpufreq scaling for Qualcomm MSM8909 Stephan Gerhold
2023-10-18  8:06 ` [PATCH v2 1/3] cpufreq: qcom-nvmem: Simplify driver data allocation Stephan Gerhold
2023-10-18  8:45   ` Konrad Dybcio
2023-10-18  8:06 ` [PATCH v2 2/3] cpufreq: qcom-nvmem: Enable virtual power domain devices Stephan Gerhold
2023-10-19 10:24   ` Ulf Hansson
2023-10-19 11:26     ` Ulf Hansson
2023-10-19 13:05       ` Stephan Gerhold
2023-10-19 14:12         ` Ulf Hansson
2023-10-19 14:48           ` Stephan Gerhold
2023-10-19 15:19             ` Ulf Hansson
2023-10-19 17:07               ` Stephan Gerhold
2023-10-20 10:20                 ` Ulf Hansson
2023-10-24 12:03       ` Stephan Gerhold
2023-10-24 12:49         ` Ulf Hansson
2023-10-24 13:07           ` Stephan Gerhold [this message]
2023-10-24 16:11             ` Ulf Hansson
2023-10-24 16:25               ` Stephan Gerhold
2023-10-25 10:05                 ` Ulf Hansson
2023-11-01 14:56                   ` Stephan Gerhold
2023-10-18  8:06 ` [PATCH v2 3/3] cpufreq: qcom-nvmem: Add MSM8909 Stephan Gerhold
2023-10-18  8:42   ` Konrad Dybcio
2023-10-19 10:50   ` Ulf Hansson
2023-10-19  6:16 ` [PATCH v2 0/3] cpufreq: Add basic cpufreq scaling for Qualcomm MSM8909 Viresh Kumar
2023-10-19 10:19   ` Ulf Hansson
2023-10-19 10:21     ` Viresh Kumar
2023-10-19 10:23   ` Viresh Kumar
2023-10-19 13:48     ` Stephan Gerhold
2023-10-20  3:21       ` Viresh Kumar
