From: Lina Iyer <ilina@codeaurora.org>
To: Sudeep Holla <sudeep.holla@arm.com>
Cc: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>,
	andy.gross@linaro.org, david.brown@linaro.org, rjw@rjwysocki.net,
	ulf.hansson@linaro.org, khilman@kernel.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	rnayak@codeaurora.org, bjorn.andersson@linaro.org,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	devicetree@vger.kernel.org, sboyd@kernel.org,
	evgreen@chromium.org, dianders@chromium.org, mka@chromium.org,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Subject: Re: [PATCH RFC v1 7/8] drivers: qcom: cpu_pd: Handle cpu hotplug in the domain
Date: Fri, 12 Oct 2018 10:04:27 -0600	[thread overview]
Message-ID: <20181012160427.GG2371@codeaurora.org> (raw)
In-Reply-To: <20181012150429.GH3401@e107155-lin>

On Fri, Oct 12 2018 at 09:04 -0600, Sudeep Holla wrote:
>On Thu, Oct 11, 2018 at 03:06:09PM -0600, Lina Iyer wrote:
>> On Thu, Oct 11 2018 at 11:37 -0600, Sudeep Holla wrote:
>[...]
>
>> >
>> > Is DDR managed by Linux ? I assumed it was handled by higher exception
>> > levels. Can you give examples of resources used by CPU in this context.
>> > When CPU can be powered on or woken up without Linux intervention, the
>> > same holds true for CPU power down or sleep states. I still see no reason
>> > other than the firmware has no support to talk to RPMH.
>> >
>> DDR, shared clocks, regulators, etc. Imagine you are displaying
>> something on the screen and the CPUs enter low power mode. While the
>> CPUs were active, the app needed a bunch of display resources and
>> requested them; once the CPUs power down, those requests may not be
>> needed to the same extent, so they can be voted down to a lower state
>> or, in some cases, the resources can be turned off completely. What
>> the driver votes for depends on the runtime state and the use case
>> currently active. The 'sleep' state value is also determined by the
>> driver/framework.
>>
>
>Why does the CPU going down say that another (screen - supposedly shared)
>resource needs to be relinquished ? Shouldn't the display decide that on its
>own ? I have no idea why screen/display is brought into this discussion.
>
>CPU can just say: hey I am going down and I don't need my resource.
>How can it say: hey I am going down and display or screen also doesn't
>need the resource. On a multi-cluster, how will the last CPU on one know
>that it needs to act on behalf of the shared resource instead of another
>cluster.
>
Fair questions. But how would a driver know that the CPUs have powered
down, so it could say: if you are not active, then you can put these
resources in a low power state?
Well, it can't, because sending out CPU power down notifications for
all CPUs and the cluster is expensive and can lead to a lot of latency.
Instead, the drivers let the RPMH driver know in advance that if and
when the CPUs power down, these resources may be requested to be in a
given low power state. The CPU PD power off callbacks then trigger the
RPMH driver to flush and request a low power state on behalf of all the
drivers.

Drivers register, in advance, both their 'active' state request for a
resource and their CPU-powered-down ('sleep') state request. The
'active' request is made immediately, while the 'sleep' request is
staged. When the CPUs are to be powered off, the staged request is
written into hardware registers. The CPU PM domain controller, after
powering down, applies these state requests in hardware, thereby
lowering the standby power. The resource state is brought back to its
'active' value before the first CPU powers on.

>I think we are mixing the system sleep states with CPU idle here.
>If it's system sleeps states, the we need to deal it in some system ops
>when it's the last CPU in the system and not the cluster/power domain.
>
I think the confusion for you is system sleep vs suspend. System sleep
here (probably more of a QC terminology) refers to powering down the
entire SoC for very short durations, while not actually suspended. The
drivers are unaware that this is happening. No hotplug happens and
interrupts are not migrated during system sleep. When all the CPUs go
into cpuidle, the system sleep state is activated and the resource
requirements are lowered. The resources are brought back to their
previous active values before we exit cpuidle on any CPU. The drivers
have no idea that this happened. We have been doing this on QCOM SoCs
for a decade, so this is not something new for this SoC. Every QCOM SoC
has been doing this, albeit differently because of its architecture.
The newer ones do most of these transitions in hardware as opposed to a
remote CPU. But this is the first time we are upstreaming it :)

Suspend is an altogether different idle state, where drivers are
notified and relinquish their resources before the CPUs power down.
Similar things happen there as well, but at a much deeper level:
resources may be turned off completely instead of just being lowered to
a low power state.

For example, suspend happens when the screen times out on a phone.
System sleep happens a few hundred times while you are actively reading
something on the phone.

>> > Having to adapt DT to the firmware though the feature is fully discoverable
>> > is not at all good IMO. So the DT in this series *should work* with OSI
>> > mode if the firmware has the support for it, it's as simple as that.
>> >
>> The firmware is ATF and does not support OSI.
>>
>
>OK, to keep it simple: if a platform with PC mode only replaces the firmware
>with one that has OSI mode, we *shouldn't need* to change the DT to suit it.
>I think I asked Ulf to add something similar in DT bindings.
>
Fair point, and that is what this RFC intends to show: that PM domains
are useful not just for PSCI, but also for Linux PM drivers such as
this one. We can discuss further how to fold in platform-specific
activities along with PSCI OSI state determination when
domain->power_off is called. I have some ideas on that; I was hoping to
get to them after the initial idea is conveyed.

Thanks for your time.

Lina


