From: Lina Iyer <ilina@codeaurora.org>
To: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>,
"Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>,
Andy Gross <andy.gross@linaro.org>,
David Brown <david.brown@linaro.org>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
Kevin Hilman <khilman@kernel.org>,
linux-arm-msm <linux-arm-msm@vger.kernel.org>,
linux-soc@vger.kernel.org, Rajendra Nayak <rnayak@codeaurora.org>,
Bjorn Andersson <bjorn.andersson@linaro.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux PM <linux-pm@vger.kernel.org>,
DTML <devicetree@vger.kernel.org>,
Stephen Boyd <sboyd@kernel.org>,
Evan Green <evgreen@chromium.org>,
Doug Anderson <dianders@chromium.org>,
Matthias Kaehlcke <mka@chromium.org>,
Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Subject: Re: [PATCH RFC v1 7/8] drivers: qcom: cpu_pd: Handle cpu hotplug in the domain
Date: Fri, 12 Oct 2018 10:16:53 -0600 [thread overview]
Message-ID: <20181012161653.GH2371@codeaurora.org> (raw)
In-Reply-To: <CAPDyKFpNuTMav__wTvuMbxU65QpiQPftH_OXkEwztcPAG9FbTw@mail.gmail.com>
On Fri, Oct 12 2018 at 09:46 -0600, Ulf Hansson wrote:
>On 12 October 2018 at 17:04, Sudeep Holla <sudeep.holla@arm.com> wrote:
>> On Thu, Oct 11, 2018 at 03:06:09PM -0600, Lina Iyer wrote:
>>> On Thu, Oct 11 2018 at 11:37 -0600, Sudeep Holla wrote:
>> [...]
>>
>>> >
>>> > Is DDR managed by Linux ? I assumed it was handled by higher exception
>>> > levels. Can you give examples of resources used by CPU in this context.
>>> > When CPU can be powered on or woken up without Linux intervention, the
>>> > same holds true for CPU power down or sleep states. I still see no reason
>>> > other than the firmware has no support to talk to RPMH.
>>> >
>>> DDR, shared clocks, regulators etc. Imagine you are running something
>>> on the screen and the CPUs enter a low power mode. While the CPUs were
>>> active, the display drivers (and whatever the app requested) needed a
>>> bunch of shared resources. Once the CPUs power down, those requests may
>>> no longer be needed to the full extent, so they can be voted down to a
>>> lower state or, in some cases, the resources can be turned off
>>> completely. What the driver votes for depends on the runtime state and
>>> the usecase currently active. The 'sleep' state value is also
>>> determined by the driver/framework.
>>>
>>
>> Why does a CPU going down say that another (screen - supposedly shared)
>> resource needs to be relinquished? Shouldn't the display decide that on
>> its own? I have no idea why screen/display is brought into this
>> discussion. The CPU can just say: hey, I am going down and I don't need
>> my resource. How can it say: hey, I am going down and the display or
>> screen also doesn't need the resource? On a multi-cluster system, how
>> will the last CPU on one cluster know that it needs to act on behalf of
>> the shared resource instead of another cluster?
>
>Apologies for sidetracking the discussion; I just want to fold in a few comments.
>
No, this is perfect to wrap the whole thing up. Thanks Ulf.
>This is becoming a complicated story. May I suggest we pick the GIC as
>an example instead?
>
>Let's assume the simple case, we have one cluster and when the cluster
>becomes powered off, the GIC needs to be re-configured and wakeups
>must be routed through some "always on" external logic.
>
>The PSCI spec mentions nothing about how to manage this, nor the rest
>of the SoC topology for that matter. Hence, if the GIC is managed by
>Linux, then Linux also needs to take action before cluster power down
>and after cluster power up. So, if the PSCI FW can't deal with the GIC,
>how would Linux manage it?
>
>>
>> I think we are mixing the system sleep states with CPU idle here.
>> If it's system sleep states, then we need to deal with it in some
>> system ops when it's the last CPU in the system, not the last in the
>> cluster/power domain.
>
>What is really a system sleep state? One could consider it just being
>another idle state, having heavier residency targets and greater
>enter/exit latency values, couldn't one?
>
While I explained the driver's side of the story, for the CPUs, system
sleep is a deeper low power mode that ties in with OSI.
Thanks,
Lina
>In the end, there is no reason to keep things powered on unless they
>are being used (or soon to be used) - that is the main point.
>
>We are also working on S2I (suspend-to-idle) at Linaro. We strive
>towards being able to show the same power numbers as for S2R
>(suspend-to-RAM), but then we need to get these cluster-idle things
>right.
>
>[...]
>
>Have a nice weekend!
>
>Kind regards
>Uffe
Thread overview: 43+ messages
2018-10-10 21:20 [PATCH RFC v1 0/8] drivers: qcom: Add cpu power domain for SDM845 Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 1/8] PM / Domains: Add helper functions to attach/detach CPUs to/from genpd Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 2/8] kernel/cpu_pm: Manage runtime PM in the idle path for CPUs Raju P.L.S.S.S.N
2018-10-11 20:52 ` Rafael J. Wysocki
2018-10-11 22:08 ` Lina Iyer
2018-10-12 7:43 ` Rafael J. Wysocki
2018-10-12 10:20 ` Ulf Hansson
2018-10-12 15:20 ` Lina Iyer
2018-10-10 21:20 ` [PATCH RFC v1 3/8] timer: Export next wakeup time of a CPU Raju P.L.S.S.S.N
2018-10-29 22:36 ` Thomas Gleixner
2018-10-30 10:29 ` Ulf Hansson
2018-10-10 21:20 ` [PATCH RFC v1 4/8] drivers: qcom: cpu_pd: add cpu power domain support using genpd Raju P.L.S.S.S.N
2018-10-11 11:13 ` Sudeep Holla
2018-10-11 15:27 ` Ulf Hansson
2018-10-11 15:59 ` Sudeep Holla
2018-10-12 9:23 ` Ulf Hansson
2018-10-12 14:33 ` Sudeep Holla
2018-10-12 18:01 ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 5/8] dt-bindings: introduce cpu power domain bindings for Qualcomm SoCs Raju P.L.S.S.S.N
2018-10-11 11:08 ` Sudeep Holla
2018-10-12 18:08 ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 6/8] drivers: qcom: cpu_pd: program next wakeup to PDC timer Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 7/8] drivers: qcom: cpu_pd: Handle cpu hotplug in the domain Raju P.L.S.S.S.N
2018-10-11 11:20 ` Sudeep Holla
2018-10-11 16:00 ` Lina Iyer
2018-10-11 16:19 ` Sudeep Holla
2018-10-11 16:58 ` Lina Iyer
2018-10-11 17:37 ` Sudeep Holla
2018-10-11 21:06 ` Lina Iyer
2018-10-12 15:04 ` Sudeep Holla
2018-10-12 15:46 ` Ulf Hansson
2018-10-12 16:16 ` Lina Iyer [this message]
2018-10-12 16:33 ` Sudeep Holla
2018-10-12 16:04 ` Lina Iyer
2018-10-12 17:00 ` Sudeep Holla
2018-10-12 17:19 ` Lina Iyer
2018-10-12 17:25 ` Sudeep Holla
2018-10-22 19:50 ` Lina Iyer
2018-10-12 14:25 ` Sudeep Holla
2018-10-12 18:10 ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 8/8] arm64: dtsi: sdm845: Add cpu power domain support Raju P.L.S.S.S.N
2018-10-12 17:35 ` Sudeep Holla
2018-10-12 17:52 ` Lina Iyer