From: Lina Iyer <ilina@codeaurora.org>
To: Sudeep Holla <sudeep.holla@arm.com>
Cc: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>,
	andy.gross@linaro.org, david.brown@linaro.org, rjw@rjwysocki.net,
	ulf.hansson@linaro.org, khilman@kernel.org,
	linux-arm-msm@vger.kernel.org, linux-soc@vger.kernel.org,
	rnayak@codeaurora.org, bjorn.andersson@linaro.org,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	devicetree@vger.kernel.org, sboyd@kernel.org,
	evgreen@chromium.org, dianders@chromium.org, mka@chromium.org,
	Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Subject: Re: [PATCH RFC v1 7/8] drivers: qcom: cpu_pd: Handle cpu hotplug in the domain
Date: Mon, 22 Oct 2018 13:50:34 -0600	[thread overview]
Message-ID: <20181022195034.GD17444@codeaurora.org> (raw)
In-Reply-To: <20181012172500.GA23170@e107155-lin>

On Fri, Oct 12 2018 at 11:25 -0600, Sudeep Holla wrote:
>On Fri, Oct 12, 2018 at 11:19:10AM -0600, Lina Iyer wrote:
>> On Fri, Oct 12 2018 at 11:01 -0600, Sudeep Holla wrote:
>> > On Fri, Oct 12, 2018 at 10:04:27AM -0600, Lina Iyer wrote:
>> > > On Fri, Oct 12 2018 at 09:04 -0600, Sudeep Holla wrote:
>> >
>> > [...]
>> >
>> > Yes, all of this is fine, but with multiple power domains/clusters, it's
>> > hard to determine the first CPU. You may be able to identify it within
>> > the power domain but not system-wide. So this doesn't scale with large
>> > systems (e.g. 4-8 clusters with 16 CPUs).
>> >
>> We would probably not worry too much about power savings on a msec
>> scale if we had that big a system. The driver is platform-specific,
>> primarily intended for a mobile-class CPU and its usage. In fact, we
>> haven't done this for QC's server-class CPUs.
>>
>
>OK, as long as there's no attempt to make it generic and it's kept
>platform-specific, I am not that bothered.
>
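For reference, the "first CPU" bookkeeping under discussion is strictly
per domain. A minimal sketch of what that accounting could look like
(all names here are illustrative, not from the patch set):

#include <linux/atomic.h>

struct cpu_pd {
	atomic_t awake_cpus;	/* CPUs in this domain not yet in idle */
};

/* True only for the CPU that takes the domain's count down to zero. */
static bool cpu_pd_last_cpu_down(struct cpu_pd *pd)
{
	return atomic_dec_return(&pd->awake_cpus) == 0;
}

/* Called on idle exit; any CPU may be the first one back up. */
static void cpu_pd_first_cpu_up(struct cpu_pd *pd)
{
	atomic_inc(&pd->awake_cpus);
}

Doing the same system-wide would mean one shared counter touched by
every CPU on every idle entry and exit, which is exactly where the
4-8 cluster case falls apart.
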
>> > > > I think we are mixing up system sleep states with CPU idle here.
>> > > > If it's system sleep states, then we need to deal with it in some
>> > > > system ops when it's the last CPU in the system, not the
>> > > > cluster/power domain.
>> > > >
>> > > I think the confusion here is between system sleep and suspend. System
>> > > sleep (probably more of a QC term) refers to powering down the entire
>> > > SoC for very short durations while not actually suspended. The drivers
>> > > are unaware that this is happening. No hotplug happens and interrupts
>> > > are not migrated during system sleep. When all the CPUs go into
>> > > cpuidle, the system sleep state is activated and the resource
>> > > requirements are lowered. The resources are brought back to their
>> > > previous active values before we exit cpuidle on any CPU. The drivers
>> > > have no idea that this happened. We have been doing this on QCOM SoCs
>> > > for a decade, so it is not something new for this SoC. Every QCOM SoC
>> > > has done this, albeit differently because of its architecture. The
>> > > newer ones do most of these transitions in hardware as opposed to a
>> > > remote CPU. But this is the first time we are upstreaming it :)
>> > >
>> >
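To make the sequence above concrete, here is a rough sketch of how the
enter/exit could hang off the domain's genpd callbacks. The
sys_sleep_enter()/sys_sleep_exit() helpers are hypothetical stand-ins
for the actual resource-requirement votes, not functions from the
patch set:

#include <linux/pm_domain.h>

/* Hypothetical helpers standing in for the real resource votes. */
extern void sys_sleep_enter(void);	/* lower shared resource requirements */
extern void sys_sleep_exit(void);	/* restore previous active values */

/* genpd calls this once the last CPU in the domain has entered idle. */
static int cpu_pd_power_off(struct generic_pm_domain *genpd)
{
	sys_sleep_enter();
	return 0;
}

/* genpd calls this before the first CPU in the domain runs again. */
static int cpu_pd_power_on(struct generic_pm_domain *genpd)
{
	sys_sleep_exit();
	return 0;
}

The drivers above genpd never see any of this, which is the point.
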
>> > Indeed, I know mobile platforms do such optimisations and I agree it may
>> > save power. As I mentioned above, it doesn't scale well with large
>> > systems, and it also breaks down with a single power domain that has
>> > multiple idle states, where only one of the states can do this
>> > system-level idle. As I mentioned in the other email to Ulf, it's hard
>> > to generalise this even with DT. So it's better to have this dealt with
>> > transparently in the firmware.
>> >
>> Good, then we are in agreement here.
>
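On the multiple-idle-states point: in the sketch above, only the state
the governor actually picked would be allowed to trigger system sleep,
e.g. something like:

/* Illustrative variant of the earlier power_off: gate system sleep
 * on the chosen domain idle state. */
static int cpu_pd_power_off(struct generic_pm_domain *genpd)
{
	/*
	 * genpd->state_idx is the domain idle state the governor chose;
	 * assume only the last (deepest) entry in the states table is
	 * deep enough for system sleep.
	 */
	if (genpd->state_idx != genpd->state_count - 1)
		return 0;

	sys_sleep_enter();
	return 0;
}
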
It was brought to my attention that there may be some misunderstanding
here. I still believe we need to do this for small systems like mobile
platforms, and the solution may not scale well to servers. We don't plan
to extend it to anything other than mobile SoCs.

>No worries.
>
Thanks,
Lina


Thread overview: 43+ messages
2018-10-10 21:20 [PATCH RFC v1 0/8] drivers: qcom: Add cpu power domain for SDM845 Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 1/8] PM / Domains: Add helper functions to attach/detach CPUs to/from genpd Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 2/8] kernel/cpu_pm: Manage runtime PM in the idle path for CPUs Raju P.L.S.S.S.N
2018-10-11 20:52   ` Rafael J. Wysocki
2018-10-11 22:08     ` Lina Iyer
2018-10-12  7:43       ` Rafael J. Wysocki
2018-10-12 10:20         ` Ulf Hansson
2018-10-12 15:20         ` Lina Iyer
2018-10-10 21:20 ` [PATCH RFC v1 3/8] timer: Export next wakeup time of a CPU Raju P.L.S.S.S.N
2018-10-29 22:36   ` Thomas Gleixner
2018-10-30 10:29     ` Ulf Hansson
2018-10-10 21:20 ` [PATCH RFC v1 4/8] drivers: qcom: cpu_pd: add cpu power domain support using genpd Raju P.L.S.S.S.N
2018-10-11 11:13   ` Sudeep Holla
2018-10-11 15:27     ` Ulf Hansson
2018-10-11 15:59       ` Sudeep Holla
2018-10-12  9:23         ` Ulf Hansson
2018-10-12 14:33   ` Sudeep Holla
2018-10-12 18:01     ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 5/8] dt-bindings: introduce cpu power domain bindings for Qualcomm SoCs Raju P.L.S.S.S.N
2018-10-11 11:08   ` Sudeep Holla
2018-10-12 18:08     ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 6/8] drivers: qcom: cpu_pd: program next wakeup to PDC timer Raju P.L.S.S.S.N
2018-10-10 21:20 ` [PATCH RFC v1 7/8] drivers: qcom: cpu_pd: Handle cpu hotplug in the domain Raju P.L.S.S.S.N
2018-10-11 11:20   ` Sudeep Holla
2018-10-11 16:00     ` Lina Iyer
2018-10-11 16:19       ` Sudeep Holla
2018-10-11 16:58         ` Lina Iyer
2018-10-11 17:37           ` Sudeep Holla
2018-10-11 21:06             ` Lina Iyer
2018-10-12 15:04               ` Sudeep Holla
2018-10-12 15:46                 ` Ulf Hansson
2018-10-12 16:16                   ` Lina Iyer
2018-10-12 16:33                   ` Sudeep Holla
2018-10-12 16:04                 ` Lina Iyer
2018-10-12 17:00                   ` Sudeep Holla
2018-10-12 17:19                     ` Lina Iyer
2018-10-12 17:25                       ` Sudeep Holla
2018-10-22 19:50                         ` Lina Iyer [this message]
2018-10-12 14:25   ` Sudeep Holla
2018-10-12 18:10     ` Raju P L S S S N
2018-10-10 21:20 ` [PATCH RFC v1 8/8] arm64: dtsi: sdm845: Add cpu power domain support Raju P.L.S.S.S.N
2018-10-12 17:35   ` Sudeep Holla
2018-10-12 17:52     ` Lina Iyer
