All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC v2 00/12] PM: SoC idle support using PM domains
@ 2016-02-12 20:50 ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer

Hi all,

Changes since RFC v1 [1] -
- genpd lock abstraction using fn ptrs. Sparse is happier.
- Removed restriction that IRQ-safe domains can only have IRQ safe parents [3]
- changes in the way CPU domains are initialized, starting from CPU now
	* cpu-map node dependency removed
	* platform driver initiate CPU PM domain setup
- smaller patchset excluding genpd multiple level changes
	* they were submitted to ML earlier [2]
- cpuidle runtime PM and hotplug changes
- PSCI changes to detect if the f/w supports OSI
- updated documentation
- this new cover description

This series allows CPU common hardware blocks to enter idle state, when the
CPUs are in idle. A lightweight but important power saving in a battery powered
device is power savings when the device is not suspended. While being active,
the CPUs can save power by entering low power states with the help of cpuidle
framework, but the hardware supporting these CPU still remain active. This is
an effort in reducing power on these h/w block around the CPU during cpuidle.

Every CPU decides for itself the deepest low power mode possible and enters
that state to be more power effecient. This is unlike coupled states, where
CPUs have to wait until the last of the CPU is ready to enter the deepest idle.
The last of the CPU that enters idle decides for the domain, the best idle
state the domain may enter. This is also a higher level than what MCPM solves.
In newer SoC's the coherency and other race conditions coming out of warm reset
are handled in the firmware. This patchset helps Linux make the right power
decision for other h/w blocks that are dependent on the CPUs.

The idea is that CPU hierarchy (represented in DT) is modelled as regular
devices attached to their PM domains, which in turn may be attached to its
parent domain. When cpuidle puts the CPU to sleep, runtime PM for the CPU
device is notified, which suspends the CPU device. This triggers a reduction in
genpd domain usage count. When the last device in a domain suspends, the domain
is powered off as part of the same call. Similarly, the domain resumes before
the first CPU device resumes from idle. To achieve this the following changes
are needed -
	- genpd to support multiple idle states (more than just on/off)
	- genpd should support suspend/resume from IRQ safe context
	- CPU devices set up for runtime PM, as IRQ safe devices
	- cpuidle calls runtime PM ops when idling
	- the genpd power downs calling into a gov to determine the deepest idle state 
	- genpd ->power_on()/->power_off() callbacks to be handled by the platform

Patches from Axel [2] allow genpd to define multiple idle states and choose the
best idle state from them.

These patches build up on top of that -

- Patches [1, 2] - Genpd changes
Sets up Generic PM domains to be called from cpuidle. Genpd uses mutexes for
synchronization. This has to be changed to spinlocks for domains that may be
called from IRQ safe contexts.

- Patch [3] - CPU PM domains
Parses DT and sets up PM domains for CPUs and builds up the hierarchy of
domains and devices. These are bunch of helper functions.

- Patch [4] - ARM cpuidle driver
Enable ARM cpuidle driver to call runtime PM. Even though this has been done
for ARM, there is nothing architecture specific about this. Currently, all idle
states other than ARM clock gating calls into runtime PM. This could also be
state specific i.e call into runtime PM only after a certain state. The changes
may be made part of cpuidle framework, but needs discussion.

- Patches [5, 6, 7] - PM domain governor for CPUs
Introduces a new genpd governor that looks into the per-CPU tick device to
identify the next CPU wakeup and determine the available sleep time for the
domain. This along with QoS is used to determine the best idle state for the
domain. A domains' wake up is determined by the first CPU in that domain to
wake up. A coherency level domain's (parent of a domain containing CPU devices)
wake up is determined by the first CPU amongst all the CPUs to wake up.
Identifying the CPUs and their wakeups are the part of these patches.

- Patches [9, 10] - ARM64 PSCI platform driver
ARM64 PSCI v1.0 specific. PSCI OS initiated mode supports powering off CPU
clusters (caches etc, by configuring separate power controllers). These patches
enable Linux to determine if the f/w supports this mode and if so, uses CPU PM
domain helper functions to create PM domains and handles the power_on/power_off
callbacks. The resulting cluster state is passed as an argument to the f/w
along with the CPU state.

- Patches [11, 12] - DTS changes for MSM8916
QCOM 410c Dragonboard/96Board specific. Enables CPUidle and cluster idle for
this SoC. A compatible f/w (in the works) is needed for this support.

Testing and results -

This patchset has been tested on a QCOM 410c Dragonboard with a quad ARM v8 A53
CPU and 512KB L2 cache. Though not accurate, I see a ~20 mA drop in current
measured at the battery outside the SoC when the SoC idle is enabled. This
needs to be redone again to get an accurate measurement. I see about ~5 us
increase before entering idle state to determine the last man down and about
~20 us while coming out of idle (in Linux) when the cache is flushed and
powered down.

This was tested on top of 4.5-rc3. My series along with patches from Axel can
be found at [4]. Note that [5] has not been published on the ML, is needed for
this series.

Thanks,
Lina

[1]. http://comments.gmane.org/gmane.linux.ports.arm.msm/16232
[2]. http://permalink.gmane.org/gmane.linux.power-management.general/71387
[3]. http://permalink.gmane.org/gmane.linux.ports.arm.msm/17279
[4]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-RFC-v2
[5]. https://git.linaro.org/people/lina.iyer/linux-next.git/commit/1dfec82c8c133b0fbcd245d739dade087a1dd1fc

Lina Iyer (12):
  PM / Domains: Abstract genpd locking
  PM / Domains: Support IRQ safe PM domains
  PM / cpu_domains: Setup PM domains for CPUs/clusters
  ARM: cpuidle: Add runtime PM support for CPUs
  timer: Export next wake up of a CPU
  PM / cpu_domains: Record CPUs that are part of the domain
  PM / cpu_domains: Add PM Domain governor for CPUs
  Documentation / cpu_domains: Describe CPU PM domains setup and
    governor
  drivers: firmware: psci: Allow OS Initiated suspend mode
  ARM64: psci: Support cluster idle states for OS-Initiated
  ARM64: dts: Add PSCI cpuidle support for MSM8916
  ARM64: dts: Define CPU power domain for MSM8916

 Documentation/power/cpu_domains.txt   |  79 ++++++++
 Documentation/power/devices.txt       |  12 +-
 arch/arm64/boot/dts/qcom/msm8916.dtsi |  49 +++++
 arch/arm64/kernel/psci.c              |  46 ++++-
 drivers/base/power/Makefile           |   1 +
 drivers/base/power/cpu_domains.c      | 361 ++++++++++++++++++++++++++++++++++
 drivers/base/power/domain.c           | 217 ++++++++++++++++----
 drivers/cpuidle/cpuidle-arm.c         |  48 +++++
 drivers/firmware/psci.c               |  45 ++++-
 include/linux/cpu_domains.h           |  35 ++++
 include/linux/pm_domain.h             |  14 +-
 include/linux/psci.h                  |   2 +
 include/linux/tick.h                  |  10 +
 include/uapi/linux/psci.h             |   5 +
 kernel/time/tick-sched.c              |  13 ++
 15 files changed, 889 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/power/cpu_domains.txt
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

-- 
2.1.4

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 00/12] PM: SoC idle support using PM domains
@ 2016-02-12 20:50 ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Hi all,

Changes since RFC v1 [1] -
- genpd lock abstraction using fn ptrs. Sparse is happier.
- Removed restriction that IRQ-safe domains can only have IRQ safe parents [3]
- changes in the way CPU domains are initialized, starting from CPU now
	* cpu-map node dependency removed
	* platform driver initiate CPU PM domain setup
- smaller patchset excluding genpd multiple level changes
	* they were submitted to ML earlier [2]
- cpuidle runtime PM and hotplug changes
- PSCI changes to detect if the f/w supports OSI
- updated documentation
- this new cover description

This series allows CPU common hardware blocks to enter idle state, when the
CPUs are in idle. A lightweight but important power saving in a battery powered
device is power savings when the device is not suspended. While being active,
the CPUs can save power by entering low power states with the help of cpuidle
framework, but the hardware supporting these CPU still remain active. This is
an effort in reducing power on these h/w block around the CPU during cpuidle.

Every CPU decides for itself the deepest low power mode possible and enters
that state to be more power effecient. This is unlike coupled states, where
CPUs have to wait until the last of the CPU is ready to enter the deepest idle.
The last of the CPU that enters idle decides for the domain, the best idle
state the domain may enter. This is also a higher level than what MCPM solves.
In newer SoC's the coherency and other race conditions coming out of warm reset
are handled in the firmware. This patchset helps Linux make the right power
decision for other h/w blocks that are dependent on the CPUs.

The idea is that CPU hierarchy (represented in DT) is modelled as regular
devices attached to their PM domains, which in turn may be attached to its
parent domain. When cpuidle puts the CPU to sleep, runtime PM for the CPU
device is notified, which suspends the CPU device. This triggers a reduction in
genpd domain usage count. When the last device in a domain suspends, the domain
is powered off as part of the same call. Similarly, the domain resumes before
the first CPU device resumes from idle. To achieve this the following changes
are needed -
	- genpd to support multiple idle states (more than just on/off)
	- genpd should support suspend/resume from IRQ safe context
	- CPU devices set up for runtime PM, as IRQ safe devices
	- cpuidle calls runtime PM ops when idling
	- the genpd power downs calling into a gov to determine the deepest idle state 
	- genpd ->power_on()/->power_off() callbacks to be handled by the platform

Patches from Axel [2] allow genpd to define multiple idle states and choose the
best idle state from them.

These patches build up on top of that -

- Patches [1, 2] - Genpd changes
Sets up Generic PM domains to be called from cpuidle. Genpd uses mutexes for
synchronization. This has to be changed to spinlocks for domains that may be
called from IRQ safe contexts.

- Patch [3] - CPU PM domains
Parses DT and sets up PM domains for CPUs and builds up the hierarchy of
domains and devices. These are bunch of helper functions.

- Patch [4] - ARM cpuidle driver
Enable ARM cpuidle driver to call runtime PM. Even though this has been done
for ARM, there is nothing architecture specific about this. Currently, all idle
states other than ARM clock gating calls into runtime PM. This could also be
state specific i.e call into runtime PM only after a certain state. The changes
may be made part of cpuidle framework, but needs discussion.

- Patches [5, 6, 7] - PM domain governor for CPUs
Introduces a new genpd governor that looks into the per-CPU tick device to
identify the next CPU wakeup and determine the available sleep time for the
domain. This along with QoS is used to determine the best idle state for the
domain. A domains' wake up is determined by the first CPU in that domain to
wake up. A coherency level domain's (parent of a domain containing CPU devices)
wake up is determined by the first CPU amongst all the CPUs to wake up.
Identifying the CPUs and their wakeups are the part of these patches.

- Patches [9, 10] - ARM64 PSCI platform driver
ARM64 PSCI v1.0 specific. PSCI OS initiated mode supports powering off CPU
clusters (caches etc, by configuring separate power controllers). These patches
enable Linux to determine if the f/w supports this mode and if so, uses CPU PM
domain helper functions to create PM domains and handles the power_on/power_off
callbacks. The resulting cluster state is passed as an argument to the f/w
along with the CPU state.

- Patches [11, 12] - DTS changes for MSM8916
QCOM 410c Dragonboard/96Board specific. Enables CPUidle and cluster idle for
this SoC. A compatible f/w (in the works) is needed for this support.

Testing and results -

This patchset has been tested on a QCOM 410c Dragonboard with a quad ARM v8 A53
CPU and 512KB L2 cache. Though not accurate, I see a ~20 mA drop in current
measured at the battery outside the SoC when the SoC idle is enabled. This
needs to be redone again to get an accurate measurement. I see about ~5 us
increase before entering idle state to determine the last man down and about
~20 us while coming out of idle (in Linux) when the cache is flushed and
powered down.

This was tested on top of 4.5-rc3. My series along with patches from Axel can
be found at [4]. Note that [5] has not been published on the ML, is needed for
this series.

Thanks,
Lina

[1]. http://comments.gmane.org/gmane.linux.ports.arm.msm/16232
[2]. http://permalink.gmane.org/gmane.linux.power-management.general/71387
[3]. http://permalink.gmane.org/gmane.linux.ports.arm.msm/17279
[4]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-RFC-v2
[5]. https://git.linaro.org/people/lina.iyer/linux-next.git/commit/1dfec82c8c133b0fbcd245d739dade087a1dd1fc

Lina Iyer (12):
  PM / Domains: Abstract genpd locking
  PM / Domains: Support IRQ safe PM domains
  PM / cpu_domains: Setup PM domains for CPUs/clusters
  ARM: cpuidle: Add runtime PM support for CPUs
  timer: Export next wake up of a CPU
  PM / cpu_domains: Record CPUs that are part of the domain
  PM / cpu_domains: Add PM Domain governor for CPUs
  Documentation / cpu_domains: Describe CPU PM domains setup and
    governor
  drivers: firmware: psci: Allow OS Initiated suspend mode
  ARM64: psci: Support cluster idle states for OS-Initiated
  ARM64: dts: Add PSCI cpuidle support for MSM8916
  ARM64: dts: Define CPU power domain for MSM8916

 Documentation/power/cpu_domains.txt   |  79 ++++++++
 Documentation/power/devices.txt       |  12 +-
 arch/arm64/boot/dts/qcom/msm8916.dtsi |  49 +++++
 arch/arm64/kernel/psci.c              |  46 ++++-
 drivers/base/power/Makefile           |   1 +
 drivers/base/power/cpu_domains.c      | 361 ++++++++++++++++++++++++++++++++++
 drivers/base/power/domain.c           | 217 ++++++++++++++++----
 drivers/cpuidle/cpuidle-arm.c         |  48 +++++
 drivers/firmware/psci.c               |  45 ++++-
 include/linux/cpu_domains.h           |  35 ++++
 include/linux/pm_domain.h             |  14 +-
 include/linux/psci.h                  |   2 +
 include/linux/tick.h                  |  10 +
 include/uapi/linux/psci.h             |   5 +
 kernel/time/tick-sched.c              |  13 ++
 15 files changed, 889 insertions(+), 47 deletions(-)
 create mode 100644 Documentation/power/cpu_domains.txt
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

-- 
2.1.4

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 01/12] PM / Domains: Abstract genpd locking
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Kevin Hilman

Abstract genpd lock/unlock calls, in preparation for domain specific
locks added in the following patches.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Krzysztof Kozłowski <k.kozlowski@samsung.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- split into two patches. This patch abstracts genpd locking
- uses function pointer, instead of runtime function determination

 drivers/base/power/domain.c | 109 ++++++++++++++++++++++++++++++--------------
 include/linux/pm_domain.h   |   5 +-
 2 files changed, 79 insertions(+), 35 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 3ddd05d..8204615 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -40,6 +40,46 @@
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+struct genpd_lock_fns {
+	void (*lock)(struct generic_pm_domain *genpd);
+	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
+	int (*lock_interruptible)(struct generic_pm_domain *genpd);
+	void (*unlock)(struct generic_pm_domain *genpd);
+};
+
+static void genpd_lock_irq(struct generic_pm_domain *genpd)
+{
+	mutex_lock(&genpd->mlock);
+}
+
+static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
+					int depth)
+{
+	mutex_lock_nested(&genpd->mlock, depth);
+}
+
+static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static void genpd_unlock_irq(struct generic_pm_domain *genpd)
+{
+	return mutex_unlock(&genpd->mlock);
+}
+
+static struct genpd_lock_fns irq_lock = {
+	.lock = genpd_lock_irq,
+	.lock_nested = genpd_lock_irq_nested,
+	.lock_interruptible = genpd_lock_interruptible_irq,
+	.unlock = genpd_unlock_irq,
+};
+
+#define genpd_lock(p)			p->lock_fns->lock(p)
+#define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
+#define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
+#define genpd_unlock(p)			p->lock_fns->unlock(p)
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -202,9 +242,9 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
 
 		genpd_sd_counter_inc(master);
 
-		mutex_lock_nested(&master->lock, depth + 1);
+		genpd_lock_nested(master, depth + 1);
 		ret = genpd_poweron(master, depth + 1);
-		mutex_unlock(&master->lock);
+		genpd_unlock(master);
 
 		if (ret) {
 			genpd_sd_counter_dec(master);
@@ -268,9 +308,9 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock(genpd);
 		}
 
 		dev = dev->parent;
@@ -367,9 +407,9 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, true);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -439,9 +479,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	if (dev->power.irq_safe)
 		return 0;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, false);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -476,9 +516,9 @@ static int pm_genpd_runtime_resume(struct device *dev)
 		goto out;
 	}
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = genpd_poweron(genpd, 0);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		return ret;
@@ -692,14 +732,14 @@ static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0) {
 		genpd->suspended_count = 0;
 		genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
 	}
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (genpd->suspend_power_off) {
 		pm_runtime_put_noidle(dev);
@@ -717,12 +757,12 @@ static int pm_genpd_prepare(struct device *dev)
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 
 		if (--genpd->prepared_count == 0)
 			genpd->suspend_power_off = false;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 		pm_runtime_enable(dev);
 	}
 
@@ -1080,13 +1120,13 @@ static void pm_genpd_complete(struct device *dev)
 	if (IS_ERR(genpd))
 		return;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	run_complete = !genpd->suspend_power_off;
 	if (--genpd->prepared_count == 0)
 		genpd->suspend_power_off = false;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (run_complete) {
 		pm_generic_complete(dev);
@@ -1260,7 +1300,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1277,7 +1317,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1310,7 +1350,7 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1325,14 +1365,14 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1358,8 +1398,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1382,8 +1422,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1404,8 +1444,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (!list_empty(&subdomain->slave_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1429,8 +1469,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	}
 
 out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 
 	return ret;
 }
@@ -1595,7 +1635,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	mutex_init(&genpd->mlock);
+	genpd->lock_fns = &irq_lock;
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1952,9 +1993,9 @@ int genpd_dev_pm_attach(struct device *dev)
 	dev->pm_domain->detach = genpd_dev_pm_detach;
 	dev->pm_domain->sync = genpd_dev_pm_sync;
 
-	mutex_lock(&pd->lock);
+	genpd_lock(pd);
 	ret = genpd_poweron(pd, 0);
-	mutex_unlock(&pd->lock);
+	genpd_unlock(pd);
 out:
 	return ret ? -EPROBE_DEFER : 0;
 }
@@ -2011,7 +2052,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	struct gpd_link *link;
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
 
@@ -2047,7 +2088,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 376d7fa..ec5523c 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -44,13 +44,14 @@ struct genpd_power_state {
 	u32 param;
 };
 
+struct genpd_lock_fns;
+
 struct generic_pm_domain {
 	struct dev_pm_domain domain;	/* PM domain operations */
 	struct list_head gpd_list_node;	/* Node in the global PM domains list */
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	const char *name;
@@ -74,6 +75,8 @@ struct generic_pm_domain {
 	struct genpd_power_state *states;
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
+	struct genpd_lock_fns *lock_fns;
+	struct mutex mlock;
 
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 01/12] PM / Domains: Abstract genpd locking
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Abstract genpd lock/unlock calls, in preparation for domain specific
locks added in the following patches.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Krzysztof Koz?owski <k.kozlowski@samsung.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- split into two patches. This patch abstracts genpd locking
- uses function pointer, instead of runtime function determination

 drivers/base/power/domain.c | 109 ++++++++++++++++++++++++++++++--------------
 include/linux/pm_domain.h   |   5 +-
 2 files changed, 79 insertions(+), 35 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 3ddd05d..8204615 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -40,6 +40,46 @@
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+struct genpd_lock_fns {
+	void (*lock)(struct generic_pm_domain *genpd);
+	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
+	int (*lock_interruptible)(struct generic_pm_domain *genpd);
+	void (*unlock)(struct generic_pm_domain *genpd);
+};
+
+static void genpd_lock_irq(struct generic_pm_domain *genpd)
+{
+	mutex_lock(&genpd->mlock);
+}
+
+static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
+					int depth)
+{
+	mutex_lock_nested(&genpd->mlock, depth);
+}
+
+static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static void genpd_unlock_irq(struct generic_pm_domain *genpd)
+{
+	return mutex_unlock(&genpd->mlock);
+}
+
+static struct genpd_lock_fns irq_lock = {
+	.lock = genpd_lock_irq,
+	.lock_nested = genpd_lock_irq_nested,
+	.lock_interruptible = genpd_lock_interruptible_irq,
+	.unlock = genpd_unlock_irq,
+};
+
+#define genpd_lock(p)			p->lock_fns->lock(p)
+#define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
+#define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
+#define genpd_unlock(p)			p->lock_fns->unlock(p)
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -202,9 +242,9 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
 
 		genpd_sd_counter_inc(master);
 
-		mutex_lock_nested(&master->lock, depth + 1);
+		genpd_lock_nested(master, depth + 1);
 		ret = genpd_poweron(master, depth + 1);
-		mutex_unlock(&master->lock);
+		genpd_unlock(master);
 
 		if (ret) {
 			genpd_sd_counter_dec(master);
@@ -268,9 +308,9 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock(genpd);
 		}
 
 		dev = dev->parent;
@@ -367,9 +407,9 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, true);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -439,9 +479,9 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	if (dev->power.irq_safe)
 		return 0;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, false);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -476,9 +516,9 @@ static int pm_genpd_runtime_resume(struct device *dev)
 		goto out;
 	}
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = genpd_poweron(genpd, 0);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		return ret;
@@ -692,14 +732,14 @@ static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0) {
 		genpd->suspended_count = 0;
 		genpd->suspend_power_off = genpd->status == GPD_STATE_POWER_OFF;
 	}
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (genpd->suspend_power_off) {
 		pm_runtime_put_noidle(dev);
@@ -717,12 +757,12 @@ static int pm_genpd_prepare(struct device *dev)
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 
 		if (--genpd->prepared_count == 0)
 			genpd->suspend_power_off = false;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 		pm_runtime_enable(dev);
 	}
 
@@ -1080,13 +1120,13 @@ static void pm_genpd_complete(struct device *dev)
 	if (IS_ERR(genpd))
 		return;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	run_complete = !genpd->suspend_power_off;
 	if (--genpd->prepared_count == 0)
 		genpd->suspend_power_off = false;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (run_complete) {
 		pm_generic_complete(dev);
@@ -1260,7 +1300,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1277,7 +1317,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1310,7 +1350,7 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1325,14 +1365,14 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1358,8 +1398,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1382,8 +1422,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1404,8 +1444,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (!list_empty(&subdomain->slave_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1429,8 +1469,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	}
 
 out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 
 	return ret;
 }
@@ -1595,7 +1635,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	mutex_init(&genpd->mlock);
+	genpd->lock_fns = &irq_lock;
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1952,9 +1993,9 @@ int genpd_dev_pm_attach(struct device *dev)
 	dev->pm_domain->detach = genpd_dev_pm_detach;
 	dev->pm_domain->sync = genpd_dev_pm_sync;
 
-	mutex_lock(&pd->lock);
+	genpd_lock(pd);
 	ret = genpd_poweron(pd, 0);
-	mutex_unlock(&pd->lock);
+	genpd_unlock(pd);
 out:
 	return ret ? -EPROBE_DEFER : 0;
 }
@@ -2011,7 +2052,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	struct gpd_link *link;
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
 
@@ -2047,7 +2088,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 376d7fa..ec5523c 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -44,13 +44,14 @@ struct genpd_power_state {
 	u32 param;
 };
 
+struct genpd_lock_fns;
+
 struct generic_pm_domain {
 	struct dev_pm_domain domain;	/* PM domain operations */
 	struct list_head gpd_list_node;	/* Node in the global PM domains list */
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	const char *name;
@@ -74,6 +75,8 @@ struct generic_pm_domain {
 	struct genpd_power_state *states;
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
+	struct genpd_lock_fns *lock_fns;
+	struct mutex mlock;
 
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Kevin Hilman

Generic Power Domains currently support turning on/off only in process
context. This prevents the usage of PM domains for domains that could be
powered on/off in a context where IRQs are disabled. Many such domains
exist today and do not get powered off, when the IRQ safe devices in
that domain are powered off, because of this limitation.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore, has to support both cases where the domain may or may not
operate in IRQ safe contexts. Configuring genpd to use an appropriate
lock for that domain, would allow domains that have IRQ safe devices to
runtime suspend and resume, in atomic context.

To achieve domain specific locking, set the domain's ->flag to
GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain. Locking
is abstracted through genpd_lock() and genpd_unlock() functions that use
the flag to determine the appropriate lock to be used for that domain.

Domains that have lower latency to suspend and resume and can operate
with IRQs disabled may now be able to save power, when the component
devices and sub-domains are idle at runtime.

The restriction this imposes on the domain hierarchy is that non-IRQ
safe domains may not have IRQ-safe subdomains, but IRQ safe domains may
have IRQ safe and non-IRQ safe domains and devices.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Krzysztof Kozłowski <k.kozlowski@samsung.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- split off to into its own patch
- Removed restriction that IRQ-safe domains can only have IRQ safe parents

 Documentation/power/devices.txt |  11 +++-
 drivers/base/power/domain.c     | 112 ++++++++++++++++++++++++++++++++++++----
 include/linux/pm_domain.h       |  11 +++-
 3 files changed, 123 insertions(+), 11 deletions(-)

diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 8ba6625..c06f0b6 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource.  Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on.  A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain.
+
+Devices, by default, operate in process context and if a device can operate in
+IRQ safe context, has to be explicitly set as IRQ safe. Power domains by
+default, operate in process context but could have devices that are IRQ safe.
+Such power domains cannot be powered on/off during runtime PM. On the other
+hand, an IRQ safe PM domains that have IRQ safe devices may be powered off
+when all the devices are in idle. An IRQ safe domain may only be attached as a
+subdomain to another IRQ safe domain.
 
 Support for power domains is provided through the pm_domain field of struct
 device.  This field is a pointer to an object of type struct dev_pm_domain,
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 8204615..3c4f675 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -75,11 +75,59 @@ static struct genpd_lock_fns irq_lock = {
 	.unlock = genpd_unlock_irq,
 };
 
+static void genpd_lock_nosleep(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+}
+
+static void genpd_lock_nosleep_nested(struct generic_pm_domain *genpd,
+					int depth)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
+	genpd->lock_flags = flags;
+}
+
+static int genpd_lock_nosleep_interruptible(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_nosleep(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static struct genpd_lock_fns no_sleep_lock = {
+	.lock = genpd_lock_nosleep,
+	.lock_nested = genpd_lock_nosleep_nested,
+	.lock_interruptible = genpd_lock_nosleep_interruptible,
+	.unlock = genpd_unlock_nosleep,
+};
+
 #define genpd_lock(p)			p->lock_fns->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_fns->unlock(p)
 
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	return dev->power.irq_safe && !genpd->irq_safe;
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -356,8 +404,17 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * We do not want to power off the domain if the device is
+		 * not suspended or an IRQ safe device is part of this
+		 * non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
+		WARN_ONCE(irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd),
+				"PM domain %s will not be powered off\n",
+				genpd->name);
 	}
 
 	if (not_suspended > 1 || (not_suspended == 1 && is_async))
@@ -473,10 +530,13 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQ disabled, so suspend only if the power domain is
+	 * irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	WARN_ONCE(irq_safe_dev_in_no_sleep_domain(dev, genpd),
+			"genpd %s will not be powered off\n", genpd->name);
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -510,8 +570,11 @@ static int pm_genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we dont power off a non IRQ safe domain, which holds
+	 * an IRQ safe device, we dont need to restore power to it.
+	 */
+	if (dev->power.irq_safe && !genpd->irq_safe) {
 		timed = false;
 		goto out;
 	}
@@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
 		return -EINVAL;
 
+	if (genpd->irq_safe && !dev->power.irq_safe) {
+		dev_err(dev,
+			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",
+			genpd->name);
+		return -EINVAL;
+	}
+
 	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
@@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (!genpd->irq_safe && subdomain->irq_safe) {
+		WARN("Parent %s of subdomain %s must be IRQ-safe\n",
+				genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -1610,6 +1691,19 @@ static int of_genpd_device_parse_states(struct device_node *np,
 	return 0;
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->irq_safe = true;
+		genpd->lock_fns = &no_sleep_lock;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->irq_safe = false;
+		genpd->lock_fns = &irq_lock;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1635,8 +1729,7 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->mlock);
-	genpd->lock_fns = &irq_lock;
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -2077,7 +2170,8 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd->irq_safe ? GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
 
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index ec5523c..3f245d5 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -76,7 +78,14 @@ struct generic_pm_domain {
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	struct genpd_lock_fns *lock_fns;
-	struct mutex mlock;
+	bool irq_safe;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 
 };
 
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Generic Power Domains currently support turning on/off only in process
context. This prevents the usage of PM domains for domains that could be
powered on/off in a context where IRQs are disabled. Many such domains
exist today and do not get powered off, when the IRQ safe devices in
that domain are powered off, because of this limitation.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore, has to support both cases where the domain may or may not
operate in IRQ safe contexts. Configuring genpd to use an appropriate
lock for that domain, would allow domains that have IRQ safe devices to
runtime suspend and resume, in atomic context.

To achieve domain specific locking, set the domain's ->flag to
GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain. Locking
is abstracted through genpd_lock() and genpd_unlock() functions that use
the flag to determine the appropriate lock to be used for that domain.

Domains that have lower latency to suspend and resume and can operate
with IRQs disabled may now be able to save power, when the component
devices and sub-domains are idle at runtime.

The restriction this imposes on the domain hierarchy is that non-IRQ
safe domains may not have IRQ-safe subdomains, but IRQ safe domains may
have IRQ safe and non-IRQ safe domains and devices.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Krzysztof Koz?owski <k.kozlowski@samsung.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- split off to into its own patch
- Removed restriction that IRQ-safe domains can only have IRQ safe parents

 Documentation/power/devices.txt |  11 +++-
 drivers/base/power/domain.c     | 112 ++++++++++++++++++++++++++++++++++++----
 include/linux/pm_domain.h       |  11 +++-
 3 files changed, 123 insertions(+), 11 deletions(-)

diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 8ba6625..c06f0b6 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource.  Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on.  A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain.
+
+Devices, by default, operate in process context and if a device can operate in
+IRQ safe context, has to be explicitly set as IRQ safe. Power domains by
+default, operate in process context but could have devices that are IRQ safe.
+Such power domains cannot be powered on/off during runtime PM. On the other
+hand, an IRQ safe PM domains that have IRQ safe devices may be powered off
+when all the devices are in idle. An IRQ safe domain may only be attached as a
+subdomain to another IRQ safe domain.
 
 Support for power domains is provided through the pm_domain field of struct
 device.  This field is a pointer to an object of type struct dev_pm_domain,
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 8204615..3c4f675 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -75,11 +75,59 @@ static struct genpd_lock_fns irq_lock = {
 	.unlock = genpd_unlock_irq,
 };
 
+static void genpd_lock_nosleep(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+}
+
+static void genpd_lock_nosleep_nested(struct generic_pm_domain *genpd,
+					int depth)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
+	genpd->lock_flags = flags;
+}
+
+static int genpd_lock_nosleep_interruptible(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_nosleep(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static struct genpd_lock_fns no_sleep_lock = {
+	.lock = genpd_lock_nosleep,
+	.lock_nested = genpd_lock_nosleep_nested,
+	.lock_interruptible = genpd_lock_nosleep_interruptible,
+	.unlock = genpd_unlock_nosleep,
+};
+
 #define genpd_lock(p)			p->lock_fns->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_fns->unlock(p)
 
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	return dev->power.irq_safe && !genpd->irq_safe;
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -356,8 +404,17 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * We do not want to power off the domain if the device is
+		 * not suspended or an IRQ safe device is part of this
+		 * non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
+		WARN_ONCE(irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd),
+				"PM domain %s will not be powered off\n",
+				genpd->name);
 	}
 
 	if (not_suspended > 1 || (not_suspended == 1 && is_async))
@@ -473,10 +530,13 @@ static int pm_genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQ disabled, so suspend only if the power domain is
+	 * irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	WARN_ONCE(irq_safe_dev_in_no_sleep_domain(dev, genpd),
+			"genpd %s will not be powered off\n", genpd->name);
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -510,8 +570,11 @@ static int pm_genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we dont power off a non IRQ safe domain, which holds
+	 * an IRQ safe device, we dont need to restore power to it.
+	 */
+	if (dev->power.irq_safe && !genpd->irq_safe) {
 		timed = false;
 		goto out;
 	}
@@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
 		return -EINVAL;
 
+	if (genpd->irq_safe && !dev->power.irq_safe) {
+		dev_err(dev,
+			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",
+			genpd->name);
+		return -EINVAL;
+	}
+
 	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
@@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (!genpd->irq_safe && subdomain->irq_safe) {
+		WARN("Parent %s of subdomain %s must be IRQ-safe\n",
+				genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -1610,6 +1691,19 @@ static int of_genpd_device_parse_states(struct device_node *np,
 	return 0;
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->irq_safe = true;
+		genpd->lock_fns = &no_sleep_lock;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->irq_safe = false;
+		genpd->lock_fns = &irq_lock;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1635,8 +1729,7 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->mlock);
-	genpd->lock_fns = &irq_lock;
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -2077,7 +2170,8 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd->irq_safe ? GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
 
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index ec5523c..3f245d5 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 enum gpd_status {
 	GPD_STATE_ACTIVE = 0,	/* PM domain is active */
@@ -76,7 +78,14 @@ struct generic_pm_domain {
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	struct genpd_lock_fns *lock_fns;
-	struct mutex mlock;
+	bool irq_safe;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Daniel Lezcano

Define and add Generic PM domains (genpd) for CPU clusters. Many new
SoCs group CPUs as clusters. Clusters share common resources like power
rails, caches, VFP, Coresight etc. When all CPUs in the cluster are
idle, these shared resources may also be put in their idle state.

CPUs may be associated with their domain providers in DT. The domains in
turn may be associated with their providers. This is clean way to model
the cluster hierarchy like that of ARM's big.little architecture.

For each CPU in the DT, we identify the domain provider; initialize and
register the PM domain if isn't already registered and attach all the
CPU devices to the domain. Usually, when there are multiple clusters of
CPUs, there is a top level coherency domain that is dependent on these
individual domains. All domains thus created are marked IRQ safe
automatically and therefore may be powered down when the CPUs in the
domain are powered down by cpuidle.

Reading DT, initializing Generic PM domains, attaching CPUs to it
domains are common functionalities across ARM SoCs. Provide a common set
of APIs to setup PM domains for CPU clusters and its parents. The
platform drivers may just call of_setup_cpu_pd() to do a single step
setup of CPU domains.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Suggested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- Initialize down up, start with CPUs, identify and create domains
- Clean up of the API
- Two simple API for setting up CPU PM domains
- File name change: cpu-pd.[ch] ->cpu_domains.[ch]
- Depends on CONFIG_PM_GENERIC_DOMAINS_OF only
- cpu_pd_ops abstracts generic pm domains
- platform code does not know about genpd object used inside
- simplification and bug fixes

 drivers/base/power/Makefile      |   1 +
 drivers/base/power/cpu_domains.c | 267 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  33 +++++
 3 files changed, 301 insertions(+)
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 5998c53..9883e89 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp/
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o domain_governor.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS_OF)	+= cpu_domains.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 
 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
new file mode 100644
index 0000000..981592f
--- /dev/null
+++ b/drivers/base/power/cpu_domains.c
@@ -0,0 +1,267 @@
+/*
+ * drivers/base/power/cpu_domains.c - Helper functions to create CPU PM domains.
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpu_domains.h>
+#include <linux/cpu_pm.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/pm_domain.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
+#include <linux/slab.h>
+
+#define CPU_PD_NAME_MAX 36
+
+struct cpu_pm_domain {
+	struct list_head link;
+	struct cpu_pd_ops ops;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *parent;
+};
+
+/* List of CPU PM domains we care about */
+static LIST_HEAD(of_cpu_pd_list);
+static DEFINE_SPINLOCK(cpu_pd_list_lock);
+
+static inline
+struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
+{
+	struct cpu_pm_domain *pd;
+	struct cpu_pm_domain *res = NULL;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(pd, &of_cpu_pd_list, link)
+		if (pd->genpd == d) {
+			res = pd;
+			break;
+		}
+	rcu_read_unlock();
+
+	return res;
+}
+
+static int cpu_pd_attach_cpu(int cpu)
+{
+	int ret;
+	struct device *cpu_dev;
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev) {
+		pr_warn("%s: Unable to get device for CPU%d\n",
+				__func__, cpu);
+		return -ENODEV;
+	}
+
+	ret = genpd_dev_pm_attach(cpu_dev);
+	if (ret)
+		dev_warn(cpu_dev,
+			"%s: Unable to attach to power-domain: %d\n",
+			__func__, ret);
+	else
+		dev_dbg(cpu_dev, "Attached to domain\n");
+
+	return ret;
+}
+
+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_on();
+}
+
+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_off(genpd->state_idx,
+			genpd->states[genpd->state_idx].param);
+}
+
+/**
+ * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
+ *
+ * @dn: The domain provider's device node
+ * @ops: The power_on/_off callbacks for the domain
+ *
+ * Returns the generic_pm_domain (genpd) pointer to the domain on success
+ */
+static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
+				const struct cpu_pd_ops *ops)
+{
+	struct cpu_pm_domain *pd = NULL;
+	struct generic_pm_domain *genpd = NULL;
+	int ret = -ENOMEM;
+
+	if (!of_device_is_available(dn))
+		return ERR_PTR(-ENODEV);
+
+	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);
+	if (!genpd)
+		goto fail;
+
+	genpd->name = kstrndup(dn->full_name, CPU_PD_NAME_MAX, GFP_KERNEL);
+	if (!genpd->name)
+		goto fail;
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		goto fail;
+
+	pd->genpd = genpd;
+	pd->genpd->power_off = cpu_pd_power_off;
+	pd->genpd->power_on = cpu_pd_power_on;
+	pd->genpd->flags |= GENPD_FLAG_IRQ_SAFE;
+	pd->ops.power_on = ops->power_on;
+	pd->ops.power_off = ops->power_off;
+
+	INIT_LIST_HEAD_RCU(&pd->link);
+	spin_lock(&cpu_pd_list_lock);
+	list_add_rcu(&pd->link, &of_cpu_pd_list);
+	spin_unlock(&cpu_pd_list_lock);
+
+	/* Register the CPU genpd */
+	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
+	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
+	if (ret) {
+		pr_err("Unable to initialize domain %s\n", dn->full_name);
+		goto fail;
+	}
+
+	ret = of_genpd_add_provider_simple(dn, pd->genpd);
+	if (ret)
+		pr_warn("Unable to add genpd %s as provider\n",
+				pd->genpd->name);
+
+	return pd->genpd;
+fail:
+
+	kfree(genpd);
+	kfree(genpd->name);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+
+static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
+		const struct cpu_pd_ops *ops, int cpu)
+{
+	struct of_phandle_args args;
+	struct generic_pm_domain *genpd, *parent;
+	int ret;
+
+	/* Do we have this domain? If not, create the domain */
+	args.np = dn;
+	args.args_count = 0;
+
+	genpd = of_genpd_get_from_provider(&args);
+	if (!IS_ERR(genpd))
+		goto skip_parent;
+
+	genpd = of_init_cpu_pm_domain(dn, ops);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	/* Is there a domain provider for this domain? */
+	ret = of_parse_phandle_with_args(dn, "power-domains",
+			"#power-domain-cells", 0, &args);
+	of_node_put(dn);
+	if (ret < 0)
+		goto skip_parent;
+
+	/* Find its parent and attach this domain to it, recursively */
+	parent = of_get_cpu_domain(args.np, ops, cpu);
+	if (IS_ERR(parent)) {
+		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
+
+		ret = pm_genpd_add_subdomain(genpd, parent);
+		if (ret) {
+			pr_err("%s: Unable to add sub-domain (%s) to parent (%s)\n err: %d",
+					__func__, genpd->name, parent->name,
+					ret);
+			return ERR_PTR(ret);
+		}
+
+		/*
+		 * Reference parent domain for easy access.
+		 * Note: We could be attached to a domain that is not a
+		 * CPU PM domain in that case dont reference the parent.
+		 */
+		cpu_pd = to_cpu_pd(genpd);
+		parent_cpu_pd = to_cpu_pd(parent);
+
+		if (cpu_pd && parent_cpu_pd)
+			cpu_pd->parent = parent_cpu_pd;
+	}
+
+skip_parent:
+	return genpd;
+}
+
+/**
+ * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
+ *
+ * @cpu: The CPU for which the PM domain is to be set up.
+ * @ops: The PM domain suspend/resume ops for the CPU's domain
+ *
+ * If the CPU PM domain exists already, then the CPU is attached to
+ * that CPU PD. If it doesn't, the domain is created, the @ops are
+ * set for power_on/power_off callbacks and then the CPU is attached
+ * to that domain. If the domain was created outside this framework,
+ * then we do not attach the CPU to the domain.
+ */
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+
+	struct device_node *dn;
+	struct generic_pm_domain *genpd;
+
+	dn = of_get_cpu_node(cpu, NULL);
+	if (!dn)
+		return -ENODEV;
+
+	dn = of_parse_phandle(dn, "power-domains", 0);
+	if (!dn)
+		return -ENODEV;
+	of_node_put(dn);
+
+	/* Find the genpd for this CPU, create if not found */
+	genpd = of_get_cpu_domain(dn, ops, cpu);
+	if (IS_ERR(genpd))
+		return PTR_ERR(genpd);
+
+	return cpu_pd_attach_cpu(cpu);
+}
+EXPORT_SYMBOL(of_setup_cpu_pd_single);
+
+/**
+ * of_setup_cpu_pd() - Setup the PM domains for all CPUs
+ *
+ * @ops: The PM domain suspend/resume ops for all the domains
+ *
+ * Setup the CPU PM domain and attach all possible CPUs to their respective
+ * domains. The domains are created if not already and then attached.
+ */
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	int cpu;
+	int ret;
+
+	for_each_possible_cpu(cpu) {
+		ret = of_setup_cpu_pd_single(cpu, ops);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(of_setup_cpu_pd);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
new file mode 100644
index 0000000..bab4846
--- /dev/null
+++ b/include/linux/cpu_domains.h
@@ -0,0 +1,33 @@
+/*
+ * include/linux/cpu_domains.h
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CPU_DOMAINS_H__
+#define __CPU_DOMAINS_H__
+
+struct cpu_pd_ops {
+	int (*power_off)(u32 state_idx, u32 param);
+	int (*power_on)(void);
+};
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops);
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops);
+#else
+static inline int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+	return -ENODEV;
+}
+static inline int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
+
+#endif /* __CPU_DOMAINS_H__ */
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Define and add Generic PM domains (genpd) for CPU clusters. Many new
SoCs group CPUs as clusters. Clusters share common resources like power
rails, caches, VFP, Coresight etc. When all CPUs in the cluster are
idle, these shared resources may also be put in their idle state.

CPUs may be associated with their domain providers in DT. The domains in
turn may be associated with their providers. This is clean way to model
the cluster hierarchy like that of ARM's big.little architecture.

For each CPU in the DT, we identify the domain provider; initialize and
register the PM domain if isn't already registered and attach all the
CPU devices to the domain. Usually, when there are multiple clusters of
CPUs, there is a top level coherency domain that is dependent on these
individual domains. All domains thus created are marked IRQ safe
automatically and therefore may be powered down when the CPUs in the
domain are powered down by cpuidle.

Reading DT, initializing Generic PM domains, attaching CPUs to it
domains are common functionalities across ARM SoCs. Provide a common set
of APIs to setup PM domains for CPU clusters and its parents. The
platform drivers may just call of_setup_cpu_pd() to do a single step
setup of CPU domains.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Suggested-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- Initialize down up, start with CPUs, identify and create domains
- Clean up of the API
- Two simple API for setting up CPU PM domains
- File name change: cpu-pd.[ch] ->cpu_domains.[ch]
- Depends on CONFIG_PM_GENERIC_DOMAINS_OF only
- cpu_pd_ops abstracts generic pm domains
- platform code does not know about genpd object used inside
- simplification and bug fixes

 drivers/base/power/Makefile      |   1 +
 drivers/base/power/cpu_domains.c | 267 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  33 +++++
 3 files changed, 301 insertions(+)
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 5998c53..9883e89 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp/
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o domain_governor.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS_OF)	+= cpu_domains.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 
 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
new file mode 100644
index 0000000..981592f
--- /dev/null
+++ b/drivers/base/power/cpu_domains.c
@@ -0,0 +1,267 @@
+/*
+ * drivers/base/power/cpu_domains.c - Helper functions to create CPU PM domains.
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpu_domains.h>
+#include <linux/cpu_pm.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/pm_domain.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
+#include <linux/slab.h>
+
+#define CPU_PD_NAME_MAX 36
+
+struct cpu_pm_domain {
+	struct list_head link;
+	struct cpu_pd_ops ops;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *parent;
+};
+
+/* List of CPU PM domains we care about */
+static LIST_HEAD(of_cpu_pd_list);
+static DEFINE_SPINLOCK(cpu_pd_list_lock);
+
+static inline
+struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
+{
+	struct cpu_pm_domain *pd;
+	struct cpu_pm_domain *res = NULL;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(pd, &of_cpu_pd_list, link)
+		if (pd->genpd == d) {
+			res = pd;
+			break;
+		}
+	rcu_read_unlock();
+
+	return res;
+}
+
+static int cpu_pd_attach_cpu(int cpu)
+{
+	int ret;
+	struct device *cpu_dev;
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev) {
+		pr_warn("%s: Unable to get device for CPU%d\n",
+				__func__, cpu);
+		return -ENODEV;
+	}
+
+	ret = genpd_dev_pm_attach(cpu_dev);
+	if (ret)
+		dev_warn(cpu_dev,
+			"%s: Unable to attach to power-domain: %d\n",
+			__func__, ret);
+	else
+		dev_dbg(cpu_dev, "Attached to domain\n");
+
+	return ret;
+}
+
+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_on();
+}
+
+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_off(genpd->state_idx,
+			genpd->states[genpd->state_idx].param);
+}
+
+/**
+ * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
+ *
+ * @dn: The domain provider's device node
+ * @ops: The power_on/_off callbacks for the domain
+ *
+ * Returns the generic_pm_domain (genpd) pointer to the domain on success
+ */
+static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
+				const struct cpu_pd_ops *ops)
+{
+	struct cpu_pm_domain *pd = NULL;
+	struct generic_pm_domain *genpd = NULL;
+	int ret = -ENOMEM;
+
+	if (!of_device_is_available(dn))
+		return ERR_PTR(-ENODEV);
+
+	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);
+	if (!genpd)
+		goto fail;
+
+	genpd->name = kstrndup(dn->full_name, CPU_PD_NAME_MAX, GFP_KERNEL);
+	if (!genpd->name)
+		goto fail;
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		goto fail;
+
+	pd->genpd = genpd;
+	pd->genpd->power_off = cpu_pd_power_off;
+	pd->genpd->power_on = cpu_pd_power_on;
+	pd->genpd->flags |= GENPD_FLAG_IRQ_SAFE;
+	pd->ops.power_on = ops->power_on;
+	pd->ops.power_off = ops->power_off;
+
+	INIT_LIST_HEAD_RCU(&pd->link);
+	spin_lock(&cpu_pd_list_lock);
+	list_add_rcu(&pd->link, &of_cpu_pd_list);
+	spin_unlock(&cpu_pd_list_lock);
+
+	/* Register the CPU genpd */
+	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
+	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
+	if (ret) {
+		pr_err("Unable to initialize domain %s\n", dn->full_name);
+		goto fail;
+	}
+
+	ret = of_genpd_add_provider_simple(dn, pd->genpd);
+	if (ret)
+		pr_warn("Unable to add genpd %s as provider\n",
+				pd->genpd->name);
+
+	return pd->genpd;
+fail:
+
+	kfree(genpd);
+	kfree(genpd->name);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+
+static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
+		const struct cpu_pd_ops *ops, int cpu)
+{
+	struct of_phandle_args args;
+	struct generic_pm_domain *genpd, *parent;
+	int ret;
+
+	/* Do we have this domain? If not, create the domain */
+	args.np = dn;
+	args.args_count = 0;
+
+	genpd = of_genpd_get_from_provider(&args);
+	if (!IS_ERR(genpd))
+		goto skip_parent;
+
+	genpd = of_init_cpu_pm_domain(dn, ops);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	/* Is there a domain provider for this domain? */
+	ret = of_parse_phandle_with_args(dn, "power-domains",
+			"#power-domain-cells", 0, &args);
+	of_node_put(dn);
+	if (ret < 0)
+		goto skip_parent;
+
+	/* Find its parent and attach this domain to it, recursively */
+	parent = of_get_cpu_domain(args.np, ops, cpu);
+	if (IS_ERR(parent)) {
+		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
+
+		ret = pm_genpd_add_subdomain(genpd, parent);
+		if (ret) {
+			pr_err("%s: Unable to add sub-domain (%s) to parent (%s)\n err: %d",
+					__func__, genpd->name, parent->name,
+					ret);
+			return ERR_PTR(ret);
+		}
+
+		/*
+		 * Reference parent domain for easy access.
+		 * Note: We could be attached to a domain that is not a
+		 * CPU PM domain in that case dont reference the parent.
+		 */
+		cpu_pd = to_cpu_pd(genpd);
+		parent_cpu_pd = to_cpu_pd(parent);
+
+		if (cpu_pd && parent_cpu_pd)
+			cpu_pd->parent = parent_cpu_pd;
+	}
+
+skip_parent:
+	return genpd;
+}
+
+/**
+ * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
+ *
+ * @cpu: The CPU for which the PM domain is to be set up.
+ * @ops: The PM domain suspend/resume ops for the CPU's domain
+ *
+ * If the CPU PM domain exists already, then the CPU is attached to
+ * that CPU PD. If it doesn't, the domain is created, the @ops are
+ * set for power_on/power_off callbacks and then the CPU is attached
+ * to that domain. If the domain was created outside this framework,
+ * then we do not attach the CPU to the domain.
+ */
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+
+	struct device_node *dn;
+	struct generic_pm_domain *genpd;
+
+	dn = of_get_cpu_node(cpu, NULL);
+	if (!dn)
+		return -ENODEV;
+
+	dn = of_parse_phandle(dn, "power-domains", 0);
+	if (!dn)
+		return -ENODEV;
+	of_node_put(dn);
+
+	/* Find the genpd for this CPU, create if not found */
+	genpd = of_get_cpu_domain(dn, ops, cpu);
+	if (IS_ERR(genpd))
+		return PTR_ERR(genpd);
+
+	return cpu_pd_attach_cpu(cpu);
+}
+EXPORT_SYMBOL(of_setup_cpu_pd_single);
+
+/**
+ * of_setup_cpu_pd() - Setup the PM domains for all CPUs
+ *
+ * @ops: The PM domain suspend/resume ops for all the domains
+ *
+ * Setup the CPU PM domain and attach all possible CPUs to their respective
+ * domains. The domains are created if not already and then attached.
+ */
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	int cpu;
+	int ret;
+
+	for_each_possible_cpu(cpu) {
+		ret = of_setup_cpu_pd_single(cpu, ops);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(of_setup_cpu_pd);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
new file mode 100644
index 0000000..bab4846
--- /dev/null
+++ b/include/linux/cpu_domains.h
@@ -0,0 +1,33 @@
+/*
+ * include/linux/cpu_domains.h
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CPU_DOMAINS_H__
+#define __CPU_DOMAINS_H__
+
+struct cpu_pd_ops {
+	int (*power_off)(u32 state_idx, u32 param);
+	int (*power_on)(void);
+};
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops);
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops);
+#else
+static inline int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+	return -ENODEV;
+}
+static inline int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	return -ENODEV;
+}
+#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
+
+#endif /* __CPU_DOMAINS_H__ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Daniel Lezcano

Notify runtime PM when the CPU is going to be powered off in the idle
state. This allows for runtime PM suspend/resume of the CPU as well as
its PM domain.

We do not call into runtime PM for ARM WFI to keep the default state
simple and faster.

Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- runtime PM initialization is now done as part of this file
- hotplug and its runtime PM invocation is done here

 drivers/cpuidle/cpuidle-arm.c | 48 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
index 545069d..bf7a80c 100644
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -11,12 +11,14 @@
 
 #define pr_fmt(fmt) "CPUidle arm: " fmt
 
+#include <linux/cpu.h>
 #include <linux/cpuidle.h>
 #include <linux/cpumask.h>
 #include <linux/cpu_pm.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 
 #include <asm/cpuidle.h>
@@ -37,6 +39,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 				struct cpuidle_driver *drv, int idx)
 {
 	int ret;
+	struct device *cpu_dev = get_cpu_device(dev->cpu);
 
 	if (!idx) {
 		cpu_do_idle();
@@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 
 	ret = cpu_pm_enter();
 	if (!ret) {
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
+
 		/*
 		 * Pass idle state index to cpu_suspend which in turn will
 		 * call the CPU ops suspend protocol with idle index as a
@@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 		 */
 		arm_cpuidle_suspend(idx);
 
+		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
 		cpu_pm_exit();
 	}
 
@@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
 	{ },
 };
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int cpu_hotplug(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct device *cpu_dev = get_cpu_device(smp_processor_id());
+
+	/* Execute CPU runtime PM on that CPU */
+	switch (action) {
+	case CPU_DYING:
+	case CPU_DYING_FROZEN:
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
+		break;
+	case CPU_STARTING:
+	case CPU_STARTING_FROZEN:
+		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+#endif
+
 /*
  * arm_idle_init
  *
@@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
 	int cpu, ret;
 	struct cpuidle_driver *drv = &arm_idle_driver;
 	struct cpuidle_device *dev;
+	struct device *cpu_dev;
 
 	/*
 	 * Initialize idle states data, starting at index 1.
@@ -118,6 +149,16 @@ static int __init arm_idle_init(void)
 	 * idle states suspend back-end specific data
 	 */
 	for_each_possible_cpu(cpu) {
+
+		/* Initialize Runtime PM for the CPU */
+		cpu_dev = get_cpu_device(cpu);
+		pm_runtime_irq_safe(cpu_dev);
+		pm_runtime_enable(cpu_dev);
+		if (cpu_online(cpu)) {
+			pm_runtime_get_noresume(cpu_dev);
+			pm_runtime_set_active(cpu_dev);
+		}
+
 		ret = arm_cpuidle_init(cpu);
 
 		/*
@@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
 		}
 	}
 
+#ifdef CONFIG_HOTPLUG_CPU
+	/* Register for hotplug notifications for runtime PM */
+	hotcpu_notifier(cpu_hotplug, 0);
+#endif
+
 	return 0;
 out_fail:
 	while (--cpu >= 0) {
 		dev = per_cpu(cpuidle_devices, cpu);
+		cpu_dev = get_cpu_device(cpu);
+		__pm_runtime_disable(cpu_dev, false);
 		cpuidle_unregister_device(dev);
 		kfree(dev);
 	}
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Notify runtime PM when the CPU is going to be powered off in the idle
state. This allows for runtime PM suspend/resume of the CPU as well as
its PM domain.

We do not call into runtime PM for ARM WFI to keep the default state
simple and faster.

Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- runtime PM initialization is now done as part of this file
- hotplug and its runtime PM invocation is done here

 drivers/cpuidle/cpuidle-arm.c | 48 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
index 545069d..bf7a80c 100644
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -11,12 +11,14 @@
 
 #define pr_fmt(fmt) "CPUidle arm: " fmt
 
+#include <linux/cpu.h>
 #include <linux/cpuidle.h>
 #include <linux/cpumask.h>
 #include <linux/cpu_pm.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 
 #include <asm/cpuidle.h>
@@ -37,6 +39,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 				struct cpuidle_driver *drv, int idx)
 {
 	int ret;
+	struct device *cpu_dev = get_cpu_device(dev->cpu);
 
 	if (!idx) {
 		cpu_do_idle();
@@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 
 	ret = cpu_pm_enter();
 	if (!ret) {
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
+
 		/*
 		 * Pass idle state index to cpu_suspend which in turn will
 		 * call the CPU ops suspend protocol with idle index as a
@@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 		 */
 		arm_cpuidle_suspend(idx);
 
+		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
 		cpu_pm_exit();
 	}
 
@@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
 	{ },
 };
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int cpu_hotplug(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct device *cpu_dev = get_cpu_device(smp_processor_id());
+
+	/* Execute CPU runtime PM on that CPU */
+	switch (action) {
+	case CPU_DYING:
+	case CPU_DYING_FROZEN:
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
+		break;
+	case CPU_STARTING:
+	case CPU_STARTING_FROZEN:
+		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+#endif
+
 /*
  * arm_idle_init
  *
@@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
 	int cpu, ret;
 	struct cpuidle_driver *drv = &arm_idle_driver;
 	struct cpuidle_device *dev;
+	struct device *cpu_dev;
 
 	/*
 	 * Initialize idle states data, starting at index 1.
@@ -118,6 +149,16 @@ static int __init arm_idle_init(void)
 	 * idle states suspend back-end specific data
 	 */
 	for_each_possible_cpu(cpu) {
+
+		/* Initialize Runtime PM for the CPU */
+		cpu_dev = get_cpu_device(cpu);
+		pm_runtime_irq_safe(cpu_dev);
+		pm_runtime_enable(cpu_dev);
+		if (cpu_online(cpu)) {
+			pm_runtime_get_noresume(cpu_dev);
+			pm_runtime_set_active(cpu_dev);
+		}
+
 		ret = arm_cpuidle_init(cpu);
 
 		/*
@@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
 		}
 	}
 
+#ifdef CONFIG_HOTPLUG_CPU
+	/* Register for hotplug notifications for runtime PM */
+	hotcpu_notifier(cpu_hotplug, 0);
+#endif
+
 	return 0;
 out_fail:
 	while (--cpu >= 0) {
 		dev = per_cpu(cpuidle_devices, cpu);
+		cpu_dev = get_cpu_device(cpu);
+		__pm_runtime_disable(cpu_dev, false);
 		cpuidle_unregister_device(dev);
 		kfree(dev);
 	}
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 05/12] timer: Export next wake up of a CPU
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer,
	Thomas Gleixner

Knowing the sleep length of the CPU is useful for the power state
determination on idle. However, when the common sleep time between
multiple CPUs is needed, the sleep length of a CPU is not useful.

By reading the next wake up event of a CPU, governors can determine the
first CPU to wake up (due to timer) amongst a cluster of CPUs and the
sleep time available between the last CPU to idle and the first CPU to
resume. This information is useful to determine if the caches and other
common hardware blocks can also be put in idle during this common period
of inactivity.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- any CPU can access this variable if that CPU has IRQs disabled

 include/linux/tick.h     | 10 ++++++++++
 kernel/time/tick-sched.c | 13 +++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 97fd4e5..b8a2e06 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -104,6 +104,7 @@ extern void tick_nohz_idle_enter(void);
 extern void tick_nohz_idle_exit(void);
 extern void tick_nohz_irq_exit(void);
 extern ktime_t tick_nohz_get_sleep_length(void);
+extern ktime_t tick_nohz_get_next_wakeup(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
@@ -118,6 +119,15 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 
 	return len;
 }
+
+static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	ktime_t len = { .tv64 = NSEC_PER_SEC/HZ };
+
+	/* Next wake up is the tick period, assume it starts now */
+	return ktime_add(len, ktime_get());
+}
+
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
 #endif /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 0b17424..f0ddaec 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -888,6 +888,19 @@ ktime_t tick_nohz_get_sleep_length(void)
 	return ts->sleep_length;
 }
 
+/**
+ * tick_nohz_get_next_wakeup - return the next wake up of the CPU
+ *
+ * Called with interrupts disabled on the cpu
+ */
+ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	struct clock_event_device *dev =
+			per_cpu(tick_cpu_device.evtdev, cpu);
+
+	return dev->next_event;
+}
+
 static void tick_nohz_account_idle_ticks(struct tick_sched *ts)
 {
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 05/12] timer: Export next wake up of a CPU
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Knowing the sleep length of the CPU is useful for the power state
determination on idle. However, when the common sleep time between
multiple CPUs is needed, the sleep length of a CPU is not useful.

By reading the next wake up event of a CPU, governors can determine the
first CPU to wake up (due to timer) amongst a cluster of CPUs and the
sleep time available between the last CPU to idle and the first CPU to
resume. This information is useful to determine if the caches and other
common hardware blocks can also be put in idle during this common period
of inactivity.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- any CPU can access this variable if that CPU has IRQs disabled

 include/linux/tick.h     | 10 ++++++++++
 kernel/time/tick-sched.c | 13 +++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 97fd4e5..b8a2e06 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -104,6 +104,7 @@ extern void tick_nohz_idle_enter(void);
 extern void tick_nohz_idle_exit(void);
 extern void tick_nohz_irq_exit(void);
 extern ktime_t tick_nohz_get_sleep_length(void);
+extern ktime_t tick_nohz_get_next_wakeup(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
@@ -118,6 +119,15 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 
 	return len;
 }
+
+static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	ktime_t len = { .tv64 = NSEC_PER_SEC/HZ };
+
+	/* Next wake up is the tick period, assume it starts now */
+	return ktime_add(len, ktime_get());
+}
+
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
 #endif /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 0b17424..f0ddaec 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -888,6 +888,19 @@ ktime_t tick_nohz_get_sleep_length(void)
 	return ts->sleep_length;
 }
 
+/**
+ * tick_nohz_get_next_wakeup - return the next wake up of the CPU
+ *
+ * Called with interrupts disabled on the cpu
+ */
+ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	struct clock_event_device *dev =
+			per_cpu(tick_cpu_device.evtdev, cpu);
+
+	return dev->next_event;
+}
+
 static void tick_nohz_account_idle_ticks(struct tick_sched *ts)
 {
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer

In order to power down a CPU domain, the domain needs information about
the CPUs in that domain. The reference counting for the CPU devices in a
domain is done by genpd. In order to understand the idle time for the
domain, it is necessary to know which CPUs are part of the domain, so
the domain governor may understand the sleep durations of the comprising
CPU devices and make a judicious decision on the domain's idle state.
This extends to the parent of such domains, which may have multiple such
CPU domains and therefore would need to know the sleep patterns of all
the CPUs in all the CPU domains.

To aid this functionality, traverse up the parent chain and set the CPU
in the cpumasks while attaching a CPU to its domain. This mask is
provided in the callback to the platform driver to help identify the
CPUs in the domain that is being powered off.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 27 ++++++++++++++++++++++++---
 include/linux/cpu_domains.h      |  4 +++-
 2 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 981592f..c99710c 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/cpu.h>
+#include <linux/cpumask.h>
 #include <linux/cpu_domains.h>
 #include <linux/cpu_pm.h>
 #include <linux/device.h>
@@ -27,6 +28,7 @@ struct cpu_pm_domain {
 	struct cpu_pd_ops ops;
 	struct generic_pm_domain *genpd;
 	struct cpu_pm_domain *parent;
+	cpumask_var_t cpus;
 };
 
 /* List of CPU PM domains we care about */
@@ -50,7 +52,7 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
-static int cpu_pd_attach_cpu(int cpu)
+static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
 	struct device *cpu_dev;
@@ -70,6 +72,11 @@ static int cpu_pd_attach_cpu(int cpu)
 	else
 		dev_dbg(cpu_dev, "Attached to domain\n");
 
+	while (!ret && cpu_pd) {
+		cpumask_set_cpu(cpu, cpu_pd->cpus);
+		cpu_pd = cpu_pd->parent;
+	};
+
 	return ret;
 }
 
@@ -85,7 +92,8 @@ static int cpu_pd_power_off(struct generic_pm_domain *genpd)
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
 	return pd->ops.power_off(genpd->state_idx,
-			genpd->states[genpd->state_idx].param);
+			genpd->states[genpd->state_idx].param,
+			pd->cpus);
 }
 
 /**
@@ -118,6 +126,9 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 	if (!pd)
 		goto fail;
 
+	if (!zalloc_cpumask_var(&pd->cpus, GFP_KERNEL))
+		goto fail;
+
 	pd->genpd = genpd;
 	pd->genpd->power_off = cpu_pd_power_off;
 	pd->genpd->power_on = cpu_pd_power_on;
@@ -148,6 +159,8 @@ fail:
 
 	kfree(genpd);
 	kfree(genpd->name);
+	if (pd)
+		kfree(pd->cpus);
 	kfree(pd);
 	return ERR_PTR(ret);
 }
@@ -224,6 +237,7 @@ int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
 
 	struct device_node *dn;
 	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *cpu_pd;
 
 	dn = of_get_cpu_node(cpu, NULL);
 	if (!dn)
@@ -239,7 +253,14 @@ int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
 	if (IS_ERR(genpd))
 		return PTR_ERR(genpd);
 
-	return cpu_pd_attach_cpu(cpu);
+	cpu_pd = to_cpu_pd(genpd);
+	if (!cpu_pd) {
+		pr_err("%s: Genpd was created outside CPU PM domains\n",
+				__func__);
+		return -ENOENT;
+	}
+
+	return cpu_pd_attach_cpu(cpu_pd, cpu);
 }
 EXPORT_SYMBOL(of_setup_cpu_pd_single);
 
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
index bab4846..0c539f0 100644
--- a/include/linux/cpu_domains.h
+++ b/include/linux/cpu_domains.h
@@ -11,8 +11,10 @@
 #ifndef __CPU_DOMAINS_H__
 #define __CPU_DOMAINS_H__
 
+#include <linux/cpumask.h>
+
 struct cpu_pd_ops {
-	int (*power_off)(u32 state_idx, u32 param);
+	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
 	int (*power_on)(void);
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

In order to power down a CPU domain, the domain needs information about
the CPUs in that domain. The reference counting for the CPU devices in a
domain is done by genpd. In order to understand the idle time for the
domain, it is necessary to know which CPUs are part of the domain, so
the domain governor may understand the sleep durations of the comprising
CPU devices and make a judicious decision on the domain's idle state.
This extends to the parent of such domains, which may have multiple such
CPU domains and therefore would need to know the sleep patterns of all
the CPUs in all the CPU domains.

To aid this functionality, traverse up the parent chain and set the CPU
in the cpumasks while attaching a CPU to its domain. This mask is
provided in the callback to the platform driver to help identify the
CPUs in the domain that is being powered off.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 27 ++++++++++++++++++++++++---
 include/linux/cpu_domains.h      |  4 +++-
 2 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 981592f..c99710c 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/cpu.h>
+#include <linux/cpumask.h>
 #include <linux/cpu_domains.h>
 #include <linux/cpu_pm.h>
 #include <linux/device.h>
@@ -27,6 +28,7 @@ struct cpu_pm_domain {
 	struct cpu_pd_ops ops;
 	struct generic_pm_domain *genpd;
 	struct cpu_pm_domain *parent;
+	cpumask_var_t cpus;
 };
 
 /* List of CPU PM domains we care about */
@@ -50,7 +52,7 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
-static int cpu_pd_attach_cpu(int cpu)
+static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
 	struct device *cpu_dev;
@@ -70,6 +72,11 @@ static int cpu_pd_attach_cpu(int cpu)
 	else
 		dev_dbg(cpu_dev, "Attached to domain\n");
 
+	while (!ret && cpu_pd) {
+		cpumask_set_cpu(cpu, cpu_pd->cpus);
+		cpu_pd = cpu_pd->parent;
+	};
+
 	return ret;
 }
 
@@ -85,7 +92,8 @@ static int cpu_pd_power_off(struct generic_pm_domain *genpd)
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
 	return pd->ops.power_off(genpd->state_idx,
-			genpd->states[genpd->state_idx].param);
+			genpd->states[genpd->state_idx].param,
+			pd->cpus);
 }
 
 /**
@@ -118,6 +126,9 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 	if (!pd)
 		goto fail;
 
+	if (!zalloc_cpumask_var(&pd->cpus, GFP_KERNEL))
+		goto fail;
+
 	pd->genpd = genpd;
 	pd->genpd->power_off = cpu_pd_power_off;
 	pd->genpd->power_on = cpu_pd_power_on;
@@ -148,6 +159,8 @@ fail:
 
 	kfree(genpd);
 	kfree(genpd->name);
+	if (pd)
+		kfree(pd->cpus);
 	kfree(pd);
 	return ERR_PTR(ret);
 }
@@ -224,6 +237,7 @@ int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
 
 	struct device_node *dn;
 	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *cpu_pd;
 
 	dn = of_get_cpu_node(cpu, NULL);
 	if (!dn)
@@ -239,7 +253,14 @@ int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
 	if (IS_ERR(genpd))
 		return PTR_ERR(genpd);
 
-	return cpu_pd_attach_cpu(cpu);
+	cpu_pd = to_cpu_pd(genpd);
+	if (!cpu_pd) {
+		pr_err("%s: Genpd was created outside CPU PM domains\n",
+				__func__);
+		return -ENOENT;
+	}
+
+	return cpu_pd_attach_cpu(cpu_pd, cpu);
 }
 EXPORT_SYMBOL(of_setup_cpu_pd_single);
 
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
index bab4846..0c539f0 100644
--- a/include/linux/cpu_domains.h
+++ b/include/linux/cpu_domains.h
@@ -11,8 +11,10 @@
 #ifndef __CPU_DOMAINS_H__
 #define __CPU_DOMAINS_H__
 
+#include <linux/cpumask.h>
+
 struct cpu_pd_ops {
-	int (*power_off)(u32 state_idx, u32 param);
+	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
 	int (*power_on)(void);
 };
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer

A PM domain comprising of CPUs may be powered off when all the CPUs in
the domain are powered down. Powering down a CPU domain is generally a
expensive operation and therefore the power performance trade offs
should be considered. The time between the last CPU powering down and
the first CPU powering up in a domain, is the time available for the
domain to sleep. Ideally, the sleep time of the domain should fulfill
the residency requirement of the domains' idle state.

To do this effectively, read the time before the wakeup of the cluster's
CPUs and ensure that the domain's idle state sleep time guarantees the
QoS requirements of each of the CPU, the PM QoS CPU_DMA_LATENCY and the
state's residency.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- bug fix

 drivers/base/power/cpu_domains.c | 75 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index c99710c..7069411 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include <linux/list.h>
 #include <linux/of.h>
 #include <linux/pm_domain.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 
 #define CPU_PD_NAME_MAX 36
 
@@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	u64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We dont want to power down, if QoS is 0 */
+	if (!qos)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wake up of any CPU that
+	 * are in this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we cant sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also dont want to sleep more than we should to
+		 * gaurantee QoS.
+		 */
+		if (state_sleep_ns < (qos * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return  (i >= 0) ? true : false;
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
@@ -143,7 +216,7 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 
 	/* Register the CPU genpd */
 	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
-	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
+	ret = of_pm_genpd_init(dn, pd->genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", dn->full_name);
 		goto fail;
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

A PM domain comprising of CPUs may be powered off when all the CPUs in
the domain are powered down. Powering down a CPU domain is generally a
expensive operation and therefore the power performance trade offs
should be considered. The time between the last CPU powering down and
the first CPU powering up in a domain, is the time available for the
domain to sleep. Ideally, the sleep time of the domain should fulfill
the residency requirement of the domains' idle state.

To do this effectively, read the time before the wakeup of the cluster's
CPUs and ensure that the domain's idle state sleep time guarantees the
QoS requirements of each of the CPU, the PM QoS CPU_DMA_LATENCY and the
state's residency.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- bug fix

 drivers/base/power/cpu_domains.c | 75 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 74 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index c99710c..7069411 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include <linux/list.h>
 #include <linux/of.h>
 #include <linux/pm_domain.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 
 #define CPU_PD_NAME_MAX 36
 
@@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	u64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We dont want to power down, if QoS is 0 */
+	if (!qos)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wake up of any CPU that
+	 * are in this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we cant sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also dont want to sleep more than we should to
+		 * gaurantee QoS.
+		 */
+		if (state_sleep_ns < (qos * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return  (i >= 0) ? true : false;
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_attach_cpu(struct cpu_pm_domain *cpu_pd, int cpu)
 {
 	int ret;
@@ -143,7 +216,7 @@ static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
 
 	/* Register the CPU genpd */
 	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
-	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
+	ret = of_pm_genpd_init(dn, pd->genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", dn->full_name);
 		goto fail;
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer

A generic CPU PM domain functionality is provided by
drivers/base/power/cpu_domains.c. This document describes the generic
usecase of CPU's PM domains, the setup of such domains and a CPU
specific genpd governor.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- a patch of its own 
- updated documentation

 Documentation/power/cpu_domains.txt | 79 +++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)
 create mode 100644 Documentation/power/cpu_domains.txt

diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
new file mode 100644
index 0000000..5fdc66d
--- /dev/null
+++ b/Documentation/power/cpu_domains.txt
@@ -0,0 +1,79 @@
+CPU PM domains
+==============
+
+Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
+may have caches, VFP and architecture specific power controller that share the
+resources when any of the CPUs are active. When the CPUs are in idle, some of
+these cluster components may also idle. A cluster may also be nested inside
+another cluster that provides common coherency interfaces to share data
+between the clusters. The organization of such clusters and CPU may be
+descibed in DT, since they are SoC specific.
+
+CPUIdle framework enables the CPUs to determine the sleep time and enter low
+power state to save power during periods of idle. CPUs in a cluster may enter
+and exit idle state independently of each other. During the time when all the
+CPUs are in idle state, the cluster may safely put some of the shared
+resources in their idle state. The time between the last CPU to enter idle and
+the first CPU to wake up is the time available for the cluster to enter its
+idle state.
+
+When SoCs power down the CPU during cpuidle, they generally have supplemental
+hardware that can handshake with the CPU with a signal that indicates that the
+CPU has stopped execution. The hardware is also responsible for warm booting
+the CPU on receiving an interrupt. With cluster architecture, common resources
+that are shared by the cluster may also be powered down by an external
+microcontroller or a processor. The microcontroller may be programmed in
+advance to put the hardware blocks in a low power state, when the last active
+CPU sends the idle signal. When the signal is received, the microcontroller
+may trigger the hardware blocks to enter their low power state. When an
+interrupt to wakeup the processor is received, the microcontroller is
+responsible for bringing up the hardware blocks to its active state, before
+waking up the CPU. The timelines for such operations should be in the
+acceptable range for the for CPU idle to get power benefits.
+
+CPU PM Domain Setup
+-------------------
+
+PM domains  are represented in the DT as domain consumers and providers. A
+device may have a domain provider and a domain provider may support multiple
+domain consumers. Domains like clusters, may also be nested inside one
+another. A domain that has no active consumer, may be powered off and any
+resuming consumer would trigger the domain back to active. Parent domains may
+be powered off when the child domains are powered off. The CPU cluster can be
+fashioned as a PM domain. When the CPU devices are powered off, the PM domain
+may be powered off.
+
+Device idle is reference counted by runtime PM. When there is no active need
+for the device, runtime PM invokes callbacks to suspend the parent domain.
+Generic PM domain (genpd) handles the hierarchy of devices, domains and the
+reference counting of objects leading to last man down and first man up in the
+domain. The CPU domains helper functions defines PM domains for each CPU
+cluster and attaches the CPU devices to the respective PM domains.
+
+Platform drivers may use the following API to register their CPU PM domains.
+
+of_setup_cpu_pd() -
+Provides a single step registration of the CPU PM domain and attach CPUs to
+the genpd. Platform drivers may additionally register callbacks for power_on
+and power_off operations for the PM domain.
+
+of_setup_cpu_pd_single() -
+Define PM domain for a single CPU and attach the CPU to its domain.
+
+
+CPU PM Domain governor
+----------------------
+
+CPUs have an unique ability to determine their next wakeup. CPUs may wake up
+for known timer interrupts and unknown interrupts from idle. Prediction
+algorithms and heuristic based algorithms like the Menu governor for cpuidle
+can determine the next wakeup of the CPU. However, determining the wakeup
+across a group of CPUs is a tough problem to solve.
+
+A simplistic approach would be to resort to known wakeups of the CPUs in
+determining the next wakeup of any CPU in the cluster. The CPU PM domain
+governor does just that. By looking into the tick device of the CPUs, the
+governor can determine the sleep time between the last CPU and the first
+scheduled wakeup of any CPU in that domain. This combined with the PM QoS
+requirement for CPU_DMA_LATENCY can be used to determine the deepest possible
+idle state of the CPU domain.
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

A generic CPU PM domain functionality is provided by
drivers/base/power/cpu_domains.c. This document describes the generic
usecase of CPU's PM domains, the setup of such domains and a CPU
specific genpd governor.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- a patch of its own 
- updated documentation

 Documentation/power/cpu_domains.txt | 79 +++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)
 create mode 100644 Documentation/power/cpu_domains.txt

diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
new file mode 100644
index 0000000..5fdc66d
--- /dev/null
+++ b/Documentation/power/cpu_domains.txt
@@ -0,0 +1,79 @@
+CPU PM domains
+==============
+
+Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
+may have caches, VFP and architecture specific power controller that share the
+resources when any of the CPUs are active. When the CPUs are in idle, some of
+these cluster components may also idle. A cluster may also be nested inside
+another cluster that provides common coherency interfaces to share data
+between the clusters. The organization of such clusters and CPU may be
+descibed in DT, since they are SoC specific.
+
+CPUIdle framework enables the CPUs to determine the sleep time and enter low
+power state to save power during periods of idle. CPUs in a cluster may enter
+and exit idle state independently of each other. During the time when all the
+CPUs are in idle state, the cluster may safely put some of the shared
+resources in their idle state. The time between the last CPU to enter idle and
+the first CPU to wake up is the time available for the cluster to enter its
+idle state.
+
+When SoCs power down the CPU during cpuidle, they generally have supplemental
+hardware that can handshake with the CPU with a signal that indicates that the
+CPU has stopped execution. The hardware is also responsible for warm booting
+the CPU on receiving an interrupt. With cluster architecture, common resources
+that are shared by the cluster may also be powered down by an external
+microcontroller or a processor. The microcontroller may be programmed in
+advance to put the hardware blocks in a low power state, when the last active
+CPU sends the idle signal. When the signal is received, the microcontroller
+may trigger the hardware blocks to enter their low power state. When an
+interrupt to wakeup the processor is received, the microcontroller is
+responsible for bringing up the hardware blocks to its active state, before
+waking up the CPU. The timelines for such operations should be in the
+acceptable range for the for CPU idle to get power benefits.
+
+CPU PM Domain Setup
+-------------------
+
+PM domains  are represented in the DT as domain consumers and providers. A
+device may have a domain provider and a domain provider may support multiple
+domain consumers. Domains like clusters, may also be nested inside one
+another. A domain that has no active consumer, may be powered off and any
+resuming consumer would trigger the domain back to active. Parent domains may
+be powered off when the child domains are powered off. The CPU cluster can be
+fashioned as a PM domain. When the CPU devices are powered off, the PM domain
+may be powered off.
+
+Device idle is reference counted by runtime PM. When there is no active need
+for the device, runtime PM invokes callbacks to suspend the parent domain.
+Generic PM domain (genpd) handles the hierarchy of devices, domains and the
+reference counting of objects leading to last man down and first man up in the
+domain. The CPU domains helper functions defines PM domains for each CPU
+cluster and attaches the CPU devices to the respective PM domains.
+
+Platform drivers may use the following API to register their CPU PM domains.
+
+of_setup_cpu_pd() -
+Provides a single step registration of the CPU PM domain and attach CPUs to
+the genpd. Platform drivers may additionally register callbacks for power_on
+and power_off operations for the PM domain.
+
+of_setup_cpu_pd_single() -
+Define PM domain for a single CPU and attach the CPU to its domain.
+
+
+CPU PM Domain governor
+----------------------
+
+CPUs have an unique ability to determine their next wakeup. CPUs may wake up
+for known timer interrupts and unknown interrupts from idle. Prediction
+algorithms and heuristic based algorithms like the Menu governor for cpuidle
+can determine the next wakeup of the CPU. However, determining the wakeup
+across a group of CPUs is a tough problem to solve.
+
+A simplistic approach would be to resort to known wakeups of the CPUs in
+determining the next wakeup of any CPU in the cluster. The CPU PM domain
+governor does just that. By looking into the tick device of the CPUs, the
+governor can determine the sleep time between the last CPU and the first
+scheduled wakeup of any CPU in that domain. This combined with the PM QoS
+requirement for CPU_DMA_LATENCY can be used to determine the deepest possible
+idle state of the CPU domain.
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 09/12] drivers: firmware: psci: Allow OS Initiated suspend mode
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Mark Rutland

PSCI firmware v1.0 onwards may support 2 different modes for
CPU_SUSPEND. Platform coordinated mode is the default and every firmware
should support it. The firmware shall also be the default mode.

With the kernel capable of deciding the state for CPU cluster and
coherency domains, the OS Initiated mode may now be used by the kernel,
provided the firmware supports it. SET_SUSPEND_MODE is a PSCI function
available on v1.0 onwards and can be used to set the mode in the
firmware.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/firmware/psci.c   | 45 ++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/psci.h      |  2 ++
 include/uapi/linux/psci.h |  5 +++++
 3 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index f25cd79..03c084e 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -49,12 +49,19 @@
  * require cooperation with a Trusted OS driver.
  */
 static int resident_cpu = -1;
+static bool has_osi_pd;
+static bool psci_suspend_mode_is_osi;
 
 bool psci_tos_resident_on(int cpu)
 {
 	return cpu == resident_cpu;
 }
 
+bool psci_has_osi_pd_support(void)
+{
+	return has_osi_pd;
+}
+
 struct psci_operations psci_ops;
 
 typedef unsigned long (psci_fn)(unsigned long, unsigned long,
@@ -250,6 +257,26 @@ static int psci_system_suspend(unsigned long unused)
 			      virt_to_phys(cpu_resume), 0, 0);
 }
 
+int psci_set_suspend_mode_osi(bool enable)
+{
+	int ret;
+	int mode;
+
+	if (enable && !psci_has_osi_pd_support())
+		return -ENODEV;
+
+	if (enable == psci_suspend_mode_is_osi)
+		return 0;
+
+	mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI : PSCI_1_0_SUSPEND_MODE_PC;
+	ret = invoke_psci_fn(PSCI_FN_NATIVE(1_0, SET_SUSPEND_MODE),
+			     mode, 0, 0);
+	if (!ret)
+		psci_suspend_mode_is_osi = enable;
+
+	return psci_to_linux_errno(ret);
+}
+
 static int psci_system_suspend_enter(suspend_state_t state)
 {
 	return cpu_suspend(0, psci_system_suspend);
@@ -443,10 +470,26 @@ out_put_node:
 	return err;
 }
 
+static int __init psci_1_0_init(struct device_node *np)
+{
+	int ret;
+
+	ret = psci_0_2_init(np);
+	if (ret)
+		return ret;
+
+	/* Check if PSCI OSI mode is available */
+	ret = psci_features(PSCI_FN_NATIVE(0_2, CPU_SUSPEND));
+	if (ret & PSCI_1_0_OS_INITIATED)
+		has_osi_pd = true;
+
+	return 0;
+}
+
 static const struct of_device_id const psci_of_match[] __initconst = {
 	{ .compatible = "arm,psci",	.data = psci_0_1_init},
 	{ .compatible = "arm,psci-0.2",	.data = psci_0_2_init},
-	{ .compatible = "arm,psci-1.0",	.data = psci_0_2_init},
+	{ .compatible = "arm,psci-1.0",	.data = psci_1_0_init},
 	{},
 };
 
diff --git a/include/linux/psci.h b/include/linux/psci.h
index 12c4865..deae633 100644
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -23,6 +23,8 @@
 bool psci_tos_resident_on(int cpu);
 bool psci_power_state_loses_context(u32 state);
 bool psci_power_state_is_valid(u32 state);
+bool psci_has_osi_pd_support(void);
+int psci_set_suspend_mode_osi(bool enable);
 
 struct psci_operations {
 	int (*cpu_suspend)(u32 state, unsigned long entry_point);
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
index 3d7a0fc..eaab6e3 100644
--- a/include/uapi/linux/psci.h
+++ b/include/uapi/linux/psci.h
@@ -50,6 +50,7 @@
 #define PSCI_1_0_FN_SYSTEM_SUSPEND		PSCI_0_2_FN(14)
 
 #define PSCI_1_0_FN64_SYSTEM_SUSPEND		PSCI_0_2_FN64(14)
+#define PSCI_1_0_FN64_SET_SUSPEND_MODE		PSCI_0_2_FN64(15)
 
 /* PSCI v0.2 power state encoding for CPU_SUSPEND function */
 #define PSCI_0_2_POWER_STATE_ID_MASK		0xffff
@@ -93,6 +94,10 @@
 #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK	\
 			(0x1 << PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT)
 
+#define PSCI_1_0_OS_INITIATED			BIT(0)
+#define PSCI_1_0_SUSPEND_MODE_PC		0
+#define PSCI_1_0_SUSPEND_MODE_OSI		1
+
 /* PSCI return values (inclusive of all PSCI versions) */
 #define PSCI_RET_SUCCESS			0
 #define PSCI_RET_NOT_SUPPORTED			-1
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 09/12] drivers: firmware: psci: Allow OS Initiated suspend mode
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

PSCI firmware v1.0 onwards may support 2 different modes for
CPU_SUSPEND. Platform coordinated mode is the default and every firmware
should support it. The firmware shall also be the default mode.

With the kernel capable of deciding the state for CPU cluster and
coherency domains, the OS Initiated mode may now be used by the kernel,
provided the firmware supports it. SET_SUSPEND_MODE is a PSCI function
available on v1.0 onwards and can be used to set the mode in the
firmware.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/firmware/psci.c   | 45 ++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/psci.h      |  2 ++
 include/uapi/linux/psci.h |  5 +++++
 3 files changed, 51 insertions(+), 1 deletion(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index f25cd79..03c084e 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -49,12 +49,19 @@
  * require cooperation with a Trusted OS driver.
  */
 static int resident_cpu = -1;
+static bool has_osi_pd;
+static bool psci_suspend_mode_is_osi;
 
 bool psci_tos_resident_on(int cpu)
 {
 	return cpu == resident_cpu;
 }
 
+bool psci_has_osi_pd_support(void)
+{
+	return has_osi_pd;
+}
+
 struct psci_operations psci_ops;
 
 typedef unsigned long (psci_fn)(unsigned long, unsigned long,
@@ -250,6 +257,26 @@ static int psci_system_suspend(unsigned long unused)
 			      virt_to_phys(cpu_resume), 0, 0);
 }
 
+int psci_set_suspend_mode_osi(bool enable)
+{
+	int ret;
+	int mode;
+
+	if (enable && !psci_has_osi_pd_support())
+		return -ENODEV;
+
+	if (enable == psci_suspend_mode_is_osi)
+		return 0;
+
+	mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI : PSCI_1_0_SUSPEND_MODE_PC;
+	ret = invoke_psci_fn(PSCI_FN_NATIVE(1_0, SET_SUSPEND_MODE),
+			     mode, 0, 0);
+	if (!ret)
+		psci_suspend_mode_is_osi = enable;
+
+	return psci_to_linux_errno(ret);
+}
+
 static int psci_system_suspend_enter(suspend_state_t state)
 {
 	return cpu_suspend(0, psci_system_suspend);
@@ -443,10 +470,26 @@ out_put_node:
 	return err;
 }
 
+static int __init psci_1_0_init(struct device_node *np)
+{
+	int ret;
+
+	ret = psci_0_2_init(np);
+	if (ret)
+		return ret;
+
+	/* Check if PSCI OSI mode is available */
+	ret = psci_features(PSCI_FN_NATIVE(0_2, CPU_SUSPEND));
+	if (ret & PSCI_1_0_OS_INITIATED)
+		has_osi_pd = true;
+
+	return 0;
+}
+
 static const struct of_device_id const psci_of_match[] __initconst = {
 	{ .compatible = "arm,psci",	.data = psci_0_1_init},
 	{ .compatible = "arm,psci-0.2",	.data = psci_0_2_init},
-	{ .compatible = "arm,psci-1.0",	.data = psci_0_2_init},
+	{ .compatible = "arm,psci-1.0",	.data = psci_1_0_init},
 	{},
 };
 
diff --git a/include/linux/psci.h b/include/linux/psci.h
index 12c4865..deae633 100644
--- a/include/linux/psci.h
+++ b/include/linux/psci.h
@@ -23,6 +23,8 @@
 bool psci_tos_resident_on(int cpu);
 bool psci_power_state_loses_context(u32 state);
 bool psci_power_state_is_valid(u32 state);
+bool psci_has_osi_pd_support(void);
+int psci_set_suspend_mode_osi(bool enable);
 
 struct psci_operations {
 	int (*cpu_suspend)(u32 state, unsigned long entry_point);
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
index 3d7a0fc..eaab6e3 100644
--- a/include/uapi/linux/psci.h
+++ b/include/uapi/linux/psci.h
@@ -50,6 +50,7 @@
 #define PSCI_1_0_FN_SYSTEM_SUSPEND		PSCI_0_2_FN(14)
 
 #define PSCI_1_0_FN64_SYSTEM_SUSPEND		PSCI_0_2_FN64(14)
+#define PSCI_1_0_FN64_SET_SUSPEND_MODE		PSCI_0_2_FN64(15)
 
 /* PSCI v0.2 power state encoding for CPU_SUSPEND function */
 #define PSCI_0_2_POWER_STATE_ID_MASK		0xffff
@@ -93,6 +94,10 @@
 #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK	\
 			(0x1 << PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT)
 
+#define PSCI_1_0_OS_INITIATED			BIT(0)
+#define PSCI_1_0_SUSPEND_MODE_PC		0
+#define PSCI_1_0_SUSPEND_MODE_OSI		1
+
 /* PSCI return values (inclusive of all PSCI versions) */
 #define PSCI_RET_SUCCESS			0
 #define PSCI_RET_NOT_SUPPORTED			-1
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 10/12] ARM64: psci: Support cluster idle states for OS-Initiated
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, Mark Rutland

PSCI OS initiated firmware may allow Linux to determine the state of the
CPU cluster and the cluster at coherency level to enter idle states when
there are no active CPUs. Since Linux has a better idea of the QoS and
the wakeup pattern of the CPUs, the cluster idle states may be better
determined by the OS instead of the firmware.

The last CPU entering idle in a cluster, is responsible for selecting
the state of the cluster. Only one CPU in a cluster may provide the
cluster idle state to the firmware. Similarly, the last CPU in the
system may provide the state of the coherency domain along with the
cluster and the CPU state IDs.

Utilize the CPU PM domain framework's helper functions to build up the
hierarchy of cluster topology using Generic PM domains. We provide
callbacks for domain power_on and power_off. By appending the state IDs
at each domain level in the -power_off() callbacks, we build up a
composite state ID that can be passed onto the firmware to idle the CPU,
the cluster and the coherency interface.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- initializes CPU PM domains based on OSI support in FW
- simplified initialization and cluster state id determination

 arch/arm64/kernel/psci.c | 46 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index f67f35b..75b1707 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -15,6 +15,7 @@
 
 #define pr_fmt(fmt) "psci: " fmt
 
+#include <linux/cpu_domains.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
@@ -31,6 +32,24 @@
 #include <asm/suspend.h>
 
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
+static DEFINE_PER_CPU(u32, cluster_state_id);
+
+static inline u32 psci_get_composite_state_id(u32 cpu_state)
+{
+	return cpu_state | this_cpu_read(cluster_state_id);
+}
+
+static inline void psci_reset_composite_state_id(void)
+{
+	this_cpu_write(cluster_state_id, 0);
+}
+
+static int psci_pd_power_off(u32 state_idx, u32 param,
+		const struct cpumask *mask)
+{
+	__this_cpu_add(cluster_state_id, param);
+	return 0;
+}
 
 static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu)
 {
@@ -89,6 +108,19 @@ static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu)
 	}
 	/* Idle states parsed correctly, initialize per-cpu pointer */
 	per_cpu(psci_power_state, cpu) = psci_states;
+
+	if (psci_has_osi_pd_support()) {
+		const struct cpu_pd_ops psci_pd_ops = {
+			.power_off = psci_pd_power_off,
+		};
+
+		ret = of_setup_cpu_pd_single(cpu, &psci_pd_ops);
+		if (!ret)
+			ret = psci_set_suspend_mode_osi(true);
+		if (ret)
+			pr_warn("CPU%d: Error setting PSCI OSI mode\n", cpu);
+	}
+
 	return 0;
 
 free_mem:
@@ -117,6 +149,8 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
 
+	/* Reset CPU cluster states */
+	psci_reset_composite_state_id();
 	return err;
 }
 
@@ -181,15 +215,16 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
 static int psci_suspend_finisher(unsigned long index)
 {
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 
-	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+	return psci_ops.cpu_suspend(ext_state, virt_to_phys(cpu_resume));
 }
 
 static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index)
 {
 	int ret;
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 	/*
 	 * idle state index 0 corresponds to wfi, should never be called
 	 * from the cpu_suspend operations
@@ -198,10 +233,15 @@ static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index)
 		return -EINVAL;
 
 	if (!psci_power_state_loses_context(state[index - 1]))
-		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+		ret = psci_ops.cpu_suspend(ext_state, 0);
 	else
 		ret = cpu_suspend(index, psci_suspend_finisher);
 
+	/*
+	 * Clear the CPU's cluster states, we start afresh after coming
+	 * out of idle.
+	 */
+	psci_reset_composite_state_id();
 	return ret;
 }
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 10/12] ARM64: psci: Support cluster idle states for OS-Initiated
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

PSCI OS initiated firmware may allow Linux to determine the state of the
CPU cluster and the cluster at coherency level to enter idle states when
there are no active CPUs. Since Linux has a better idea of the QoS and
the wakeup pattern of the CPUs, the cluster idle states may be better
determined by the OS instead of the firmware.

The last CPU entering idle in a cluster, is responsible for selecting
the state of the cluster. Only one CPU in a cluster may provide the
cluster idle state to the firmware. Similarly, the last CPU in the
system may provide the state of the coherency domain along with the
cluster and the CPU state IDs.

Utilize the CPU PM domain framework's helper functions to build up the
hierarchy of cluster topology using Generic PM domains. We provide
callbacks for domain power_on and power_off. By appending the state IDs
at each domain level in the -power_off() callbacks, we build up a
composite state ID that can be passed onto the firmware to idle the CPU,
the cluster and the coherency interface.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- initializes CPU PM domains based on OSI support in FW
- simplified initialization and cluster state id determination

 arch/arm64/kernel/psci.c | 46 +++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
index f67f35b..75b1707 100644
--- a/arch/arm64/kernel/psci.c
+++ b/arch/arm64/kernel/psci.c
@@ -15,6 +15,7 @@
 
 #define pr_fmt(fmt) "psci: " fmt
 
+#include <linux/cpu_domains.h>
 #include <linux/init.h>
 #include <linux/of.h>
 #include <linux/smp.h>
@@ -31,6 +32,24 @@
 #include <asm/suspend.h>
 
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
+static DEFINE_PER_CPU(u32, cluster_state_id);
+
+static inline u32 psci_get_composite_state_id(u32 cpu_state)
+{
+	return cpu_state | this_cpu_read(cluster_state_id);
+}
+
+static inline void psci_reset_composite_state_id(void)
+{
+	this_cpu_write(cluster_state_id, 0);
+}
+
+static int psci_pd_power_off(u32 state_idx, u32 param,
+		const struct cpumask *mask)
+{
+	__this_cpu_add(cluster_state_id, param);
+	return 0;
+}
 
 static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu)
 {
@@ -89,6 +108,19 @@ static int __maybe_unused cpu_psci_cpu_init_idle(unsigned int cpu)
 	}
 	/* Idle states parsed correctly, initialize per-cpu pointer */
 	per_cpu(psci_power_state, cpu) = psci_states;
+
+	if (psci_has_osi_pd_support()) {
+		const struct cpu_pd_ops psci_pd_ops = {
+			.power_off = psci_pd_power_off,
+		};
+
+		ret = of_setup_cpu_pd_single(cpu, &psci_pd_ops);
+		if (!ret)
+			ret = psci_set_suspend_mode_osi(true);
+		if (ret)
+			pr_warn("CPU%d: Error setting PSCI OSI mode\n", cpu);
+	}
+
 	return 0;
 
 free_mem:
@@ -117,6 +149,8 @@ static int cpu_psci_cpu_boot(unsigned int cpu)
 	if (err)
 		pr_err("failed to boot CPU%d (%d)\n", cpu, err);
 
+	/* Reset CPU cluster states */
+	psci_reset_composite_state_id();
 	return err;
 }
 
@@ -181,15 +215,16 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
 static int psci_suspend_finisher(unsigned long index)
 {
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 
-	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+	return psci_ops.cpu_suspend(ext_state, virt_to_phys(cpu_resume));
 }
 
 static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index)
 {
 	int ret;
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 	/*
 	 * idle state index 0 corresponds to wfi, should never be called
 	 * from the cpu_suspend operations
@@ -198,10 +233,15 @@ static int __maybe_unused cpu_psci_cpu_suspend(unsigned long index)
 		return -EINVAL;
 
 	if (!psci_power_state_loses_context(state[index - 1]))
-		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+		ret = psci_ops.cpu_suspend(ext_state, 0);
 	else
 		ret = cpu_suspend(index, psci_suspend_finisher);
 
+	/*
+	 * Clear the CPU's cluster states, we start afresh after coming
+	 * out of idle.
+	 */
+	psci_reset_composite_state_id();
 	return ret;
 }
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 11/12] ARM64: dts: Add PSCI cpuidle support for MSM8916
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, devicetree

Add device bindings for CPUs to suspend using PSCI as the enable-method.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 9153214..b7839a8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -61,27 +61,51 @@
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU1: cpu@1 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x1>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU2: cpu@2 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x2>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU3: cpu@3 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x3>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
+		};
+
+		idle-states {
+			CPU_SPC: spc {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x40000002>;
+				entry-latency-us = <130>;
+				exit-latency-us = <150>;
+				min-residency-us = <2000>;
+				local-timer-stop;
+			};
 		};
 	};
 
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
+	};
+
 	timer {
 		compatible = "arm,armv8-timer";
 		interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 11/12] ARM64: dts: Add PSCI cpuidle support for MSM8916
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Add device bindings for CPUs to suspend using PSCI as the enable-method.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 9153214..b7839a8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -61,27 +61,51 @@
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU1: cpu at 1 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x1>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU2: cpu at 2 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x2>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU3: cpu at 3 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x3>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
+		};
+
+		idle-states {
+			CPU_SPC: spc {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x40000002>;
+				entry-latency-us = <130>;
+				exit-latency-us = <150>;
+				min-residency-us = <2000>;
+				local-timer-stop;
+			};
 		};
 	};
 
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
+	};
+
 	timer {
 		compatible = "arm,armv8-timer";
 		interrupts = <GIC_PPI 2 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
  2016-02-12 20:50 ` Lina Iyer
@ 2016-02-12 20:50   ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer, devicetree

Define power domain and the power states for the domain as defined by
the PSCI firmware. The 8916 firmware supports OS initiated method of
powering off the CPU clusters.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- no cpu-map topology node

 arch/arm64/boot/dts/qcom/msm8916.dtsi | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index b7839a8..62dade8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -63,6 +63,7 @@
 			reg = <0x0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU1: cpu@1 {
@@ -71,6 +72,7 @@
 			reg = <0x1>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU2: cpu@2 {
@@ -79,6 +81,7 @@
 			reg = <0x2>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU3: cpu@3 {
@@ -87,6 +90,7 @@
 			reg = <0x3>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		idle-states {
@@ -101,6 +105,27 @@
 		};
 	};
 
+	CPU_PD: cpu-pd@0 {
+		#power-domain-cells = <0>;
+		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+	};
+
+	pd-power-states {
+		CLUSTER_RET: power-state@1 {
+			state-param = <0x1000010>;
+			entry-latency-us = <500>;
+			exit-latency-us = <500>;
+			residency-us = <2000>;
+		 };
+
+		CLUSTER_PWR_DWN: power-state@2 {
+			state-param = <0x1000030>;
+			entry-latency-us = <2000>;
+			exit-latency-us = <2000>;
+			residency-us = <6000>;
+		};
+	};
+
 	psci {
 		compatible = "arm,psci-1.0";
 		method = "smc";
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
@ 2016-02-12 20:50   ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-12 20:50 UTC (permalink / raw)
  To: linux-arm-kernel

Define power domain and the power states for the domain as defined by
the PSCI firmware. The 8916 firmware supports OS initiated method of
powering off the CPU clusters.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
Changes since RFC v1 -
- no cpu-map topology node

 arch/arm64/boot/dts/qcom/msm8916.dtsi | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index b7839a8..62dade8 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -63,6 +63,7 @@
 			reg = <0x0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU1: cpu at 1 {
@@ -71,6 +72,7 @@
 			reg = <0x1>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU2: cpu at 2 {
@@ -79,6 +81,7 @@
 			reg = <0x2>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU3: cpu at 3 {
@@ -87,6 +90,7 @@
 			reg = <0x3>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		idle-states {
@@ -101,6 +105,27 @@
 		};
 	};
 
+	CPU_PD: cpu-pd at 0 {
+		#power-domain-cells = <0>;
+		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+	};
+
+	pd-power-states {
+		CLUSTER_RET: power-state at 1 {
+			state-param = <0x1000010>;
+			entry-latency-us = <500>;
+			exit-latency-us = <500>;
+			residency-us = <2000>;
+		 };
+
+		CLUSTER_PWR_DWN: power-state at 2 {
+			state-param = <0x1000030>;
+			entry-latency-us = <2000>;
+			exit-latency-us = <2000>;
+			residency-us = <6000>;
+		};
+	};
+
 	psci {
 		compatible = "arm,psci-1.0";
 		method = "smc";
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* Re: [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-17 23:38     ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-17 23:38 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Daniel Lezcano

On Fri, Feb 12 2016 at 13:51 -0700, Lina Iyer wrote:

>+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>+{
>+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>+
>+	return pd->ops.power_on();

Needs a check for NULL ops.power_on() here.
>+}
>+
>+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
>+{
>+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>+
>+	return pd->ops.power_off(genpd->state_idx,
>+			genpd->states[genpd->state_idx].param);
Needs a check for NULL ops.power_off().

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
@ 2016-02-17 23:38     ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-17 23:38 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 12 2016 at 13:51 -0700, Lina Iyer wrote:

>+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>+{
>+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>+
>+	return pd->ops.power_on();

Needs a check for NULL ops.power_on() here.
>+}
>+
>+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
>+{
>+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>+
>+	return pd->ops.power_off(genpd->state_idx,
>+			genpd->states[genpd->state_idx].param);
Needs a check for NULL ops.power_off().

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [BUG FIX] PM / cpu_domains: Check for NULL callbacks
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-18 17:29     ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-18 17:29 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: geert, k.kozlowski, msivasub, agross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, ahaslam, mtitinger, Lina Iyer

Check for NULL platform callback before calling.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 7069411..bcaa474 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
-	return pd->ops.power_on();
+	if (pd->ops.power_on)
+		return pd->ops.power_on();
+
+	return 0;
 }
 
 static int cpu_pd_power_off(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
-	return pd->ops.power_off(genpd->state_idx,
+	if (pd->ops.power_off)
+		return pd->ops.power_off(genpd->state_idx,
 			genpd->states[genpd->state_idx].param,
 			pd->cpus);
+
+	return 0;
 }
 
 /**
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* [BUG FIX] PM / cpu_domains: Check for NULL callbacks
@ 2016-02-18 17:29     ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-18 17:29 UTC (permalink / raw)
  To: linux-arm-kernel

Check for NULL platform callback before calling.

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 7069411..bcaa474 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
-	return pd->ops.power_on();
+	if (pd->ops.power_on)
+		return pd->ops.power_on();
+
+	return 0;
 }
 
 static int cpu_pd_power_off(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
 
-	return pd->ops.power_off(genpd->state_idx,
+	if (pd->ops.power_off)
+		return pd->ops.power_off(genpd->state_idx,
 			genpd->states[genpd->state_idx].param,
 			pd->cpus);
+
+	return 0;
 }
 
 /**
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 68+ messages in thread

* Re: [BUG FIX] PM / cpu_domains: Check for NULL callbacks
  2016-02-18 17:29     ` Lina Iyer
@ 2016-02-18 17:46       ` Rafael J. Wysocki
  -1 siblings, 0 replies; 68+ messages in thread
From: Rafael J. Wysocki @ 2016-02-18 17:46 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Ulf Hansson, Kevin Hilman, Rafael J. Wysocki, linux-pm,
	linux-arm-kernel, Geert Uytterhoeven, k.kozlowski, msivasub,
	agross, Stephen Boyd, linux-arm-msm, Lorenzo Pieralisi, ahaslam,
	mtitinger

On Thu, Feb 18, 2016 at 6:29 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
> Check for NULL platform callback before calling.
>
> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
> ---
>  drivers/base/power/cpu_domains.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
> index 7069411..bcaa474 100644
> --- a/drivers/base/power/cpu_domains.c
> +++ b/drivers/base/power/cpu_domains.c
> @@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>  {
>         struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>
> -       return pd->ops.power_on();
> +       if (pd->ops.power_on)
> +               return pd->ops.power_on();
> +
> +       return 0;
>  }

I usually write things like that as

return pd->ops.power_on ? pd->ops.power_on() : 0;

That gets the job done in just one line of code instead of 4 and in
one statement instead of 3.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [BUG FIX] PM / cpu_domains: Check for NULL callbacks
@ 2016-02-18 17:46       ` Rafael J. Wysocki
  0 siblings, 0 replies; 68+ messages in thread
From: Rafael J. Wysocki @ 2016-02-18 17:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Feb 18, 2016 at 6:29 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
> Check for NULL platform callback before calling.
>
> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
> ---
>  drivers/base/power/cpu_domains.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
> index 7069411..bcaa474 100644
> --- a/drivers/base/power/cpu_domains.c
> +++ b/drivers/base/power/cpu_domains.c
> @@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>  {
>         struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>
> -       return pd->ops.power_on();
> +       if (pd->ops.power_on)
> +               return pd->ops.power_on();
> +
> +       return 0;
>  }

I usually write things like that as

return pd->ops.power_on ? pd->ops.power_on() : 0;

That gets the job done in just one line of code instead of 4 and in
one statement instead of 3.

Thanks,
Rafael

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [BUG FIX] PM / cpu_domains: Check for NULL callbacks
  2016-02-18 17:46       ` Rafael J. Wysocki
@ 2016-02-18 22:51         ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-18 22:51 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Ulf Hansson, Kevin Hilman, Rafael J. Wysocki, linux-pm,
	linux-arm-kernel, Geert Uytterhoeven, k.kozlowski, msivasub,
	agross, Stephen Boyd, linux-arm-msm, Lorenzo Pieralisi, ahaslam,
	mtitinger

On Thu, Feb 18 2016 at 10:46 -0700, Rafael J. Wysocki wrote:
>On Thu, Feb 18, 2016 at 6:29 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> Check for NULL platform callback before calling.
>>
>> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
>> ---
>>  drivers/base/power/cpu_domains.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
>> index 7069411..bcaa474 100644
>> --- a/drivers/base/power/cpu_domains.c
>> +++ b/drivers/base/power/cpu_domains.c
>> @@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>>  {
>>         struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>>
>> -       return pd->ops.power_on();
>> +       if (pd->ops.power_on)
>> +               return pd->ops.power_on();
>> +
>> +       return 0;
>>  }
>
>I usually write things like that as
>
>return pd->ops.power_on ? pd->ops.power_on() : 0;
>
>That gets the job done in just one line of code instead of 4 and in
>one statement instead of 3.
>
Sure. Thanks. Will roll this in with the next submission.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [BUG FIX] PM / cpu_domains: Check for NULL callbacks
@ 2016-02-18 22:51         ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-02-18 22:51 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, Feb 18 2016 at 10:46 -0700, Rafael J. Wysocki wrote:
>On Thu, Feb 18, 2016 at 6:29 PM, Lina Iyer <lina.iyer@linaro.org> wrote:
>> Check for NULL platform callback before calling.
>>
>> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
>> ---
>>  drivers/base/power/cpu_domains.c | 10 ++++++++--
>>  1 file changed, 8 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
>> index 7069411..bcaa474 100644
>> --- a/drivers/base/power/cpu_domains.c
>> +++ b/drivers/base/power/cpu_domains.c
>> @@ -157,16 +157,22 @@ static int cpu_pd_power_on(struct generic_pm_domain *genpd)
>>  {
>>         struct cpu_pm_domain *pd = to_cpu_pd(genpd);
>>
>> -       return pd->ops.power_on();
>> +       if (pd->ops.power_on)
>> +               return pd->ops.power_on();
>> +
>> +       return 0;
>>  }
>
>I usually write things like that as
>
>return pd->ops.power_on ? pd->ops.power_on() : 0;
>
>That gets the job done in just one line of code instead of 4 and in
>one statement instead of 3.
>
Sure. Thanks. Will roll this in with the next submission.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 01/12] PM / Domains: Abstract genpd locking
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 18:08     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:08 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Kevin Hilman

On 02/12, Lina Iyer wrote:
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 3ddd05d..8204615 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -40,6 +40,46 @@
>  static LIST_HEAD(gpd_list);
>  static DEFINE_MUTEX(gpd_list_lock);
>  
> +struct genpd_lock_fns {
> +	void (*lock)(struct generic_pm_domain *genpd);
> +	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
> +	int (*lock_interruptible)(struct generic_pm_domain *genpd);
> +	void (*unlock)(struct generic_pm_domain *genpd);
> +};
> +
> +static void genpd_lock_irq(struct generic_pm_domain *genpd)
> +{
> +	mutex_lock(&genpd->mlock);
> +}
> +
> +static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
> +					int depth)
> +{
> +	mutex_lock_nested(&genpd->mlock, depth);
> +}
> +
> +static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
> +{
> +	return mutex_lock_interruptible(&genpd->mlock);
> +}
> +
> +static void genpd_unlock_irq(struct generic_pm_domain *genpd)
> +{
> +	return mutex_unlock(&genpd->mlock);
> +}
> +
> +static struct genpd_lock_fns irq_lock = {

Can this be const? Also, why is this called irq_lock when the
lock functions are mutex based?

> +	.lock = genpd_lock_irq,
> +	.lock_nested = genpd_lock_irq_nested,
> +	.lock_interruptible = genpd_lock_interruptible_irq,
> +	.unlock = genpd_unlock_irq,
> +};
> +
> @@ -74,6 +75,8 @@ struct generic_pm_domain {
>  	struct genpd_power_state *states;
>  	unsigned int state_count; /* number of states */
>  	unsigned int state_idx; /* state that genpd will go to when off */
> +	struct genpd_lock_fns *lock_fns;

const?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 01/12] PM / Domains: Abstract genpd locking
@ 2016-02-26 18:08     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:08 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 3ddd05d..8204615 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -40,6 +40,46 @@
>  static LIST_HEAD(gpd_list);
>  static DEFINE_MUTEX(gpd_list_lock);
>  
> +struct genpd_lock_fns {
> +	void (*lock)(struct generic_pm_domain *genpd);
> +	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
> +	int (*lock_interruptible)(struct generic_pm_domain *genpd);
> +	void (*unlock)(struct generic_pm_domain *genpd);
> +};
> +
> +static void genpd_lock_irq(struct generic_pm_domain *genpd)
> +{
> +	mutex_lock(&genpd->mlock);
> +}
> +
> +static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
> +					int depth)
> +{
> +	mutex_lock_nested(&genpd->mlock, depth);
> +}
> +
> +static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
> +{
> +	return mutex_lock_interruptible(&genpd->mlock);
> +}
> +
> +static void genpd_unlock_irq(struct generic_pm_domain *genpd)
> +{
> +	return mutex_unlock(&genpd->mlock);
> +}
> +
> +static struct genpd_lock_fns irq_lock = {

Can this be const? Also, why is this called irq_lock when the
lock functions are mutex based?

> +	.lock = genpd_lock_irq,
> +	.lock_nested = genpd_lock_irq_nested,
> +	.lock_interruptible = genpd_lock_interruptible_irq,
> +	.unlock = genpd_unlock_irq,
> +};
> +
> @@ -74,6 +75,8 @@ struct generic_pm_domain {
>  	struct genpd_power_state *states;
>  	unsigned int state_count; /* number of states */
>  	unsigned int state_idx; /* state that genpd will go to when off */
> +	struct genpd_lock_fns *lock_fns;

const?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 18:17     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:17 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Kevin Hilman

On 02/12, Lina Iyer wrote:
> diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
> index 8ba6625..c06f0b6 100644
> --- a/Documentation/power/devices.txt
> +++ b/Documentation/power/devices.txt
> @@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
>  into a low-power state together at the same time by turning off the shared
>  power resource.  Of course, they also need to be put into the full-power state
>  together, by turning the shared power resource on.  A set of devices with this
> -property is often referred to as a power domain.
> +property is often referred to as a power domain. A power domain may also be
> +nested inside another power domain.
> +
> +Devices, by default, operate in process context and if a device can operate in
> +IRQ safe context, has to be explicitly set as IRQ safe. Power domains by

Devices, by default, operate in process context. If a device can
operate in IRQ safe context that has to be explicitly indicated
by setting the irq_safe boolean inside struct generic_pm_domain
to true. Power domains typically operate in process context...

> +default, operate in process context but could have devices that are IRQ safe.
> +Such power domains cannot be powered on/off during runtime PM. On the other
> +hand, an IRQ safe PM domains that have IRQ safe devices may be powered off

On the other hand, IRQ safe PM domains that have ..

> +when all the devices are in idle. An IRQ safe domain may only be attached as a

all the devices in the domain?

> +subdomain to another IRQ safe domain.
>  
>  Support for power domains is provided through the pm_domain field of struct
>  device.  This field is a pointer to an object of type struct dev_pm_domain,
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 8204615..3c4f675 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -75,11 +75,59 @@ static struct genpd_lock_fns irq_lock = {
>  	.unlock = genpd_unlock_irq,
>  };
>  
> +static void genpd_lock_nosleep(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&genpd->slock, flags);
> +	genpd->lock_flags = flags;
> +}
> +
> +static void genpd_lock_nosleep_nested(struct generic_pm_domain *genpd,
> +					int depth)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
> +	genpd->lock_flags = flags;
> +}
> +
> +static int genpd_lock_nosleep_interruptible(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&genpd->slock, flags);
> +	genpd->lock_flags = flags;
> +	return 0;

	genpd_lock_nosleep(genpd);
	return 0;

> +}
> +
> +static void genpd_unlock_nosleep(struct generic_pm_domain *genpd)
> +	__releases(&genpd->slock)
> +{
> +	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
> +}
> +
> +static struct genpd_lock_fns no_sleep_lock = {

const?

> +	.lock = genpd_lock_nosleep,
> +	.lock_nested = genpd_lock_nosleep_nested,
> +	.lock_interruptible = genpd_lock_nosleep_interruptible,
> +	.unlock = genpd_unlock_nosleep,
> +};
> +
>  #define genpd_lock(p)			p->lock_fns->lock(p)
>  #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
>  #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
>  #define genpd_unlock(p)			p->lock_fns->unlock(p)
>  
> +static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
> +		struct generic_pm_domain *genpd)
> +{
> +	return dev->power.irq_safe && !genpd->irq_safe;
> +}
> +
>  /*
>   * Get the generic PM domain for a particular struct device.
>   * This validates the struct device pointer, the PM domain pointer,
> @@ -510,8 +570,11 @@ static int pm_genpd_runtime_resume(struct device *dev)
>  	if (IS_ERR(genpd))
>  		return -EINVAL;
>  
> -	/* If power.irq_safe, the PM domain is never powered off. */
> -	if (dev->power.irq_safe) {
> +	/*
> +	 * As we dont power off a non IRQ safe domain, which holds

s/dont/don't/

> +	 * an IRQ safe device, we dont need to restore power to it.

s/dont/don't/

> +	 */
> +	if (dev->power.irq_safe && !genpd->irq_safe) {
>  		timed = false;
>  		goto out;
>  	}
> @@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
>  	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
>  		return -EINVAL;
>  
> +	if (genpd->irq_safe && !dev->power.irq_safe) {
> +		dev_err(dev,
> +			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",

has to be?

> +			genpd->name);
> +		return -EINVAL;
> +	}
> +
>  	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
>  	if (IS_ERR(gpd_data))
>  		return PTR_ERR(gpd_data);
> @@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
>  	    || genpd == subdomain)
>  		return -EINVAL;
>  
> +	/*
> +	 * If the domain can be powered on/off in an IRQ safe
> +	 * context, ensure that the subdomain can also be
> +	 * powered on/off in that context.
> +	 */
> +	if (!genpd->irq_safe && subdomain->irq_safe) {
> +		WARN("Parent %s of subdomain %s must be IRQ-safe\n",

Nitpick! IRQ-safe or IRQ safe? Use one consistently please.

> +				genpd->name, subdomain->name);
> +		return -EINVAL;
> +	}
> +
>  	link = kzalloc(sizeof(*link), GFP_KERNEL);
>  	if (!link)
>  		return -ENOMEM;

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
@ 2016-02-26 18:17     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:17 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
> index 8ba6625..c06f0b6 100644
> --- a/Documentation/power/devices.txt
> +++ b/Documentation/power/devices.txt
> @@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
>  into a low-power state together at the same time by turning off the shared
>  power resource.  Of course, they also need to be put into the full-power state
>  together, by turning the shared power resource on.  A set of devices with this
> -property is often referred to as a power domain.
> +property is often referred to as a power domain. A power domain may also be
> +nested inside another power domain.
> +
> +Devices, by default, operate in process context and if a device can operate in
> +IRQ safe context, has to be explicitly set as IRQ safe. Power domains by

Devices, by default, operate in process context. If a device can
operate in IRQ safe context that has to be explicitly indicated
by setting the irq_safe boolean inside struct generic_pm_domain
to true. Power domains typically operate in process context...

> +default, operate in process context but could have devices that are IRQ safe.
> +Such power domains cannot be powered on/off during runtime PM. On the other
> +hand, an IRQ safe PM domains that have IRQ safe devices may be powered off

On the other hand, IRQ safe PM domains that have ..

> +when all the devices are in idle. An IRQ safe domain may only be attached as a

all the devices in the domain?

> +subdomain to another IRQ safe domain.
>  
>  Support for power domains is provided through the pm_domain field of struct
>  device.  This field is a pointer to an object of type struct dev_pm_domain,
> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> index 8204615..3c4f675 100644
> --- a/drivers/base/power/domain.c
> +++ b/drivers/base/power/domain.c
> @@ -75,11 +75,59 @@ static struct genpd_lock_fns irq_lock = {
>  	.unlock = genpd_unlock_irq,
>  };
>  
> +static void genpd_lock_nosleep(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&genpd->slock, flags);
> +	genpd->lock_flags = flags;
> +}
> +
> +static void genpd_lock_nosleep_nested(struct generic_pm_domain *genpd,
> +					int depth)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
> +	genpd->lock_flags = flags;
> +}
> +
> +static int genpd_lock_nosleep_interruptible(struct generic_pm_domain *genpd)
> +	__acquires(&genpd->slock)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&genpd->slock, flags);
> +	genpd->lock_flags = flags;
> +	return 0;

	genpd_lock_nosleep(genpd);
	return 0;

> +}
> +
> +static void genpd_unlock_nosleep(struct generic_pm_domain *genpd)
> +	__releases(&genpd->slock)
> +{
> +	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
> +}
> +
> +static struct genpd_lock_fns no_sleep_lock = {

const?

> +	.lock = genpd_lock_nosleep,
> +	.lock_nested = genpd_lock_nosleep_nested,
> +	.lock_interruptible = genpd_lock_nosleep_interruptible,
> +	.unlock = genpd_unlock_nosleep,
> +};
> +
>  #define genpd_lock(p)			p->lock_fns->lock(p)
>  #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
>  #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
>  #define genpd_unlock(p)			p->lock_fns->unlock(p)
>  
> +static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
> +		struct generic_pm_domain *genpd)
> +{
> +	return dev->power.irq_safe && !genpd->irq_safe;
> +}
> +
>  /*
>   * Get the generic PM domain for a particular struct device.
>   * This validates the struct device pointer, the PM domain pointer,
> @@ -510,8 +570,11 @@ static int pm_genpd_runtime_resume(struct device *dev)
>  	if (IS_ERR(genpd))
>  		return -EINVAL;
>  
> -	/* If power.irq_safe, the PM domain is never powered off. */
> -	if (dev->power.irq_safe) {
> +	/*
> +	 * As we dont power off a non IRQ safe domain, which holds

s/dont/don't/

> +	 * an IRQ safe device, we dont need to restore power to it.

s/dont/don't/

> +	 */
> +	if (dev->power.irq_safe && !genpd->irq_safe) {
>  		timed = false;
>  		goto out;
>  	}
> @@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
>  	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
>  		return -EINVAL;
>  
> +	if (genpd->irq_safe && !dev->power.irq_safe) {
> +		dev_err(dev,
> +			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",

has to be?

> +			genpd->name);
> +		return -EINVAL;
> +	}
> +
>  	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
>  	if (IS_ERR(gpd_data))
>  		return PTR_ERR(gpd_data);
> @@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
>  	    || genpd == subdomain)
>  		return -EINVAL;
>  
> +	/*
> +	 * If the domain can be powered on/off in an IRQ safe
> +	 * context, ensure that the subdomain can also be
> +	 * powered on/off in that context.
> +	 */
> +	if (!genpd->irq_safe && subdomain->irq_safe) {
> +		WARN("Parent %s of subdomain %s must be IRQ-safe\n",

Nitpick! IRQ-safe or IRQ safe? Use one consistently please.

> +				genpd->name, subdomain->name);
> +		return -EINVAL;
> +	}
> +
>  	link = kzalloc(sizeof(*link), GFP_KERNEL);
>  	if (!link)
>  		return -ENOMEM;

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 18:24     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:24 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Daniel Lezcano

On 02/12, Lina Iyer wrote:
> @@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>  
>  	ret = cpu_pm_enter();
>  	if (!ret) {
> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));

Can you add a comment on why we need to use RCU_NONIDLE here?
It's not super obvious.

> +
>  		/*
>  		 * Pass idle state index to cpu_suspend which in turn will
>  		 * call the CPU ops suspend protocol with idle index as a
> @@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>  		 */
>  		arm_cpuidle_suspend(idx);
>  
> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
>  		cpu_pm_exit();
>  	}
>  
> @@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
>  	{ },
>  };
>  
> +#ifdef CONFIG_HOTPLUG_CPU
> +static int cpu_hotplug(struct notifier_block *nb,

This function is pretty generically named. Maybe something more
runtime PM specific or cpu idle specific?

> +			unsigned long action, void *data)
> +{
> +	struct device *cpu_dev = get_cpu_device(smp_processor_id());
> +
> +	/* Execute CPU runtime PM on that CPU */
> +	switch (action) {

We could do the & ~CPU_TASKS_FROZEN trick here to save a few cases.

> +	case CPU_DYING:
> +	case CPU_DYING_FROZEN:
> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));

And do we actually need to use it for hotplug path? These
notifiers don't run from idle context do they?

> +		break;
> +	case CPU_STARTING:
> +	case CPU_STARTING_FROZEN:
> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +#endif
> +
>  /*
>   * arm_idle_init
>   *
> @@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
>  	int cpu, ret;
>  	struct cpuidle_driver *drv = &arm_idle_driver;
>  	struct cpuidle_device *dev;
> +	struct device *cpu_dev;
>  
>  	/*
>  	 * Initialize idle states data, starting at index 1.
> @@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
>  		}
>  	}
>  
> +#ifdef CONFIG_HOTPLUG_CPU
> +	/* Register for hotplug notifications for runtime PM */
> +	hotcpu_notifier(cpu_hotplug, 0);

Define an empty cpu_hotplug() function for !CONFIG_HOTPLUG_CPU
and then always call this without the ifdef?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
@ 2016-02-26 18:24     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 18:24 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> @@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>  
>  	ret = cpu_pm_enter();
>  	if (!ret) {
> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));

Can you add a comment on why we need to use RCU_NONIDLE here?
It's not super obvious.

> +
>  		/*
>  		 * Pass idle state index to cpu_suspend which in turn will
>  		 * call the CPU ops suspend protocol with idle index as a
> @@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>  		 */
>  		arm_cpuidle_suspend(idx);
>  
> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
>  		cpu_pm_exit();
>  	}
>  
> @@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
>  	{ },
>  };
>  
> +#ifdef CONFIG_HOTPLUG_CPU
> +static int cpu_hotplug(struct notifier_block *nb,

This function is pretty generically named. Maybe something more
runtime PM specific or cpu idle specific?

> +			unsigned long action, void *data)
> +{
> +	struct device *cpu_dev = get_cpu_device(smp_processor_id());
> +
> +	/* Execute CPU runtime PM on that CPU */
> +	switch (action) {

We could do the & ~CPU_TASKS_FROZEN trick here to save a few cases.

> +	case CPU_DYING:
> +	case CPU_DYING_FROZEN:
> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));

And do we actually need to use it for hotplug path? These
notifiers don't run from idle context do they?

> +		break;
> +	case CPU_STARTING:
> +	case CPU_STARTING_FROZEN:
> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return NOTIFY_OK;
> +}
> +#endif
> +
>  /*
>   * arm_idle_init
>   *
> @@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
>  	int cpu, ret;
>  	struct cpuidle_driver *drv = &arm_idle_driver;
>  	struct cpuidle_device *dev;
> +	struct device *cpu_dev;
>  
>  	/*
>  	 * Initialize idle states data, starting at index 1.
> @@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
>  		}
>  	}
>  
> +#ifdef CONFIG_HOTPLUG_CPU
> +	/* Register for hotplug notifications for runtime PM */
> +	hotcpu_notifier(cpu_hotplug, 0);

Define an empty cpu_hotplug() function for !CONFIG_HOTPLUG_CPU
and then always call this without the ifdef?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 19:10     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:10 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Daniel Lezcano

On 02/12, Lina Iyer wrote:
> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
> new file mode 100644
> index 0000000..981592f
> --- /dev/null
> +++ b/drivers/base/power/cpu_domains.c
> @@ -0,0 +1,267 @@
> +
> +/* List of CPU PM domains we care about */
> +static LIST_HEAD(of_cpu_pd_list);
> +static DEFINE_SPINLOCK(cpu_pd_list_lock);

Can this be a mutex?

> +
> +/**
> + * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
> + *
> + * @dn: The domain provider's device node
> + * @ops: The power_on/_off callbacks for the domain
> + *
> + * Returns the generic_pm_domain (genpd) pointer to the domain on success
> + */
> +static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
> +				const struct cpu_pd_ops *ops)
> +{
> +	struct cpu_pm_domain *pd = NULL;
> +	struct generic_pm_domain *genpd = NULL;
> +	int ret = -ENOMEM;
> +
> +	if (!of_device_is_available(dn))
> +		return ERR_PTR(-ENODEV);
> +
> +	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);

Drop extra parenthesis

> +	if (!genpd)
> +		goto fail;
> +
> +	genpd->name = kstrndup(dn->full_name, CPU_PD_NAME_MAX, GFP_KERNEL);
> +	if (!genpd->name)
> +		goto fail;
> +
> +	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
> +	if (!pd)
> +		goto fail;
> +
> +	pd->genpd = genpd;
> +	pd->genpd->power_off = cpu_pd_power_off;
> +	pd->genpd->power_on = cpu_pd_power_on;
> +	pd->genpd->flags |= GENPD_FLAG_IRQ_SAFE;
> +	pd->ops.power_on = ops->power_on;
> +	pd->ops.power_off = ops->power_off;
> +
> +	INIT_LIST_HEAD_RCU(&pd->link);
> +	spin_lock(&cpu_pd_list_lock);
> +	list_add_rcu(&pd->link, &of_cpu_pd_list);
> +	spin_unlock(&cpu_pd_list_lock);
> +
> +	/* Register the CPU genpd */
> +	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);

Drop the full stop?

> +	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
> +	if (ret) {
> +		pr_err("Unable to initialize domain %s\n", dn->full_name);
> +		goto fail;
> +	}
> +
> +	ret = of_genpd_add_provider_simple(dn, pd->genpd);
> +	if (ret)
> +		pr_warn("Unable to add genpd %s as provider\n",
> +				pd->genpd->name);
> +
> +	return pd->genpd;
> +fail:
> +
> +	kfree(genpd);
> +	kfree(genpd->name);

Switch order so that name is freed first to avoid junk deref here.

> +	kfree(pd);
> +	return ERR_PTR(ret);
> +}
> +
> +static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
> +		const struct cpu_pd_ops *ops, int cpu)
> +{
> +	struct of_phandle_args args;
> +	struct generic_pm_domain *genpd, *parent;
> +	int ret;
> +
> +	/* Do we have this domain? If not, create the domain */
> +	args.np = dn;
> +	args.args_count = 0;
> +
> +	genpd = of_genpd_get_from_provider(&args);
> +	if (!IS_ERR(genpd))
> +		goto skip_parent;

Why not just return genpd and drop the goto?

> +
> +	genpd = of_init_cpu_pm_domain(dn, ops);
> +	if (IS_ERR(genpd))
> +		return genpd;
> +
> +	/* Is there a domain provider for this domain? */
> +	ret = of_parse_phandle_with_args(dn, "power-domains",
> +			"#power-domain-cells", 0, &args);
> +	of_node_put(dn);

Shouldn't this be of_node_put(args.np)? I suppose it's the same
so this isn't too important.

> +	if (ret < 0)
> +		goto skip_parent;
> +
> +	/* Find its parent and attach this domain to it, recursively */
> +	parent = of_get_cpu_domain(args.np, ops, cpu);

Except that we use the np here. So perhaps move the of_node_put()
down to the skip_parent goto?

> +	if (IS_ERR(parent)) {
> +		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
> +
> +		ret = pm_genpd_add_subdomain(genpd, parent);

parent is an error pointer here... isn't this always going to
fail? Maybe that should be if (!IS_ERR(parent)) up there?

> +		if (ret) {
> +			pr_err("%s: Unable to add sub-domain (%s) to parent (%s)\n err: %d",
> +					__func__, genpd->name, parent->name,
> +					ret);
> +			return ERR_PTR(ret);
> +		}
> +
> +		/*
> +		 * Reference parent domain for easy access.
> +		 * Note: We could be attached to a domain that is not a
> +		 * CPU PM domain in that case dont reference the parent.

s/dont/don't/

> +		 */
> +		cpu_pd = to_cpu_pd(genpd);
> +		parent_cpu_pd = to_cpu_pd(parent);
> +
> +		if (cpu_pd && parent_cpu_pd)
> +			cpu_pd->parent = parent_cpu_pd;
> +	}
> +
> +skip_parent:
> +	return genpd;
> +}
> +
> +/**
> + * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
> + *
> + * @cpu: The CPU for which the PM domain is to be set up.
> + * @ops: The PM domain suspend/resume ops for the CPU's domain
> + *
> + * If the CPU PM domain exists already, then the CPU is attached to
> + * that CPU PD. If it doesn't, the domain is created, the @ops are
> + * set for power_on/power_off callbacks and then the CPU is attached
> + * to that domain. If the domain was created outside this framework,
> + * then we do not attach the CPU to the domain.
> + */
> +int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
> +{
> +
> +	struct device_node *dn;
> +	struct generic_pm_domain *genpd;
> +
> +	dn = of_get_cpu_node(cpu, NULL);
> +	if (!dn)
> +		return -ENODEV;
> +
> +	dn = of_parse_phandle(dn, "power-domains", 0);
> +	if (!dn)
> +		return -ENODEV;
> +	of_node_put(dn);

This should be put after of_get_cpu_domain().

> +
> +	/* Find the genpd for this CPU, create if not found */
> +	genpd = of_get_cpu_domain(dn, ops, cpu);
> +	if (IS_ERR(genpd))
> +		return PTR_ERR(genpd);
> +
> +	return cpu_pd_attach_cpu(cpu);
> +}
> +EXPORT_SYMBOL(of_setup_cpu_pd_single);

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
@ 2016-02-26 19:10     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:10 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
> new file mode 100644
> index 0000000..981592f
> --- /dev/null
> +++ b/drivers/base/power/cpu_domains.c
> @@ -0,0 +1,267 @@
> +
> +/* List of CPU PM domains we care about */
> +static LIST_HEAD(of_cpu_pd_list);
> +static DEFINE_SPINLOCK(cpu_pd_list_lock);

Can this be a mutex?

> +
> +/**
> + * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
> + *
> + * @dn: The domain provider's device node
> + * @ops: The power_on/_off callbacks for the domain
> + *
> + * Returns the generic_pm_domain (genpd) pointer to the domain on success
> + */
> +static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
> +				const struct cpu_pd_ops *ops)
> +{
> +	struct cpu_pm_domain *pd = NULL;
> +	struct generic_pm_domain *genpd = NULL;
> +	int ret = -ENOMEM;
> +
> +	if (!of_device_is_available(dn))
> +		return ERR_PTR(-ENODEV);
> +
> +	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);

Drop extra parenthesis

> +	if (!genpd)
> +		goto fail;
> +
> +	genpd->name = kstrndup(dn->full_name, CPU_PD_NAME_MAX, GFP_KERNEL);
> +	if (!genpd->name)
> +		goto fail;
> +
> +	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
> +	if (!pd)
> +		goto fail;
> +
> +	pd->genpd = genpd;
> +	pd->genpd->power_off = cpu_pd_power_off;
> +	pd->genpd->power_on = cpu_pd_power_on;
> +	pd->genpd->flags |= GENPD_FLAG_IRQ_SAFE;
> +	pd->ops.power_on = ops->power_on;
> +	pd->ops.power_off = ops->power_off;
> +
> +	INIT_LIST_HEAD_RCU(&pd->link);
> +	spin_lock(&cpu_pd_list_lock);
> +	list_add_rcu(&pd->link, &of_cpu_pd_list);
> +	spin_unlock(&cpu_pd_list_lock);
> +
> +	/* Register the CPU genpd */
> +	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);

Drop the full stop?

> +	ret = of_pm_genpd_init(dn, pd->genpd, &simple_qos_governor, false);
> +	if (ret) {
> +		pr_err("Unable to initialize domain %s\n", dn->full_name);
> +		goto fail;
> +	}
> +
> +	ret = of_genpd_add_provider_simple(dn, pd->genpd);
> +	if (ret)
> +		pr_warn("Unable to add genpd %s as provider\n",
> +				pd->genpd->name);
> +
> +	return pd->genpd;
> +fail:
> +
> +	kfree(genpd);
> +	kfree(genpd->name);

Switch order so that name is freed first to avoid junk deref here.

> +	kfree(pd);
> +	return ERR_PTR(ret);
> +}
> +
> +static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
> +		const struct cpu_pd_ops *ops, int cpu)
> +{
> +	struct of_phandle_args args;
> +	struct generic_pm_domain *genpd, *parent;
> +	int ret;
> +
> +	/* Do we have this domain? If not, create the domain */
> +	args.np = dn;
> +	args.args_count = 0;
> +
> +	genpd = of_genpd_get_from_provider(&args);
> +	if (!IS_ERR(genpd))
> +		goto skip_parent;

Why not just return genpd and drop the goto?

> +
> +	genpd = of_init_cpu_pm_domain(dn, ops);
> +	if (IS_ERR(genpd))
> +		return genpd;
> +
> +	/* Is there a domain provider for this domain? */
> +	ret = of_parse_phandle_with_args(dn, "power-domains",
> +			"#power-domain-cells", 0, &args);
> +	of_node_put(dn);

Shouldn't this be of_node_put(args.np)? I suppose it's the same
so this isn't too important.

> +	if (ret < 0)
> +		goto skip_parent;
> +
> +	/* Find its parent and attach this domain to it, recursively */
> +	parent = of_get_cpu_domain(args.np, ops, cpu);

Except that we use the np here. So perhaps move the of_node_put()
down to the skip_parent goto?

> +	if (IS_ERR(parent)) {
> +		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
> +
> +		ret = pm_genpd_add_subdomain(genpd, parent);

parent is an error pointer here... isn't this always going to
fail? Maybe that should be if (!IS_ERR(parent)) up there?

> +		if (ret) {
> +			pr_err("%s: Unable to add sub-domain (%s) to parent (%s)\n err: %d",
> +					__func__, genpd->name, parent->name,
> +					ret);
> +			return ERR_PTR(ret);
> +		}
> +
> +		/*
> +		 * Reference parent domain for easy access.
> +		 * Note: We could be attached to a domain that is not a
> +		 * CPU PM domain in that case dont reference the parent.

s/dont/don't/

> +		 */
> +		cpu_pd = to_cpu_pd(genpd);
> +		parent_cpu_pd = to_cpu_pd(parent);
> +
> +		if (cpu_pd && parent_cpu_pd)
> +			cpu_pd->parent = parent_cpu_pd;
> +	}
> +
> +skip_parent:
> +	return genpd;
> +}
> +
> +/**
> + * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
> + *
> + * @cpu: The CPU for which the PM domain is to be set up.
> + * @ops: The PM domain suspend/resume ops for the CPU's domain
> + *
> + * If the CPU PM domain exists already, then the CPU is attached to
> + * that CPU PD. If it doesn't, the domain is created, the @ops are
> + * set for power_on/power_off callbacks and then the CPU is attached
> + * to that domain. If the domain was created outside this framework,
> + * then we do not attach the CPU to the domain.
> + */
> +int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
> +{
> +
> +	struct device_node *dn;
> +	struct generic_pm_domain *genpd;
> +
> +	dn = of_get_cpu_node(cpu, NULL);
> +	if (!dn)
> +		return -ENODEV;
> +
> +	dn = of_parse_phandle(dn, "power-domains", 0);
> +	if (!dn)
> +		return -ENODEV;
> +	of_node_put(dn);

This should be put after of_get_cpu_domain().

> +
> +	/* Find the genpd for this CPU, create if not found */
> +	genpd = of_get_cpu_domain(dn, ops, cpu);
> +	if (IS_ERR(genpd))
> +		return PTR_ERR(genpd);
> +
> +	return cpu_pd_attach_cpu(cpu);
> +}
> +EXPORT_SYMBOL(of_setup_cpu_pd_single);

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 19:20     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:20 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On 02/12, Lina Iyer wrote:
> diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
> index bab4846..0c539f0 100644
> --- a/include/linux/cpu_domains.h
> +++ b/include/linux/cpu_domains.h
> @@ -11,8 +11,10 @@
>  #ifndef __CPU_DOMAINS_H__
>  #define __CPU_DOMAINS_H__
>  
> +#include <linux/cpumask.h>

Just forward declare struct cpumask instead?

> +
>  struct cpu_pd_ops {
> -	int (*power_off)(u32 state_idx, u32 param);
> +	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
>  	int (*power_on)(void);
>  };

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
@ 2016-02-26 19:20     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:20 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
> index bab4846..0c539f0 100644
> --- a/include/linux/cpu_domains.h
> +++ b/include/linux/cpu_domains.h
> @@ -11,8 +11,10 @@
>  #ifndef __CPU_DOMAINS_H__
>  #define __CPU_DOMAINS_H__
>  
> +#include <linux/cpumask.h>

Just forward declare struct cpumask instead?

> +
>  struct cpu_pd_ops {
> -	int (*power_off)(u32 state_idx, u32 param);
> +	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
>  	int (*power_on)(void);
>  };

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 19:33     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:33 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On 02/12, Lina Iyer wrote:
> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>  	return res;
>  }
>  
> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
> +{
> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
> +	u64 sleep_ns;
> +	ktime_t earliest, next_wakeup;
> +	int cpu;
> +	int i;
> +
> +	/* Reset the last set genpd state, default to index 0 */
> +	genpd->state_idx = 0;
> +
> +	/* We dont want to power down, if QoS is 0 */
> +	if (!qos)
> +		return false;
> +
> +	/*
> +	 * Find the sleep time for the cluster.
> +	 * The time between now and the first wake up of any CPU that
> +	 * are in this domain hierarchy is the time available for the
> +	 * domain to be idle.
> +	 */
> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {

We're not worried about hotplug happening in parallel because
preemption is disabled here?

> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
> +		if (earliest.tv64 > next_wakeup.tv64)

	if (ktime_before(next_wakeup, earliest))

> +			earliest = next_wakeup;
> +	}
> +
> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
> +	if (sleep_ns <= 0)
> +		return false;
> +
> +	/*
> +	 * Find the deepest sleep state that satisfies the residency
> +	 * requirement and the QoS constraint
> +	 */
> +	for (i = genpd->state_count - 1; i >= 0; i--) {
> +		u64 state_sleep_ns;
> +
> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
> +			genpd->states[i].power_on_latency_ns +
> +			genpd->states[i].residency_ns;
> +
> +		/*
> +		 * If we cant sleep to save power in the state, move on

s/cant/can't/

> +		 * to the next lower idle state.
> +		 */
> +		if (state_sleep_ns > sleep_ns)
> +			continue;
> +
> +		/*
> +		 * We also dont want to sleep more than we should to

s/dont/don't/

> +		 * gaurantee QoS.
> +		 */
> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))

Maybe we should make qos into qos_ns? Presumably the compiler
would hoist out the multiplication here, but it doesn't hurt to
do it explicitly.

> +			break;
> +	}
> +
> +	if (i >= 0)
> +		genpd->state_idx = i;
> +
> +	return  (i >= 0) ? true : false;

Just return i >= 0?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
@ 2016-02-26 19:33     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:33 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>  	return res;
>  }
>  
> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
> +{
> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
> +	u64 sleep_ns;
> +	ktime_t earliest, next_wakeup;
> +	int cpu;
> +	int i;
> +
> +	/* Reset the last set genpd state, default to index 0 */
> +	genpd->state_idx = 0;
> +
> +	/* We dont want to power down, if QoS is 0 */
> +	if (!qos)
> +		return false;
> +
> +	/*
> +	 * Find the sleep time for the cluster.
> +	 * The time between now and the first wake up of any CPU that
> +	 * are in this domain hierarchy is the time available for the
> +	 * domain to be idle.
> +	 */
> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {

We're not worried about hotplug happening in parallel because
preemption is disabled here?

> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
> +		if (earliest.tv64 > next_wakeup.tv64)

	if (ktime_before(next_wakeup, earliest))

> +			earliest = next_wakeup;
> +	}
> +
> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
> +	if (sleep_ns <= 0)
> +		return false;
> +
> +	/*
> +	 * Find the deepest sleep state that satisfies the residency
> +	 * requirement and the QoS constraint
> +	 */
> +	for (i = genpd->state_count - 1; i >= 0; i--) {
> +		u64 state_sleep_ns;
> +
> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
> +			genpd->states[i].power_on_latency_ns +
> +			genpd->states[i].residency_ns;
> +
> +		/*
> +		 * If we cant sleep to save power in the state, move on

s/cant/can't/

> +		 * to the next lower idle state.
> +		 */
> +		if (state_sleep_ns > sleep_ns)
> +			continue;
> +
> +		/*
> +		 * We also dont want to sleep more than we should to

s/dont/don't/

> +		 * gaurantee QoS.
> +		 */
> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))

Maybe we should make qos into qos_ns? Presumably the compiler
would hoist out the multiplication here, but it doesn't hurt to
do it explicitly.

> +			break;
> +	}
> +
> +	if (i >= 0)
> +		genpd->state_idx = i;
> +
> +	return  (i >= 0) ? true : false;

Just return i >= 0?

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 19:43     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:43 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On 02/12, Lina Iyer wrote:
> diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
> new file mode 100644
> index 0000000..5fdc66d
> --- /dev/null
> +++ b/Documentation/power/cpu_domains.txt
> @@ -0,0 +1,79 @@
> +CPU PM domains
> +==============
> +
> +Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
> +may have caches, VFP and architecture specific power controller that share the

caches, floating point units, and other architecture specific
hardware that share resources when any of the CPUs are active.

> +resources when any of the CPUs are active. When the CPUs are in idle, some of
> +these cluster components may also idle. A cluster may also be nested inside
> +another cluster that provides common coherency interfaces to share data
> +between the clusters. The organization of such clusters and CPU may be
> +descibed in DT, since they are SoC specific.
> +
> +CPUIdle framework enables the CPUs to determine the sleep time and enter low
> +power state to save power during periods of idle. CPUs in a cluster may enter
> +and exit idle state independently of each other. During the time when all the
> +CPUs are in idle state, the cluster may safely put some of the shared
> +resources in their idle state. The time between the last CPU to enter idle and
> +the first CPU to wake up is the time available for the cluster to enter its
> +idle state.
> +
> +When SoCs power down the CPU during cpuidle, they generally have supplemental
> +hardware that can handshake with the CPU with a signal that indicates that the
> +CPU has stopped execution. The hardware is also responsible for warm booting
> +the CPU on receiving an interrupt. With cluster architecture, common resources

In a cluster architecture,

> +that are shared by the cluster may also be powered down by an external

shared by a cluster

> +microcontroller or a processor. The microcontroller may be programmed in
> +advance to put the hardware blocks in a low power state, when the last active
> +CPU sends the idle signal. When the signal is received, the microcontroller
> +may trigger the hardware blocks to enter their low power state. When an
> +interrupt to wakeup the processor is received, the microcontroller is
> +responsible for bringing up the hardware blocks to its active state, before
> +waking up the CPU. The timelines for such operations should be in the
> +acceptable range for the for CPU idle to get power benefits.

acceptable range for CPU idle to get power benefits.

> +
> +CPU PM Domain Setup
> +-------------------
> +
> +PM domains  are represented in the DT as domain consumers and providers. A

              ^ extra space here

> +device may have a domain provider and a domain provider may support multiple
> +domain consumers. Domains like clusters, may also be nested inside one
> +another. A domain that has no active consumer, may be powered off and any
> +resuming consumer would trigger the domain back to active. Parent domains may
> +be powered off when the child domains are powered off. The CPU cluster can be
> +fashioned as a PM domain. When the CPU devices are powered off, the PM domain
> +may be powered off.
> +
> +Device idle is reference counted by runtime PM. When there is no active need
> +for the device, runtime PM invokes callbacks to suspend the parent domain.
> +Generic PM domain (genpd) handles the hierarchy of devices, domains and the
> +reference counting of objects leading to last man down and first man up in the
> +domain. The CPU domains helper functions defines PM domains for each CPU
> +cluster and attaches the CPU devices to the respective PM domains.
> +
> +Platform drivers may use the following API to register their CPU PM domains.
> +
> +of_setup_cpu_pd() -
> +Provides a single step registration of the CPU PM domain and attach CPUs to
> +the genpd. Platform drivers may additionally register callbacks for power_on
> +and power_off operations for the PM domain.
> +
> +of_setup_cpu_pd_single() -
> +Define PM domain for a single CPU and attach the CPU to its domain.
> +
> +
> +CPU PM Domain governor
> +----------------------
> +
> +CPUs have an unique ability to determine their next wakeup. CPUs may wake up

a unique

> +for known timer interrupts and unknown interrupts from idle. Prediction
> +algorithms and heuristic based algorithms like the Menu governor for cpuidle
> +can determine the next wakeup of the CPU. However, determining the wakeup
> +across a group of CPUs is a tough problem to solve.
> +

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
@ 2016-02-26 19:43     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:43 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
> new file mode 100644
> index 0000000..5fdc66d
> --- /dev/null
> +++ b/Documentation/power/cpu_domains.txt
> @@ -0,0 +1,79 @@
> +CPU PM domains
> +==============
> +
> +Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
> +may have caches, VFP and architecture specific power controller that share the

caches, floating point units, and other architecture specific
hardware that share resources when any of the CPUs are active.

> +resources when any of the CPUs are active. When the CPUs are in idle, some of
> +these cluster components may also idle. A cluster may also be nested inside
> +another cluster that provides common coherency interfaces to share data
> +between the clusters. The organization of such clusters and CPU may be
> +descibed in DT, since they are SoC specific.
> +
> +CPUIdle framework enables the CPUs to determine the sleep time and enter low
> +power state to save power during periods of idle. CPUs in a cluster may enter
> +and exit idle state independently of each other. During the time when all the
> +CPUs are in idle state, the cluster may safely put some of the shared
> +resources in their idle state. The time between the last CPU to enter idle and
> +the first CPU to wake up is the time available for the cluster to enter its
> +idle state.
> +
> +When SoCs power down the CPU during cpuidle, they generally have supplemental
> +hardware that can handshake with the CPU with a signal that indicates that the
> +CPU has stopped execution. The hardware is also responsible for warm booting
> +the CPU on receiving an interrupt. With cluster architecture, common resources

In a cluster architecture,

> +that are shared by the cluster may also be powered down by an external

shared by a cluster

> +microcontroller or a processor. The microcontroller may be programmed in
> +advance to put the hardware blocks in a low power state, when the last active
> +CPU sends the idle signal. When the signal is received, the microcontroller
> +may trigger the hardware blocks to enter their low power state. When an
> +interrupt to wakeup the processor is received, the microcontroller is
> +responsible for bringing up the hardware blocks to its active state, before
> +waking up the CPU. The timelines for such operations should be in the
> +acceptable range for the for CPU idle to get power benefits.

acceptable range for CPU idle to get power benefits.

> +
> +CPU PM Domain Setup
> +-------------------
> +
> +PM domains  are represented in the DT as domain consumers and providers. A

              ^ extra space here

> +device may have a domain provider and a domain provider may support multiple
> +domain consumers. Domains like clusters, may also be nested inside one
> +another. A domain that has no active consumer, may be powered off and any
> +resuming consumer would trigger the domain back to active. Parent domains may
> +be powered off when the child domains are powered off. The CPU cluster can be
> +fashioned as a PM domain. When the CPU devices are powered off, the PM domain
> +may be powered off.
> +
> +Device idle is reference counted by runtime PM. When there is no active need
> +for the device, runtime PM invokes callbacks to suspend the parent domain.
> +Generic PM domain (genpd) handles the hierarchy of devices, domains and the
> +reference counting of objects leading to last man down and first man up in the
> +domain. The CPU domains helper functions defines PM domains for each CPU
> +cluster and attaches the CPU devices to the respective PM domains.
> +
> +Platform drivers may use the following API to register their CPU PM domains.
> +
> +of_setup_cpu_pd() -
> +Provides a single step registration of the CPU PM domain and attach CPUs to
> +the genpd. Platform drivers may additionally register callbacks for power_on
> +and power_off operations for the PM domain.
> +
> +of_setup_cpu_pd_single() -
> +Define PM domain for a single CPU and attach the CPU to its domain.
> +
> +
> +CPU PM Domain governor
> +----------------------
> +
> +CPUs have an unique ability to determine their next wakeup. CPUs may wake up

a unique

> +for known timer interrupts and unknown interrupts from idle. Prediction
> +algorithms and heuristic based algorithms like the Menu governor for cpuidle
> +can determine the next wakeup of the CPU. However, determining the wakeup
> +across a group of CPUs is a tough problem to solve.
> +

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
  2016-02-12 20:50   ` Lina Iyer
@ 2016-02-26 19:50     ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:50 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, devicetree

On 02/12, Lina Iyer wrote:
> @@ -101,6 +105,27 @@
>  		};
>  	};
>  
> +	CPU_PD: cpu-pd@0 {
> +		#power-domain-cells = <0>;
> +		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;

Why isn't this part of the psci node? PSCI is the node that's
providing the code/logic for the power domain.

> +	};
> +
> +	pd-power-states {
> +		CLUSTER_RET: power-state@1 {
> +			state-param = <0x1000010>;
> +			entry-latency-us = <500>;
> +			exit-latency-us = <500>;
> +			residency-us = <2000>;
> +		 };
> +
> +		CLUSTER_PWR_DWN: power-state@2 {
> +			state-param = <0x1000030>;
> +			entry-latency-us = <2000>;
> +			exit-latency-us = <2000>;
> +			residency-us = <6000>;
> +		};
> +	};
> +

And I would expect these to be put somewhere inside the power
domain provider as well? Is this documented somewhere?

>  	psci {
>  		compatible = "arm,psci-1.0";
>  		method = "smc";

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
@ 2016-02-26 19:50     ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-02-26 19:50 UTC (permalink / raw)
  To: linux-arm-kernel

On 02/12, Lina Iyer wrote:
> @@ -101,6 +105,27 @@
>  		};
>  	};
>  
> +	CPU_PD: cpu-pd at 0 {
> +		#power-domain-cells = <0>;
> +		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;

Why isn't this part of the psci node? PSCI is the node that's
providing the code/logic for the power domain.

> +	};
> +
> +	pd-power-states {
> +		CLUSTER_RET: power-state at 1 {
> +			state-param = <0x1000010>;
> +			entry-latency-us = <500>;
> +			exit-latency-us = <500>;
> +			residency-us = <2000>;
> +		 };
> +
> +		CLUSTER_PWR_DWN: power-state at 2 {
> +			state-param = <0x1000030>;
> +			entry-latency-us = <2000>;
> +			exit-latency-us = <2000>;
> +			residency-us = <6000>;
> +		};
> +	};
> +

And I would expect these to be put somewhere inside the power
domain provider as well? Is this documented somewhere?

>  	psci {
>  		compatible = "arm,psci-1.0";
>  		method = "smc";

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 01/12] PM / Domains: Abstract genpd locking
  2016-02-26 18:08     ` Stephen Boyd
@ 2016-03-01 16:55       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 16:55 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Kevin Hilman

On Fri, Feb 26 2016 at 11:08 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>> index 3ddd05d..8204615 100644
>> --- a/drivers/base/power/domain.c
>> +++ b/drivers/base/power/domain.c
>> @@ -40,6 +40,46 @@
>>  static LIST_HEAD(gpd_list);
>>  static DEFINE_MUTEX(gpd_list_lock);
>>
>> +struct genpd_lock_fns {
>> +	void (*lock)(struct generic_pm_domain *genpd);
>> +	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
>> +	int (*lock_interruptible)(struct generic_pm_domain *genpd);
>> +	void (*unlock)(struct generic_pm_domain *genpd);
>> +};
>> +
>> +static void genpd_lock_irq(struct generic_pm_domain *genpd)
>> +{
>> +	mutex_lock(&genpd->mlock);
>> +}
>> +
>> +static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
>> +					int depth)
>> +{
>> +	mutex_lock_nested(&genpd->mlock, depth);
>> +}
>> +
>> +static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
>> +{
>> +	return mutex_lock_interruptible(&genpd->mlock);
>> +}
>> +
>> +static void genpd_unlock_irq(struct generic_pm_domain *genpd)
>> +{
>> +	return mutex_unlock(&genpd->mlock);
>> +}
>> +
>> +static struct genpd_lock_fns irq_lock = {
>
>Can this be const? Also, why is this called irq_lock when the
>lock functions are mutex based?
>
hmm.. well IRQs are allowed, but I guess, I should come up with a better
name.

>> +	.lock = genpd_lock_irq,
>> +	.lock_nested = genpd_lock_irq_nested,
>> +	.lock_interruptible = genpd_lock_interruptible_irq,
>> +	.unlock = genpd_unlock_irq,
>> +};
>> +
>> @@ -74,6 +75,8 @@ struct generic_pm_domain {
>>  	struct genpd_power_state *states;
>>  	unsigned int state_count; /* number of states */
>>  	unsigned int state_idx; /* state that genpd will go to when off */
>> +	struct genpd_lock_fns *lock_fns;
>
>const?
>
Sure will fix both.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 01/12] PM / Domains: Abstract genpd locking
@ 2016-03-01 16:55       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 16:55 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 11:08 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
>> index 3ddd05d..8204615 100644
>> --- a/drivers/base/power/domain.c
>> +++ b/drivers/base/power/domain.c
>> @@ -40,6 +40,46 @@
>>  static LIST_HEAD(gpd_list);
>>  static DEFINE_MUTEX(gpd_list_lock);
>>
>> +struct genpd_lock_fns {
>> +	void (*lock)(struct generic_pm_domain *genpd);
>> +	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
>> +	int (*lock_interruptible)(struct generic_pm_domain *genpd);
>> +	void (*unlock)(struct generic_pm_domain *genpd);
>> +};
>> +
>> +static void genpd_lock_irq(struct generic_pm_domain *genpd)
>> +{
>> +	mutex_lock(&genpd->mlock);
>> +}
>> +
>> +static void genpd_lock_irq_nested(struct generic_pm_domain *genpd,
>> +					int depth)
>> +{
>> +	mutex_lock_nested(&genpd->mlock, depth);
>> +}
>> +
>> +static int genpd_lock_interruptible_irq(struct generic_pm_domain *genpd)
>> +{
>> +	return mutex_lock_interruptible(&genpd->mlock);
>> +}
>> +
>> +static void genpd_unlock_irq(struct generic_pm_domain *genpd)
>> +{
>> +	return mutex_unlock(&genpd->mlock);
>> +}
>> +
>> +static struct genpd_lock_fns irq_lock = {
>
>Can this be const? Also, why is this called irq_lock when the
>lock functions are mutex based?
>
hmm.. well IRQs are allowed, but I guess, I should come up with a better
name.

>> +	.lock = genpd_lock_irq,
>> +	.lock_nested = genpd_lock_irq_nested,
>> +	.lock_interruptible = genpd_lock_interruptible_irq,
>> +	.unlock = genpd_unlock_irq,
>> +};
>> +
>> @@ -74,6 +75,8 @@ struct generic_pm_domain {
>>  	struct genpd_power_state *states;
>>  	unsigned int state_count; /* number of states */
>>  	unsigned int state_idx; /* state that genpd will go to when off */
>> +	struct genpd_lock_fns *lock_fns;
>
>const?
>
Sure will fix both.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
  2016-02-26 18:17     ` Stephen Boyd
@ 2016-03-01 17:44       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 17:44 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Kevin Hilman

On Fri, Feb 26 2016 at 11:17 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
>> index 8ba6625..c06f0b6 100644
>> --- a/Documentation/power/devices.txt
>> +++ b/Documentation/power/devices.txt
>> @@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
>>  into a low-power state together at the same time by turning off the shared
>>  power resource.  Of course, they also need to be put into the full-power state
>>  together, by turning the shared power resource on.  A set of devices with this
>> -property is often referred to as a power domain.
>> +property is often referred to as a power domain. A power domain may also be
>> +nested inside another power domain.
>> +
>> +Devices, by default, operate in process context and if a device can operate in
>> +IRQ safe context, has to be explicitly set as IRQ safe. Power domains by
>
>Devices, by default, operate in process context. If a device can
>operate in IRQ safe context that has to be explicitly indicated
>by setting the irq_safe boolean inside struct generic_pm_domain
>to true. Power domains typically operate in process context...
>
Done.

>> +default, operate in process context but could have devices that are IRQ safe.
>> +Such power domains cannot be powered on/off during runtime PM. On the other
>> +hand, an IRQ safe PM domains that have IRQ safe devices may be powered off
>
>On the other hand, IRQ safe PM domains that have ..
>
Done.

>> +when all the devices are in idle. An IRQ safe domain may only be attached as a
>
>all the devices in the domain?
>
Devices need not be IRQ safe.

<...>

>> +static struct genpd_lock_fns no_sleep_lock = {
>
>const?
>
OK

...

>> +	/*
>> +	 * As we dont power off a non IRQ safe domain, which holds
>
>s/dont/don't/
>
>> +	 * an IRQ safe device, we dont need to restore power to it.
>
>s/dont/don't/
>
Done to both.
>> +	 */
>> +	if (dev->power.irq_safe && !genpd->irq_safe) {
>>  		timed = false;
>>  		goto out;
>>  	}
>> @@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
>>  	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
>>  		return -EINVAL;
>>
>> +	if (genpd->irq_safe && !dev->power.irq_safe) {
>> +		dev_err(dev,
>> +			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",
>
>has to be?
>
This is a remenant. This limitation need not exist. Removed.

>> +			genpd->name);
>> +		return -EINVAL;
>> +	}
>> +
>>  	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
>>  	if (IS_ERR(gpd_data))
>>  		return PTR_ERR(gpd_data);
>> @@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
>>  	    || genpd == subdomain)
>>  		return -EINVAL;
>>
>> +	/*
>> +	 * If the domain can be powered on/off in an IRQ safe
>> +	 * context, ensure that the subdomain can also be
>> +	 * powered on/off in that context.
>> +	 */
>> +	if (!genpd->irq_safe && subdomain->irq_safe) {
>> +		WARN("Parent %s of subdomain %s must be IRQ-safe\n",
>
>Nitpick! IRQ-safe or IRQ safe? Use one consistently please.
>
Sorry. It will be IRQ safe.

Thank,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains
@ 2016-03-01 17:44       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 17:44 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 11:17 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
>> index 8ba6625..c06f0b6 100644
>> --- a/Documentation/power/devices.txt
>> +++ b/Documentation/power/devices.txt
>> @@ -607,7 +607,16 @@ individually.  Instead, a set of devices sharing a power resource can be put
>>  into a low-power state together at the same time by turning off the shared
>>  power resource.  Of course, they also need to be put into the full-power state
>>  together, by turning the shared power resource on.  A set of devices with this
>> -property is often referred to as a power domain.
>> +property is often referred to as a power domain. A power domain may also be
>> +nested inside another power domain.
>> +
>> +Devices, by default, operate in process context and if a device can operate in
>> +IRQ safe context, has to be explicitly set as IRQ safe. Power domains by
>
>Devices, by default, operate in process context. If a device can
>operate in IRQ safe context that has to be explicitly indicated
>by setting the irq_safe boolean inside struct generic_pm_domain
>to true. Power domains typically operate in process context...
>
Done.

>> +default, operate in process context but could have devices that are IRQ safe.
>> +Such power domains cannot be powered on/off during runtime PM. On the other
>> +hand, an IRQ safe PM domains that have IRQ safe devices may be powered off
>
>On the other hand, IRQ safe PM domains that have ..
>
Done.

>> +when all the devices are in idle. An IRQ safe domain may only be attached as a
>
>all the devices in the domain?
>
Devices need not be IRQ safe.

<...>

>> +static struct genpd_lock_fns no_sleep_lock = {
>
>const?
>
OK

...

>> +	/*
>> +	 * As we dont power off a non IRQ safe domain, which holds
>
>s/dont/don't/
>
>> +	 * an IRQ safe device, we dont need to restore power to it.
>
>s/dont/don't/
>
Done to both.
>> +	 */
>> +	if (dev->power.irq_safe && !genpd->irq_safe) {
>>  		timed = false;
>>  		goto out;
>>  	}
>> @@ -1296,6 +1359,13 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
>>  	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(dev))
>>  		return -EINVAL;
>>
>> +	if (genpd->irq_safe && !dev->power.irq_safe) {
>> +		dev_err(dev,
>> +			"PM Domain %s is IRQ safe; device has to IRQ safe.\n",
>
>has to be?
>
This is a remenant. This limitation need not exist. Removed.

>> +			genpd->name);
>> +		return -EINVAL;
>> +	}
>> +
>>  	gpd_data = genpd_alloc_dev_data(dev, genpd, td);
>>  	if (IS_ERR(gpd_data))
>>  		return PTR_ERR(gpd_data);
>> @@ -1394,6 +1464,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
>>  	    || genpd == subdomain)
>>  		return -EINVAL;
>>
>> +	/*
>> +	 * If the domain can be powered on/off in an IRQ safe
>> +	 * context, ensure that the subdomain can also be
>> +	 * powered on/off in that context.
>> +	 */
>> +	if (!genpd->irq_safe && subdomain->irq_safe) {
>> +		WARN("Parent %s of subdomain %s must be IRQ-safe\n",
>
>Nitpick! IRQ-safe or IRQ safe? Use one consistently please.
>
Sorry. It will be IRQ safe.

Thank,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-02-26 19:10     ` Stephen Boyd
@ 2016-03-01 18:00       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 18:00 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Daniel Lezcano

On Fri, Feb 26 2016 at 12:10 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
>> new file mode 100644
>> index 0000000..981592f
>> --- /dev/null
>> +++ b/drivers/base/power/cpu_domains.c
>> @@ -0,0 +1,267 @@
>> +
>> +/* List of CPU PM domains we care about */
>> +static LIST_HEAD(of_cpu_pd_list);
>> +static DEFINE_SPINLOCK(cpu_pd_list_lock);
>
>Can this be a mutex?
>
Yes, will change. The init function would not be called from atomic context.

>> +	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);
>
>Drop extra parenthesis
>

>> +	/* Register the CPU genpd */
>> +	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
>
>Drop the full stop?
>
OK

>> +	kfree(genpd);
>> +	kfree(genpd->name);
>
>Switch order so that name is freed first to avoid junk deref here.
>
Sounds good.

>> +	kfree(pd);
>> +	return ERR_PTR(ret);
>> +}
>> +
>> +static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
>> +		const struct cpu_pd_ops *ops, int cpu)
>> +{

>> +	genpd = of_genpd_get_from_provider(&args);
>> +	if (!IS_ERR(genpd))
>> +		goto skip_parent;
>
>Why not just return genpd and drop the goto?
>
Ok

>> +	genpd = of_init_cpu_pm_domain(dn, ops);
>> +	if (IS_ERR(genpd))
>> +		return genpd;
>> +
>> +	/* Is there a domain provider for this domain? */
>> +	ret = of_parse_phandle_with_args(dn, "power-domains",
>> +			"#power-domain-cells", 0, &args);
>> +	of_node_put(dn);
>
>Shouldn't this be of_node_put(args.np)? I suppose it's the same
>so this isn't too important.
>
>> +	if (ret < 0)
>> +		goto skip_parent;
>> +
>> +	/* Find its parent and attach this domain to it, recursively */
>> +	parent = of_get_cpu_domain(args.np, ops, cpu);
>
>Except that we use the np here. So perhaps move the of_node_put()
>down to the skip_parent goto?
>
Ok

>> +	if (IS_ERR(parent)) {
>> +		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
>> +
>> +		ret = pm_genpd_add_subdomain(genpd, parent);
>
>parent is an error pointer here... isn't this always going to
>fail? Maybe that should be if (!IS_ERR(parent)) up there?
>
Good catch. Yes, it should be.

>> +		/*
>> +		 * Reference parent domain for easy access.
>> +		 * Note: We could be attached to a domain that is not a
>> +		 * CPU PM domain in that case dont reference the parent.
>
>s/dont/don't/
>
Done.

>> +	dn = of_get_cpu_node(cpu, NULL);
>> +	if (!dn)
>> +		return -ENODEV;
>> +
>> +	dn = of_parse_phandle(dn, "power-domains", 0);
>> +	if (!dn)
>> +		return -ENODEV;
>> +	of_node_put(dn);
>
>This should be put after of_get_cpu_domain().
>
Thanks for this review, Stephen.

Thanks,
Lina

>> +
>> +	/* Find the genpd for this CPU, create if not found */
>> +	genpd = of_get_cpu_domain(dn, ops, cpu);
>> +	if (IS_ERR(genpd))
>> +		return PTR_ERR(genpd);
>> +
>> +	return cpu_pd_attach_cpu(cpu);
>> +}
>> +EXPORT_SYMBOL(of_setup_cpu_pd_single);
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters
@ 2016-03-01 18:00       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 18:00 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 12:10 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
>> new file mode 100644
>> index 0000000..981592f
>> --- /dev/null
>> +++ b/drivers/base/power/cpu_domains.c
>> @@ -0,0 +1,267 @@
>> +
>> +/* List of CPU PM domains we care about */
>> +static LIST_HEAD(of_cpu_pd_list);
>> +static DEFINE_SPINLOCK(cpu_pd_list_lock);
>
>Can this be a mutex?
>
Yes, will change. The init function would not be called from atomic context.

>> +	genpd = kzalloc(sizeof(*(genpd)), GFP_KERNEL);
>
>Drop extra parenthesis
>

>> +	/* Register the CPU genpd */
>> +	pr_debug("adding %s as CPU PM domain.\n", pd->genpd->name);
>
>Drop the full stop?
>
OK

>> +	kfree(genpd);
>> +	kfree(genpd->name);
>
>Switch order so that name is freed first to avoid junk deref here.
>
Sounds good.

>> +	kfree(pd);
>> +	return ERR_PTR(ret);
>> +}
>> +
>> +static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
>> +		const struct cpu_pd_ops *ops, int cpu)
>> +{

>> +	genpd = of_genpd_get_from_provider(&args);
>> +	if (!IS_ERR(genpd))
>> +		goto skip_parent;
>
>Why not just return genpd and drop the goto?
>
Ok

>> +	genpd = of_init_cpu_pm_domain(dn, ops);
>> +	if (IS_ERR(genpd))
>> +		return genpd;
>> +
>> +	/* Is there a domain provider for this domain? */
>> +	ret = of_parse_phandle_with_args(dn, "power-domains",
>> +			"#power-domain-cells", 0, &args);
>> +	of_node_put(dn);
>
>Shouldn't this be of_node_put(args.np)? I suppose it's the same
>so this isn't too important.
>
>> +	if (ret < 0)
>> +		goto skip_parent;
>> +
>> +	/* Find its parent and attach this domain to it, recursively */
>> +	parent = of_get_cpu_domain(args.np, ops, cpu);
>
>Except that we use the np here. So perhaps move the of_node_put()
>down to the skip_parent goto?
>
Ok

>> +	if (IS_ERR(parent)) {
>> +		struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
>> +
>> +		ret = pm_genpd_add_subdomain(genpd, parent);
>
>parent is an error pointer here... isn't this always going to
>fail? Maybe that should be if (!IS_ERR(parent)) up there?
>
Good catch. Yes, it should be.

>> +		/*
>> +		 * Reference parent domain for easy access.
>> +		 * Note: We could be attached to a domain that is not a
>> +		 * CPU PM domain in that case dont reference the parent.
>
>s/dont/don't/
>
Done.

>> +	dn = of_get_cpu_node(cpu, NULL);
>> +	if (!dn)
>> +		return -ENODEV;
>> +
>> +	dn = of_parse_phandle(dn, "power-domains", 0);
>> +	if (!dn)
>> +		return -ENODEV;
>> +	of_node_put(dn);
>
>This should be put after of_get_cpu_domain().
>
Thanks for this review, Stephen.

Thanks,
Lina

>> +
>> +	/* Find the genpd for this CPU, create if not found */
>> +	genpd = of_get_cpu_domain(dn, ops, cpu);
>> +	if (IS_ERR(genpd))
>> +		return PTR_ERR(genpd);
>> +
>> +	return cpu_pd_attach_cpu(cpu);
>> +}
>> +EXPORT_SYMBOL(of_setup_cpu_pd_single);
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
  2016-02-26 18:24     ` Stephen Boyd
@ 2016-03-01 18:36       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 18:36 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, Daniel Lezcano

On Fri, Feb 26 2016 at 11:24 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>>
>>  	ret = cpu_pm_enter();
>>  	if (!ret) {
>> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
>
>Can you add a comment on why we need to use RCU_NONIDLE here?
>It's not super obvious.
>
OK.

>> +
>>  		/*
>>  		 * Pass idle state index to cpu_suspend which in turn will
>>  		 * call the CPU ops suspend protocol with idle index as a
>> @@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>>  		 */
>>  		arm_cpuidle_suspend(idx);
>>
>> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
>>  		cpu_pm_exit();
>>  	}
>>
>> @@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
>>  	{ },
>>  };
>>
>> +#ifdef CONFIG_HOTPLUG_CPU
>> +static int cpu_hotplug(struct notifier_block *nb,
>
>This function is pretty generically named. Maybe something more
>runtime PM specific or cpu idle specific?
>
OK

>> +			unsigned long action, void *data)
>> +{
>> +	struct device *cpu_dev = get_cpu_device(smp_processor_id());
>> +
>> +	/* Execute CPU runtime PM on that CPU */
>> +	switch (action) {
>
>We could do the & ~CPU_TASKS_FROZEN trick here to save a few cases.
>
OK

>> +	case CPU_DYING:
>> +	case CPU_DYING_FROZEN:
>> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
>
>And do we actually need to use it for hotplug path? These
>notifiers don't run from idle context do they?
>
True. Will remove.

> +
>>  /*
>>   * arm_idle_init
>>   *
>> @@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
>>  	int cpu, ret;
>>  	struct cpuidle_driver *drv = &arm_idle_driver;
>>  	struct cpuidle_device *dev;
>> +	struct device *cpu_dev;
>>
>>  	/*
>>  	 * Initialize idle states data, starting at index 1.
>> @@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
>>  		}
>>  	}
>>
>> +#ifdef CONFIG_HOTPLUG_CPU
>> +	/* Register for hotplug notifications for runtime PM */
>> +	hotcpu_notifier(cpu_hotplug, 0);
>
>Define an empty cpu_hotplug() function for !CONFIG_HOTPLUG_CPU
>and then always call this without the ifdef?
>
I did this so we dont even register a hotplug notifier. Will change.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs
@ 2016-03-01 18:36       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 18:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 11:24 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -45,6 +48,8 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>>
>>  	ret = cpu_pm_enter();
>>  	if (!ret) {
>> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
>
>Can you add a comment on why we need to use RCU_NONIDLE here?
>It's not super obvious.
>
OK.

>> +
>>  		/*
>>  		 * Pass idle state index to cpu_suspend which in turn will
>>  		 * call the CPU ops suspend protocol with idle index as a
>> @@ -52,6 +57,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
>>  		 */
>>  		arm_cpuidle_suspend(idx);
>>
>> +		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
>>  		cpu_pm_exit();
>>  	}
>>
>> @@ -84,6 +90,30 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
>>  	{ },
>>  };
>>
>> +#ifdef CONFIG_HOTPLUG_CPU
>> +static int cpu_hotplug(struct notifier_block *nb,
>
>This function is pretty generically named. Maybe something more
>runtime PM specific or cpu idle specific?
>
OK

>> +			unsigned long action, void *data)
>> +{
>> +	struct device *cpu_dev = get_cpu_device(smp_processor_id());
>> +
>> +	/* Execute CPU runtime PM on that CPU */
>> +	switch (action) {
>
>We could do the & ~CPU_TASKS_FROZEN trick here to save a few cases.
>
OK

>> +	case CPU_DYING:
>> +	case CPU_DYING_FROZEN:
>> +		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
>
>And do we actually need to use it for hotplug path? These
>notifiers don't run from idle context do they?
>
True. Will remove.

> +
>>  /*
>>   * arm_idle_init
>>   *
>> @@ -96,6 +126,7 @@ static int __init arm_idle_init(void)
>>  	int cpu, ret;
>>  	struct cpuidle_driver *drv = &arm_idle_driver;
>>  	struct cpuidle_device *dev;
>> +	struct device *cpu_dev;
>>
>>  	/*
>>  	 * Initialize idle states data, starting at index 1.
>> @@ -148,10 +189,17 @@ static int __init arm_idle_init(void)
>>  		}
>>  	}
>>
>> +#ifdef CONFIG_HOTPLUG_CPU
>> +	/* Register for hotplug notifications for runtime PM */
>> +	hotcpu_notifier(cpu_hotplug, 0);
>
>Define an empty cpu_hotplug() function for !CONFIG_HOTPLUG_CPU
>and then always call this without the ifdef?
>
I did this so we dont even register a hotplug notifier. Will change.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
  2016-02-26 19:20     ` Stephen Boyd
@ 2016-03-01 19:24       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:24 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On Fri, Feb 26 2016 at 12:20 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
>> index bab4846..0c539f0 100644
>> --- a/include/linux/cpu_domains.h
>> +++ b/include/linux/cpu_domains.h
>> @@ -11,8 +11,10 @@
>>  #ifndef __CPU_DOMAINS_H__
>>  #define __CPU_DOMAINS_H__
>>
>> +#include <linux/cpumask.h>
>
>Just forward declare struct cpumask instead?
>
Sure.

Thanks,
Lina

>> +
>>  struct cpu_pd_ops {
>> -	int (*power_off)(u32 state_idx, u32 param);
>> +	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
>>  	int (*power_on)(void);
>>  };
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain
@ 2016-03-01 19:24       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:24 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 12:20 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
>> index bab4846..0c539f0 100644
>> --- a/include/linux/cpu_domains.h
>> +++ b/include/linux/cpu_domains.h
>> @@ -11,8 +11,10 @@
>>  #ifndef __CPU_DOMAINS_H__
>>  #define __CPU_DOMAINS_H__
>>
>> +#include <linux/cpumask.h>
>
>Just forward declare struct cpumask instead?
>
Sure.

Thanks,
Lina

>> +
>>  struct cpu_pd_ops {
>> -	int (*power_off)(u32 state_idx, u32 param);
>> +	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
>>  	int (*power_on)(void);
>>  };
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-02-26 19:33     ` Stephen Boyd
@ 2016-03-01 19:32       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:32 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>>  	return res;
>>  }
>>
>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>> +{
>> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
>> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>> +	u64 sleep_ns;
>> +	ktime_t earliest, next_wakeup;
>> +	int cpu;
>> +	int i;
>> +
>> +	/* Reset the last set genpd state, default to index 0 */
>> +	genpd->state_idx = 0;
>> +
>> +	/* We dont want to power down, if QoS is 0 */
>> +	if (!qos)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the sleep time for the cluster.
>> +	 * The time between now and the first wake up of any CPU that
>> +	 * are in this domain hierarchy is the time available for the
>> +	 * domain to be idle.
>> +	 */
>> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
>> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>
>We're not worried about hotplug happening in parallel because
>preemption is disabled here?
>
Nope. Hotplug on the same domain or in its hierarchy will be waiting on
the domain lock to released before becoming online. Any other domain is
not of concern for this domain governor.

If a core was hotplugged out while this is happening, then we may risk
making an premature wake up decision, which would happen either way if
we lock hotplug here.

>> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
>> +		if (earliest.tv64 > next_wakeup.tv64)
>
>	if (ktime_before(next_wakeup, earliest))
>
>> +			earliest = next_wakeup;
>> +	}
>> +
>> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
>> +	if (sleep_ns <= 0)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the deepest sleep state that satisfies the residency
>> +	 * requirement and the QoS constraint
>> +	 */
>> +	for (i = genpd->state_count - 1; i >= 0; i--) {
>> +		u64 state_sleep_ns;
>> +
>> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
>> +			genpd->states[i].power_on_latency_ns +
>> +			genpd->states[i].residency_ns;
>> +
>> +		/*
>> +		 * If we cant sleep to save power in the state, move on
>
>s/cant/can't/
>
argh. Fixed.
>> +		 * to the next lower idle state.
>> +		 */
>> +		if (state_sleep_ns > sleep_ns)
>> +			continue;
>> +
>> +		/*
>> +		 * We also dont want to sleep more than we should to
>
>s/dont/don't/
>
Done
>> +		 * gaurantee QoS.
>> +		 */
>> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))
>
>Maybe we should make qos into qos_ns? Presumably the compiler
>would hoist out the multiplication here, but it doesn't hurt to
>do it explicitly.
>
Okay

>> +			break;
>> +	}
>> +
>> +	if (i >= 0)
>> +		genpd->state_idx = i;
>> +
>> +	return  (i >= 0) ? true : false;
>
>Just return i >= 0?
>
Ok

Thanks,
Lina
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
@ 2016-03-01 19:32       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
>>  	return res;
>>  }
>>
>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>> +{
>> +	struct generic_pm_domain *genpd = pd_to_genpd(pd);
>> +	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>> +	int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>> +	u64 sleep_ns;
>> +	ktime_t earliest, next_wakeup;
>> +	int cpu;
>> +	int i;
>> +
>> +	/* Reset the last set genpd state, default to index 0 */
>> +	genpd->state_idx = 0;
>> +
>> +	/* We dont want to power down, if QoS is 0 */
>> +	if (!qos)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the sleep time for the cluster.
>> +	 * The time between now and the first wake up of any CPU that
>> +	 * are in this domain hierarchy is the time available for the
>> +	 * domain to be idle.
>> +	 */
>> +	earliest = ktime_set(KTIME_SEC_MAX, 0);
>> +	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>
>We're not worried about hotplug happening in parallel because
>preemption is disabled here?
>
Nope. Hotplug on the same domain or in its hierarchy will be waiting on
the domain lock to released before becoming online. Any other domain is
not of concern for this domain governor.

If a core was hotplugged out while this is happening, then we may risk
making an premature wake up decision, which would happen either way if
we lock hotplug here.

>> +		next_wakeup = tick_nohz_get_next_wakeup(cpu);
>> +		if (earliest.tv64 > next_wakeup.tv64)
>
>	if (ktime_before(next_wakeup, earliest))
>
>> +			earliest = next_wakeup;
>> +	}
>> +
>> +	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
>> +	if (sleep_ns <= 0)
>> +		return false;
>> +
>> +	/*
>> +	 * Find the deepest sleep state that satisfies the residency
>> +	 * requirement and the QoS constraint
>> +	 */
>> +	for (i = genpd->state_count - 1; i >= 0; i--) {
>> +		u64 state_sleep_ns;
>> +
>> +		state_sleep_ns = genpd->states[i].power_off_latency_ns +
>> +			genpd->states[i].power_on_latency_ns +
>> +			genpd->states[i].residency_ns;
>> +
>> +		/*
>> +		 * If we cant sleep to save power in the state, move on
>
>s/cant/can't/
>
argh. Fixed.
>> +		 * to the next lower idle state.
>> +		 */
>> +		if (state_sleep_ns > sleep_ns)
>> +			continue;
>> +
>> +		/*
>> +		 * We also dont want to sleep more than we should to
>
>s/dont/don't/
>
Done
>> +		 * gaurantee QoS.
>> +		 */
>> +		if (state_sleep_ns < (qos * NSEC_PER_USEC))
>
>Maybe we should make qos into qos_ns? Presumably the compiler
>would hoist out the multiplication here, but it doesn't hurt to
>do it explicitly.
>
Okay

>> +			break;
>> +	}
>> +
>> +	if (i >= 0)
>> +		genpd->state_idx = i;
>> +
>> +	return  (i >= 0) ? true : false;
>
>Just return i >= 0?
>
Ok

Thanks,
Lina
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-03-01 19:32       ` Lina Iyer
@ 2016-03-01 19:35         ` Stephen Boyd
  -1 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-03-01 19:35 UTC (permalink / raw)
  To: Lina Iyer
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On 03/01/2016 11:32 AM, Lina Iyer wrote:
> On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>> On 02/12, Lina Iyer wrote:
>>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct
>>> generic_pm_domain *d)
>>>      return res;
>>>  }
>>>
>>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>>> +{
>>> +    struct generic_pm_domain *genpd = pd_to_genpd(pd);
>>> +    struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>>> +    int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>>> +    u64 sleep_ns;
>>> +    ktime_t earliest, next_wakeup;
>>> +    int cpu;
>>> +    int i;
>>> +
>>> +    /* Reset the last set genpd state, default to index 0 */
>>> +    genpd->state_idx = 0;
>>> +
>>> +    /* We dont want to power down, if QoS is 0 */
>>> +    if (!qos)
>>> +        return false;
>>> +
>>> +    /*
>>> +     * Find the sleep time for the cluster.
>>> +     * The time between now and the first wake up of any CPU that
>>> +     * are in this domain hierarchy is the time available for the
>>> +     * domain to be idle.
>>> +     */
>>> +    earliest = ktime_set(KTIME_SEC_MAX, 0);
>>> +    for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>>
>> We're not worried about hotplug happening in parallel because
>> preemption is disabled here?
>>
> Nope. Hotplug on the same domain or in its hierarchy will be waiting on
> the domain lock to released before becoming online. Any other domain is
> not of concern for this domain governor.
>
> If a core was hotplugged out while this is happening, then we may risk
> making an premature wake up decision, which would happen either way if
> we lock hotplug here.

Ok please make this into a comment in the code.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs
@ 2016-03-01 19:35         ` Stephen Boyd
  0 siblings, 0 replies; 68+ messages in thread
From: Stephen Boyd @ 2016-03-01 19:35 UTC (permalink / raw)
  To: linux-arm-kernel

On 03/01/2016 11:32 AM, Lina Iyer wrote:
> On Fri, Feb 26 2016 at 12:33 -0700, Stephen Boyd wrote:
>> On 02/12, Lina Iyer wrote:
>>> @@ -52,6 +55,76 @@ struct cpu_pm_domain *to_cpu_pd(struct
>>> generic_pm_domain *d)
>>>      return res;
>>>  }
>>>
>>> +static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
>>> +{
>>> +    struct generic_pm_domain *genpd = pd_to_genpd(pd);
>>> +    struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
>>> +    int qos = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
>>> +    u64 sleep_ns;
>>> +    ktime_t earliest, next_wakeup;
>>> +    int cpu;
>>> +    int i;
>>> +
>>> +    /* Reset the last set genpd state, default to index 0 */
>>> +    genpd->state_idx = 0;
>>> +
>>> +    /* We dont want to power down, if QoS is 0 */
>>> +    if (!qos)
>>> +        return false;
>>> +
>>> +    /*
>>> +     * Find the sleep time for the cluster.
>>> +     * The time between now and the first wake up of any CPU that
>>> +     * are in this domain hierarchy is the time available for the
>>> +     * domain to be idle.
>>> +     */
>>> +    earliest = ktime_set(KTIME_SEC_MAX, 0);
>>> +    for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
>>
>> We're not worried about hotplug happening in parallel because
>> preemption is disabled here?
>>
> Nope. Hotplug on the same domain or in its hierarchy will be waiting on
> the domain lock to released before becoming online. Any other domain is
> not of concern for this domain governor.
>
> If a core was hotplugged out while this is happening, then we may risk
> making an premature wake up decision, which would happen either way if
> we lock hotplug here.

Ok please make this into a comment in the code.

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
  2016-02-26 19:43     ` Stephen Boyd
@ 2016-03-01 19:36       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:36 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger

On Fri, Feb 26 2016 at 12:43 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
>> new file mode 100644
>> index 0000000..5fdc66d
>> --- /dev/null
>> +++ b/Documentation/power/cpu_domains.txt
>> @@ -0,0 +1,79 @@
>> +CPU PM domains
>> +==============
>> +
>> +Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
>> +may have caches, VFP and architecture specific power controller that share the
>
>caches, floating point units, and other architecture specific
>hardware that share resources when any of the CPUs are active.
>
All comments addressed.

Thanks,
Lina

>> +resources when any of the CPUs are active. When the CPUs are in idle, some of
>> +these cluster components may also idle. A cluster may also be nested inside
>> +another cluster that provides common coherency interfaces to share data
>> +between the clusters. The organization of such clusters and CPU may be
>> +descibed in DT, since they are SoC specific.
>> +
>> +CPUIdle framework enables the CPUs to determine the sleep time and enter low
>> +power state to save power during periods of idle. CPUs in a cluster may enter
>> +and exit idle state independently of each other. During the time when all the
>> +CPUs are in idle state, the cluster may safely put some of the shared
>> +resources in their idle state. The time between the last CPU to enter idle and
>> +the first CPU to wake up is the time available for the cluster to enter its
>> +idle state.
>> +
>> +When SoCs power down the CPU during cpuidle, they generally have supplemental
>> +hardware that can handshake with the CPU with a signal that indicates that the
>> +CPU has stopped execution. The hardware is also responsible for warm booting
>> +the CPU on receiving an interrupt. With cluster architecture, common resources
>
>In a cluster architecture,
>
>> +that are shared by the cluster may also be powered down by an external
>
>shared by a cluster
>
>> +microcontroller or a processor. The microcontroller may be programmed in
>> +advance to put the hardware blocks in a low power state, when the last active
>> +CPU sends the idle signal. When the signal is received, the microcontroller
>> +may trigger the hardware blocks to enter their low power state. When an
>> +interrupt to wakeup the processor is received, the microcontroller is
>> +responsible for bringing up the hardware blocks to its active state, before
>> +waking up the CPU. The timelines for such operations should be in the
>> +acceptable range for the for CPU idle to get power benefits.
>
>acceptable range for CPU idle to get power benefits.
>
>> +
>> +CPU PM Domain Setup
>> +-------------------
>> +
>> +PM domains  are represented in the DT as domain consumers and providers. A
>
>              ^ extra space here
>
>> +device may have a domain provider and a domain provider may support multiple
>> +domain consumers. Domains like clusters, may also be nested inside one
>> +another. A domain that has no active consumer, may be powered off and any
>> +resuming consumer would trigger the domain back to active. Parent domains may
>> +be powered off when the child domains are powered off. The CPU cluster can be
>> +fashioned as a PM domain. When the CPU devices are powered off, the PM domain
>> +may be powered off.
>> +
>> +Device idle is reference counted by runtime PM. When there is no active need
>> +for the device, runtime PM invokes callbacks to suspend the parent domain.
>> +Generic PM domain (genpd) handles the hierarchy of devices, domains and the
>> +reference counting of objects leading to last man down and first man up in the
>> +domain. The CPU domains helper functions defines PM domains for each CPU
>> +cluster and attaches the CPU devices to the respective PM domains.
>> +
>> +Platform drivers may use the following API to register their CPU PM domains.
>> +
>> +of_setup_cpu_pd() -
>> +Provides a single step registration of the CPU PM domain and attach CPUs to
>> +the genpd. Platform drivers may additionally register callbacks for power_on
>> +and power_off operations for the PM domain.
>> +
>> +of_setup_cpu_pd_single() -
>> +Define PM domain for a single CPU and attach the CPU to its domain.
>> +
>> +
>> +CPU PM Domain governor
>> +----------------------
>> +
>> +CPUs have an unique ability to determine their next wakeup. CPUs may wake up
>
>a unique
>
>> +for known timer interrupts and unknown interrupts from idle. Prediction
>> +algorithms and heuristic based algorithms like the Menu governor for cpuidle
>> +can determine the next wakeup of the CPU. However, determining the wakeup
>> +across a group of CPUs is a tough problem to solve.
>> +
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor
@ 2016-03-01 19:36       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:36 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 12:43 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
>> new file mode 100644
>> index 0000000..5fdc66d
>> --- /dev/null
>> +++ b/Documentation/power/cpu_domains.txt
>> @@ -0,0 +1,79 @@
>> +CPU PM domains
>> +==============
>> +
>> +Newer CPUs are grouped in SoCs as clusters. A cluster in addition to the CPUs
>> +may have caches, VFP and architecture specific power controller that share the
>
>caches, floating point units, and other architecture specific
>hardware that share resources when any of the CPUs are active.
>
All comments addressed.

Thanks,
Lina

>> +resources when any of the CPUs are active. When the CPUs are in idle, some of
>> +these cluster components may also idle. A cluster may also be nested inside
>> +another cluster that provides common coherency interfaces to share data
>> +between the clusters. The organization of such clusters and CPU may be
>> +descibed in DT, since they are SoC specific.
>> +
>> +CPUIdle framework enables the CPUs to determine the sleep time and enter low
>> +power state to save power during periods of idle. CPUs in a cluster may enter
>> +and exit idle state independently of each other. During the time when all the
>> +CPUs are in idle state, the cluster may safely put some of the shared
>> +resources in their idle state. The time between the last CPU to enter idle and
>> +the first CPU to wake up is the time available for the cluster to enter its
>> +idle state.
>> +
>> +When SoCs power down the CPU during cpuidle, they generally have supplemental
>> +hardware that can handshake with the CPU with a signal that indicates that the
>> +CPU has stopped execution. The hardware is also responsible for warm booting
>> +the CPU on receiving an interrupt. With cluster architecture, common resources
>
>In a cluster architecture,
>
>> +that are shared by the cluster may also be powered down by an external
>
>shared by a cluster
>
>> +microcontroller or a processor. The microcontroller may be programmed in
>> +advance to put the hardware blocks in a low power state, when the last active
>> +CPU sends the idle signal. When the signal is received, the microcontroller
>> +may trigger the hardware blocks to enter their low power state. When an
>> +interrupt to wakeup the processor is received, the microcontroller is
>> +responsible for bringing up the hardware blocks to its active state, before
>> +waking up the CPU. The timelines for such operations should be in the
>> +acceptable range for the for CPU idle to get power benefits.
>
>acceptable range for CPU idle to get power benefits.
>
>> +
>> +CPU PM Domain Setup
>> +-------------------
>> +
>> +PM domains  are represented in the DT as domain consumers and providers. A
>
>              ^ extra space here
>
>> +device may have a domain provider and a domain provider may support multiple
>> +domain consumers. Domains like clusters, may also be nested inside one
>> +another. A domain that has no active consumer, may be powered off and any
>> +resuming consumer would trigger the domain back to active. Parent domains may
>> +be powered off when the child domains are powered off. The CPU cluster can be
>> +fashioned as a PM domain. When the CPU devices are powered off, the PM domain
>> +may be powered off.
>> +
>> +Device idle is reference counted by runtime PM. When there is no active need
>> +for the device, runtime PM invokes callbacks to suspend the parent domain.
>> +Generic PM domain (genpd) handles the hierarchy of devices, domains and the
>> +reference counting of objects leading to last man down and first man up in the
>> +domain. The CPU domains helper functions defines PM domains for each CPU
>> +cluster and attaches the CPU devices to the respective PM domains.
>> +
>> +Platform drivers may use the following API to register their CPU PM domains.
>> +
>> +of_setup_cpu_pd() -
>> +Provides a single step registration of the CPU PM domain and attach CPUs to
>> +the genpd. Platform drivers may additionally register callbacks for power_on
>> +and power_off operations for the PM domain.
>> +
>> +of_setup_cpu_pd_single() -
>> +Define PM domain for a single CPU and attach the CPU to its domain.
>> +
>> +
>> +CPU PM Domain governor
>> +----------------------
>> +
>> +CPUs have an unique ability to determine their next wakeup. CPUs may wake up
>
>a unique
>
>> +for known timer interrupts and unknown interrupts from idle. Prediction
>> +algorithms and heuristic based algorithms like the Menu governor for cpuidle
>> +can determine the next wakeup of the CPU. However, determining the wakeup
>> +across a group of CPUs is a tough problem to solve.
>> +
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* Re: [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
  2016-02-26 19:50     ` Stephen Boyd
@ 2016-03-01 19:41       ` Lina Iyer
  -1 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:41 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel, geert,
	k.kozlowski, msivasub, agross, linux-arm-msm, lorenzo.pieralisi,
	ahaslam, mtitinger, devicetree

On Fri, Feb 26 2016 at 12:50 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -101,6 +105,27 @@
>>  		};
>>  	};
>>
>> +	CPU_PD: cpu-pd@0 {
>> +		#power-domain-cells = <0>;
>> +		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
>
>Why isn't this part of the psci node? PSCI is the node that's
>providing the code/logic for the power domain.
>
I like that idea too. 
Lorenzo, what do you think?

>> +	};
>> +
>> +	pd-power-states {
>> +		CLUSTER_RET: power-state@1 {
>> +			state-param = <0x1000010>;
>> +			entry-latency-us = <500>;
>> +			exit-latency-us = <500>;
>> +			residency-us = <2000>;
>> +		 };
>> +
>> +		CLUSTER_PWR_DWN: power-state@2 {
>> +			state-param = <0x1000030>;
>> +			entry-latency-us = <2000>;
>> +			exit-latency-us = <2000>;
>> +			residency-us = <6000>;
>> +		};
>> +	};
>> +
>
>And I would expect these to be put somewhere inside the power
>domain provider as well? Is this documented somewhere?
>
Not yet, they will be, when it is submitted. This the glue patch that I
use on top of Axel's series to read domain states from DT, instead of
defining with the driver.

I have to discuss with Ulf, as to who is submitting that patch.

Thanks,
Lina

>>  	psci {
>>  		compatible = "arm,psci-1.0";
>>  		method = "smc";
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

* [RFC v2 12/12] ARM64: dts: Define CPU power domain for MSM8916
@ 2016-03-01 19:41       ` Lina Iyer
  0 siblings, 0 replies; 68+ messages in thread
From: Lina Iyer @ 2016-03-01 19:41 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 26 2016 at 12:50 -0700, Stephen Boyd wrote:
>On 02/12, Lina Iyer wrote:
>> @@ -101,6 +105,27 @@
>>  		};
>>  	};
>>
>> +	CPU_PD: cpu-pd at 0 {
>> +		#power-domain-cells = <0>;
>> +		power-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
>
>Why isn't this part of the psci node? PSCI is the node that's
>providing the code/logic for the power domain.
>
I like that idea too. 
Lorenzo, what do you think?

>> +	};
>> +
>> +	pd-power-states {
>> +		CLUSTER_RET: power-state at 1 {
>> +			state-param = <0x1000010>;
>> +			entry-latency-us = <500>;
>> +			exit-latency-us = <500>;
>> +			residency-us = <2000>;
>> +		 };
>> +
>> +		CLUSTER_PWR_DWN: power-state at 2 {
>> +			state-param = <0x1000030>;
>> +			entry-latency-us = <2000>;
>> +			exit-latency-us = <2000>;
>> +			residency-us = <6000>;
>> +		};
>> +	};
>> +
>
>And I would expect these to be put somewhere inside the power
>domain provider as well? Is this documented somewhere?
>
Not yet, they will be, when it is submitted. This the glue patch that I
use on top of Axel's series to read domain states from DT, instead of
defining with the driver.

I have to discuss with Ulf, as to who is submitting that patch.

Thanks,
Lina

>>  	psci {
>>  		compatible = "arm,psci-1.0";
>>  		method = "smc";
>
>-- 
>Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
>a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 68+ messages in thread

end of thread, other threads:[~2016-03-01 19:41 UTC | newest]

Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-12 20:50 [RFC v2 00/12] PM: SoC idle support using PM domains Lina Iyer
2016-02-12 20:50 ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 01/12] PM / Domains: Abstract genpd locking Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 18:08   ` Stephen Boyd
2016-02-26 18:08     ` Stephen Boyd
2016-03-01 16:55     ` Lina Iyer
2016-03-01 16:55       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 02/12] PM / Domains: Support IRQ safe PM domains Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 18:17   ` Stephen Boyd
2016-02-26 18:17     ` Stephen Boyd
2016-03-01 17:44     ` Lina Iyer
2016-03-01 17:44       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-17 23:38   ` Lina Iyer
2016-02-17 23:38     ` Lina Iyer
2016-02-18 17:29   ` [BUG FIX] PM / cpu_domains: Check for NULL callbacks Lina Iyer
2016-02-18 17:29     ` Lina Iyer
2016-02-18 17:46     ` Rafael J. Wysocki
2016-02-18 17:46       ` Rafael J. Wysocki
2016-02-18 22:51       ` Lina Iyer
2016-02-18 22:51         ` Lina Iyer
2016-02-26 19:10   ` [RFC v2 03/12] PM / cpu_domains: Setup PM domains for CPUs/clusters Stephen Boyd
2016-02-26 19:10     ` Stephen Boyd
2016-03-01 18:00     ` Lina Iyer
2016-03-01 18:00       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 04/12] ARM: cpuidle: Add runtime PM support for CPUs Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 18:24   ` Stephen Boyd
2016-02-26 18:24     ` Stephen Boyd
2016-03-01 18:36     ` Lina Iyer
2016-03-01 18:36       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 05/12] timer: Export next wake up of a CPU Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 06/12] PM / cpu_domains: Record CPUs that are part of the domain Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 19:20   ` Stephen Boyd
2016-02-26 19:20     ` Stephen Boyd
2016-03-01 19:24     ` Lina Iyer
2016-03-01 19:24       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 07/12] PM / cpu_domains: Add PM Domain governor for CPUs Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 19:33   ` Stephen Boyd
2016-02-26 19:33     ` Stephen Boyd
2016-03-01 19:32     ` Lina Iyer
2016-03-01 19:32       ` Lina Iyer
2016-03-01 19:35       ` Stephen Boyd
2016-03-01 19:35         ` Stephen Boyd
2016-02-12 20:50 ` [RFC v2 08/12] Documentation / cpu_domains: Describe CPU PM domains setup and governor Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 19:43   ` Stephen Boyd
2016-02-26 19:43     ` Stephen Boyd
2016-03-01 19:36     ` Lina Iyer
2016-03-01 19:36       ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 09/12] drivers: firmware: psci: Allow OS Initiated suspend mode Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 10/12] ARM64: psci: Support cluster idle states for OS-Initiated Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 11/12] ARM64: dts: Add PSCI cpuidle support for MSM8916 Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-12 20:50 ` [RFC v2 12/12] ARM64: dts: Define CPU power domain " Lina Iyer
2016-02-12 20:50   ` Lina Iyer
2016-02-26 19:50   ` Stephen Boyd
2016-02-26 19:50     ` Stephen Boyd
2016-03-01 19:41     ` Lina Iyer
2016-03-01 19:41       ` Lina Iyer

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.