* [PATCH v4 00/16] PM: SoC idle support using PM domains
@ 2016-08-25 20:03 ` Lina Iyer
  0 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Hi all,

Changes since v3 [7]:
- Mostly refactoring and reorganization, no functional changes.
- Refactored DT support for CPU PM domains into a separate patch.
  (Suggested by Ulf)
- Reorganized the domain idle state DT binding, to be more in line
  with the discussions that followed the last patch submission.
  (Thanks Brendan, Sudeep, Lorenzo for some really good discussions.)

Changes since v2 [5]:
- Update PSCI documentation to define OS-Initiated PM domains.
- Nifty updates and bug fixes. Thanks Brendan!
- Define PSCI PM domains under psci node in 8916 DT.
- Documentation updates for domain definitions.
- Updated series is at [4].

Changes since v1 [6]:
- Use arm,idle-state as the DT binding for domain idle states.
- OS-Initiated changes to support that and to read arm,psci-suspend-param.
  (Thanks Mark Rutland and Kevin Hilman)
- tick_nohz_get_next_wakeup() - suggestions from Thomas Gleixner.
- The updated series is at [3].

Changes since RFC-v3 [1]:
- Reorganize the patches. Documentation changes have their own patch.
- Moved the PSCI OS-Initiated code around so it does not cause compiler
  errors in other configurations.
- Minor bug fixes with genpd power_on functionality.
- Rebased on top of 4.7-rc1

This is the submission of SoC idle support in the kernel for CPU domains
using genpd. The patches were submitted as RFCs earlier; the last of them
is [1]. Since the RFC, multiple discussions have taken place around making
the patches generic across all architectures. For now, the patches address
the needs of the ARM community, but they can certainly be extended to
support other architectures. One of the limitations in making these patches
generic is the lack of a device idle state description in the DT, but that
in itself is a bigger topic for a future discussion.

The patches have been tested on the DragonBoard 410c and MTK EVB boards.
Both show good power savings when used with OS-Initiated PSCI firmware.

This entire series can be found at [8].

Thanks,
Lina

[1]. http://lists.infradead.org/pipermail/linux-arm-kernel/2016-March/412934.html
[2]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v1
[3]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v2
[4]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v3
[5]. https://lwn.net/Articles/695987/
[6]. https://lwn.net/Articles/675674/
[7]. http://www.spinics.net/lists/arm-kernel/msg522021.html
[8]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v4

Axel Haslam (2):
  PM / Domains: Allow domain power states to be read from DT
  dt/bindings: Update binding for PM domain idle states

Lina Iyer (14):
  PM / Domains: Abstract genpd locking
  PM / Domains: Support IRQ safe PM domains
  PM / doc: Update device documentation for devices in IRQ safe PM
    domains
  PM / cpu_domains: Setup PM domains for CPUs/clusters
  PM / cpu_domains: Initialize CPU PM domains from DT
  ARM: cpuidle: Add runtime PM support for CPUs
  timer: Export next wake up of a CPU
  PM / cpu_domains: Add PM Domain governor for CPUs
  doc / cpu_domains: Describe CPU PM domains setup and governor
  drivers: firmware: psci: Allow OS Initiated suspend mode
  drivers: firmware: psci: Support cluster idle states for OS-Initiated
  dt/bindings: Add PSCI OS-Initiated PM Domains bindings
  ARM64: dts: Add PSCI cpuidle support for MSM8916
  ARM64: dts: Define CPU power domain for MSM8916

 Documentation/devicetree/bindings/arm/psci.txt     |  79 ++++
 .../devicetree/bindings/power/power_domain.txt     |  57 +++
 Documentation/power/cpu_domains.txt                | 109 +++++
 Documentation/power/devices.txt                    |  12 +-
 arch/arm64/boot/dts/qcom/msm8916.dtsi              |  49 +++
 drivers/base/power/Makefile                        |   1 +
 drivers/base/power/cpu_domains.c                   | 459 +++++++++++++++++++++
 drivers/base/power/domain.c                        | 308 ++++++++++++--
 drivers/cpuidle/cpuidle-arm.c                      |  55 +++
 drivers/firmware/psci.c                            | 135 +++++-
 include/linux/cpu_domains.h                        |  67 +++
 include/linux/pm_domain.h                          |  24 +-
 include/linux/tick.h                               |   7 +
 include/uapi/linux/psci.h                          |   5 +
 kernel/time/tick-sched.c                           |  11 +
 15 files changed, 1314 insertions(+), 64 deletions(-)
 create mode 100644 Documentation/power/cpu_domains.txt
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

-- 
2.7.4



* [PATCH v4 01/16] PM / Domains: Allow domain power states to be read from DT
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Axel Haslam,
	Marc Titinger, Lina Iyer

From: Axel Haslam <ahaslam+renesas@baylibre.com>

This patch allows domains to define idle states in the DT. SoCs can
define domain idle states in DT using the "domain-idle-states" property
of the domain provider. Calling pm_genpd_init() will read the idle
states and initialize the genpd for the domain.

In addition to the entry and exit latencies for an idle state, also add
residency_ns, param and of_node properties to each state. A domain idling
in a state is only power efficient if it stays idle for a certain period
in that state. The residency provides this minimum time for the idle
state to yield power benefits. The param is a state-specific u32 value
that the platform may use for that idle state.
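
A minimal usage sketch, assuming a hypothetical platform driver and domain
name (nothing below is taken verbatim from this patch): the driver points
genpd->of_node at its domain provider node and lets pm_genpd_init() pick up
the states listed in "domain-idle-states":

	#include <linux/of.h>
	#include <linux/pm_domain.h>

	static struct generic_pm_domain my_pd = {
		.name = "my_pd",		/* hypothetical domain */
	};

	static int my_pd_setup(struct device_node *np)
	{
		int ret;

		/* Provider node carrying the "domain-idle-states" phandles */
		my_pd.of_node = np;

		/* Reads the idle states from DT as part of initialization */
		ret = pm_genpd_init(&my_pd, &simple_qos_governor, false);
		if (ret)
			return ret;

		return of_genpd_add_provider_simple(np, &my_pd);
	}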

Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Lina: Added state properties, removed state names, wakeup-latency,
added of_pm_genpd_init() API, pruned commit text]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Ulf: Moved around code to make it compile properly, rebased on top of multiple
state support,changed to use pm_genpd_init()]
---
 drivers/base/power/domain.c | 92 ++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   | 11 +++++-
 2 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index a1f2aff..3aecac3 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1253,6 +1253,90 @@ out:
 }
 EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
 
+static const struct of_device_id arm_idle_state_match[] = {
+	{ .compatible = "arm,idle-state", },
+	{ }
+};
+
+static int genpd_of_get_power_state(struct genpd_power_state *genpd_state,
+				    struct device_node *state_node)
+{
+	int err = 0;
+	u32 latency;
+	u32 residency;
+	u32 entry_latency, exit_latency;
+	const struct of_device_id *match_id;
+
+	match_id = of_match_node(arm_idle_state_match, state_node);
+	if (!match_id)
+		return -EINVAL;
+
+	err = of_property_read_u32(state_node, "entry-latency-us",
+				   &entry_latency);
+	if (err) {
+		pr_debug(" * %s missing entry-latency-us property\n",
+			 state_node->full_name);
+		return -EINVAL;
+	}
+
+	err = of_property_read_u32(state_node, "exit-latency-us",
+				   &exit_latency);
+	if (err) {
+		pr_debug(" * %s missing exit-latency-us property\n",
+			 state_node->full_name);
+		return -EINVAL;
+	}
+
+	err = of_property_read_u32(state_node, "min-residency-us", &residency);
+	if (!err)
+		genpd_state->residency_ns = 1000 * residency;
+
+	latency = entry_latency + exit_latency;
+	genpd_state->power_on_latency_ns = 1000 * latency;
+	genpd_state->power_off_latency_ns = 1000 * entry_latency;
+	genpd_state->of_node = state_node;
+
+	return 0;
+}
+
+int pm_genpd_of_parse_power_states(struct generic_pm_domain *genpd)
+{
+	struct device_node *np;
+	int i, err = 0;
+
+	for (i = 0; i < GENPD_MAX_NUM_STATES; i++) {
+		np = of_parse_phandle(genpd->of_node, "domain-idle-states", i);
+		if (!np)
+			break;
+
+		err = genpd_of_get_power_state(&genpd->states[i], np);
+		if (err) {
+			pr_err
+			    ("Parsing idle state node %s failed with err %d\n",
+			     np->full_name, err);
+			err = -EINVAL;
+			of_node_put(np);
+			break;
+		}
+		of_node_put(np);
+	}
+
+	if (err)
+		return err;
+
+	genpd->state_count = i;
+	return 0;
+}
+EXPORT_SYMBOL(pm_genpd_of_parse_power_states);
+
+static int genpd_of_parse(struct generic_pm_domain *genpd)
+{
+	if (!genpd->of_node || (genpd->state_count > 0))
+		return 0;
+
+	return pm_genpd_of_parse_power_states(genpd);
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1262,8 +1346,10 @@ EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
  * Returns 0 on successful initialization, else a negative error code.
  */
 int pm_genpd_init(struct generic_pm_domain *genpd,
-		  struct dev_power_governor *gov, bool is_off)
+		   struct dev_power_governor *gov, bool is_off)
 {
+	int ret;
+
 	if (IS_ERR_OR_NULL(genpd))
 		return -EINVAL;
 
@@ -1306,6 +1392,10 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 		genpd->dev_ops.start = pm_clk_resume;
 	}
 
+	ret = genpd_of_parse(genpd);
+	if (ret)
+		return ret;
+
 	if (genpd->state_idx >= GENPD_MAX_NUM_STATES) {
 		pr_warn("Initial state index out of bounds.\n");
 		genpd->state_idx = GENPD_MAX_NUM_STATES - 1;
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 31fec85..c5d14b9 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -40,6 +40,9 @@ struct gpd_dev_ops {
 struct genpd_power_state {
 	s64 power_off_latency_ns;
 	s64 power_on_latency_ns;
+	s64 residency_ns;
+	u32 param;
+	struct device_node *of_node;
 };
 
 struct generic_pm_domain {
@@ -51,6 +54,7 @@ struct generic_pm_domain {
 	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
+	struct device_node *of_node;	/* Device node of the PM domain */
 	const char *name;
 	atomic_t sd_count;	/* Number of subdomains with power "on" */
 	enum gpd_status status;	/* Current state of the domain */
@@ -129,7 +133,7 @@ extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 				     struct generic_pm_domain *target);
 extern int pm_genpd_init(struct generic_pm_domain *genpd,
 			 struct dev_power_governor *gov, bool is_off);
-
+extern int pm_genpd_of_parse_power_states(struct generic_pm_domain *genpd);
 extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
 #else
@@ -168,6 +172,11 @@ static inline int pm_genpd_init(struct generic_pm_domain *genpd,
 {
 	return -ENOSYS;
 }
+static inline int pm_genpd_of_parse_power_states(
+				struct generic_pm_domain *genpd)
+{
+	return -ENODEV;
+}
 #endif
 
 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
-- 
2.7.4


* [PATCH v4 02/16] dt/bindings: Update binding for PM domain idle states
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: devicetree, lorenzo.pieralisi, Juri.Lelli, linux-arm-msm, sboyd,
	Axel Haslam, Marc Titinger, brendan.jackman, sudeep.holla,
	andy.gross, Lina Iyer

From: Axel Haslam <ahaslam+renesas@baylibre.com>

Update DT bindings to describe idle states of PM domains.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Lina: Added state properties, removed state names, wakeup-latency,
added of_pm_genpd_init() API, pruned commit text]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
---
 .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
index 025b5e7..4960486 100644
--- a/Documentation/devicetree/bindings/power/power_domain.txt
+++ b/Documentation/devicetree/bindings/power/power_domain.txt
@@ -29,6 +29,10 @@ Optional properties:
    specified by this binding. More details about power domain specifier are
    available in the next section.
 
+- domain-idle-states : A phandle of an idle-state that shall be soaked into a
+                generic domain power state. The idle state definitions are
+                compatible with arm,idle-state specified in [1].
+
 Example:
 
 	power: power-controller@12340000 {
@@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
 Domains created by the 'child' power controller are subdomains of '0' power
 domain provided by the 'parent' power controller.
 
+Example 3: ARM v7 style CPU PM domains (Linux domain controller)
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a7", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a7_pd>;
+		};
+
+		CPU1: cpu@1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a15", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a15_pd>;
+		};
+	};
+
+	pm-domains {
+		a15_pd: a15_pd {
+			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a15";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>;
+		};
+
+		a7_pd: a7_pd {
+			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a7";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
+		};
+
+		CLUSTER_SLEEP_0: state0 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <1000>;
+			exit-latency-us = <2000>;
+			min-residency-us = <10000>;
+		};
+
+		CLUSTER_SLEEP_1: state1 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <5000>;
+			exit-latency-us = <5000>;
+			min-residency-us = <100000>;
+		};
+	};
+
 ==PM domain consumers==
 
 Required properties:
@@ -76,3 +131,5 @@ Example:
 The node above defines a typical PM domain consumer device, which is located
 inside a PM domain with index 0 of a power controller represented by a node
 with the label "power".
+
+[1]. Documentation/devicetree/bindings/arm/idle-states.txt
-- 
2.7.4


* [PATCH v4 03/16] PM / Domains: Abstract genpd locking
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Abstract genpd lock/unlock calls, in preparation for domain specific
locks added in the following patches.
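
A minimal sketch of how call sites look after this change (the example
function name is made up; genpd_lock() and genpd_unlock() dispatch through
the per-domain lock functions installed at init time):

	static void example_mark_off_time_changed(struct generic_pm_domain *genpd)
	{
		genpd_lock(genpd);			/* genpd->lock_fns->lock() */
		genpd->max_off_time_changed = true;	/* data guarded by the domain lock */
		genpd_unlock(genpd);			/* genpd->lock_fns->unlock() */
	}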

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased as additional mutex_lock|unlock has been added]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/base/power/domain.c | 113 ++++++++++++++++++++++++++++++--------------
 include/linux/pm_domain.h   |   5 +-
 2 files changed, 81 insertions(+), 37 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 3aecac3..ce1dbfdd 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -39,6 +39,46 @@
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+struct genpd_lock_fns {
+	void (*lock)(struct generic_pm_domain *genpd);
+	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
+	int (*lock_interruptible)(struct generic_pm_domain *genpd);
+	void (*unlock)(struct generic_pm_domain *genpd);
+};
+
+static void genpd_lock_mtx(struct generic_pm_domain *genpd)
+{
+	mutex_lock(&genpd->mlock);
+}
+
+static void genpd_lock_nested_mtx(struct generic_pm_domain *genpd,
+					int depth)
+{
+	mutex_lock_nested(&genpd->mlock, depth);
+}
+
+static int genpd_lock_interruptible_mtx(struct generic_pm_domain *genpd)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static void genpd_unlock_mtx(struct generic_pm_domain *genpd)
+{
+	return mutex_unlock(&genpd->mlock);
+}
+
+static const struct genpd_lock_fns genpd_mtx_fns  = {
+	.lock = genpd_lock_mtx,
+	.lock_nested = genpd_lock_nested_mtx,
+	.lock_interruptible = genpd_lock_interruptible_mtx,
+	.unlock = genpd_unlock_mtx,
+};
+
+#define genpd_lock(p)			p->lock_fns->lock(p)
+#define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
+#define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
+#define genpd_unlock(p)			p->lock_fns->unlock(p)
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -200,9 +240,9 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
 
 		genpd_sd_counter_inc(master);
 
-		mutex_lock_nested(&master->lock, depth + 1);
+		genpd_lock_nested(master, depth + 1);
 		ret = genpd_poweron(master, depth + 1);
-		mutex_unlock(&master->lock);
+		genpd_unlock(master);
 
 		if (ret) {
 			genpd_sd_counter_dec(master);
@@ -255,9 +295,9 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock(genpd);
 		}
 
 		dev = dev->parent;
@@ -354,9 +394,9 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, true);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -472,9 +512,9 @@ static int genpd_runtime_suspend(struct device *dev)
 	if (dev->power.irq_safe)
 		return 0;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, false);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -509,9 +549,9 @@ static int genpd_runtime_resume(struct device *dev)
 		goto out;
 	}
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = genpd_poweron(genpd, 0);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		return ret;
@@ -547,9 +587,9 @@ err_stop:
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
 	if (!dev->power.irq_safe) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 		genpd_poweroff(genpd, 0);
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 	}
 
 	return ret;
@@ -732,20 +772,20 @@ static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0)
 		genpd->suspended_count = 0;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 
 		genpd->prepared_count--;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 	}
 
 	return ret;
@@ -936,13 +976,13 @@ static void pm_genpd_complete(struct device *dev)
 
 	pm_generic_complete(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	genpd->prepared_count--;
 	if (!genpd->prepared_count)
 		genpd_queue_power_off_work(genpd);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -1077,7 +1117,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1094,7 +1134,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1127,7 +1167,7 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1142,14 +1182,14 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1175,8 +1215,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1199,8 +1239,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1221,8 +1261,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (!list_empty(&subdomain->master_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1246,8 +1286,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	}
 
 out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 
 	return ret;
 }
@@ -1356,7 +1396,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	mutex_init(&genpd->mlock);
+	genpd->lock_fns = &genpd_mtx_fns;
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1714,9 +1755,9 @@ int genpd_dev_pm_attach(struct device *dev)
 	dev->pm_domain->detach = genpd_dev_pm_detach;
 	dev->pm_domain->sync = genpd_dev_pm_sync;
 
-	mutex_lock(&pd->lock);
+	genpd_lock(pd);
 	ret = genpd_poweron(pd, 0);
-	mutex_unlock(&pd->lock);
+	genpd_unlock(pd);
 out:
 	return ret ? -EPROBE_DEFER : 0;
 }
@@ -1774,7 +1815,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	char state[16];
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
 
@@ -1811,7 +1852,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index c5d14b9..d37bf11 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -45,13 +45,14 @@ struct genpd_power_state {
 	struct device_node *of_node;
 };
 
+struct genpd_lock_fns;
+
 struct generic_pm_domain {
 	struct dev_pm_domain domain;	/* PM domain operations */
 	struct list_head gpd_list_node;	/* Node in the global PM domains list */
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	struct device_node *of_node;	/* Device node of the PM domain */
@@ -75,6 +76,8 @@ struct generic_pm_domain {
 	struct genpd_power_state states[GENPD_MAX_NUM_STATES];
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
+	const struct genpd_lock_fns *lock_fns;
+	struct mutex mlock;
 
 };
 
-- 
2.7.4


* [PATCH v4 04/16] PM / Domains: Support IRQ safe PM domains
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Generic Power Domains currently support turning on/off only in process
context. This prevents the use of PM domains for domains that could be
powered on/off in a context where IRQs are disabled. Many such domains
exist today and, because of this limitation, do not get powered off when
the IRQ safe devices in them are powered off.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore has to support both cases, where the domain may or may not
operate in an IRQ safe context. Configuring genpd to use an appropriate
lock for the domain allows domains that have IRQ safe devices to runtime
suspend and resume in atomic context.

To achieve domain specific locking, set the domain's ->flags to
GENPD_FLAG_IRQ_SAFE while defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain. Locking
is abstracted through genpd_lock() and genpd_unlock() functions that use
the flag to determine the appropriate lock to be used for that domain.

Domains that have lower latency to suspend and resume and can operate
with IRQs disabled may now be able to save power when the component
devices and sub-domains are idle at runtime.

The restriction this imposes on the domain hierarchy is that non-IRQ
safe domains may not have IRQ safe subdomains, but IRQ safe domains may
have both IRQ safe and non-IRQ safe subdomains and devices.
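
A minimal sketch, with hypothetical domain and callback names, of how a
platform would mark its domain IRQ safe so that genpd picks the spinlock
based locking introduced here:

	static struct generic_pm_domain my_cpu_pd = {
		.name = "my_cpu_pd",
		.flags = GENPD_FLAG_IRQ_SAFE,		/* lock with a spinlock */
		.power_on = my_cpu_pd_power_on,		/* must not sleep */
		.power_off = my_cpu_pd_power_off,	/* must not sleep */
	};

	/* Later, typically at probe time */
	pm_genpd_init(&my_cpu_pd, &simple_qos_governor, false);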

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased and solved a conflict]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/base/power/domain.c | 107 +++++++++++++++++++++++++++++++++++++++-----
 include/linux/pm_domain.h   |  10 ++++-
 2 files changed, 106 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index ce1dbfdd..4fc5688 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -74,11 +74,61 @@ static const struct genpd_lock_fns genpd_mtx_fns  = {
 	.unlock = genpd_unlock_mtx,
 };
 
+static void genpd_lock_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+}
+
+static void genpd_lock_nested_spin(struct generic_pm_domain *genpd,
+					int depth)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
+	genpd->lock_flags = flags;
+}
+
+static int genpd_lock_interruptible_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_spin(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static const struct genpd_lock_fns genpd_spin_fns = {
+	.lock = genpd_lock_spin,
+	.lock_nested = genpd_lock_nested_spin,
+	.lock_interruptible = genpd_lock_interruptible_spin,
+	.unlock = genpd_unlock_spin,
+};
+
 #define genpd_lock(p)			p->lock_fns->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_fns->unlock(p)
 
+#define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
+
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	return pm_runtime_is_irq_safe(dev) && !genpd_is_irq_safe(genpd);
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -343,7 +393,12 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * Do not allow PM domain to be powered off, when an IRQ safe
+		 * device is part of a non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
 	}
 
@@ -506,10 +561,10 @@ static int genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQs disabled, so suspend only if the PM domain also is irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -543,8 +598,11 @@ static int genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we don't power off a non IRQ safe domain, which holds
+	 * an IRQ safe device, we don't need to restore power to it.
+	 */
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd)) {
 		timed = false;
 		goto out;
 	}
@@ -586,7 +644,8 @@ static int genpd_runtime_resume(struct device *dev)
 err_stop:
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
-	if (!dev->power.irq_safe) {
+	if (!dev->power.irq_safe ||
+		(dev->power.irq_safe && genpd_is_irq_safe(genpd))) {
 		genpd_lock(genpd);
 		genpd_poweroff(genpd, 0);
 		genpd_unlock(genpd);
@@ -1117,6 +1176,11 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
+	/* Check if we are adding an IRQ safe device to non-IRQ safe domain */
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
+		dev_warn_once(dev, "PM domain %s will not be powered off\n",
+				genpd->name);
+
 	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
@@ -1211,6 +1275,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (!genpd_is_irq_safe(genpd) && genpd_is_irq_safe(subdomain)) {
+		WARN("Parent %s of subdomain %s must be IRQ safe\n",
+				genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -1377,6 +1452,17 @@ static int genpd_of_parse(struct generic_pm_domain *genpd)
 	return pm_genpd_of_parse_power_states(genpd);
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->lock_fns = &genpd_spin_fns;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->lock_fns = &genpd_mtx_fns;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1396,8 +1482,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->mlock);
-	genpd->lock_fns = &genpd_mtx_fns;
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1841,7 +1926,9 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd_is_irq_safe(genpd) ?
+				GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
 
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index d37bf11..688dc57 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 #define GENPD_MAX_NUM_STATES	8 /* Number of possible low power states */
 
@@ -77,7 +79,13 @@ struct generic_pm_domain {
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	const struct genpd_lock_fns *lock_fns;
-	struct mutex mlock;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 
 };
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 05/16] PM / doc: Update device documentation for devices in IRQ safe PM domains
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Update documentation to reflect the changes made to support IRQ safe PM
domains.
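
As a rough illustration of the rules added to devices.txt below, the two
sides would be marked along these lines (a sketch only; my_pd, my_pd_setup
and dev are placeholder names, not part of this series):

	/* Domain side: GENPD_FLAG_IRQ_SAFE selects spinlock based locking */
	static struct generic_pm_domain my_pd = {
		.name	= "my_irq_safe_pd",
		.flags	= GENPD_FLAG_IRQ_SAFE,
	};

	static int my_pd_setup(struct device *dev)
	{
		int ret;

		ret = pm_genpd_init(&my_pd, &simple_qos_governor, false);
		if (ret)
			return ret;

		/* Device side: its runtime PM callbacks may run with IRQs off */
		pm_runtime_irq_safe(dev);

		return 0;
	}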

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/power/devices.txt | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 8ba6625..a622136 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -607,7 +607,17 @@ individually.  Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource.  Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on.  A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain.
+
+Devices, by default, operate in process context.  If a device's runtime PM
+callbacks may be invoked with IRQs disabled, that has to be explicitly
+indicated by the driver using pm_runtime_irq_safe().  PM domains, by default,
+also operate in process context, but could contain devices that are IRQ safe.
+Such power domains cannot be powered off during runtime PM.  On the other
+hand, PM domains marked IRQ safe (GENPD_FLAG_IRQ_SAFE) may be powered off
+when all their devices are idle.  An IRQ safe domain may only be attached as
+a subdomain to another IRQ safe domain.
 
 Support for power domains is provided through the pm_domain field of struct
 device.  This field is a pointer to an object of type struct dev_pm_domain,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 06/16] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Define and add Generic PM domains (genpd) for CPU clusters. Many new
SoCs group CPUs as clusters. Clusters share common resources like power
rails, caches, VFP, Coresight etc. When all CPUs in the cluster are
idle, these shared resources may also be put in their idle state.

CPUs may be associated with their domain providers. The domains in
turn may be associated with their own providers. This is a clean way to
model the cluster hierarchy, like that of ARM's big.LITTLE architecture.

Platform drivers may initialize generic PM domains, set them up as CPU PM
domains and attach CPUs to them. In the following patches, the CPUs are
hooked up to the runtime PM framework, which helps power down the domain
when all the CPUs in the domain are idle.
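
For example, a platform driver could wire a cluster domain up along these
lines (a sketch only; the my_cluster_* names, my_cluster_pd_setup and
cluster_genpd are placeholders for whatever the platform provides):

	static int my_cluster_power_off(u32 state_idx, u32 param,
					const struct cpumask *cpus)
	{
		/* Program the cluster idle state in the platform firmware */
		return 0;
	}

	static int my_cluster_power_on(void)
	{
		/* Restore the cluster to its active state */
		return 0;
	}

	static const struct cpu_pd_ops my_cluster_ops = {
		.power_off	= my_cluster_power_off,
		.power_on	= my_cluster_power_on,
	};

	static int my_cluster_pd_setup(struct generic_pm_domain *cluster_genpd)
	{
		struct generic_pm_domain *genpd;
		int cpu;

		/* cluster_genpd and its idle states were set up by the platform */
		genpd = cpu_pd_init(cluster_genpd, &my_cluster_ops);
		if (IS_ERR(genpd))
			return PTR_ERR(genpd);

		for_each_possible_cpu(cpu)
			cpu_pd_attach_cpu(genpd, cpu);

		return 0;
	}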

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Suggested-by: Kevin Hilman <khilman@kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/Makefile      |   1 +
 drivers/base/power/cpu_domains.c | 191 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  49 ++++++++++
 3 files changed, 241 insertions(+)
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 5998c53..9883e89 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp/
 obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o domain_governor.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS_OF)	+= cpu_domains.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 
 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
new file mode 100644
index 0000000..73e493b
--- /dev/null
+++ b/drivers/base/power/cpu_domains.c
@@ -0,0 +1,191 @@
+/*
+ * drivers/base/power/cpu_domains.c - Helper functions to create CPU PM domains.
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_domains.h>
+#include <linux/cpu_pm.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/pm_domain.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
+#include <linux/slab.h>
+
+#define CPU_PD_NAME_MAX 36
+
+struct cpu_pm_domain {
+	struct list_head link;
+	struct cpu_pd_ops ops;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *parent;
+	cpumask_var_t cpus;
+};
+
+/* List of CPU PM domains we care about */
+static LIST_HEAD(of_cpu_pd_list);
+static DEFINE_MUTEX(cpu_pd_list_lock);
+
+static inline struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
+{
+	struct cpu_pm_domain *pd;
+	struct cpu_pm_domain *res = NULL;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(pd, &of_cpu_pd_list, link)
+		if (pd->genpd == d) {
+			res = pd;
+			break;
+		}
+	rcu_read_unlock();
+
+	return res;
+}
+
+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_on ? pd->ops.power_on() : 0;
+}
+
+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_off ? pd->ops.power_off(genpd->state_idx,
+					genpd->states[genpd->state_idx].param,
+					pd->cpus) : 0;
+}
+
+/**
+ * cpu_pd_attach_domain:  Attach a child CPU PM domain to its parent
+ *
+ * @parent: The parent generic PM domain
+ * @child: The child generic PM domain
+ *
+ * Generally, the child PM domain is the one to which CPUs are attached.
+ */
+int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child)
+{
+	struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
+	int ret;
+
+	ret = pm_genpd_add_subdomain(parent, child);
+	if (ret) {
+		pr_err("%s: Unable to add sub-domain (%s) to %s, err=%d\n",
+				__func__, child->name, parent->name, ret);
+		return ret;
+	}
+
+	cpu_pd = to_cpu_pd(child);
+	parent_cpu_pd = to_cpu_pd(parent);
+
+	if (cpu_pd && parent_cpu_pd)
+		cpu_pd->parent = parent_cpu_pd;
+
+	return ret;
+}
+EXPORT_SYMBOL(cpu_pd_attach_domain);
+
+/**
+ * cpu_pd_attach_cpu:  Attach a CPU to its CPU PM domain.
+ *
+ * @genpd: The parent generic PM domain
+ * @cpu: The CPU number
+ */
+int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
+{
+	int ret;
+	struct device *cpu_dev;
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev) {
+		pr_warn("%s: Unable to get device for CPU%d\n",
+				__func__, cpu);
+		return -ENODEV;
+	}
+
+	ret = genpd_dev_pm_attach(cpu_dev);
+	if (ret)
+		dev_warn(cpu_dev,
+			"%s: Unable to attach to power-domain: %d\n",
+			__func__, ret);
+	else
+		dev_dbg(cpu_dev, "Attached to domain\n");
+
+	while (!ret && cpu_pd) {
+		cpumask_set_cpu(cpu, cpu_pd->cpus);
+		cpu_pd = cpu_pd->parent;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(cpu_pd_attach_cpu);
+
+/**
+ * cpu_pd_init: Initialize a CPU PM domain for a genpd
+ *
+ * @genpd: The initialized generic PM domain object.
+ * @ops: The power_on/power_off ops for the domain controller.
+ *
+ * Initialize a CPU PM domain based on a generic PM domain. The platform driver
+ * is expected to setup the genpd object and the states associated with the
+ * generic PM domain, before calling this function.
+ */
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops)
+{
+	int ret = -ENOMEM;
+	struct cpu_pm_domain *pd;
+
+	if (IS_ERR_OR_NULL(genpd))
+		return ERR_PTR(-EINVAL);
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		goto fail;
+
+	if (!zalloc_cpumask_var(&pd->cpus, GFP_KERNEL))
+		goto fail;
+
+	genpd->power_off = cpu_pd_power_off;
+	genpd->power_on = cpu_pd_power_on;
+	genpd->flags |= GENPD_FLAG_IRQ_SAFE;
+	pd->genpd = genpd;
+	pd->ops.power_on = ops->power_on;
+	pd->ops.power_off = ops->power_off;
+
+	INIT_LIST_HEAD_RCU(&pd->link);
+	mutex_lock(&cpu_pd_list_lock);
+	list_add_rcu(&pd->link, &of_cpu_pd_list);
+	mutex_unlock(&cpu_pd_list_lock);
+
+	ret = pm_genpd_init(genpd, &simple_qos_governor, false);
+	if (ret) {
+		pr_err("Unable to initialize domain %s\n", genpd->name);
+		goto fail;
+	}
+
+	pr_debug("adding %s as CPU PM domain\n", pd->genpd->name);
+
+	return genpd;
+fail:
+	kfree(genpd->name);
+	kfree(genpd);
+	if (pd)
+		kfree(pd->cpus);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(cpu_pd_init);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
new file mode 100644
index 0000000..3a0a027
--- /dev/null
+++ b/include/linux/cpu_domains.h
@@ -0,0 +1,49 @@
+/*
+ * include/linux/cpu_domains.h
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CPU_DOMAINS_H__
+#define __CPU_DOMAINS_H__
+
+#include <linux/types.h>
+
+struct cpumask;
+
+struct cpu_pd_ops {
+	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
+	int (*power_on)(void);
+};
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS
+
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops);
+
+int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child);
+
+int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu);
+
+#else
+
+static inline
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops)
+{ return ERR_PTR(-ENODEV); }
+
+static inline int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child)
+{ return -ENODEV; }
+
+static inline int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
+{ return -ENODEV; }
+
+#endif /* CONFIG_PM_GENERIC_DOMAINS */
+
+#endif /* __CPU_DOMAINS_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 07/16] PM / cpu_domains: Initialize CPU PM domains from DT
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Add helper functions to parse DT and initialize the CPU PM domains and
attach CPUs to their respective domains using information provided in the
DT.

For each CPU in the DT, we identify the domain provider; initialize and
register the PM domain if it isn't already registered, and attach all the
CPU devices to the domain. Usually, when there are multiple clusters of
CPUs, there is a top level coherency domain that is dependent on these
individual domains. All domains thus created are marked IRQ safe
automatically and therefore may be powered down when the CPUs in the
domain are powered down by cpuidle.
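
A sketch of the intended usage from a platform driver (my_parse_state,
my_cpu_pd_setup and the my_cluster_* callbacks are placeholders, reusing
the names from the sketch in patch 06/16; "arm,psci-suspend-param" is just
one example of a per-state property a platform may want to read):

	static int my_parse_state(struct device_node *state_node, u32 *param)
	{
		/* Read the platform specific parameter for this idle state */
		return of_property_read_u32(state_node,
					    "arm,psci-suspend-param", param);
	}

	static const struct cpu_pd_ops my_pd_ops = {
		.populate_state_data	= my_parse_state,
		.power_off		= my_cluster_power_off,
		.power_on		= my_cluster_power_on,
	};

	static int my_cpu_pd_setup(void)
	{
		/* Creates the domains from DT and attaches every possible CPU */
		return of_setup_cpu_pd(&my_pd_ops);
	}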

Cc: Kevin Hilman <khilman@kernel.org>
Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 190 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  18 ++++
 2 files changed, 208 insertions(+)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 73e493b..8bf61e2 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -15,6 +15,7 @@
 #include <linux/device.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
+#include <linux/of.h>
 #include <linux/pm_domain.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
@@ -189,3 +190,192 @@ fail:
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(cpu_pd_init);
+
+static struct generic_pm_domain *alloc_genpd(const char *name)
+{
+	struct generic_pm_domain *genpd;
+
+	genpd = kzalloc(sizeof(*genpd), GFP_KERNEL);
+	if (!genpd)
+		return ERR_PTR(-ENOMEM);
+
+	genpd->name = kstrndup(name, CPU_PD_NAME_MAX, GFP_KERNEL);
+	if (!genpd->name) {
+		kfree(genpd);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return genpd;
+}
+
+/**
+ * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
+ *
+ * @dn: The domain provider's device node
+ * @ops: The power_on/_off callbacks for the domain
+ *
+ * Returns the generic_pm_domain (genpd) pointer to the domain on success
+ */
+static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
+				const struct cpu_pd_ops *ops)
+{
+	struct cpu_pm_domain *pd = NULL;
+	struct generic_pm_domain *genpd = NULL;
+	int ret = -ENOMEM;
+
+	if (!of_device_is_available(dn))
+		return ERR_PTR(-ENODEV);
+
+	genpd = alloc_genpd(dn->full_name);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	genpd->of_node = dn;
+
+	/* Populate platform specific states from DT */
+	if (ops->populate_state_data) {
+		int i;
+
+		/* Initialize the arm,idle-state properties */
+		ret = pm_genpd_of_parse_power_states(genpd);
+		if (ret) {
+			pr_warn("%s domain states not initialized (%d)\n",
+					dn->full_name, ret);
+			goto fail;
+		}
+		for (i = 0; i < genpd->state_count; i++) {
+			ret = ops->populate_state_data(genpd->states[i].of_node,
+						&genpd->states[i].param);
+			if (ret)
+				goto fail;
+		}
+	}
+
+	genpd = cpu_pd_init(genpd, ops);
+	if (IS_ERR(genpd))
+		goto fail;
+
+	ret = of_genpd_add_provider_simple(dn, genpd);
+	if (ret)
+		pr_warn("Unable to add genpd %s as provider\n",
+				genpd->name);
+
+	return genpd;
+fail:
+	kfree(genpd->name);
+	kfree(genpd);
+	if (pd)
+		kfree(pd->cpus);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+
+static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
+		const struct cpu_pd_ops *ops, int cpu)
+{
+	struct of_phandle_args args;
+	struct generic_pm_domain *genpd, *parent;
+	int ret;
+
+	/* Do we have this domain? If not, create the domain */
+	args.np = dn;
+	args.args_count = 0;
+
+	genpd = of_genpd_get_from_provider(&args);
+	if (!IS_ERR(genpd))
+		return genpd;
+
+	genpd = of_init_cpu_pm_domain(dn, ops);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	/* Is there a domain provider for this domain? */
+	ret = of_parse_phandle_with_args(dn, "power-domains",
+			"#power-domain-cells", 0, &args);
+	if (ret < 0)
+		goto skip_parent;
+
+	/* Find its parent and attach this domain to it, recursively */
+	parent = of_get_cpu_domain(args.np, ops, cpu);
+	if (IS_ERR(parent))
+		goto skip_parent;
+
+	ret = cpu_pd_attach_domain(parent, genpd);
+	if (ret)
+		pr_err("Unable to attach domain %s to parent %s\n",
+				genpd->name, parent->name);
+
+skip_parent:
+	of_node_put(dn);
+	return genpd;
+}
+
+/**
+ * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
+ *
+ * @cpu: The CPU for which the PM domain is to be set up.
+ * @ops: The PM domain suspend/resume ops for the CPU's domain
+ *
+ * If the CPU PM domain exists already, then the CPU is attached to
+ * that CPU PD. If it doesn't, the domain is created, the @ops are
+ * set for power_on/power_off callbacks and then the CPU is attached
+ * to that domain. If the domain was created outside this framework,
+ * then we do not attach the CPU to the domain.
+ */
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+
+	struct device_node *dn, *np;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *cpu_pd;
+
+	np = of_get_cpu_node(cpu, NULL);
+	if (!np)
+		return -ENODEV;
+
+	dn = of_parse_phandle(np, "power-domains", 0);
+	of_node_put(np);
+	if (!dn)
+		return -ENODEV;
+
+	/* Find the genpd for this CPU, create if not found */
+	genpd = of_get_cpu_domain(dn, ops, cpu);
+	of_node_put(dn);
+	if (IS_ERR(genpd))
+		return PTR_ERR(genpd);
+
+	cpu_pd = to_cpu_pd(genpd);
+	if (!cpu_pd) {
+		pr_err("%s: Genpd was created outside CPU PM domains\n",
+				__func__);
+		return -ENOENT;
+	}
+
+	return cpu_pd_attach_cpu(genpd, cpu);
+}
+EXPORT_SYMBOL(of_setup_cpu_pd_single);
+
+/**
+ * of_setup_cpu_pd() - Setup the PM domains for all CPUs
+ *
+ * @ops: The PM domain suspend/resume ops for all the domains
+ *
+ * Setup the CPU PM domain and attach all possible CPUs to their respective
+ * domains. The domains are created if not already and then attached.
+ */
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	int cpu;
+	int ret;
+
+	for_each_possible_cpu(cpu) {
+		ret = of_setup_cpu_pd_single(cpu, ops);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(of_setup_cpu_pd);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
index 3a0a027..736d9e6 100644
--- a/include/linux/cpu_domains.h
+++ b/include/linux/cpu_domains.h
@@ -14,8 +14,10 @@
 #include <linux/types.h>
 
 struct cpumask;
+struct device_node;
 
 struct cpu_pd_ops {
+	int (*populate_state_data)(struct device_node *n, u32 *param);
 	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
 	int (*power_on)(void);
 };
@@ -46,4 +48,20 @@ static inline int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
 
 #endif /* CONFIG_PM_GENERIC_DOMAINS */
 
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops);
+
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops);
+
+#else
+
+static inline int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{ return -ENODEV; }
+
+static inline int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{ return -ENODEV; }
+
+#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
+
 #endif /* __CPU_DOMAINS_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 08/16] ARM: cpuidle: Add runtime PM support for CPUs
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Daniel Lezcano

Notify runtime PM when the CPU is going to be powered off in the idle
state. This allows for runtime PM suspend/resume of the CPU as well as
its PM domain.

We do not call into runtime PM for ARM WFI to keep the default state
simple and fast.
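
For a CPU device attached to an IRQ safe CPU PM domain, the idle entry
path now roughly looks as below (a simplified sketch of the call flow,
not an exact trace):

	/*
	 * arm_enter_idle_state(idx > 0)
	 *   cpu_pm_enter()
	 *   pm_runtime_put_sync_suspend(cpu_dev)
	 *     -> genpd_runtime_suspend(cpu_dev)
	 *       -> genpd_poweroff(CPU domain)    only when this is the last
	 *         -> cpu_pd_power_off()          active CPU in the domain
	 *   arm_cpuidle_suspend(idx)
	 *   pm_runtime_get_sync(cpu_dev)         on wakeup, powers the domain
	 *   cpu_pm_exit()                        back on if it was off
	 */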

Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/cpuidle/cpuidle-arm.c | 55 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/cpuidle/cpuidle-arm.c b/drivers/cpuidle/cpuidle-arm.c
index e342565e..3abd145 100644
--- a/drivers/cpuidle/cpuidle-arm.c
+++ b/drivers/cpuidle/cpuidle-arm.c
@@ -11,12 +11,14 @@
 
 #define pr_fmt(fmt) "CPUidle arm: " fmt
 
+#include <linux/cpu.h>
 #include <linux/cpuidle.h>
 #include <linux/cpumask.h>
 #include <linux/cpu_pm.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 
 #include <asm/cpuidle.h>
@@ -37,6 +39,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 				struct cpuidle_driver *drv, int idx)
 {
 	int ret;
+	struct device *cpu_dev = get_cpu_device(dev->cpu);
 
 	if (!idx) {
 		cpu_do_idle();
@@ -46,12 +49,20 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
 	ret = cpu_pm_enter();
 	if (!ret) {
 		/*
+		 * Call runtime PM suspend on our device
+		 * Notify RCU to pay attention to critical sections
+		 * called from within runtime PM.
+		 */
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(cpu_dev));
+
+		/*
 		 * Pass idle state index to cpu_suspend which in turn will
 		 * call the CPU ops suspend protocol with idle index as a
 		 * parameter.
 		 */
 		ret = arm_cpuidle_suspend(idx);
 
+		RCU_NONIDLE(pm_runtime_get_sync(cpu_dev));
 		cpu_pm_exit();
 	}
 
@@ -84,6 +95,34 @@ static const struct of_device_id arm_idle_state_match[] __initconst = {
 	{ },
 };
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int arm_idle_cpu_hotplug(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct device *cpu_dev = get_cpu_device(smp_processor_id());
+
+	/* Execute CPU runtime PM on that CPU */
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DYING:
+		pm_runtime_put_sync_suspend(cpu_dev);
+		break;
+	case CPU_STARTING:
+		pm_runtime_get_sync(cpu_dev);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+#else
+static int arm_idle_cpu_hotplug(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	return NOTIFY_OK;
+}
+#endif
+
 /*
  * arm_idle_init
  *
@@ -96,6 +135,7 @@ static int __init arm_idle_init(void)
 	int cpu, ret;
 	struct cpuidle_driver *drv = &arm_idle_driver;
 	struct cpuidle_device *dev;
+	struct device *cpu_dev;
 
 	/*
 	 * Initialize idle states data, starting at index 1.
@@ -118,6 +158,16 @@ static int __init arm_idle_init(void)
 	 * idle states suspend back-end specific data
 	 */
 	for_each_possible_cpu(cpu) {
+
+		/* Initialize Runtime PM for the CPU */
+		cpu_dev = get_cpu_device(cpu);
+		pm_runtime_irq_safe(cpu_dev);
+		pm_runtime_enable(cpu_dev);
+		if (cpu_online(cpu)) {
+			pm_runtime_get_noresume(cpu_dev);
+			pm_runtime_set_active(cpu_dev);
+		}
+
 		ret = arm_cpuidle_init(cpu);
 
 		/*
@@ -148,10 +198,15 @@ static int __init arm_idle_init(void)
 		}
 	}
 
+	/* Register for hotplug notifications for runtime PM */
+	hotcpu_notifier(arm_idle_cpu_hotplug, 0);
+
 	return 0;
 out_fail:
 	while (--cpu >= 0) {
 		dev = per_cpu(cpuidle_devices, cpu);
+		cpu_dev = get_cpu_device(cpu);
+		__pm_runtime_disable(cpu_dev, false);
 		cpuidle_unregister_device(dev);
 		kfree(dev);
 	}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 09/16] timer: Export next wake up of a CPU
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Thomas Gleixner

Knowing the sleep length of a CPU is useful for determining the power
state to enter on idle. However, that value is relative to the time when
the call was made on that CPU, which doesn't work well when there is a
need to know the absolute time of the next wakeup.

By reading the next wake up event of a CPU, governors can determine the
first CPU to wake up (due to timer) amongst a cluster of CPUs and the
sleep time available between the last CPU to idle and the first CPU to
resume. This information is useful to determine if the caches and other
common hardware blocks can also be put in idle during this common period
of inactivity.
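
For instance, a cluster-level governor could compute the idle window
available to the shared resources along these lines (a sketch; cpus is
assumed to be the cpumask of the CPUs in the domain):

	ktime_t earliest = ktime_set(KTIME_SEC_MAX, 0);
	ktime_t wakeup;
	s64 sleep_ns;
	int cpu;

	/* Earliest timer driven wakeup among the CPUs in the domain */
	for_each_cpu(cpu, cpus) {
		wakeup = tick_nohz_get_next_wakeup(cpu);
		if (ktime_before(wakeup, earliest))
			earliest = wakeup;
	}

	/* Idle time available to the shared (cluster) resources */
	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));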

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 include/linux/tick.h     |  7 +++++++
 kernel/time/tick-sched.c | 11 +++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 62be0786..92fa4b0 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -117,6 +117,7 @@ extern void tick_nohz_idle_enter(void);
 extern void tick_nohz_idle_exit(void);
 extern void tick_nohz_irq_exit(void);
 extern ktime_t tick_nohz_get_sleep_length(void);
+extern ktime_t tick_nohz_get_next_wakeup(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
@@ -131,6 +132,12 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 
 	return len;
 }
+
+static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	return tick_next_period;
+}
+
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
 #endif /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 536ada8..5c7ac17 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -979,6 +979,17 @@ ktime_t tick_nohz_get_sleep_length(void)
 	return ts->sleep_length;
 }
 
+/**
+ * tick_nohz_get_next_wakeup - return the next wake up of the CPU
+ */
+ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	struct clock_event_device *dev =
+			per_cpu(tick_cpu_device.evtdev, cpu);
+
+	return dev->next_event;
+}
+
 static void tick_nohz_account_idle_ticks(struct tick_sched *ts)
 {
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 10/16] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

A PM domain comprising CPUs may be powered off when all the CPUs in the
domain are powered down. Powering down a CPU domain is generally an
expensive operation, and therefore the power/performance trade-offs
should be considered. The time between the last CPU powering down and
the first CPU powering up in a domain is the time available for the
domain to sleep. Ideally, the sleep time of the domain should fulfill
the residency requirement of the domain's idle state.

To do this effectively, read the time until the next wakeup of each of
the cluster's CPUs and ensure that the chosen domain idle state satisfies
the PM QoS CPU_DMA_LATENCY requirements of each CPU as well as the
state's residency.
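
To illustrate with made-up numbers: if the deepest domain state needs
2 ms of combined power-off and power-on latency plus 6 ms of residency
(8 ms in total), and a shallower retention state needs 1 ms plus 2 ms
(3 ms in total), then a 5 ms gap until the earliest wakeup of any online
CPU in the domain rules out the deepest state but still permits the
retention state, provided the aggregated CPU_DMA_LATENCY request also
tolerates it.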

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 80 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 8bf61e2..79fa4ae 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include <linux/list.h>
 #include <linux/of.h>
 #include <linux/pm_domain.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 
 #define CPU_PD_NAME_MAX 36
 
@@ -51,6 +54,81 @@ static inline struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos_ns = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	u64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We don't want to power down, if QoS is 0 */
+	if (!qos_ns)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wake up of any of the CPUs
+	 * in this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 *
+	 * We only care about the next wakeup for any online CPU in that
+	 * cluster. Hot-unplugging any of the CPUs that we care about will
+	 * wait on the genpd lock, until we are done. Any other CPU hotplug
+	 * is not of consequence to our sleep time.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we can't sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also don't want to sleep more than we should to
+		 * guarantee QoS.
+		 */
+		if (state_sleep_ns < (qos_ns * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return (i >= 0);
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_power_on(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
@@ -172,7 +250,7 @@ struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
 	list_add_rcu(&pd->link, &of_cpu_pd_list);
 	mutex_unlock(&cpu_pd_list_lock);
 
-	ret = pm_genpd_init(genpd, &simple_qos_governor, false);
+	ret = pm_genpd_init(genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", genpd->name);
 		goto fail;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 11/16] doc / cpu_domains: Describe CPU PM domains setup and governor
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Generic CPU PM domain functionality is provided by
drivers/base/power/cpu_domains.c. This document describes the generic
use case of CPU PM domains, the setup of such domains and a CPU-specific
genpd governor.
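
As a rough sketch of how a platform driver might use the helpers that
this document describes (the names prefixed with "my_" are placeholders
invented for this example; of_setup_cpu_pd_single() and struct cpu_pd_ops
are the ones introduced earlier in this series, and error handling is
trimmed):

	#include <linux/cpu.h>
	#include <linux/cpu_domains.h>
	#include <linux/cpumask.h>
	#include <linux/init.h>

	/*
	 * Placeholder callback: a real platform would program its power
	 * controller here with the domain state selected by the governor.
	 */
	static int my_cluster_power_off(u32 state_idx, u32 param,
					const struct cpumask *cpus)
	{
		return 0;
	}

	static const struct cpu_pd_ops my_pd_ops = {
		.power_off = my_cluster_power_off,
	};

	static int __init my_cpu_pd_init(void)
	{
		int cpu, ret;

		/* Set up the PM domain for each CPU from DT and attach it */
		for_each_possible_cpu(cpu) {
			ret = of_setup_cpu_pd_single(cpu, &my_pd_ops);
			if (ret)
				return ret;
		}
		return 0;
	}
	device_initcall(my_cpu_pd_init);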

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/power/cpu_domains.txt | 109 ++++++++++++++++++++++++++++++++++++
 1 file changed, 109 insertions(+)
 create mode 100644 Documentation/power/cpu_domains.txt

diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
new file mode 100644
index 0000000..6a39a64
--- /dev/null
+++ b/Documentation/power/cpu_domains.txt
@@ -0,0 +1,109 @@
+CPU PM domains
+==============
+
+Newer CPUs are grouped in SoCs as clusters. In addition to the CPUs, a cluster
+may have caches, floating point units and other architecture specific power
+controllers that share resources when any of the CPUs is active. When the CPUs
+are idle, some of these cluster components may also idle. A cluster may also
+be nested inside another cluster that provides common coherency interfaces to
+share data between the clusters. The organization of such clusters and CPUs
+may be described in DT, since it is SoC specific.
+
+The CPUIdle framework enables the CPUs to determine their sleep time and enter
+low power states to save power during periods of idle. CPUs in a cluster may
+enter and exit idle states independently of each other. During the time when
+all the CPUs are in idle state, the cluster may safely put some of the shared
+resources in their idle state. The time between the last CPU to enter idle and
+the first CPU to wake up is the time available for the cluster to enter its
+idle state.
+
+When SoCs power down the CPU during cpuidle, they generally have supplemental
+hardware that can handshake with the CPU with a signal that indicates that the
+CPU has stopped execution. The hardware is also responsible for warm booting
+the CPU on receiving an interrupt. In a cluster architecture, common resources
+that are shared by a cluster may also be powered down by an external
+microcontroller or a processor. The microcontroller may be programmed in
+advance to put the hardware blocks in a low power state, when the last active
+CPU sends the idle signal. When the signal is received, the microcontroller
+may trigger the hardware blocks to enter their low power state. When an
+interrupt to wakeup the processor is received, the microcontroller is
+responsible for bringing the hardware blocks back to their active state before
+waking up the CPU. The latencies of such operations should be within an
+acceptable range for CPU idle to get power benefits.
+
+CPU PM Domain Setup
+-------------------
+
+PM domains are represented in the DT as domain consumers and providers. A
+device may have a domain provider and a domain provider may support multiple
+domain consumers. Domains, like clusters, may also be nested inside one
+another. A domain that has no active consumer may be powered off, and any
+resuming consumer would bring the domain back to active. Parent domains may
+be powered off when the child domains are powered off. The CPU cluster can be
+fashioned as a PM domain. When the CPU devices are powered off, the PM domain
+may be powered off.
+
+Device idle is reference counted by runtime PM. When there is no active need
+for the device, runtime PM invokes callbacks to suspend the parent domain.
+Generic PM domain (genpd) handles the hierarchy of devices, domains and the
+reference counting of objects leading to last man down and first man up in the
+domain. The CPU domain helper functions define PM domains for each CPU
+cluster and attach the CPU devices to the respective PM domains.
+
+Platform drivers may use the following API to register their CPU PM domains.
+
+of_setup_cpu_pd() -
+Provides a single-step registration of the CPU PM domains and attaches the
+CPUs to their genpd. Platform drivers may additionally register callbacks for
+the power_on and power_off operations of the PM domain.
+
+of_setup_cpu_pd_single() -
+Defines a PM domain for a single CPU and attaches the CPU to its domain.
+
+
+CPU PM Domain governor
+----------------------
+
+CPUs have a unique ability to determine their next wakeup. CPUs may wake up
+for known timer interrupts and unknown interrupts from idle. Prediction
+algorithms and heuristic based algorithms like the Menu governor for cpuidle
+can determine the next wakeup of the CPU. However, determining the wakeup
+across a group of CPUs is a tough problem to solve.
+
+A simplistic approach would be to resort to known wakeups of the CPUs in
+determining the next wakeup of any CPU in the cluster. The CPU PM domain
+governor does just that. By looking into the tick device of the CPUs, the
+governor can determine the sleep time between the last CPU entering idle and
+the first scheduled wakeup of any CPU in that domain. This, combined with the
+PM QoS requirement for CPU_DMA_LATENCY, can be used to determine the deepest
+possible idle state of the CPU domain.
+
+
+PSCI based CPU PM Domains
+-------------------------
+
+ARM PSCI v1.0 supports PM domains for CPU clusters, as found in big.LITTLE
+architectures. It is supported as part of the OS-Initiated (OSI) mode of the
+PSCI firmware. Since the control of the domains is abstracted in the firmware,
+Linux does not even need a driver to control these domains. The complexity of
+determining the idle state of the PM domain is handled by the CPU PM domains.
+
+Every PSCI CPU PM domain idle state has a unique PSCI state id. The state id
+is read from the DT and specified using the arm,psci-suspend-param property.
+This makes it easy for big.LITTLE SoCs to just specify the PM domain idle
+states for the CPUs along with the arm,psci-suspend-param property, and
+everything else is handled by the PSCI firmware driver and the firmware.
+
+
+DT definitions for PSCI CPU PM Domains
+--------------------------------------
+
+A PM domain's idle state can be defined in DT, the description of which is
+available in [1]. PSCI based CPU PM domains may define their idle states as
+part of the psci node. The additional parameter arm,psci-suspend-param is used
+to indicate to the firmware the additional cluster state that would be achieved
+after the last CPU makes the PSCI call to suspend the CPU. The description of
+PSCI domain states is available in [2].
+
+[1]. Documentation/devicetree/bindings/arm/idle-states.txt
+[2]. Documentation/devicetree/bindings/arm/psci.txt
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 12/16] drivers: firmware: psci: Allow OS Initiated suspend mode
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Mark Rutland

PSCI firmware v1.0 onwards may support two different modes for
CPU_SUSPEND. Platform-coordinated mode is the default and every firmware
should support it. OS-Initiated mode is optional for the firmware to
implement and allows Linux to make a better decision on the state of the
CPU cluster hierarchy.

With the kernel capable of deciding the state of the CPU cluster and
coherency domains, the OS-Initiated mode may now be used by the kernel,
provided the firmware supports it. SET_SUSPEND_MODE is a PSCI function
available from v1.0 onwards and can be used to set the mode in the
firmware.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased on 4.7 rc1]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/firmware/psci.c   | 42 +++++++++++++++++++++++++++++-------------
 include/uapi/linux/psci.h |  5 +++++
 2 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 03e0458..759b134 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -52,6 +52,7 @@
  * require cooperation with a Trusted OS driver.
  */
 static int resident_cpu = -1;
+static bool psci_has_osi;
 
 bool psci_tos_resident_on(int cpu)
 {
@@ -506,9 +507,8 @@ static int __init psci_0_2_init(struct device_node *np)
 	int err;
 
 	err = get_set_conduit_method(np);
-
 	if (err)
-		goto out_put_node;
+		return err;
 	/*
 	 * Starting with v0.2, the PSCI specification introduced a call
 	 * (PSCI_VERSION) that allows probing the firmware version, so
@@ -516,11 +516,7 @@ static int __init psci_0_2_init(struct device_node *np)
 	 * can be carried out according to the specific version reported
 	 * by firmware
 	 */
-	err = psci_probe();
-
-out_put_node:
-	of_node_put(np);
-	return err;
+	return psci_probe();
 }
 
 /*
@@ -532,9 +528,8 @@ static int __init psci_0_1_init(struct device_node *np)
 	int err;
 
 	err = get_set_conduit_method(np);
-
 	if (err)
-		goto out_put_node;
+		return err;
 
 	pr_info("Using PSCI v0.1 Function IDs from DT\n");
 
@@ -558,15 +553,31 @@ static int __init psci_0_1_init(struct device_node *np)
 		psci_ops.migrate = psci_migrate;
 	}
 
-out_put_node:
-	of_node_put(np);
 	return err;
 }
 
+static int __init psci_1_0_init(struct device_node *np)
+{
+	int ret;
+
+	ret = psci_0_2_init(np);
+	if (ret)
+		return ret;
+
+	/* Check if PSCI OSI mode is available */
+	ret = psci_features(psci_function_id[PSCI_FN_CPU_SUSPEND]);
+	if (ret & PSCI_1_0_OS_INITIATED) {
+		if (!psci_features(PSCI_1_0_FN_SET_SUSPEND_MODE))
+			psci_has_osi = true;
+	}
+
+	return 0;
+}
+
 static const struct of_device_id psci_of_match[] __initconst = {
 	{ .compatible = "arm,psci",	.data = psci_0_1_init},
 	{ .compatible = "arm,psci-0.2",	.data = psci_0_2_init},
-	{ .compatible = "arm,psci-1.0",	.data = psci_0_2_init},
+	{ .compatible = "arm,psci-1.0",	.data = psci_1_0_init},
 	{},
 };
 
@@ -575,6 +586,7 @@ int __init psci_dt_init(void)
 	struct device_node *np;
 	const struct of_device_id *matched_np;
 	psci_initcall_t init_fn;
+	int ret;
 
 	np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
 
@@ -582,7 +594,11 @@ int __init psci_dt_init(void)
 		return -ENODEV;
 
 	init_fn = (psci_initcall_t)matched_np->data;
-	return init_fn(np);
+	ret = init_fn(np);
+
+	of_node_put(np);
+
+	return ret;
 }
 
 #ifdef CONFIG_ACPI
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
index 3d7a0fc..7dd778e 100644
--- a/include/uapi/linux/psci.h
+++ b/include/uapi/linux/psci.h
@@ -48,6 +48,7 @@
 
 #define PSCI_1_0_FN_PSCI_FEATURES		PSCI_0_2_FN(10)
 #define PSCI_1_0_FN_SYSTEM_SUSPEND		PSCI_0_2_FN(14)
+#define PSCI_1_0_FN_SET_SUSPEND_MODE		PSCI_0_2_FN(15)
 
 #define PSCI_1_0_FN64_SYSTEM_SUSPEND		PSCI_0_2_FN64(14)
 
@@ -93,6 +94,10 @@
 #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK	\
 			(0x1 << PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT)
 
+#define PSCI_1_0_OS_INITIATED			BIT(0)
+#define PSCI_1_0_SUSPEND_MODE_PC		0
+#define PSCI_1_0_SUSPEND_MODE_OSI		1
+
 /* PSCI return values (inclusive of all PSCI versions) */
 #define PSCI_RET_SUCCESS			0
 #define PSCI_RET_NOT_SUPPORTED			-1
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 13/16] drivers: firmware: psci: Support cluster idle states for OS-Initiated
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Mark Rutland

A PSCI firmware in OS-Initiated mode may allow Linux to determine the
idle states of the CPU clusters, and of the coherency domain above them,
when there are no active CPUs. Since Linux has a better idea of the QoS
and the wakeup pattern of the CPUs, the cluster idle states may be better
determined by the OS than by the firmware.

The last CPU entering idle in a cluster is responsible for selecting the
state of the cluster. Only one CPU in a cluster may provide the cluster
idle state to the firmware. Similarly, the last CPU in the system may
provide the state of the coherency domain along with the cluster and the
CPU state IDs.

Utilize the CPU PM domain framework's helper functions to build up the
hierarchy of the cluster topology using Generic PM domains. We provide
callbacks for the domain power_on and power_off operations. By appending
the state IDs at each domain level in the power_off() callbacks, we build
up a composite state ID that can be passed on to the firmware to idle the
CPU, the cluster and the coherency interface.
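
As a concrete illustration, using the suspend parameters from the example
binding added later in this series: if the CPU power-down state uses
arm,psci-suspend-param 0x000001 and the cluster power-down state
contributes 0x1000030 from the domain's power_off() callback, the last
CPU going down passes the composite parameter 0x1000031 to CPU_SUSPEND,
while CPUs that are not last pass just 0x000001.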

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/firmware/psci.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 759b134..431385a 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -15,6 +15,7 @@
 
 #include <linux/arm-smccc.h>
 #include <linux/cpuidle.h>
+#include <linux/cpu_domains.h>
 #include <linux/errno.h>
 #include <linux/linkage.h>
 #include <linux/of.h>
@@ -53,6 +54,18 @@
  */
 static int resident_cpu = -1;
 static bool psci_has_osi;
+static bool psci_has_osi_pd;
+static DEFINE_PER_CPU(u32, cluster_state_id);
+
+static inline u32 psci_get_composite_state_id(u32 cpu_state)
+{
+	return cpu_state | this_cpu_read(cluster_state_id);
+}
+
+static inline void psci_reset_composite_state_id(void)
+{
+	this_cpu_write(cluster_state_id, 0);
+}
 
 bool psci_tos_resident_on(int cpu)
 {
@@ -179,6 +192,8 @@ static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
 
 	fn = psci_function_id[PSCI_FN_CPU_ON];
 	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	/* Reset CPU cluster states */
+	psci_reset_composite_state_id();
 	return psci_to_linux_errno(err);
 }
 
@@ -250,6 +265,27 @@ static int __init psci_features(u32 psci_func_id)
 
 #ifdef CONFIG_CPU_IDLE
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
+static bool psci_suspend_mode_is_osi;
+
+static int psci_set_suspend_mode_osi(bool enable)
+{
+	int ret;
+	int mode;
+
+	if (enable && !psci_has_osi)
+		return -ENODEV;
+
+	if (enable == psci_suspend_mode_is_osi)
+		return 0;
+
+	mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI : PSCI_1_0_SUSPEND_MODE_PC;
+	ret = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
+			     mode, 0, 0);
+	if (!ret)
+		psci_suspend_mode_is_osi = enable;
+
+	return psci_to_linux_errno(ret);
+}
 
 static int psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu)
 {
@@ -311,11 +347,48 @@ free_mem:
 	return ret;
 }
 
+static int psci_pd_populate_state_data(struct device_node *np, u32 *param)
+{
+	return of_property_read_u32(np, "arm,psci-suspend-param", param);
+}
+
+static int psci_pd_power_off(u32 idx, u32 param, const struct cpumask *mask)
+{
+	__this_cpu_add(cluster_state_id, param);
+	return 0;
+}
+
+const struct cpu_pd_ops psci_pd_ops = {
+	.populate_state_data = psci_pd_populate_state_data,
+	.power_off = psci_pd_power_off,
+};
+
+static int psci_cpu_osi_pd_init(int cpu)
+{
+	int ret;
+
+	if (!psci_has_osi_pd)
+		return 0;
+
+	ret = of_setup_cpu_pd_single(cpu, &psci_pd_ops);
+	if (!ret) {
+		ret = psci_set_suspend_mode_osi(true);
+		if (ret)
+			pr_warn("CPU%d: Error setting PSCI OSI mode\n", cpu);
+	}
+
+	return ret;
+}
+
 int psci_cpu_init_idle(unsigned int cpu)
 {
 	struct device_node *cpu_node;
 	int ret;
 
+	ret = psci_cpu_osi_pd_init(cpu);
+	if (ret)
+		return ret;
+
 	cpu_node = of_get_cpu_node(cpu, NULL);
 	if (!cpu_node)
 		return -ENODEV;
@@ -330,15 +403,17 @@ int psci_cpu_init_idle(unsigned int cpu)
 static int psci_suspend_finisher(unsigned long index)
 {
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 
-	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+	return psci_ops.cpu_suspend(ext_state, virt_to_phys(cpu_resume));
 }
 
 int psci_cpu_suspend_enter(unsigned long index)
 {
 	int ret;
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
+
 	/*
 	 * idle state index 0 corresponds to wfi, should never be called
 	 * from the cpu_suspend operations
@@ -347,10 +422,16 @@ int psci_cpu_suspend_enter(unsigned long index)
 		return -EINVAL;
 
 	if (!psci_power_state_loses_context(state[index - 1]))
-		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+		ret = psci_ops.cpu_suspend(ext_state, 0);
 	else
 		ret = cpu_suspend(index, psci_suspend_finisher);
 
+	/*
+	 * Clear the CPU's cluster states, we start afresh after coming
+	 * out of idle.
+	 */
+	psci_reset_composite_state_id();
+
 	return ret;
 }
 
@@ -558,6 +639,7 @@ static int __init psci_0_1_init(struct device_node *np)
 
 static int __init psci_1_0_init(struct device_node *np)
 {
+	struct device_node *dn;
 	int ret;
 
 	ret = psci_0_2_init(np);
@@ -569,6 +651,11 @@ static int __init psci_1_0_init(struct device_node *np)
 	if (ret & PSCI_1_0_OS_INITIATED) {
 		if (!psci_features(PSCI_1_0_FN_SET_SUSPEND_MODE))
 			psci_has_osi = true;
+		/* Check if we have power domains defined in the PSCI node */
+		dn = of_find_node_with_property(np, "#power-domain-cells");
+		if (dn)
+			psci_has_osi_pd = true;
+		of_node_put(dn);
 	}
 
 	return 0;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 14/16] dt/bindings: Add PSCI OS-Initiated PM Domains bindings
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	devicetree, Mark Rutland

Add bindings for defining an OS-Initiated based CPU PM domain.

Cc: <devicetree@vger.kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/devicetree/bindings/arm/psci.txt | 79 ++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
index a2c4f1d..63a229b 100644
--- a/Documentation/devicetree/bindings/arm/psci.txt
+++ b/Documentation/devicetree/bindings/arm/psci.txt
@@ -105,7 +105,86 @@ Case 3: PSCI v0.2 and PSCI v0.1.
 		...
 	};
 
+PSCI v1.0 onwards supports an OS-Initiated mode for powering off CPU domains
+from the firmware. Such PM domains, for which the PSCI firmware driver acts as
+a pseudo-controller, may also be specified in the DT under the psci node. The
+domain definitions must follow the domain idle state specifications per [3].
+The domain states themselves must be compatible with 'arm,idle-state' defined
+in [1] and need to specify the arm,psci-suspend-param property for each idle
+state.
+
+More information on defining CPU PM domains is available in [4].
+
+Example: OS-Initiated PSCI based PM domains with one CPU in each domain
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a53", "arm,armv8";
+			reg = <0x0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD0>;
+		};
+
+		CPU1: cpu@1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a57", "arm,armv8";
+			reg = <0x100>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD1>;
+		};
+
+		idle-states {
+			CPU_PWRDN: cpu_power_down {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x000001>;
+				entry-latency-us = <10>;
+				exit-latency-us = <10>;
+				min-residency-us = <100>;
+			};
+
+			CLUSTER_RET: domain_ret {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000010>;
+				entry-latency-us = <500>;
+				exit-latency-us = <500>;
+				min-residency-us = <2000>;
+			};
+
+			CLUSTER_PWR_DWN: domain_gdhs {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000030>;
+				entry-latency-us = <2000>;
+				exit-latency-us = <2000>;
+				min-residency-us = <6000>;
+			};
+		};
+	};
+
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
+
+		CPU_PD0: cpu-pd@0 {
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+		};
+
+		CPU_PD1: cpu-pd@1 {
+			#power-domain-cells = <0>;
+			domain-idle-states =  <&CLUSTER_PWR_DWN>;
+		};
+	};
+
 [1] Kernel documentation - ARM idle states bindings
     Documentation/devicetree/bindings/arm/idle-states.txt
 [2] Power State Coordination Interface (PSCI) specification
     http://infocenter.arm.com/help/topic/com.arm.doc.den0022c/DEN0022C_Power_State_Coordination_Interface.pdf
+[3]. PM Domains description
+    Documentation/devicetree/bindings/power/power_domain.txt
+[4]. CPU PM Domains description
+    Documentation/power/cpu_domains.txt
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 15/16] ARM64: dts: Add PSCI cpuidle support for MSM8916
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	devicetree

Add device bindings for CPUs to suspend using PSCI as the enable-method.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 9681200..3029773 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -62,6 +62,8 @@
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x0>;
 			next-level-cache = <&L2_0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU1: cpu@1 {
@@ -69,6 +71,8 @@
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x1>;
 			next-level-cache = <&L2_0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU2: cpu@2 {
@@ -76,6 +80,8 @@
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x2>;
 			next-level-cache = <&L2_0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		CPU3: cpu@3 {
@@ -83,12 +89,30 @@
 			compatible = "arm,cortex-a53", "arm,armv8";
 			reg = <0x3>;
 			next-level-cache = <&L2_0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_SPC>;
 		};
 
 		L2_0: l2-cache {
 		      compatible = "cache";
 		      cache-level = <2>;
 		};
+
+		idle-states {
+			CPU_SPC: spc {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x40000002>;
+				entry-latency-us = <130>;
+				exit-latency-us = <150>;
+				min-residency-us = <2000>;
+				local-timer-stop;
+			};
+		};
+	};
+
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
 	};
 
 	timer {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 16/16] ARM64: dts: Define CPU power domain for MSM8916
  2016-08-25 20:03 ` Lina Iyer
@ 2016-08-25 20:03   ` Lina Iyer
  -1 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 20:03 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	devicetree

Define the CPU power domain and its power states, as exposed by the
PSCI firmware. The 8916 firmware supports the OS-initiated method of
powering off the CPU clusters.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 3029773..506c712 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -64,6 +64,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU1: cpu@1 {
@@ -73,6 +74,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU2: cpu@2 {
@@ -82,6 +84,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU3: cpu@3 {
@@ -91,6 +94,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		L2_0: l2-cache {
@@ -107,12 +111,33 @@
 				min-residency-us = <2000>;
 				local-timer-stop;
 			};
+
+			CLUSTER_RET: cluster_retention {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000010>;
+				entry-latency-us = <500>;
+				exit-latency-us = <500>;
+				min-residency-us = <2000>;
+			};
+
+			CLUSTER_PWR_DWN: cluster_gdhs {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000030>;
+				entry-latency-us = <2000>;
+				exit-latency-us = <2000>;
+				min-residency-us = <6000>;
+			};
 		};
 	};
 
 	psci {
 		compatible = "arm,psci-1.0";
 		method = "smc";
+
+		CPU_PD: cpu-pd@0 {
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+		};
 	};
 
 	timer {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v4 13/16] drivers: firmware: psci: Support cluster idle states for OS-Initiated
  2016-08-25 19:51 [PATCH v4 00/16] PM: SoC idle support using PM domains Lina Iyer
@ 2016-08-25 19:51   ` Lina Iyer
  0 siblings, 0 replies; 36+ messages in thread
From: Lina Iyer @ 2016-08-25 19:51 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Mark Rutland

PSCI firmware with OS-initiated support may allow Linux to choose the
idle state of the CPU cluster, and of the coherency domain above it,
when there are no active CPUs. Since Linux has a better view of the QoS
constraints and the wakeup patterns of the CPUs, the cluster idle
states may be better determined by the OS than by the firmware.

The last CPU entering idle in a cluster is responsible for selecting
the state of the cluster; only one CPU in a cluster may provide the
cluster idle state to the firmware. Similarly, the last CPU in the
system may provide the state of the coherency domain along with the
cluster and CPU state IDs.

Utilize the CPU PM domain framework's helper functions to build up the
cluster topology hierarchy using Generic PM domains, and provide
callbacks for domain power_on and power_off. By accumulating the state
IDs at each domain level in the ->power_off() callbacks, we build up a
composite state ID that can be passed on to the firmware to idle the
CPU, the cluster and the coherency interface.

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/firmware/psci.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 759b134..431385a 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -15,6 +15,7 @@
 
 #include <linux/arm-smccc.h>
 #include <linux/cpuidle.h>
+#include <linux/cpu_domains.h>
 #include <linux/errno.h>
 #include <linux/linkage.h>
 #include <linux/of.h>
@@ -53,6 +54,18 @@
  */
 static int resident_cpu = -1;
 static bool psci_has_osi;
+static bool psci_has_osi_pd;
+static DEFINE_PER_CPU(u32, cluster_state_id);
+
+static inline u32 psci_get_composite_state_id(u32 cpu_state)
+{
+	return cpu_state | this_cpu_read(cluster_state_id);
+}
+
+static inline void psci_reset_composite_state_id(void)
+{
+	this_cpu_write(cluster_state_id, 0);
+}
 
 bool psci_tos_resident_on(int cpu)
 {
@@ -179,6 +192,8 @@ static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
 
 	fn = psci_function_id[PSCI_FN_CPU_ON];
 	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	/* Reset CPU cluster states */
+	psci_reset_composite_state_id();
 	return psci_to_linux_errno(err);
 }
 
@@ -250,6 +265,27 @@ static int __init psci_features(u32 psci_func_id)
 
 #ifdef CONFIG_CPU_IDLE
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
+static bool psci_suspend_mode_is_osi;
+
+static int psci_set_suspend_mode_osi(bool enable)
+{
+	int ret;
+	int mode;
+
+	if (enable && !psci_has_osi)
+		return -ENODEV;
+
+	if (enable == psci_suspend_mode_is_osi)
+		return 0;
+
+	mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI : PSCI_1_0_SUSPEND_MODE_PC;
+	ret = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
+			     mode, 0, 0);
+	if (!ret)
+		psci_suspend_mode_is_osi = enable;
+
+	return psci_to_linux_errno(ret);
+}
 
 static int psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu)
 {
@@ -311,11 +347,48 @@ free_mem:
 	return ret;
 }
 
+static int psci_pd_populate_state_data(struct device_node *np, u32 *param)
+{
+	return of_property_read_u32(np, "arm,psci-suspend-param", param);
+}
+
+static int psci_pd_power_off(u32 idx, u32 param, const struct cpumask *mask)
+{
+	__this_cpu_add(cluster_state_id, param);
+	return 0;
+}
+
+const struct cpu_pd_ops psci_pd_ops = {
+	.populate_state_data = psci_pd_populate_state_data,
+	.power_off = psci_pd_power_off,
+};
+
+static int psci_cpu_osi_pd_init(int cpu)
+{
+	int ret;
+
+	if (!psci_has_osi_pd)
+		return 0;
+
+	ret = of_setup_cpu_pd_single(cpu, &psci_pd_ops);
+	if (!ret) {
+		ret = psci_set_suspend_mode_osi(true);
+		if (ret)
+			pr_warn("CPU%d: Error setting PSCI OSI mode\n", cpu);
+	}
+
+	return ret;
+}
+
 int psci_cpu_init_idle(unsigned int cpu)
 {
 	struct device_node *cpu_node;
 	int ret;
 
+	ret = psci_cpu_osi_pd_init(cpu);
+	if (ret)
+		return ret;
+
 	cpu_node = of_get_cpu_node(cpu, NULL);
 	if (!cpu_node)
 		return -ENODEV;
@@ -330,15 +403,17 @@ int psci_cpu_init_idle(unsigned int cpu)
 static int psci_suspend_finisher(unsigned long index)
 {
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 
-	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+	return psci_ops.cpu_suspend(ext_state, virt_to_phys(cpu_resume));
 }
 
 int psci_cpu_suspend_enter(unsigned long index)
 {
 	int ret;
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
+
 	/*
 	 * idle state index 0 corresponds to wfi, should never be called
 	 * from the cpu_suspend operations
@@ -347,10 +422,16 @@ int psci_cpu_suspend_enter(unsigned long index)
 		return -EINVAL;
 
 	if (!psci_power_state_loses_context(state[index - 1]))
-		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+		ret = psci_ops.cpu_suspend(ext_state, 0);
 	else
 		ret = cpu_suspend(index, psci_suspend_finisher);
 
+	/*
+	 * Clear the CPU's cluster states; we start afresh after coming
+	 * out of idle.
+	 */
+	psci_reset_composite_state_id();
+
 	return ret;
 }
 
@@ -558,6 +639,7 @@ static int __init psci_0_1_init(struct device_node *np)
 
 static int __init psci_1_0_init(struct device_node *np)
 {
+	struct device_node *dn;
 	int ret;
 
 	ret = psci_0_2_init(np);
@@ -569,6 +651,11 @@ static int __init psci_1_0_init(struct device_node *np)
 	if (ret & PSCI_1_0_OS_INITIATED) {
 		if (!psci_features(PSCI_1_0_FN_SET_SUSPEND_MODE))
 			psci_has_osi = true;
+		/* Check if we have power domains defined in the PSCI node */
+		dn = of_find_node_with_property(np, "#power-domain-cells");
+		if (dn)
+			psci_has_osi_pd = true;
+		of_node_put(dn);
 	}
 
 	return 0;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2016-08-25 20:09 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-08-25 20:03 [PATCH v4 00/16] PM: SoC idle support using PM domains Lina Iyer
2016-08-25 20:03 ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 01/16] PM / Domains: Allow domain power states to be read from DT Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 02/16] dt/bindings: Update binding for PM domain idle states Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 03/16] PM / Domains: Abstract genpd locking Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 04/16] PM / Domains: Support IRQ safe PM domains Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 05/16] PM / doc: Update device documentation for devices in " Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 06/16] PM / cpu_domains: Setup PM domains for CPUs/clusters Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 07/16] PM / cpu_domains: Initialize CPU PM domains from DT Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 08/16] ARM: cpuidle: Add runtime PM support for CPUs Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 09/16] timer: Export next wake up of a CPU Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 10/16] PM / cpu_domains: Add PM Domain governor for CPUs Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 11/16] doc / cpu_domains: Describe CPU PM domains setup and governor Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 12/16] drivers: firmware: psci: Allow OS Initiated suspend mode Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 13/16] drivers: firmware: psci: Support cluster idle states for OS-Initiated Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 14/16] dt/bindings: Add PSCI OS-Initiated PM Domains bindings Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 15/16] ARM64: dts: Add PSCI cpuidle support for MSM8916 Lina Iyer
2016-08-25 20:03   ` Lina Iyer
2016-08-25 20:03 ` [PATCH v4 16/16] ARM64: dts: Define CPU power domain " Lina Iyer
2016-08-25 20:03   ` Lina Iyer
  -- strict thread matches above, loose matches on Subject: below --
2016-08-25 19:51 [PATCH v4 00/16] PM: SoC idle support using PM domains Lina Iyer
2016-08-25 19:51 ` [PATCH v4 13/16] drivers: firmware: psci: Support cluster idle states for OS-Initiated Lina Iyer
2016-08-25 19:51   ` Lina Iyer
