* [PATCH v5 00/16] PM: SoC idle support using PM domains
@ 2016-08-26 20:17 ` Lina Iyer
  0 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Hi all,

Changes since v4 [10]:
- Rebased on top of v4.8-rc3.
- Generalized CPU runtime PM, not specific to ARM anymore.
- CPU PM domains not dependent on OF anymore.

Changes since v3 [7]:
- Mostly refactoring and reorganization, no functional changes.
- Refactored DT support for CPU PM domains into a separate patch.
  (Suggested by Ulf)
- Reorganized domain idle state into DT binding, to be more in line
  with the discussions that followed the last patch submission.
  (Thanks Brendan, Sudeep, Lorenzo for some really good discussions.)

Changes since v2 [5]:
- Update PSCI documentation to define OS-Initiated PM domains.
- Nifty updates and bug fixes. Thanks Brendan!
- Define PSCI PM domains under psci node in 8916 DT.
- Documentation updates for domain definitions.
- Updated series is at [4].

Changes since v1 [6]:
- Use arm,idle-state as the DT binding for domain idle state.
- OS-Initiated changes to support that and to read arm,psci-suspend-param.
  (Thanks Mark Rutland and Kevin Hilman)
- tick_nohz_get_next_wakeup() - suggestions from Thomas Gleixner.
- The updated series is at [3].

Changes since RFC-v3 [1]:
- Reorganize the patches. Documentations have their own patch.
- Moved the PSCI OS-Initiated code around so that it does not cause compiler
  errors in other configurations.
- Minor bug fixes in the genpd power_on functionality.
- Rebased on top of v4.7-rc1.

This series adds SoC idle support in the kernel for CPU domains using
genpd. The patches were submitted earlier as RFCs; the last of them is
[1]. Since the RFC, there have been multiple discussions around making
the patches generic across all architectures.

The patches have been tested on the 410c Dragonboard and MTK EVB boards. Both
show good power savings when used with OS-Initiated PSCI firmware.

This entire series can be found at [9].

Thanks,
Lina

[1]. http://lists.infradead.org/pipermail/linux-arm-kernel/2016-March/412934.html
[2]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v1
[3]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v2
[4]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v3
[5]. https://lwn.net/Articles/695987/
[6]. https://lwn.net/Articles/675674/
[7]. http://www.spinics.net/lists/arm-kernel/msg522021.html
[8]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v4
[9]. https://git.linaro.org/people/lina.iyer/linux-next.git/shortlog/refs/heads/genpd-psci-v5
[10]. http://www.spinics.net/lists/arm-kernel/msg526463.html

Axel Haslam (2):
  PM / Domains: Allow domain power states to be read from DT
  dt/bindings: Update binding for PM domain idle states

Lina Iyer (14):
  PM / Domains: Abstract genpd locking
  PM / Domains: Support IRQ safe PM domains
  PM / doc: Update device documentation for devices in IRQ safe PM
    domains
  drivers: cpu: Setup CPU devices to do runtime PM
  kernel/cpu_pm: Add runtime PM support for CPUs
  PM / cpu_domains: Setup PM domains for CPUs/clusters
  PM / cpu_domains: Initialize CPU PM domains from DT
  timer: Export next wake up of a CPU
  PM / cpu_domains: Add PM Domain governor for CPUs
  doc / cpu_domains: Describe CPU PM domains setup and governor
  drivers: firmware: psci: Allow OS Initiated suspend mode
  drivers: firmware: psci: Support cluster idle states for OS-Initiated
  dt/bindings: Add PSCI OS-Initiated PM Domains bindings
  ARM64: dts: Define CPU power domain for MSM8916

 Documentation/devicetree/bindings/arm/psci.txt     |  79 ++++
 .../devicetree/bindings/power/power_domain.txt     |  57 +++
 Documentation/power/cpu_domains.txt                | 109 +++++
 Documentation/power/devices.txt                    |  12 +-
 arch/arm64/boot/dts/qcom/msm8916.dtsi              |  25 ++
 drivers/base/cpu.c                                 |  18 +
 drivers/base/power/Makefile                        |   2 +-
 drivers/base/power/cpu_domains.c                   | 459 +++++++++++++++++++++
 drivers/base/power/domain.c                        | 308 ++++++++++++--
 drivers/firmware/psci.c                            | 135 +++++-
 include/linux/cpu_domains.h                        |  67 +++
 include/linux/pm_domain.h                          |  24 +-
 include/linux/tick.h                               |   7 +
 include/uapi/linux/psci.h                          |   5 +
 kernel/cpu_pm.c                                    |  45 ++
 kernel/time/tick-sched.c                           |  11 +
 16 files changed, 1298 insertions(+), 65 deletions(-)
 create mode 100644 Documentation/power/cpu_domains.txt
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

-- 
2.7.4


* [PATCH v5 01/16] PM / Domains: Allow domain power states to be read from DT
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Axel Haslam,
	Marc Titinger, Lina Iyer

From: Axel Haslam <ahaslam+renesas@baylibre.com>

This patch allows domains to define idle states in DT. SoCs can
define domain idle states in DT using the "domain-idle-states" property
of the domain provider. Calling pm_genpd_init() will then read the idle
states and initialize the genpd for the domain.

In addition to the entry and exit latencies of an idle state, also add
residency_ns, param and of_node properties to each state. A domain
idling in a state is only power efficient if it stays idle in that
state for a certain minimum period. The residency provides this minimum
time that the domain must remain in the idle state for it to yield
power benefits. The param is a state-specific u32 value that the
platform may use for that idle state.
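
As an illustration only (this usage sketch is not part of the patch; the
domain name and probe function are hypothetical), a platform might point
genpd->of_node at the provider node carrying the "domain-idle-states"
phandles before calling pm_genpd_init():

	#include <linux/platform_device.h>
	#include <linux/pm_domain.h>

	static struct generic_pm_domain my_cluster_pd = {
		.name = "my_cluster_pd",	/* hypothetical domain */
	};

	static int my_pd_probe(struct platform_device *pdev)
	{
		/* Provider node listing the domain-idle-states phandles */
		my_cluster_pd.of_node = pdev->dev.of_node;

		/*
		 * pm_genpd_init() parses the idle states from DT and fills
		 * in my_cluster_pd.states[] and my_cluster_pd.state_count.
		 */
		return pm_genpd_init(&my_cluster_pd, NULL, false);
	}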

Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Lina: Added state properties, removed state names, wakeup-latency,
added of_pm_genpd_init() API, pruned commit text]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Ulf: Moved around code to make it compile properly, rebased on top of multiple
state support,changed to use pm_genpd_init()]
---
 drivers/base/power/domain.c | 92 ++++++++++++++++++++++++++++++++++++++++++++-
 include/linux/pm_domain.h   | 11 +++++-
 2 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index a1f2aff..3aecac3 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -1253,6 +1253,90 @@ out:
 }
 EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
 
+static const struct of_device_id arm_idle_state_match[] = {
+	{ .compatible = "arm,idle-state", },
+	{ }
+};
+
+static int genpd_of_get_power_state(struct genpd_power_state *genpd_state,
+				    struct device_node *state_node)
+{
+	int err = 0;
+	u32 latency;
+	u32 residency;
+	u32 entry_latency, exit_latency;
+	const struct of_device_id *match_id;
+
+	match_id = of_match_node(arm_idle_state_match, state_node);
+	if (!match_id)
+		return -EINVAL;
+
+	err = of_property_read_u32(state_node, "entry-latency-us",
+				   &entry_latency);
+	if (err) {
+		pr_debug(" * %s missing entry-latency-us property\n",
+			 state_node->full_name);
+		return -EINVAL;
+	}
+
+	err = of_property_read_u32(state_node, "exit-latency-us",
+				   &exit_latency);
+	if (err) {
+		pr_debug(" * %s missing exit-latency-us property\n",
+			 state_node->full_name);
+		return -EINVAL;
+	}
+
+	err = of_property_read_u32(state_node, "min-residency-us", &residency);
+	if (!err)
+		genpd_state->residency_ns = 1000 * residency;
+
+	latency = entry_latency + exit_latency;
+	genpd_state->power_on_latency_ns = 1000 * latency;
+	genpd_state->power_off_latency_ns = 1000 * entry_latency;
+	genpd_state->of_node = state_node;
+
+	return 0;
+}
+
+int pm_genpd_of_parse_power_states(struct generic_pm_domain *genpd)
+{
+	struct device_node *np;
+	int i, err = 0;
+
+	for (i = 0; i < GENPD_MAX_NUM_STATES; i++) {
+		np = of_parse_phandle(genpd->of_node, "domain-idle-states", i);
+		if (!np)
+			break;
+
+		err = genpd_of_get_power_state(&genpd->states[i], np);
+		if (err) {
+			pr_err
+			    ("Parsing idle state node %s failed with err %d\n",
+			     np->full_name, err);
+			err = -EINVAL;
+			of_node_put(np);
+			break;
+		}
+		of_node_put(np);
+	}
+
+	if (err)
+		return err;
+
+	genpd->state_count = i;
+	return 0;
+}
+EXPORT_SYMBOL(pm_genpd_of_parse_power_states);
+
+static int genpd_of_parse(struct generic_pm_domain *genpd)
+{
+	if (!genpd->of_node || (genpd->state_count > 0))
+		return 0;
+
+	return pm_genpd_of_parse_power_states(genpd);
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1262,8 +1346,10 @@ EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
  * Returns 0 on successful initialization, else a negative error code.
  */
 int pm_genpd_init(struct generic_pm_domain *genpd,
-		  struct dev_power_governor *gov, bool is_off)
+		   struct dev_power_governor *gov, bool is_off)
 {
+	int ret;
+
 	if (IS_ERR_OR_NULL(genpd))
 		return -EINVAL;
 
@@ -1306,6 +1392,10 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 		genpd->dev_ops.start = pm_clk_resume;
 	}
 
+	ret = genpd_of_parse(genpd);
+	if (ret)
+		return ret;
+
 	if (genpd->state_idx >= GENPD_MAX_NUM_STATES) {
 		pr_warn("Initial state index out of bounds.\n");
 		genpd->state_idx = GENPD_MAX_NUM_STATES - 1;
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index 31fec85..c5d14b9 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -40,6 +40,9 @@ struct gpd_dev_ops {
 struct genpd_power_state {
 	s64 power_off_latency_ns;
 	s64 power_on_latency_ns;
+	s64 residency_ns;
+	u32 param;
+	struct device_node *of_node;
 };
 
 struct generic_pm_domain {
@@ -51,6 +54,7 @@ struct generic_pm_domain {
 	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
+	struct device_node *of_node;	/* Device node of the PM domain */
 	const char *name;
 	atomic_t sd_count;	/* Number of subdomains with power "on" */
 	enum gpd_status status;	/* Current state of the domain */
@@ -129,7 +133,7 @@ extern int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 				     struct generic_pm_domain *target);
 extern int pm_genpd_init(struct generic_pm_domain *genpd,
 			 struct dev_power_governor *gov, bool is_off);
-
+extern int pm_genpd_of_parse_power_states(struct generic_pm_domain *genpd);
 extern struct dev_power_governor simple_qos_governor;
 extern struct dev_power_governor pm_domain_always_on_gov;
 #else
@@ -168,6 +172,11 @@ static inline int pm_genpd_init(struct generic_pm_domain *genpd,
 {
 	return -ENOSYS;
 }
+static inline int pm_genpd_of_parse_power_states(
+				struct generic_pm_domain *genpd)
+{
+	return -ENODEV;
+}
 #endif
 
 static inline int pm_genpd_add_device(struct generic_pm_domain *genpd,
-- 
2.7.4

* [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Axel Haslam,
	devicetree, Marc Titinger, Lina Iyer

From: Axel Haslam <ahaslam+renesas@baylibre.com>

Update DT bindings to describe idle states of PM domains.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Lina: Added state properties, removed state names, wakeup-latency,
added of_pm_genpd_init() API, pruned commit text]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
[Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
---
 .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
index 025b5e7..4960486 100644
--- a/Documentation/devicetree/bindings/power/power_domain.txt
+++ b/Documentation/devicetree/bindings/power/power_domain.txt
@@ -29,6 +29,10 @@ Optional properties:
    specified by this binding. More details about power domain specifier are
    available in the next section.
 
+- domain-idle-states : A list of phandles to idle-state nodes that shall be
+                used as the domain's power states. The idle state definitions
+                are compatible with arm,idle-state specified in [1].
+
 Example:
 
 	power: power-controller@12340000 {
@@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
 Domains created by the 'child' power controller are subdomains of '0' power
 domain provided by the 'parent' power controller.
 
+Example 3: ARM v7 style CPU PM domains (Linux domain controller)
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a7", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a7_pd>;
+		};
+
+		CPU1: cpu@1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a15", "arm,armv7";
+			reg = <0x0>;
+			power-domains = <&a15_pd>;
+		};
+	};
+
+	pm-domains {
+		a15_pd: a15_pd {
+			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a15";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>;
+		};
+
+		a7_pd: a7_pd {
+			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
+			compatible = "arm,cortex-a7";
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
+		};
+
+		CLUSTER_SLEEP_0: state0 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <1000>;
+			exit-latency-us = <2000>;
+			min-residency-us = <10000>;
+		};
+
+		CLUSTER_SLEEP_1: state1 {
+			compatible = "arm,idle-state";
+			entry-latency-us = <5000>;
+			exit-latency-us = <5000>;
+			min-residency-us = <100000>;
+		};
+	};
+
 ==PM domain consumers==
 
 Required properties:
@@ -76,3 +131,5 @@ Example:
 The node above defines a typical PM domain consumer device, which is located
 inside a PM domain with index 0 of a power controller represented by a node
 with the label "power".
+
+[1]. Documentation/devicetree/bindings/arm/idle-states.txt
-- 
2.7.4

* [PATCH v5 03/16] PM / Domains: Abstract genpd locking
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Abstract genpd lock/unlock calls, in preparation for domain specific
locks added in the following patches.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased as additional mutex_lock|unlock has been added]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/base/power/domain.c | 113 ++++++++++++++++++++++++++++++--------------
 include/linux/pm_domain.h   |   5 +-
 2 files changed, 81 insertions(+), 37 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 3aecac3..ce1dbfdd 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -39,6 +39,46 @@
 static LIST_HEAD(gpd_list);
 static DEFINE_MUTEX(gpd_list_lock);
 
+struct genpd_lock_fns {
+	void (*lock)(struct generic_pm_domain *genpd);
+	void (*lock_nested)(struct generic_pm_domain *genpd, int depth);
+	int (*lock_interruptible)(struct generic_pm_domain *genpd);
+	void (*unlock)(struct generic_pm_domain *genpd);
+};
+
+static void genpd_lock_mtx(struct generic_pm_domain *genpd)
+{
+	mutex_lock(&genpd->mlock);
+}
+
+static void genpd_lock_nested_mtx(struct generic_pm_domain *genpd,
+					int depth)
+{
+	mutex_lock_nested(&genpd->mlock, depth);
+}
+
+static int genpd_lock_interruptible_mtx(struct generic_pm_domain *genpd)
+{
+	return mutex_lock_interruptible(&genpd->mlock);
+}
+
+static void genpd_unlock_mtx(struct generic_pm_domain *genpd)
+{
+	return mutex_unlock(&genpd->mlock);
+}
+
+static const struct genpd_lock_fns genpd_mtx_fns  = {
+	.lock = genpd_lock_mtx,
+	.lock_nested = genpd_lock_nested_mtx,
+	.lock_interruptible = genpd_lock_interruptible_mtx,
+	.unlock = genpd_unlock_mtx,
+};
+
+#define genpd_lock(p)			p->lock_fns->lock(p)
+#define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
+#define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
+#define genpd_unlock(p)			p->lock_fns->unlock(p)
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -200,9 +240,9 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
 
 		genpd_sd_counter_inc(master);
 
-		mutex_lock_nested(&master->lock, depth + 1);
+		genpd_lock_nested(master, depth + 1);
 		ret = genpd_poweron(master, depth + 1);
-		mutex_unlock(&master->lock);
+		genpd_unlock(master);
 
 		if (ret) {
 			genpd_sd_counter_dec(master);
@@ -255,9 +295,9 @@ static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
 		spin_unlock_irq(&dev->power.lock);
 
 		if (!IS_ERR(genpd)) {
-			mutex_lock(&genpd->lock);
+			genpd_lock(genpd);
 			genpd->max_off_time_changed = true;
-			mutex_unlock(&genpd->lock);
+			genpd_unlock(genpd);
 		}
 
 		dev = dev->parent;
@@ -354,9 +394,9 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 
 	genpd = container_of(work, struct generic_pm_domain, power_off_work);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, true);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -472,9 +512,9 @@ static int genpd_runtime_suspend(struct device *dev)
 	if (dev->power.irq_safe)
 		return 0;
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	genpd_poweroff(genpd, false);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
@@ -509,9 +549,9 @@ static int genpd_runtime_resume(struct device *dev)
 		goto out;
 	}
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 	ret = genpd_poweron(genpd, 0);
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		return ret;
@@ -547,9 +587,9 @@ err_stop:
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
 	if (!dev->power.irq_safe) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 		genpd_poweroff(genpd, 0);
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 	}
 
 	return ret;
@@ -732,20 +772,20 @@ static int pm_genpd_prepare(struct device *dev)
 	if (resume_needed(dev, genpd))
 		pm_runtime_resume(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count++ == 0)
 		genpd->suspended_count = 0;
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	ret = pm_generic_prepare(dev);
 	if (ret) {
-		mutex_lock(&genpd->lock);
+		genpd_lock(genpd);
 
 		genpd->prepared_count--;
 
-		mutex_unlock(&genpd->lock);
+		genpd_unlock(genpd);
 	}
 
 	return ret;
@@ -936,13 +976,13 @@ static void pm_genpd_complete(struct device *dev)
 
 	pm_generic_complete(dev);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	genpd->prepared_count--;
 	if (!genpd->prepared_count)
 		genpd_queue_power_off_work(genpd);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 }
 
 /**
@@ -1077,7 +1117,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1094,7 +1134,7 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	list_add_tail(&gpd_data->base.list_node, &genpd->dev_list);
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	if (ret)
 		genpd_free_dev_data(dev, gpd_data);
@@ -1127,7 +1167,7 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 	gpd_data = to_gpd_data(pdd);
 	dev_pm_qos_remove_notifier(dev, &gpd_data->nb);
 
-	mutex_lock(&genpd->lock);
+	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
 		ret = -EAGAIN;
@@ -1142,14 +1182,14 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd,
 
 	list_del_init(&pdd->list_node);
 
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	genpd_free_dev_data(dev, gpd_data);
 
 	return 0;
 
  out:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 	dev_pm_qos_add_notifier(dev, &gpd_data->nb);
 
 	return ret;
@@ -1175,8 +1215,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	if (!link)
 		return -ENOMEM;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (genpd->status == GPD_STATE_POWER_OFF
 	    &&  subdomain->status != GPD_STATE_POWER_OFF) {
@@ -1199,8 +1239,8 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 		genpd_sd_counter_inc(genpd);
 
  out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 	if (ret)
 		kfree(link);
 	return ret;
@@ -1221,8 +1261,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain))
 		return -EINVAL;
 
-	mutex_lock(&subdomain->lock);
-	mutex_lock_nested(&genpd->lock, SINGLE_DEPTH_NESTING);
+	genpd_lock(subdomain);
+	genpd_lock_nested(genpd, SINGLE_DEPTH_NESTING);
 
 	if (!list_empty(&subdomain->master_links) || subdomain->device_count) {
 		pr_warn("%s: unable to remove subdomain %s\n", genpd->name,
@@ -1246,8 +1286,8 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 	}
 
 out:
-	mutex_unlock(&genpd->lock);
-	mutex_unlock(&subdomain->lock);
+	genpd_unlock(genpd);
+	genpd_unlock(subdomain);
 
 	return ret;
 }
@@ -1356,7 +1396,8 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->lock);
+	mutex_init(&genpd->mlock);
+	genpd->lock_fns = &genpd_mtx_fns;
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1714,9 +1755,9 @@ int genpd_dev_pm_attach(struct device *dev)
 	dev->pm_domain->detach = genpd_dev_pm_detach;
 	dev->pm_domain->sync = genpd_dev_pm_sync;
 
-	mutex_lock(&pd->lock);
+	genpd_lock(pd);
 	ret = genpd_poweron(pd, 0);
-	mutex_unlock(&pd->lock);
+	genpd_unlock(pd);
 out:
 	return ret ? -EPROBE_DEFER : 0;
 }
@@ -1774,7 +1815,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	char state[16];
 	int ret;
 
-	ret = mutex_lock_interruptible(&genpd->lock);
+	ret = genpd_lock_interruptible(genpd);
 	if (ret)
 		return -ERESTARTSYS;
 
@@ -1811,7 +1852,7 @@ static int pm_genpd_summary_one(struct seq_file *s,
 
 	seq_puts(s, "\n");
 exit:
-	mutex_unlock(&genpd->lock);
+	genpd_unlock(genpd);
 
 	return 0;
 }
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index c5d14b9..d37bf11 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -45,13 +45,14 @@ struct genpd_power_state {
 	struct device_node *of_node;
 };
 
+struct genpd_lock_fns;
+
 struct generic_pm_domain {
 	struct dev_pm_domain domain;	/* PM domain operations */
 	struct list_head gpd_list_node;	/* Node in the global PM domains list */
 	struct list_head master_links;	/* Links with PM domain as a master */
 	struct list_head slave_links;	/* Links with PM domain as a slave */
 	struct list_head dev_list;	/* List of devices */
-	struct mutex lock;
 	struct dev_power_governor *gov;
 	struct work_struct power_off_work;
 	struct device_node *of_node;	/* Device node of the PM domain */
@@ -75,6 +76,8 @@ struct generic_pm_domain {
 	struct genpd_power_state states[GENPD_MAX_NUM_STATES];
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
+	const struct genpd_lock_fns *lock_fns;
+	struct mutex mlock;
 
 };
 
-- 
2.7.4

* [PATCH v5 04/16] PM / Domains: Support IRQ safe PM domains
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Generic Power Domains currently support turning on/off only in process
context. This prevents the use of PM domains for domains that could be
powered on/off in a context where IRQs are disabled. Many such domains
exist today and, because of this limitation, do not get powered off
when the IRQ safe devices in them are powered off.

However, not all domains can operate in IRQ safe contexts. Genpd
therefore has to support both cases, where the domain may or may not
operate in an IRQ safe context. Configuring genpd to use an appropriate
lock for each domain allows domains that have IRQ safe devices to
runtime suspend and resume in atomic context.

To achieve domain specific locking, set GENPD_FLAG_IRQ_SAFE in the
domain's ->flags while defining the domain. This indicates that genpd
should use a spinlock instead of a mutex for locking the domain. Locking
is abstracted through the genpd_lock() and genpd_unlock() helpers, which
use the flag to determine the appropriate lock to be used for that
domain.

Domains that have lower latency to suspend and resume and can operate
with IRQs disabled may now be able to save power, when the component
devices and sub-domains are idle at runtime.

The restriction this imposes on the domain hierarchy is that non-IRQ
safe domains may not have IRQ-safe subdomains, but IRQ safe domains may
have IRQ safe and non-IRQ safe subdomains and devices.
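
For illustration only (this sketch is not part of the patch; the domain,
callback and function names are hypothetical), an IRQ safe domain might
be declared by setting the flag before registering it, after which genpd
picks spinlock based locking for that domain:

	#include <linux/pm_domain.h>

	/* May be invoked with IRQs disabled; must not sleep. */
	static int my_pd_power_off(struct generic_pm_domain *genpd)
	{
		return 0;
	}

	static struct generic_pm_domain my_irq_safe_pd = {
		.name		= "my_irq_safe_pd",	/* hypothetical */
		.flags		= GENPD_FLAG_IRQ_SAFE,	/* spinlock locking */
		.power_off	= my_pd_power_off,
	};

	static int my_pd_setup(void)
	{
		return pm_genpd_init(&my_irq_safe_pd, NULL, false);
	}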

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased and solved a conflict]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/base/power/domain.c | 107 +++++++++++++++++++++++++++++++++++++++-----
 include/linux/pm_domain.h   |  10 ++++-
 2 files changed, 106 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index ce1dbfdd..4fc5688 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -74,11 +74,61 @@ static const struct genpd_lock_fns genpd_mtx_fns  = {
 	.unlock = genpd_unlock_mtx,
 };
 
+static void genpd_lock_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+}
+
+static void genpd_lock_nested_spin(struct generic_pm_domain *genpd,
+					int depth)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave_nested(&genpd->slock, flags, depth);
+	genpd->lock_flags = flags;
+}
+
+static int genpd_lock_interruptible_spin(struct generic_pm_domain *genpd)
+	__acquires(&genpd->slock)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&genpd->slock, flags);
+	genpd->lock_flags = flags;
+	return 0;
+}
+
+static void genpd_unlock_spin(struct generic_pm_domain *genpd)
+	__releases(&genpd->slock)
+{
+	spin_unlock_irqrestore(&genpd->slock, genpd->lock_flags);
+}
+
+static const struct genpd_lock_fns genpd_spin_fns = {
+	.lock = genpd_lock_spin,
+	.lock_nested = genpd_lock_nested_spin,
+	.lock_interruptible = genpd_lock_interruptible_spin,
+	.unlock = genpd_unlock_spin,
+};
+
 #define genpd_lock(p)			p->lock_fns->lock(p)
 #define genpd_lock_nested(p, d)		p->lock_fns->lock_nested(p, d)
 #define genpd_lock_interruptible(p)	p->lock_fns->lock_interruptible(p)
 #define genpd_unlock(p)			p->lock_fns->unlock(p)
 
+#define genpd_is_irq_safe(genpd)	(genpd->flags & GENPD_FLAG_IRQ_SAFE)
+
+static inline bool irq_safe_dev_in_no_sleep_domain(struct device *dev,
+		struct generic_pm_domain *genpd)
+{
+	return pm_runtime_is_irq_safe(dev) && !genpd_is_irq_safe(genpd);
+}
+
 /*
  * Get the generic PM domain for a particular struct device.
  * This validates the struct device pointer, the PM domain pointer,
@@ -343,7 +393,12 @@ static int genpd_poweroff(struct generic_pm_domain *genpd, bool is_async)
 		if (stat > PM_QOS_FLAGS_NONE)
 			return -EBUSY;
 
-		if (!pm_runtime_suspended(pdd->dev) || pdd->dev->power.irq_safe)
+		/*
+		 * Do not allow PM domain to be powered off, when an IRQ safe
+		 * device is part of a non-IRQ safe domain.
+		 */
+		if (!pm_runtime_suspended(pdd->dev) ||
+			irq_safe_dev_in_no_sleep_domain(pdd->dev, genpd))
 			not_suspended++;
 	}
 
@@ -506,10 +561,10 @@ static int genpd_runtime_suspend(struct device *dev)
 	}
 
 	/*
-	 * If power.irq_safe is set, this routine will be run with interrupts
-	 * off, so it can't use mutexes.
+	 * If power.irq_safe is set, this routine may be run with
+	 * IRQs disabled, so suspend only if the PM domain also is irq_safe.
 	 */
-	if (dev->power.irq_safe)
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
 		return 0;
 
 	genpd_lock(genpd);
@@ -543,8 +598,11 @@ static int genpd_runtime_resume(struct device *dev)
 	if (IS_ERR(genpd))
 		return -EINVAL;
 
-	/* If power.irq_safe, the PM domain is never powered off. */
-	if (dev->power.irq_safe) {
+	/*
+	 * As we don't power off a non IRQ safe domain, which holds
+	 * an IRQ safe device, we don't need to restore power to it.
+	 */
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd)) {
 		timed = false;
 		goto out;
 	}
@@ -586,7 +644,8 @@ static int genpd_runtime_resume(struct device *dev)
 err_stop:
 	genpd_stop_dev(genpd, dev);
 err_poweroff:
-	if (!dev->power.irq_safe) {
+	if (!dev->power.irq_safe ||
+		(dev->power.irq_safe && genpd_is_irq_safe(genpd))) {
 		genpd_lock(genpd);
 		genpd_poweroff(genpd, 0);
 		genpd_unlock(genpd);
@@ -1117,6 +1176,11 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev,
 	if (IS_ERR(gpd_data))
 		return PTR_ERR(gpd_data);
 
+	/* Check if we are adding an IRQ safe device to non-IRQ safe domain */
+	if (irq_safe_dev_in_no_sleep_domain(dev, genpd))
+		dev_warn_once(dev, "PM domain %s will not be powered off\n",
+				genpd->name);
+
 	genpd_lock(genpd);
 
 	if (genpd->prepared_count > 0) {
@@ -1211,6 +1275,17 @@ int pm_genpd_add_subdomain(struct generic_pm_domain *genpd,
 	    || genpd == subdomain)
 		return -EINVAL;
 
+	/*
+	 * If the domain can be powered on/off in an IRQ safe
+	 * context, ensure that the subdomain can also be
+	 * powered on/off in that context.
+	 */
+	if (!genpd_is_irq_safe(genpd) && genpd_is_irq_safe(subdomain)) {
+		WARN("Parent %s of subdomain %s must be IRQ safe\n",
+				genpd->name, subdomain->name);
+		return -EINVAL;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link)
 		return -ENOMEM;
@@ -1377,6 +1452,17 @@ static int genpd_of_parse(struct generic_pm_domain *genpd)
 	return pm_genpd_of_parse_power_states(genpd);
 }
 
+static void genpd_lock_init(struct generic_pm_domain *genpd)
+{
+	if (genpd->flags & GENPD_FLAG_IRQ_SAFE) {
+		spin_lock_init(&genpd->slock);
+		genpd->lock_fns = &genpd_spin_fns;
+	} else {
+		mutex_init(&genpd->mlock);
+		genpd->lock_fns = &genpd_mtx_fns;
+	}
+}
+
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1396,8 +1482,7 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
 	INIT_LIST_HEAD(&genpd->master_links);
 	INIT_LIST_HEAD(&genpd->slave_links);
 	INIT_LIST_HEAD(&genpd->dev_list);
-	mutex_init(&genpd->mlock);
-	genpd->lock_fns = &genpd_mtx_fns;
+	genpd_lock_init(genpd);
 	genpd->gov = gov;
 	INIT_WORK(&genpd->power_off_work, genpd_power_off_work_fn);
 	atomic_set(&genpd->sd_count, 0);
@@ -1841,7 +1926,9 @@ static int pm_genpd_summary_one(struct seq_file *s,
 	}
 
 	list_for_each_entry(pm_data, &genpd->dev_list, list_node) {
-		kobj_path = kobject_get_path(&pm_data->dev->kobj, GFP_KERNEL);
+		kobj_path = kobject_get_path(&pm_data->dev->kobj,
+				genpd_is_irq_safe(genpd) ?
+				GFP_ATOMIC : GFP_KERNEL);
 		if (kobj_path == NULL)
 			continue;
 
diff --git a/include/linux/pm_domain.h b/include/linux/pm_domain.h
index d37bf11..688dc57 100644
--- a/include/linux/pm_domain.h
+++ b/include/linux/pm_domain.h
@@ -15,9 +15,11 @@
 #include <linux/err.h>
 #include <linux/of.h>
 #include <linux/notifier.h>
+#include <linux/spinlock.h>
 
 /* Defines used for the flags field in the struct generic_pm_domain */
 #define GENPD_FLAG_PM_CLK	(1U << 0) /* PM domain uses PM clk */
+#define GENPD_FLAG_IRQ_SAFE	(1U << 1) /* PM domain operates in atomic */
 
 #define GENPD_MAX_NUM_STATES	8 /* Number of possible low power states */
 
@@ -77,7 +79,13 @@ struct generic_pm_domain {
 	unsigned int state_count; /* number of states */
 	unsigned int state_idx; /* state that genpd will go to when off */
 	const struct genpd_lock_fns *lock_fns;
-	struct mutex mlock;
+	union {
+		struct mutex mlock;
+		struct {
+			spinlock_t slock;
+			unsigned long lock_flags;
+		};
+	};
 
 };
 
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 05/16] PM / doc: Update device documentation for devices in IRQ safe PM domains
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Update documentation to reflect the changes made to support IRQ safe PM
domains.
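
For illustration only, a minimal sketch of the rules documented below;
my_genpd and my_dev are hypothetical, and GENPD_FLAG_IRQ_SAFE comes from
the previous patch in this series:

	/* hypothetical platform code: domain may be powered on/off with IRQs off */
	static struct generic_pm_domain my_genpd = {
		.name  = "my_irq_safe_pd",
		.flags = GENPD_FLAG_IRQ_SAFE,
	};

	/* in the platform's init code: */
	pm_genpd_init(&my_genpd, &simple_qos_governor, false);

	/* hypothetical driver code: runtime PM callbacks run with IRQs off */
	pm_runtime_irq_safe(my_dev);
	pm_runtime_enable(my_dev);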

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/power/devices.txt | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 8ba6625..a622136 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -607,7 +607,17 @@ individually.  Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource.  Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on.  A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain.
+
+Devices, by default, operate in process context. If a device can be handled
+with IRQs disabled, this has to be indicated explicitly by calling
+pm_runtime_irq_safe() for that device. Power domains, by default, also operate
+in process context but could contain devices that are IRQ safe. Such power
+domains cannot be powered on/off during runtime PM. On the other hand, PM
+domains marked IRQ safe (GENPD_FLAG_IRQ_SAFE) that contain IRQ safe devices
+may be powered off when all the devices in the domain are idle. An IRQ safe
+domain may only be attached as a subdomain to another IRQ safe domain.
 
 Support for power domains is provided through the pm_domain field of struct
 device.  This field is a pointer to an object of type struct dev_pm_domain,
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 06/16] drivers: cpu: Setup CPU devices to do runtime PM
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

CPU devices, just like any other devices, can do runtime PM. However,
CPU devices may do runtime PM only when IRQs are disabled, so the
devices must be marked as IRQ safe.
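
A sketch of the intended usage, not part of this patch (the actual
calls are added later in this series); dev here is the per-CPU device
registered below:

	/* from a CPU's idle entry path, with IRQs disabled */
	struct device *dev = get_cpu_device(smp_processor_id());

	if (dev)
		pm_runtime_put_sync_suspend(dev);	/* OK: dev is IRQ safe */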

Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/cpu.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
index 691eeea..c1e5e77 100644
--- a/drivers/base/cpu.c
+++ b/drivers/base/cpu.c
@@ -17,6 +17,7 @@
 #include <linux/of.h>
 #include <linux/cpufeature.h>
 #include <linux/tick.h>
+#include <linux/pm_runtime.h>
 
 #include "base.h"
 
@@ -344,6 +345,21 @@ static int cpu_uevent(struct device *dev, struct kobj_uevent_env *env)
 }
 #endif
 
+#ifdef CONFIG_PM
+static void cpu_runtime_pm_init(struct device *dev)
+{
+	pm_runtime_irq_safe(dev);
+	pm_runtime_enable(dev);
+	if (cpu_online(dev->id)) {
+		pm_runtime_get_noresume(dev);
+		pm_runtime_set_active(dev);
+	}
+}
+#else
+static void cpu_runtime_pm_init(struct device *dev)
+{ }
+#endif
+
 /*
  * register_cpu - Setup a sysfs device for a CPU.
  * @cpu - cpu->hotpluggable field set to 1 will generate a control file in
@@ -376,6 +392,8 @@ int register_cpu(struct cpu *cpu, int num)
 	if (!error)
 		register_cpu_under_node(num, cpu_to_node(num));
 
+	cpu_runtime_pm_init(&cpu->dev);
+
 	return error;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 07/16] kernel/cpu_pm: Add runtime PM support for CPUs
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Notify runtime PM when the CPU is going to be powered off in the idle
state. This allows for runtime PM suspend/resume of the CPU as well as
its PM domain.
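
For context, a rough sketch of how a cpuidle driver's CPU-off path ends
up exercising these hooks (the firmware call is a placeholder):

	cpu_pm_enter();	/* notifiers run, then the CPU device is runtime suspended */
	/* ... platform/PSCI call that actually powers down the CPU ... */
	cpu_pm_exit();	/* the CPU device is runtime resumed, then notifiers run */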

Cc: Kevin Hilman <khilman@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 kernel/cpu_pm.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/kernel/cpu_pm.c b/kernel/cpu_pm.c
index 009cc9a..657ce06 100644
--- a/kernel/cpu_pm.c
+++ b/kernel/cpu_pm.c
@@ -16,9 +16,11 @@
  */
 
 #include <linux/kernel.h>
+#include <linux/cpu.h>
 #include <linux/cpu_pm.h>
 #include <linux/module.h>
 #include <linux/notifier.h>
+#include <linux/pm_runtime.h>
 #include <linux/spinlock.h>
 #include <linux/syscore_ops.h>
 
@@ -99,6 +101,7 @@ int cpu_pm_enter(void)
 {
 	int nr_calls;
 	int ret = 0;
+	struct device *dev = get_cpu_device(smp_processor_id());
 
 	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
@@ -110,6 +113,10 @@ int cpu_pm_enter(void)
 		cpu_pm_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
 	read_unlock(&cpu_pm_notifier_lock);
 
+	/* Notify Runtime PM that we are suspending the CPU */
+	if (!ret && dev)
+		RCU_NONIDLE(pm_runtime_put_sync_suspend(dev));
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(cpu_pm_enter);
@@ -129,6 +136,11 @@ EXPORT_SYMBOL_GPL(cpu_pm_enter);
 int cpu_pm_exit(void)
 {
 	int ret;
+	struct device *dev = get_cpu_device(smp_processor_id());
+
+	/* Notify Runtime PM that we are resuming the CPU */
+	if (dev)
+		RCU_NONIDLE(pm_runtime_get_sync(dev));
 
 	read_lock(&cpu_pm_notifier_lock);
 	ret = cpu_pm_notify(CPU_PM_EXIT, -1, NULL);
@@ -200,6 +212,39 @@ int cpu_cluster_pm_exit(void)
 }
 EXPORT_SYMBOL_GPL(cpu_cluster_pm_exit);
 
+#ifdef CONFIG_HOTPLUG_CPU
+static int cpu_pm_cpu_hotplug(struct notifier_block *nb,
+			unsigned long action, void *data)
+{
+	struct device *dev = get_cpu_device(smp_processor_id());
+
+	if (!dev)
+		return NOTIFY_OK;
+
+	/* Execute CPU runtime PM on that CPU */
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DYING:
+		pm_runtime_put_sync_suspend(dev);
+		break;
+	case CPU_STARTING:
+		pm_runtime_get_sync(dev);
+		break;
+	default:
+		break;
+	}
+
+	return NOTIFY_OK;
+}
+
+static int __init cpu_pm_hotplug_init(void)
+{
+	/* Register for hotplug notifications for runtime PM */
+	hotcpu_notifier(cpu_pm_cpu_hotplug, 0);
+	return 0;
+}
+device_initcall(cpu_pm_hotplug_init);
+#endif
+
 #ifdef CONFIG_PM
 static int cpu_pm_suspend(void)
 {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 08/16] PM / cpu_domains: Setup PM domains for CPUs/clusters
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Define and add Generic PM domains (genpd) for CPU clusters. Many new
SoCs group CPUs as clusters. Clusters share common resources like power
rails, caches, VFP, Coresight etc. When all CPUs in the cluster are
idle, these shared resources may also be put in their idle state.

CPUs may be associated with their domain providers. The domains in
turn may be associated with their own providers. This is a clean way
to model the cluster hierarchy of architectures such as ARM's
big.LITTLE.

Platform drivers may initialize generic PM domains, set up the CPU PM
domains for those genpds and attach CPUs to the domains. In the
following patches, the CPUs are hooked up to the runtime PM framework,
which helps power down the domain when all the CPUs in the domain are
idle.
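
A hedged usage sketch of the helpers added here; everything prefixed
with my_ is hypothetical platform code, and the genpd (with its idle
states) is assumed to have been set up by the platform beforehand:

	static int my_cluster_power_on(void)
	{
		/* e.g. restore cluster context via firmware */
		return 0;
	}

	static int my_cluster_power_off(u32 state_idx, u32 param,
					const struct cpumask *cpus)
	{
		/* e.g. pass the chosen cluster state to firmware */
		return 0;
	}

	static const struct cpu_pd_ops my_pd_ops = {
		.power_on  = my_cluster_power_on,
		.power_off = my_cluster_power_off,
	};

	/* in the platform driver: */
	cluster_pd = cpu_pd_init(my_cluster_genpd, &my_pd_ops);
	if (!IS_ERR(cluster_pd))
		for_each_possible_cpu(cpu)
			cpu_pd_attach_cpu(cluster_pd, cpu);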

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Suggested-by: Kevin Hilman <khilman@kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/Makefile      |   2 +-
 drivers/base/power/cpu_domains.c | 191 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  49 ++++++++++
 3 files changed, 241 insertions(+), 1 deletion(-)
 create mode 100644 drivers/base/power/cpu_domains.c
 create mode 100644 include/linux/cpu_domains.h

diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 5998c53..ee383f1 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -2,7 +2,7 @@ obj-$(CONFIG_PM)	+= sysfs.o generic_ops.o common.o qos.o runtime.o wakeirq.o
 obj-$(CONFIG_PM_SLEEP)	+= main.o wakeup.o
 obj-$(CONFIG_PM_TRACE_RTC)	+= trace.o
 obj-$(CONFIG_PM_OPP)	+= opp/
-obj-$(CONFIG_PM_GENERIC_DOMAINS)	+=  domain.o domain_governor.o
+obj-$(CONFIG_PM_GENERIC_DOMAINS)	+= domain.o domain_governor.o cpu_domains.o
 obj-$(CONFIG_HAVE_CLK)	+= clock_ops.o
 
 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
new file mode 100644
index 0000000..73e493b
--- /dev/null
+++ b/drivers/base/power/cpu_domains.c
@@ -0,0 +1,191 @@
+/*
+ * drivers/base/power/cpu_domains.c - Helper functions to create CPU PM domains.
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/cpu_domains.h>
+#include <linux/cpu_pm.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/pm_domain.h>
+#include <linux/rculist.h>
+#include <linux/rcupdate.h>
+#include <linux/slab.h>
+
+#define CPU_PD_NAME_MAX 36
+
+struct cpu_pm_domain {
+	struct list_head link;
+	struct cpu_pd_ops ops;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *parent;
+	cpumask_var_t cpus;
+};
+
+/* List of CPU PM domains we care about */
+static LIST_HEAD(of_cpu_pd_list);
+static DEFINE_MUTEX(cpu_pd_list_lock);
+
+static inline struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
+{
+	struct cpu_pm_domain *pd;
+	struct cpu_pm_domain *res = NULL;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(pd, &of_cpu_pd_list, link)
+		if (pd->genpd == d) {
+			res = pd;
+			break;
+		}
+	rcu_read_unlock();
+
+	return res;
+}
+
+static int cpu_pd_power_on(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_on ? pd->ops.power_on() : 0;
+}
+
+static int cpu_pd_power_off(struct generic_pm_domain *genpd)
+{
+	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
+
+	return pd->ops.power_off ? pd->ops.power_off(genpd->state_idx,
+					genpd->states[genpd->state_idx].param,
+					pd->cpus) : 0;
+}
+
+/**
+ * cpu_pd_attach_domain:  Attach a child CPU PM to its parent
+ *
+ * @parent: The parent generic PM domain
+ * @child: The child generic PM domain
+ *
+ * Generally, the child PM domain is the one to which CPUs are attached.
+ */
+int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child)
+{
+	struct cpu_pm_domain *cpu_pd, *parent_cpu_pd;
+	int ret;
+
+	ret = pm_genpd_add_subdomain(parent, child);
+	if (ret) {
+		pr_err("%s: Unable to add sub-domain (%s) to %s, err=%d\n",
+				__func__, child->name, parent->name, ret);
+		return ret;
+	}
+
+	cpu_pd = to_cpu_pd(child);
+	parent_cpu_pd = to_cpu_pd(parent);
+
+	if (cpu_pd && parent_cpu_pd)
+		cpu_pd->parent = parent_cpu_pd;
+
+	return ret;
+}
+EXPORT_SYMBOL(cpu_pd_attach_domain);
+
+/**
+ * cpu_pd_attach_cpu:  Attach a CPU to its CPU PM domain.
+ *
+ * @genpd: The parent generic PM domain
+ * @cpu: The CPU number
+ */
+int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
+{
+	int ret;
+	struct device *cpu_dev;
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+
+	cpu_dev = get_cpu_device(cpu);
+	if (!cpu_dev) {
+		pr_warn("%s: Unable to get device for CPU%d\n",
+				__func__, cpu);
+		return -ENODEV;
+	}
+
+	ret = genpd_dev_pm_attach(cpu_dev);
+	if (ret)
+		dev_warn(cpu_dev,
+			"%s: Unable to attach to power-domain: %d\n",
+			__func__, ret);
+	else
+		dev_dbg(cpu_dev, "Attached to domain\n");
+
+	while (!ret && cpu_pd) {
+		cpumask_set_cpu(cpu, cpu_pd->cpus);
+		cpu_pd = cpu_pd->parent;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(cpu_pd_attach_cpu);
+
+/**
+ * cpu_pd_init: Initialize a CPU PM domain for a genpd
+ *
+ * @genpd: The initialized generic PM domain object.
+ * @ops: The power_on/power_off ops for the domain controller.
+ *
+ * Initialize a CPU PM domain based on a generic PM domain. The platform driver
+ * is expected to setup the genpd object and the states associated with the
+ * generic PM domain, before calling this function.
+ */
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops)
+{
+	int ret = -ENOMEM;
+	struct cpu_pm_domain *pd;
+
+	if (IS_ERR_OR_NULL(genpd))
+		return ERR_PTR(-EINVAL);
+
+	pd = kzalloc(sizeof(*pd), GFP_KERNEL);
+	if (!pd)
+		goto fail;
+
+	if (!zalloc_cpumask_var(&pd->cpus, GFP_KERNEL))
+		goto fail;
+
+	genpd->power_off = cpu_pd_power_off;
+	genpd->power_on = cpu_pd_power_on;
+	genpd->flags |= GENPD_FLAG_IRQ_SAFE;
+	pd->genpd = genpd;
+	pd->ops.power_on = ops->power_on;
+	pd->ops.power_off = ops->power_off;
+
+	INIT_LIST_HEAD_RCU(&pd->link);
+	mutex_lock(&cpu_pd_list_lock);
+	list_add_rcu(&pd->link, &of_cpu_pd_list);
+	mutex_unlock(&cpu_pd_list_lock);
+
+	ret = pm_genpd_init(genpd, &simple_qos_governor, false);
+	if (ret) {
+		pr_err("Unable to initialize domain %s\n", genpd->name);
+		goto fail;
+	}
+
+	pr_debug("adding %s as CPU PM domain\n", pd->genpd->name);
+
+	return genpd;
+fail:
+	kfree(genpd->name);
+	kfree(genpd);
+	if (pd)
+		kfree(pd->cpus);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL(cpu_pd_init);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
new file mode 100644
index 0000000..3a0a027
--- /dev/null
+++ b/include/linux/cpu_domains.h
@@ -0,0 +1,49 @@
+/*
+ * include/linux/cpu_domains.h
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CPU_DOMAINS_H__
+#define __CPU_DOMAINS_H__
+
+#include <linux/types.h>
+
+struct cpumask;
+
+struct cpu_pd_ops {
+	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
+	int (*power_on)(void);
+};
+
+#ifdef CONFIG_PM_GENERIC_DOMAINS
+
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops);
+
+int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child);
+
+int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu);
+
+#else
+
+static inline
+struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
+				const struct cpu_pd_ops *ops)
+{ return ERR_PTR(-ENODEV); }
+
+static inline int cpu_pd_attach_domain(struct generic_pm_domain *parent,
+				struct generic_pm_domain *child)
+{ return -ENODEV; }
+
+static inline int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
+{ return -ENODEV; }
+
+#endif /* CONFIG_PM_GENERIC_DOMAINS */
+
+#endif /* __CPU_DOMAINS_H__ */
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 09/16] PM / cpu_domains: Initialize CPU PM domains from DT
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Add helper functions to parse the DT, initialize the CPU PM domains
and attach CPUs to their respective domains using the information
provided in the DT.

For each CPU in the DT, we identify the domain provider, initialize
and register the PM domain if it isn't already registered, and attach
all the CPU devices to the domain. Usually, when there are multiple
clusters of CPUs, there is a top-level coherency domain that is
dependent on these individual domains. All domains thus created are
marked IRQ safe automatically and therefore may be powered down when
the CPUs in the domain are powered down by cpuidle.
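
A minimal sketch of how a platform driver might drive the DT setup
added here; my_parse_state is a hypothetical callback, the property
name is only an example (per-SoC bindings may differ), and the
power_on/power_off ops are as in the sketch for the previous patch:

	static int my_parse_state(struct device_node *np, u32 *param)
	{
		return of_property_read_u32(np, "arm,psci-suspend-param", param);
	}

	static const struct cpu_pd_ops my_pd_ops = {
		.populate_state_data = my_parse_state,
		.power_on  = my_cluster_power_on,
		.power_off = my_cluster_power_off,
	};

	/* builds the domain hierarchy from DT and attaches all possible CPUs */
	err = of_setup_cpu_pd(&my_pd_ops);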

Cc: Kevin Hilman <khilman@kernel.org>
Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 190 +++++++++++++++++++++++++++++++++++++++
 include/linux/cpu_domains.h      |  18 ++++
 2 files changed, 208 insertions(+)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 73e493b..8bf61e2 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -15,6 +15,7 @@
 #include <linux/device.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
+#include <linux/of.h>
 #include <linux/pm_domain.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
@@ -189,3 +190,192 @@ fail:
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL(cpu_pd_init);
+
+static struct generic_pm_domain *alloc_genpd(const char *name)
+{
+	struct generic_pm_domain *genpd;
+
+	genpd = kzalloc(sizeof(*genpd), GFP_KERNEL);
+	if (!genpd)
+		return ERR_PTR(-ENOMEM);
+
+	genpd->name = kstrndup(name, CPU_PD_NAME_MAX, GFP_KERNEL);
+	if (!genpd->name) {
+		kfree(genpd);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	return genpd;
+}
+
+/**
+ * of_init_cpu_pm_domain() - Initialize a CPU PM domain from a device node
+ *
+ * @dn: The domain provider's device node
+ * @ops: The power_on/_off callbacks for the domain
+ *
+ * Returns the generic_pm_domain (genpd) pointer to the domain on success
+ */
+static struct generic_pm_domain *of_init_cpu_pm_domain(struct device_node *dn,
+				const struct cpu_pd_ops *ops)
+{
+	struct cpu_pm_domain *pd = NULL;
+	struct generic_pm_domain *genpd = NULL;
+	int ret = -ENOMEM;
+
+	if (!of_device_is_available(dn))
+		return ERR_PTR(-ENODEV);
+
+	genpd = alloc_genpd(dn->full_name);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	genpd->of_node = dn;
+
+	/* Populate platform specific states from DT */
+	if (ops->populate_state_data) {
+		struct device_node *np;
+		int i;
+
+		/* Initialize the arm,idle-state properties */
+		ret = pm_genpd_of_parse_power_states(genpd);
+		if (ret) {
+			pr_warn("%s domain states not initialized (%d)\n",
+					dn->full_name, ret);
+			goto fail;
+		}
+		for (i = 0; i < genpd->state_count; i++) {
+			np = genpd->states[i].of_node;
+			ret = ops->populate_state_data(np,
+						&genpd->states[i].param);
+			if (ret)
+				goto fail;
+		}
+	}
+
+	genpd = cpu_pd_init(genpd, ops);
+	if (IS_ERR(genpd))
+		goto fail;
+
+	ret = of_genpd_add_provider_simple(dn, genpd);
+	if (ret)
+		pr_warn("Unable to add genpd %s as provider\n",
+				genpd->name);
+
+	return genpd;
+fail:
+	kfree(genpd->name);
+	kfree(genpd);
+	if (pd)
+		kfree(pd->cpus);
+	kfree(pd);
+	return ERR_PTR(ret);
+}
+
+static struct generic_pm_domain *of_get_cpu_domain(struct device_node *dn,
+		const struct cpu_pd_ops *ops, int cpu)
+{
+	struct of_phandle_args args;
+	struct generic_pm_domain *genpd, *parent;
+	int ret;
+
+	/* Do we have this domain? If not, create the domain */
+	args.np = dn;
+	args.args_count = 0;
+
+	genpd = of_genpd_get_from_provider(&args);
+	if (!IS_ERR(genpd))
+		return genpd;
+
+	genpd = of_init_cpu_pm_domain(dn, ops);
+	if (IS_ERR(genpd))
+		return genpd;
+
+	/* Is there a domain provider for this domain? */
+	ret = of_parse_phandle_with_args(dn, "power-domains",
+			"#power-domain-cells", 0, &args);
+	if (ret < 0)
+		goto skip_parent;
+
+	/* Find its parent and attach this domain to it, recursively */
+	parent = of_get_cpu_domain(args.np, ops, cpu);
+	if (IS_ERR(parent))
+		goto skip_parent;
+
+	ret = cpu_pd_attach_domain(parent, genpd);
+	if (ret)
+		pr_err("Unable to attach domain %s to parent %s\n",
+				genpd->name, parent->name);
+
+skip_parent:
+	of_node_put(dn);
+	return genpd;
+}
+
+/**
+ * of_setup_cpu_pd_single() - Setup the PM domains for a CPU
+ *
+ * @cpu: The CPU for which the PM domain is to be set up.
+ * @ops: The PM domain suspend/resume ops for the CPU's domain
+ *
+ * If the CPU PM domain exists already, then the CPU is attached to
+ * that CPU PD. If it doesn't, the domain is created, the @ops are
+ * set for power_on/power_off callbacks and then the CPU is attached
+ * to that domain. If the domain was created outside this framework,
+ * then we do not attach the CPU to the domain.
+ */
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{
+
+	struct device_node *dn, *np;
+	struct generic_pm_domain *genpd;
+	struct cpu_pm_domain *cpu_pd;
+
+	np = of_get_cpu_node(cpu, NULL);
+	if (!np)
+		return -ENODEV;
+
+	dn = of_parse_phandle(np, "power-domains", 0);
+	of_node_put(np);
+	if (!dn)
+		return -ENODEV;
+
+	/* Find the genpd for this CPU, create if not found */
+	genpd = of_get_cpu_domain(dn, ops, cpu);
+	of_node_put(dn);
+	if (IS_ERR(genpd))
+		return PTR_ERR(genpd);
+
+	cpu_pd = to_cpu_pd(genpd);
+	if (!cpu_pd) {
+		pr_err("%s: Genpd was created outside CPU PM domains\n",
+				__func__);
+		return -ENOENT;
+	}
+
+	return cpu_pd_attach_cpu(genpd, cpu);
+}
+EXPORT_SYMBOL(of_setup_cpu_pd_single);
+
+/**
+ * of_setup_cpu_pd() - Setup the PM domains for all CPUs
+ *
+ * @ops: The PM domain suspend/resume ops for all the domains
+ *
+ * Setup the CPU PM domain and attach all possible CPUs to their respective
+ * domains. The domains are created if not already and then attached.
+ */
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{
+	int cpu;
+	int ret;
+
+	for_each_possible_cpu(cpu) {
+		ret = of_setup_cpu_pd_single(cpu, ops);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(of_setup_cpu_pd);
diff --git a/include/linux/cpu_domains.h b/include/linux/cpu_domains.h
index 3a0a027..736d9e6 100644
--- a/include/linux/cpu_domains.h
+++ b/include/linux/cpu_domains.h
@@ -14,8 +14,10 @@
 #include <linux/types.h>
 
 struct cpumask;
+struct device_node;
 
 struct cpu_pd_ops {
+	int (*populate_state_data)(struct device_node *n, u32 *param);
 	int (*power_off)(u32 state_idx, u32 param, const struct cpumask *mask);
 	int (*power_on)(void);
 };
@@ -46,4 +48,20 @@ static inline int cpu_pd_attach_cpu(struct generic_pm_domain *genpd, int cpu)
 
 #endif /* CONFIG_PM_GENERIC_DOMAINS */
 
+#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
+
+int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops);
+
+int of_setup_cpu_pd(const struct cpu_pd_ops *ops);
+
+#else
+
+static inline int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
+{ return -ENODEV; }
+
+static inline int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
+{ return -ENODEV; }
+
+#endif /* CONFIG_PM_GENERIC_DOMAINS_OF */
+
 #endif /* __CPU_DOMAINS_H__ */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 10/16] timer: Export next wake up of a CPU
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Thomas Gleixner

Knowing the sleep length of a CPU is useful for power state
determination on idle. However, that value is relative to the time
when the call was made on that CPU, which does not work well when we
need to know the absolute time of the CPU's next wakeup.

By reading the next wake up event of a CPU, governors can determine the
first CPU to wake up (due to timer) amongst a cluster of CPUs and the
sleep time available between the last CPU to idle and the first CPU to
resume. This information is useful to determine if the caches and other
common hardware blocks can also be put in idle during this common period
of inactivity.

Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 include/linux/tick.h     |  7 +++++++
 kernel/time/tick-sched.c | 11 +++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 62be0786..92fa4b0 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -117,6 +117,7 @@ extern void tick_nohz_idle_enter(void);
 extern void tick_nohz_idle_exit(void);
 extern void tick_nohz_irq_exit(void);
 extern ktime_t tick_nohz_get_sleep_length(void);
+extern ktime_t tick_nohz_get_next_wakeup(int cpu);
 extern u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time);
 extern u64 get_cpu_iowait_time_us(int cpu, u64 *last_update_time);
 #else /* !CONFIG_NO_HZ_COMMON */
@@ -131,6 +132,12 @@ static inline ktime_t tick_nohz_get_sleep_length(void)
 
 	return len;
 }
+
+static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	return tick_next_period;
+}
+
 static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
 static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }
 #endif /* !CONFIG_NO_HZ_COMMON */
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index 204fdc8..7d8df93 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -990,6 +990,17 @@ ktime_t tick_nohz_get_sleep_length(void)
 	return ts->sleep_length;
 }
 
+/**
+ * tick_nohz_get_next_wakeup - return the next wakeup event of @cpu
+ */
+ktime_t tick_nohz_get_next_wakeup(int cpu)
+{
+	struct clock_event_device *dev =
+			per_cpu(tick_cpu_device.evtdev, cpu);
+
+	return dev->next_event;
+}
+
 static void tick_nohz_account_idle_ticks(struct tick_sched *ts)
 {
 #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 11/16] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

A PM domain comprising CPUs may be powered off when all the CPUs in
the domain are powered down. Powering down a CPU domain is generally an
expensive operation, so the power/performance trade-offs should be
considered. The time between the last CPU powering down and the first
CPU powering up in a domain is the time available for the domain to
sleep. Ideally, the sleep time of the domain should fulfill the
residency requirement of the domain's idle state.

To do this effectively, read the next wakeup of each of the cluster's
CPUs and ensure that the chosen domain idle state honors the PM QoS
CPU_DMA_LATENCY constraint as well as the state's residency
requirement.
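
For illustration, with assumed numbers: if the earliest timer wakeup
across the cluster is 5 ms away, a domain state whose power-off latency,
power-on latency and residency add up to 6 ms is skipped, while a
shallower state adding up to 3 ms is selected, provided 3 ms is also
below the aggregated CPU_DMA_LATENCY request.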

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/base/power/cpu_domains.c | 80 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 79 insertions(+), 1 deletion(-)

diff --git a/drivers/base/power/cpu_domains.c b/drivers/base/power/cpu_domains.c
index 8bf61e2..79fa4ae 100644
--- a/drivers/base/power/cpu_domains.c
+++ b/drivers/base/power/cpu_domains.c
@@ -17,9 +17,12 @@
 #include <linux/list.h>
 #include <linux/of.h>
 #include <linux/pm_domain.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
 #include <linux/rculist.h>
 #include <linux/rcupdate.h>
 #include <linux/slab.h>
+#include <linux/tick.h>
 
 #define CPU_PD_NAME_MAX 36
 
@@ -51,6 +54,81 @@ static inline struct cpu_pm_domain *to_cpu_pd(struct generic_pm_domain *d)
 	return res;
 }
 
+static bool cpu_pd_down_ok(struct dev_pm_domain *pd)
+{
+	struct generic_pm_domain *genpd = pd_to_genpd(pd);
+	struct cpu_pm_domain *cpu_pd = to_cpu_pd(genpd);
+	int qos_ns = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
+	s64 sleep_ns;
+	ktime_t earliest, next_wakeup;
+	int cpu;
+	int i;
+
+	/* Reset the last set genpd state, default to index 0 */
+	genpd->state_idx = 0;
+
+	/* We don't want to power down, if QoS is 0 */
+	if (!qos_ns)
+		return false;
+
+	/*
+	 * Find the sleep time for the cluster.
+	 * The time between now and the first wakeup of any CPU in
+	 * this domain hierarchy is the time available for the
+	 * domain to be idle.
+	 *
+	 * We only care about the next wakeup of the online CPUs in this
+	 * cluster. Hotplugging off any of the CPUs that we care about
+	 * will wait on the genpd lock until we are done. Any other CPU
+	 * hotplug is of no consequence to our sleep time.
+	 */
+	earliest = ktime_set(KTIME_SEC_MAX, 0);
+	for_each_cpu_and(cpu, cpu_pd->cpus, cpu_online_mask) {
+		next_wakeup = tick_nohz_get_next_wakeup(cpu);
+		if (earliest.tv64 > next_wakeup.tv64)
+			earliest = next_wakeup;
+	}
+
+	sleep_ns = ktime_to_ns(ktime_sub(earliest, ktime_get()));
+	if (sleep_ns <= 0)
+		return false;
+
+	/*
+	 * Find the deepest sleep state that satisfies the residency
+	 * requirement and the QoS constraint
+	 */
+	for (i = genpd->state_count - 1; i >= 0; i--) {
+		u64 state_sleep_ns;
+
+		state_sleep_ns = genpd->states[i].power_off_latency_ns +
+			genpd->states[i].power_on_latency_ns +
+			genpd->states[i].residency_ns;
+
+		/*
+		 * If we can't sleep to save power in the state, move on
+		 * to the next lower idle state.
+		 */
+		if (state_sleep_ns > sleep_ns)
+			continue;
+
+		/*
+		 * We also don't want to sleep more than we should to
+		 * guarantee QoS.
+		 */
+		if (state_sleep_ns < (qos_ns * NSEC_PER_USEC))
+			break;
+	}
+
+	if (i >= 0)
+		genpd->state_idx = i;
+
+	return (i >= 0);
+}
+
+static struct dev_power_governor cpu_pd_gov = {
+	.power_down_ok = cpu_pd_down_ok,
+};
+
 static int cpu_pd_power_on(struct generic_pm_domain *genpd)
 {
 	struct cpu_pm_domain *pd = to_cpu_pd(genpd);
@@ -172,7 +250,7 @@ struct generic_pm_domain *cpu_pd_init(struct generic_pm_domain *genpd,
 	list_add_rcu(&pd->link, &of_cpu_pd_list);
 	mutex_unlock(&cpu_pd_list_lock);
 
-	ret = pm_genpd_init(genpd, &simple_qos_governor, false);
+	ret = pm_genpd_init(genpd, &cpu_pd_gov, false);
 	if (ret) {
 		pr_err("Unable to initialize domain %s\n", genpd->name);
 		goto fail;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 12/16] doc / cpu_domains: Describe CPU PM domains setup and governor
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer

Generic CPU PM domain functionality is provided by
drivers/base/power/cpu_domains.c. This document describes the generic
use case of CPU PM domains, the setup of such domains and a
CPU-specific genpd governor.
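
To make the documented setup concrete, a minimal sketch of a platform
driver registering its CPU PM domains is shown below; the my_soc_*
names are made up, while struct cpu_pd_ops and of_setup_cpu_pd() come
from earlier patches in this series:

	static int my_soc_pd_power_off(u32 state_idx, u32 param,
				       const struct cpumask *mask)
	{
		/* Program the SoC power controller for the selected state */
		return 0;
	}

	static int my_soc_pd_power_on(void)
	{
		/* Bring the shared cluster resources back up */
		return 0;
	}

	static const struct cpu_pd_ops my_soc_pd_ops = {
		.power_off	= my_soc_pd_power_off,
		.power_on	= my_soc_pd_power_on,
	};

	static int __init my_soc_pd_init(void)
	{
		/* Create the domains from DT and attach all possible CPUs */
		return of_setup_cpu_pd(&my_soc_pd_ops);
	}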

Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/power/cpu_domains.txt | 109 ++++++++++++++++++++++++++++++++++++
 1 file changed, 109 insertions(+)
 create mode 100644 Documentation/power/cpu_domains.txt

diff --git a/Documentation/power/cpu_domains.txt b/Documentation/power/cpu_domains.txt
new file mode 100644
index 0000000..6a39a64
--- /dev/null
+++ b/Documentation/power/cpu_domains.txt
@@ -0,0 +1,109 @@
+CPU PM domains
+==============
+
+Newer CPUs are grouped in SoCs as clusters. In addition to the CPUs, a cluster
+may have caches, floating point units and other architecture-specific power
+controllers that share resources when any of the CPUs are active. When the
+CPUs are idle, some of these cluster components may also idle. A cluster may
+also be nested inside another cluster that provides common coherency
+interfaces to share data between the clusters. The organization of such
+clusters and CPUs may be described in DT, since they are SoC specific.
+
+CPUIdle framework enables the CPUs to determine the sleep time and enter low
+power state to save power during periods of idle. CPUs in a cluster may enter
+and exit idle state independently of each other. During the time when all the
+CPUs are in idle state, the cluster may safely put some of the shared
+resources in their idle state. The time between the last CPU to enter idle and
+the first CPU to wake up is the time available for the cluster to enter its
+idle state.
+
+When SoCs power down the CPU during cpuidle, they generally have supplemental
+hardware that can handshake with the CPU with a signal that indicates that the
+CPU has stopped execution. The hardware is also responsible for warm booting
+the CPU on receiving an interrupt. In a cluster architecture, common resources
+that are shared by a cluster may also be powered down by an external
+microcontroller or a processor. The microcontroller may be programmed in
+advance to put the hardware blocks in a low power state, when the last active
+CPU sends the idle signal. When the signal is received, the microcontroller
+may trigger the hardware blocks to enter their low power state. When an
+interrupt to wake up the processor is received, the microcontroller is
+responsible for bringing the hardware blocks back to their active state,
+before waking up the CPU. The latencies of such operations should be in an
+acceptable range for CPU idle to get power benefits.
+
+CPU PM Domain Setup
+-------------------
+
+PM domains are represented in the DT as domain consumers and providers. A
+device may have a domain provider and a domain provider may support multiple
+domain consumers. Domains, like clusters, may also be nested inside one
+another. A domain that has no active consumer may be powered off, and any
+resuming consumer triggers the domain back to active. Parent domains may
+be powered off when the child domains are powered off. The CPU cluster can be
+fashioned as a PM domain. When the CPU devices are powered off, the PM domain
+may be powered off.
+
+Device idle is reference counted by runtime PM. When there is no active need
+for the device, runtime PM invokes callbacks to suspend the parent domain.
+Generic PM domain (genpd) handles the hierarchy of devices, domains and the
+reference counting of objects leading to last man down and first man up in the
+domain. The CPU domain helper functions define PM domains for each CPU
+cluster and attach the CPU devices to the respective PM domains.
+
+Platform drivers may use the following API to register their CPU PM domains.
+
+of_setup_cpu_pd() -
+Provides single-step registration of the CPU PM domains and attaches CPUs to
+their genpd. Platform drivers may additionally register callbacks for power_on
+and power_off operations for the PM domain.
+
+of_setup_cpu_pd_single() -
+Defines the PM domain for a single CPU and attaches the CPU to its domain.
+
+
+CPU PM Domain governor
+----------------------
+
+CPUs have a unique ability to determine their next wakeup. CPUs may wake up
+for known timer interrupts and unknown interrupts from idle. Prediction
+algorithms and heuristic based algorithms like the Menu governor for cpuidle
+can determine the next wakeup of the CPU. However, determining the wakeup
+across a group of CPUs is a tough problem to solve.
+
+A simplistic approach would be to resort to known wakeups of the CPUs in
+determining the next wakeup of any CPU in the cluster. The CPU PM domain
+governor does just that. By looking into the tick device of the CPUs, the
+governor can determine the sleep time between the last CPU entering idle and
+the first scheduled wakeup of any CPU in that domain. This, combined with the
+PM QoS requirement for CPU_DMA_LATENCY, can be used to determine the deepest
+possible idle state of the CPU domain.
+
+
+PSCI based CPU PM Domains
+-------------------------
+
+ARM PSCI v1.0 supports PM domains for CPU clusters, as found in big.LITTLE
+architectures. It is supported as part of the OS-Initiated (OSI) mode of the
+PSCI firmware. Since the control of the domains is abstracted in the firmware,
+Linux does not even need a driver to control these domains. The complexity of
+determining the idle state of the PM domain is handled by the CPU PM domains.
+
+Every PSCI CPU PM domain idle state has a unique PSCI state id. The state id
+is read from the DT and specified using the arm,psci-suspend-param property.
+This makes it easy for big.LITTLE SoCs to just specify the PM domain idle
+states along with the arm,psci-suspend-param property; everything else is
+handled by the PSCI firmware driver and the firmware.
+
+
+DT definitions for PSCI CPU PM Domains
+--------------------------------------
+
+A PM domain's idle state can be defined in DT, the description of which is
+available in [1]. PSCI based CPU PM domains may define their idle states as
+part of the psci node. The additional parameter arm,psci-suspend-param is used
+to indicate to the firmware the additional cluster state that would be achieved
+after the last CPU makes the PSCI call to suspend the CPU. The description of
+PSCI domain states is available in [2].
+
+[1]. Documentation/devicetree/bindings/arm/idle-states.txt
+[2]. Documentation/devicetree/bindings/arm/psci.txt
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 13/16] drivers: firmware: psci: Allow OS Initiated suspend mode
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Mark Rutland

PSCI firmware v1.0 onwards may support two different modes for
CPU_SUSPEND. Platform-Coordinated mode is the default and every firmware
should support it. OS-Initiated mode is optional for the firmware to
implement and allows Linux to make a better decision on the state of
the CPU cluster hierarchy.

With the kernel capable of deciding the state of the CPU cluster and
coherency domains, the OS-Initiated mode may now be used by the kernel,
provided the firmware supports it. SET_SUSPEND_MODE is a PSCI function
available from v1.0 onwards and can be used to set the mode in the
firmware.
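
For reference, the mode switch added later in this series boils down to
a call of roughly this shape (a sketch; psci_enable_osi_mode() is a
hypothetical wrapper):

	static int psci_enable_osi_mode(void)
	{
		int err;

		/* Ask the firmware to coordinate suspend in OSI mode */
		err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
				     PSCI_1_0_SUSPEND_MODE_OSI, 0, 0);

		return psci_to_linux_errno(err);
	}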

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
[Ulf: Rebased on 4.7 rc1]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
 drivers/firmware/psci.c   | 42 +++++++++++++++++++++++++++++-------------
 include/uapi/linux/psci.h |  5 +++++
 2 files changed, 34 insertions(+), 13 deletions(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index 8263429..a2edd91 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -53,6 +53,7 @@
  * require cooperation with a Trusted OS driver.
  */
 static int resident_cpu = -1;
+static bool psci_has_osi;
 
 bool psci_tos_resident_on(int cpu)
 {
@@ -558,9 +559,8 @@ static int __init psci_0_2_init(struct device_node *np)
 	int err;
 
 	err = get_set_conduit_method(np);
-
 	if (err)
-		goto out_put_node;
+		return err;
 	/*
 	 * Starting with v0.2, the PSCI specification introduced a call
 	 * (PSCI_VERSION) that allows probing the firmware version, so
@@ -568,11 +568,7 @@ static int __init psci_0_2_init(struct device_node *np)
 	 * can be carried out according to the specific version reported
 	 * by firmware
 	 */
-	err = psci_probe();
-
-out_put_node:
-	of_node_put(np);
-	return err;
+	return psci_probe();
 }
 
 /*
@@ -584,9 +580,8 @@ static int __init psci_0_1_init(struct device_node *np)
 	int err;
 
 	err = get_set_conduit_method(np);
-
 	if (err)
-		goto out_put_node;
+		return err;
 
 	pr_info("Using PSCI v0.1 Function IDs from DT\n");
 
@@ -610,15 +605,31 @@ static int __init psci_0_1_init(struct device_node *np)
 		psci_ops.migrate = psci_migrate;
 	}
 
-out_put_node:
-	of_node_put(np);
 	return err;
 }
 
+static int __init psci_1_0_init(struct device_node *np)
+{
+	int ret;
+
+	ret = psci_0_2_init(np);
+	if (ret)
+		return ret;
+
+	/* Check if PSCI OSI mode is available */
+	ret = psci_features(psci_function_id[PSCI_FN_CPU_SUSPEND]);
+	if (ret & PSCI_1_0_OS_INITIATED) {
+		if (!psci_features(PSCI_1_0_FN_SET_SUSPEND_MODE))
+			psci_has_osi = true;
+	}
+
+	return 0;
+}
+
 static const struct of_device_id psci_of_match[] __initconst = {
 	{ .compatible = "arm,psci",	.data = psci_0_1_init},
 	{ .compatible = "arm,psci-0.2",	.data = psci_0_2_init},
-	{ .compatible = "arm,psci-1.0",	.data = psci_0_2_init},
+	{ .compatible = "arm,psci-1.0",	.data = psci_1_0_init},
 	{},
 };
 
@@ -627,6 +638,7 @@ int __init psci_dt_init(void)
 	struct device_node *np;
 	const struct of_device_id *matched_np;
 	psci_initcall_t init_fn;
+	int ret;
 
 	np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
 
@@ -634,7 +646,11 @@ int __init psci_dt_init(void)
 		return -ENODEV;
 
 	init_fn = (psci_initcall_t)matched_np->data;
-	return init_fn(np);
+	ret = init_fn(np);
+
+	of_node_put(np);
+
+	return ret;
 }
 
 #ifdef CONFIG_ACPI
diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
index 3d7a0fc..7dd778e 100644
--- a/include/uapi/linux/psci.h
+++ b/include/uapi/linux/psci.h
@@ -48,6 +48,7 @@
 
 #define PSCI_1_0_FN_PSCI_FEATURES		PSCI_0_2_FN(10)
 #define PSCI_1_0_FN_SYSTEM_SUSPEND		PSCI_0_2_FN(14)
+#define PSCI_1_0_FN_SET_SUSPEND_MODE		PSCI_0_2_FN(15)
 
 #define PSCI_1_0_FN64_SYSTEM_SUSPEND		PSCI_0_2_FN64(14)
 
@@ -93,6 +94,10 @@
 #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK	\
 			(0x1 << PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT)
 
+#define PSCI_1_0_OS_INITIATED			BIT(0)
+#define PSCI_1_0_SUSPEND_MODE_PC		0
+#define PSCI_1_0_SUSPEND_MODE_OSI		1
+
 /* PSCI return values (inclusive of all PSCI versions) */
 #define PSCI_RET_SUCCESS			0
 #define PSCI_RET_NOT_SUPPORTED			-1
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 14/16] drivers: firmware: psci: Support cluster idle states for OS-Initiated
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	Mark Rutland

A PSCI firmware that supports OS-Initiated mode allows Linux to
determine the idle states of the CPU cluster, and of the coherency
level above it, when there are no active CPUs. Since Linux has a
better idea of the QoS and the wakeup pattern of the CPUs, the cluster
idle states may be better determined by the OS instead of the firmware.

The last CPU entering idle in a cluster is responsible for selecting
the state of the cluster; only one CPU in a cluster may provide the
cluster idle state to the firmware. Similarly, the last CPU in the
system may provide the state of the coherency domain along with the
cluster and the CPU state IDs.

Utilize the CPU PM domain framework's helper functions to build up the
hierarchy of the cluster topology using Generic PM domains. We provide
callbacks for domain power_on and power_off. By appending the state IDs
at each domain level in the ->power_off() callbacks, we build up a
composite state ID that can be passed on to the firmware to idle the
CPU, the cluster and the coherency interface.
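
As an illustration using the example parameters that appear later in
this series: a CPU idle state with arm,psci-suspend-param = 0x1 and a
cluster power-down state with arm,psci-suspend-param = 0x1000030
combine, on the last CPU going down, into the composite value 0x1000031
passed to CPU_SUSPEND, while CPUs that are not last pass only their own
0x1.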

Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 drivers/firmware/psci.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 3 deletions(-)

diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index a2edd91..02b3c7c 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -16,6 +16,7 @@
 #include <linux/acpi.h>
 #include <linux/arm-smccc.h>
 #include <linux/cpuidle.h>
+#include <linux/cpu_domains.h>
 #include <linux/errno.h>
 #include <linux/linkage.h>
 #include <linux/of.h>
@@ -54,6 +55,18 @@
  */
 static int resident_cpu = -1;
 static bool psci_has_osi;
+static bool psci_has_osi_pd;
+static DEFINE_PER_CPU(u32, cluster_state_id);
+
+static inline u32 psci_get_composite_state_id(u32 cpu_state)
+{
+	return cpu_state | this_cpu_read(cluster_state_id);
+}
+
+static inline void psci_reset_composite_state_id(void)
+{
+	this_cpu_write(cluster_state_id, 0);
+}
 
 bool psci_tos_resident_on(int cpu)
 {
@@ -180,6 +193,8 @@ static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
 
 	fn = psci_function_id[PSCI_FN_CPU_ON];
 	err = invoke_psci_fn(fn, cpuid, entry_point, 0);
+	/* Reset CPU cluster states */
+	psci_reset_composite_state_id();
 	return psci_to_linux_errno(err);
 }
 
@@ -251,6 +266,27 @@ static int __init psci_features(u32 psci_func_id)
 
 #ifdef CONFIG_CPU_IDLE
 static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
+static bool psci_suspend_mode_is_osi;
+
+static int psci_set_suspend_mode_osi(bool enable)
+{
+	int ret;
+	int mode;
+
+	if (enable && !psci_has_osi)
+		return -ENODEV;
+
+	if (enable == psci_suspend_mode_is_osi)
+		return 0;
+
+	mode = enable ? PSCI_1_0_SUSPEND_MODE_OSI : PSCI_1_0_SUSPEND_MODE_PC;
+	ret = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
+			     mode, 0, 0);
+	if (!ret)
+		psci_suspend_mode_is_osi = enable;
+
+	return psci_to_linux_errno(ret);
+}
 
 static int psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu)
 {
@@ -353,6 +389,39 @@ static int __maybe_unused psci_acpi_cpu_init_idle(unsigned int cpu)
 }
 #endif
 
+static int psci_pd_populate_state_data(struct device_node *np, u32 *param)
+{
+	return of_property_read_u32(np, "arm,psci-suspend-param", param);
+}
+
+static int psci_pd_power_off(u32 idx, u32 param, const struct cpumask *mask)
+{
+	__this_cpu_add(cluster_state_id, param);
+	return 0;
+}
+
+static const struct cpu_pd_ops psci_pd_ops = {
+	.populate_state_data = psci_pd_populate_state_data,
+	.power_off = psci_pd_power_off,
+};
+
+static int psci_cpu_osi_pd_init(int cpu)
+{
+	int ret;
+
+	if (!psci_has_osi_pd)
+		return 0;
+
+	ret = of_setup_cpu_pd_single(cpu, &psci_pd_ops);
+	if (!ret) {
+		ret = psci_set_suspend_mode_osi(true);
+		if (ret)
+			pr_warn("CPU%d: Error setting PSCI OSI mode\n", cpu);
+	}
+
+	return ret;
+}
+
 int psci_cpu_init_idle(unsigned int cpu)
 {
 	struct device_node *cpu_node;
@@ -368,6 +437,10 @@ int psci_cpu_init_idle(unsigned int cpu)
 	if (!acpi_disabled)
 		return psci_acpi_cpu_init_idle(cpu);
 
+	ret = psci_cpu_osi_pd_init(cpu);
+	if (ret)
+		return ret;
+
 	cpu_node = of_get_cpu_node(cpu, NULL);
 	if (!cpu_node)
 		return -ENODEV;
@@ -382,15 +455,17 @@ int psci_cpu_init_idle(unsigned int cpu)
 static int psci_suspend_finisher(unsigned long index)
 {
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
 
-	return psci_ops.cpu_suspend(state[index - 1],
-				    virt_to_phys(cpu_resume));
+	return psci_ops.cpu_suspend(ext_state, virt_to_phys(cpu_resume));
 }
 
 int psci_cpu_suspend_enter(unsigned long index)
 {
 	int ret;
 	u32 *state = __this_cpu_read(psci_power_state);
+	u32 ext_state = psci_get_composite_state_id(state[index - 1]);
+
 	/*
 	 * idle state index 0 corresponds to wfi, should never be called
 	 * from the cpu_suspend operations
@@ -399,10 +474,16 @@ int psci_cpu_suspend_enter(unsigned long index)
 		return -EINVAL;
 
 	if (!psci_power_state_loses_context(state[index - 1]))
-		ret = psci_ops.cpu_suspend(state[index - 1], 0);
+		ret = psci_ops.cpu_suspend(ext_state, 0);
 	else
 		ret = cpu_suspend(index, psci_suspend_finisher);
 
+	/*
+	 * Clear the CPU's cluster states, we start afresh after coming
+	 * out of idle.
+	 */
+	psci_reset_composite_state_id();
+
 	return ret;
 }
 
@@ -610,6 +691,7 @@ static int __init psci_0_1_init(struct device_node *np)
 
 static int __init psci_1_0_init(struct device_node *np)
 {
+	struct device_node *dn;
 	int ret;
 
 	ret = psci_0_2_init(np);
@@ -621,6 +703,11 @@ static int __init psci_1_0_init(struct device_node *np)
 	if (ret & PSCI_1_0_OS_INITIATED) {
 		if (!psci_features(PSCI_1_0_FN_SET_SUSPEND_MODE))
 			psci_has_osi = true;
+		/* Check if we have power domains defined in the PSCI node */
+		dn = of_find_node_with_property(np, "#power-domain-cells");
+		if (dn)
+			psci_has_osi_pd = true;
+		of_node_put(dn);
 	}
 
 	return 0;
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 15/16] dt/bindings: Add PSCI OS-Initiated PM Domains bindings
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17     ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	devicetree, Mark Rutland

Add bindings for defining an OS-Initiated CPU PM domain.

Cc: <devicetree@vger.kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/devicetree/bindings/arm/psci.txt | 79 ++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
index a2c4f1d..63a229b 100644
--- a/Documentation/devicetree/bindings/arm/psci.txt
+++ b/Documentation/devicetree/bindings/arm/psci.txt
@@ -105,7 +105,86 @@ Case 3: PSCI v0.2 and PSCI v0.1.
 		...
 	};
 
+PSCI v1.0 onwards supports OS-Initiated mode for powering off CPU domains
+from the firmware. Such PM domains, for which the PSCI firmware driver acts
+as a pseudo-controller, may also be specified in the DT under the psci node.
+The domain definitions must follow the domain idle state specifications per
+[3]. The domain states themselves must be compatible with 'arm,idle-state' as
+defined in [1] and need to specify the arm,psci-suspend-param property for
+each idle state.
+
+More information on defining CPU PM domains is available in [4].
+
+Example: OS-Initiated PSCI based PM domains with one CPU in each domain
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu@0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a53", "arm,armv8";
+			reg = <0x0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD0>;
+		};
+
+		CPU1: cpu@1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a57", "arm,armv8";
+			reg = <0x100>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD1>;
+		};
+
+		idle-states {
+			CPU_PWRDN: cpu_power_down{
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x000001>;
+				entry-latency-us = <10>;
+				exit-latency-us = <10>;
+				min-residency-us = <100>;
+			};
+
+			CLUSTER_RET: domain_ret {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000010>;
+				entry-latency-us = <500>;
+				exit-latency-us = <500>;
+				min-residency-us = <2000>;
+			};
+
+			CLUSTER_PWR_DWN: domain_gdhs {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000030>;
+				entry-latency-us = <2000>;
+				exit-latency-us = <2000>;
+				min-residency-us = <6000>;
+			};
+		};
+	};
+
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
+
+		CPU_PD0: cpu-pd@0 {
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+		};
+
+		CPU_PD1: cpu-pd@1 {
+			#power-domain-cells = <0>;
+			domain-idle-states =  <&CLUSTER_PWR_DWN>;
+		};
+	};
+
 [1] Kernel documentation - ARM idle states bindings
     Documentation/devicetree/bindings/arm/idle-states.txt
 [2] Power State Coordination Interface (PSCI) specification
     http://infocenter.arm.com/help/topic/com.arm.doc.den0022c/DEN0022C_Power_State_Coordination_Interface.pdf
+[3]. PM Domains description
+    Documentation/devicetree/bindings/power/power_domain.txt
+[4]. CPU PM Domains description
+    Documentation/power/cpu_domains.txt
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 15/16] dt/bindings: Add PSCI OS-Initiated PM Domains bindings
@ 2016-08-26 20:17     ` Lina Iyer
  0 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: linux-arm-kernel

Add bindings for defining a OS-Initiated based CPU PM domain.

Cc: <devicetree@vger.kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 Documentation/devicetree/bindings/arm/psci.txt | 79 ++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/Documentation/devicetree/bindings/arm/psci.txt b/Documentation/devicetree/bindings/arm/psci.txt
index a2c4f1d..63a229b 100644
--- a/Documentation/devicetree/bindings/arm/psci.txt
+++ b/Documentation/devicetree/bindings/arm/psci.txt
@@ -105,7 +105,86 @@ Case 3: PSCI v0.2 and PSCI v0.1.
 		...
 	};
 
+PSCI v1.0 onwards, supports OS-Initiated mode for powering off CPU domains
+from the firmware. Such PM domains for which the PSCI firmware driver acts as
+pseudo-controller, may also be specified in the DT under the psci node. The
+domain definitions must follow the domain idle state specifications per [3].
+The domain states themselves must be compatible with 'arm,idle-state' defined
+in [1] and need to specify the arm,psci-suspend-param property for each idle
+state.
+
+More information on defining CPU PM domains is available in [4].
+
+Example: OS-Iniated PSCI based PM domains with 1 CPU in each domain
+
+	cpus {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		CPU0: cpu at 0 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a53", "arm,armv8";
+			reg = <0x0>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD0>;
+		};
+
+		CPU1: cpu at 1 {
+			device_type = "cpu";
+			compatible = "arm,cortex-a57", "arm,armv8";
+			reg = <0x100>;
+			enable-method = "psci";
+			cpu-idle-states = <&CPU_PWRDN>;
+			power-domains = <&CPU_PD1>;
+		};
+
+		idle-states {
+			CPU_PWRDN: cpu_power_down{
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x000001>;
+				entry-latency-us = <10>;
+				exit-latency-us = <10>;
+				min-residency-us = <100>;
+			};
+
+			CLUSTER_RET: domain_ret {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000010>;
+				entry-latency-us = <500>;
+				exit-latency-us = <500>;
+				min-residency-us = <2000>;
+			};
+
+			CLUSTER_PWR_DWN: domain_gdhs {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000030>;
+				entry-latency-us = <2000>;
+				exit-latency-us = <2000>;
+				min-residency-us = <6000>;
+			};
+	};
+
+	psci {
+		compatible = "arm,psci-1.0";
+		method = "smc";
+
+		CPU_PD0: cpu-pd at 0 {
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+		};
+
+		CPU_PD1: cpu-pd at 1 {
+			#power-domain-cells = <0>;
+			domain-idle-states =  <&CLUSTER_PWR_DWN>;
+		};
+	};
+
 [1] Kernel documentation - ARM idle states bindings
     Documentation/devicetree/bindings/arm/idle-states.txt
 [2] Power State Coordination Interface (PSCI) specification
     http://infocenter.arm.com/help/topic/com.arm.doc.den0022c/DEN0022C_Power_State_Coordination_Interface.pdf
+[3] PM Domains description
+    Documentation/devicetree/bindings/power/power_domain.txt
+[4] CPU PM Domains description
+    Documentation/power/cpu_domains.txt
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 70+ messages in thread

* [PATCH v5 16/16] ARM64: dts: Define CPU power domain for MSM8916
  2016-08-26 20:17 ` Lina Iyer
@ 2016-08-26 20:17   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-08-26 20:17 UTC (permalink / raw)
  To: ulf.hansson, khilman, rjw, linux-pm, linux-arm-kernel
  Cc: andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, sudeep.holla, Juri.Lelli, Lina Iyer,
	devicetree

Define the power domain and the power states for the domain as defined by
the PSCI firmware. The 8916 firmware supports the OS-initiated method of
powering off the CPU clusters.

Cc: <devicetree@vger.kernel.org>
Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
---
 arch/arm64/boot/dts/qcom/msm8916.dtsi | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
index 11bdc24..e6d8c3b 100644
--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
@@ -99,6 +99,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU1: cpu@1 {
@@ -108,6 +109,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU2: cpu@2 {
@@ -117,6 +119,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		CPU3: cpu@3 {
@@ -126,6 +129,7 @@
 			next-level-cache = <&L2_0>;
 			enable-method = "psci";
 			cpu-idle-states = <&CPU_SPC>;
+			power-domains = <&CPU_PD>;
 		};
 
 		L2_0: l2-cache {
@@ -142,12 +146,33 @@
 				min-residency-us = <2000>;
 				local-timer-stop;
 			};
+
+			CLUSTER_RET: cluster_retention {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000010>;
+				entry-latency-us = <500>;
+				exit-latency-us = <500>;
+				min-residency-us = <2000>;
+			};
+
+			CLUSTER_PWR_DWN: cluster_gdhs {
+				compatible = "arm,idle-state";
+				arm,psci-suspend-param = <0x1000030>;
+				entry-latency-us = <2000>;
+				exit-latency-us = <2000>;
+				min-residency-us = <6000>;
+			};
 		};
 	};
 
 	psci {
 		compatible = "arm,psci-1.0";
 		method = "smc";
+
+		CPU_PD: cpu-pd@0 {
+			#power-domain-cells = <0>;
+			domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWR_DWN>;
+		};
 	};
 
 	pmu {
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 10/16] timer: Export next wake up of a CPU
  2016-08-26 20:17   ` Lina Iyer
@ 2016-08-26 21:29     ` kbuild test robot
  -1 siblings, 0 replies; 70+ messages in thread
From: kbuild test robot @ 2016-08-26 21:29 UTC (permalink / raw)
  Cc: kbuild-all, ulf.hansson, khilman, rjw, linux-pm,
	linux-arm-kernel, andy.gross, sboyd, linux-arm-msm,
	brendan.jackman, lorenzo.pieralisi, sudeep.holla, Juri.Lelli,
	Lina Iyer, Thomas Gleixner

[-- Attachment #1: Type: text/plain, Size: 1847 bytes --]

Hi Lina,

[auto build test ERROR on pm/linux-next]
[also build test ERROR on v4.8-rc3 next-20160825]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Lina-Iyer/PM-SoC-idle-support-using-PM-domains/20160827-042847
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: m68k-sun3_defconfig (attached as .config)
compiler: m68k-linux-gcc (GCC) 4.9.0
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=m68k 

All errors (new ones prefixed by >>):

   In file included from fs/proc/stat.c:13:0:
   include/linux/tick.h: In function 'tick_nohz_get_next_wakeup':
>> include/linux/tick.h:138:9: error: 'tick_next_period' undeclared (first use in this function)
     return tick_next_period;
            ^
   include/linux/tick.h:138:9: note: each undeclared identifier is reported only once for each function it appears in

vim +/tick_next_period +138 include/linux/tick.h

   132	
   133		return len;
   134	}
   135	
   136	static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
   137	{
 > 138		return tick_next_period;
   139	}
   140	
   141	static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
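
For reference, a minimal sketch of a stub that would not reference
tick_next_period (which is local to kernel/time/ and not visible in this
header). This is only an illustration, not the fix adopted in the series,
and it assumes ktime_get() and TICK_NSEC are visible at this point in
include/linux/tick.h:

	/* Illustrative sketch only: without NO_HZ the CPU wakes up at the
	 * next periodic tick, approximated here from the current time. */
	static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
	{
		return ktime_add_ns(ktime_get(), TICK_NSEC);
	}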

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 11444 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 11/16] PM / cpu_domains: Add PM Domain governor for CPUs
  2016-08-26 20:17   ` Lina Iyer
@ 2016-08-26 23:10     ` kbuild test robot
  -1 siblings, 0 replies; 70+ messages in thread
From: kbuild test robot @ 2016-08-26 23:10 UTC (permalink / raw)
  Cc: ulf.hansson, lorenzo.pieralisi, Juri.Lelli, linux-pm, sboyd,
	khilman, rjw, sudeep.holla, brendan.jackman, kbuild-all,
	linux-arm-msm, andy.gross, Lina Iyer, linux-arm-kernel

[-- Attachment #1: Type: text/plain, Size: 2695 bytes --]

Hi Lina,

[auto build test WARNING on pm/linux-next]
[also build test WARNING on v4.8-rc3 next-20160825]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Lina-Iyer/PM-SoC-idle-support-using-PM-domains/20160827-042847
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: i386-randconfig-s0-201634 (attached as .config)
compiler: gcc-6 (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

   In file included from drivers/base/power/cpu_domains.c:25:0:
   include/linux/tick.h: In function 'tick_nohz_get_next_wakeup':
   include/linux/tick.h:138:9: error: 'tick_next_period' undeclared (first use in this function)
     return tick_next_period;
            ^~~~~~~~~~~~~~~~
   include/linux/tick.h:138:9: note: each undeclared identifier is reported only once for each function it appears in
>> include/linux/tick.h:139:1: warning: control reaches end of non-void function [-Wreturn-type]
    }
    ^

vim +139 include/linux/tick.h

4f86d3a8 Len Brown                     2007-10-03  132  
4f86d3a8 Len Brown                     2007-10-03  133  	return len;
4f86d3a8 Len Brown                     2007-10-03  134  }
4fad5e09 Lina Iyer                     2016-08-26  135  
4fad5e09 Lina Iyer                     2016-08-26  136  static inline ktime_t tick_nohz_get_next_wakeup(int cpu)
4fad5e09 Lina Iyer                     2016-08-26  137  {
4fad5e09 Lina Iyer                     2016-08-26 @138  	return tick_next_period;
4fad5e09 Lina Iyer                     2016-08-26 @139  }
4fad5e09 Lina Iyer                     2016-08-26  140  
8083e4ad venkatesh.pallipadi@intel.com 2008-08-04  141  static inline u64 get_cpu_idle_time_us(int cpu, u64 *unused) { return -1; }
0224cf4c Arjan van de Ven              2010-05-09  142  static inline u64 get_cpu_iowait_time_us(int cpu, u64 *unused) { return -1; }

:::::: The code at line 139 was first introduced by commit
:::::: 4fad5e091fe06e14a8258812e6df31f1fbb69d7f timer: Export next wake up of a CPU

:::::: TO: Lina Iyer <lina.iyer@linaro.org>
:::::: CC: 0day robot <fengguang.wu@intel.com>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 25925 bytes --]


^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 09/16] PM / cpu_domains: Initialize CPU PM domains from DT
  2016-08-26 20:17   ` Lina Iyer
@ 2016-08-26 23:28     ` kbuild test robot
  -1 siblings, 0 replies; 70+ messages in thread
From: kbuild test robot @ 2016-08-26 23:28 UTC (permalink / raw)
  Cc: kbuild-all, ulf.hansson, khilman, rjw, linux-pm,
	linux-arm-kernel, andy.gross, sboyd, linux-arm-msm,
	brendan.jackman, lorenzo.pieralisi, sudeep.holla, Juri.Lelli,
	Lina Iyer

[-- Attachment #1: Type: text/plain, Size: 3683 bytes --]

Hi Lina,

[auto build test ERROR on pm/linux-next]
[also build test ERROR on v4.8-rc3 next-20160825]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Lina-Iyer/PM-SoC-idle-support-using-PM-domains/20160827-042847
base:   https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
config: i386-randconfig-r0-201634 (attached as .config)
compiler: gcc-5 (Debian 5.4.0-6) 5.4.0 20160609
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

>> drivers/base/power/cpu_domains.c:327:5: error: redefinition of 'of_setup_cpu_pd_single'
    int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
        ^
   In file included from drivers/base/power/cpu_domains.c:13:0:
   include/linux/cpu_domains.h:59:19: note: previous definition of 'of_setup_cpu_pd_single' was here
    static inline int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
                      ^
>> drivers/base/power/cpu_domains.c:368:5: error: redefinition of 'of_setup_cpu_pd'
    int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
        ^
   In file included from drivers/base/power/cpu_domains.c:13:0:
   include/linux/cpu_domains.h:62:19: note: previous definition of 'of_setup_cpu_pd' was here
    static inline int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
                      ^

vim +/of_setup_cpu_pd_single +327 drivers/base/power/cpu_domains.c

   321	 * If the CPU PM domain exists already, then the CPU is attached to
   322	 * that CPU PD. If it doesn't, the domain is created, the @ops are
   323	 * set for power_on/power_off callbacks and then the CPU is attached
   324	 * to that domain. If the domain was created outside this framework,
   325	 * then we do not attach the CPU to the domain.
   326	 */
 > 327	int of_setup_cpu_pd_single(int cpu, const struct cpu_pd_ops *ops)
   328	{
   329	
   330		struct device_node *dn, *np;
   331		struct generic_pm_domain *genpd;
   332		struct cpu_pm_domain *cpu_pd;
   333	
   334		np = of_get_cpu_node(cpu, NULL);
   335		if (!np)
   336			return -ENODEV;
   337	
   338		dn = of_parse_phandle(np, "power-domains", 0);
   339		of_node_put(np);
   340		if (!dn)
   341			return -ENODEV;
   342	
   343		/* Find the genpd for this CPU, create if not found */
   344		genpd = of_get_cpu_domain(dn, ops, cpu);
   345		of_node_put(dn);
   346		if (IS_ERR(genpd))
   347			return PTR_ERR(genpd);
   348	
   349		cpu_pd = to_cpu_pd(genpd);
   350		if (!cpu_pd) {
   351			pr_err("%s: Genpd was created outside CPU PM domains\n",
   352					__func__);
   353			return -ENOENT;
   354		}
   355	
   356		return cpu_pd_attach_cpu(genpd, cpu);
   357	}
   358	EXPORT_SYMBOL(of_setup_cpu_pd_single);
   359	
   360	/**
   361	 * of_setup_cpu_pd() - Setup the PM domains for all CPUs
   362	 *
   363	 * @ops: The PM domain suspend/resume ops for all the domains
   364	 *
   365	 * Setup the CPU PM domain and attach all possible CPUs to their respective
   366	 * domains. The domains are created if not already and then attached.
   367	 */
 > 368	int of_setup_cpu_pd(const struct cpu_pd_ops *ops)
   369	{
   370		int cpu;
   371		int ret;

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 27955 bytes --]

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-08-26 20:17   ` Lina Iyer
@ 2016-09-02 14:21       ` Sudeep Holla
  -1 siblings, 0 replies; 70+ messages in thread
From: Sudeep Holla @ 2016-09-02 14:21 UTC (permalink / raw)
  To: Lina Iyer, rjw, linux-pm, linux-arm-kernel
  Cc: ulf.hansson, khilman, Sudeep Holla, andy.gross, sboyd,
	linux-arm-msm, brendan.jackman, lorenzo.pieralisi, Juri.Lelli,
	Axel Haslam, devicetree, Marc Titinger



On 26/08/16 21:17, Lina Iyer wrote:
> From: Axel Haslam <ahaslam+renesas@baylibre.com>
>
> Update DT bindings to describe idle states of PM domains.
>
> Cc: <devicetree@vger.kernel.org>
> Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
> Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
> [Lina: Added state properties, removed state names, wakeup-latency,
> added of_pm_genpd_init() API, pruned commit text]
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> [Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
> ---
>  .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
>
> diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
> index 025b5e7..4960486 100644
> --- a/Documentation/devicetree/bindings/power/power_domain.txt
> +++ b/Documentation/devicetree/bindings/power/power_domain.txt
> @@ -29,6 +29,10 @@ Optional properties:
>     specified by this binding. More details about power domain specifier are
>     available in the next section.
>
> +- domain-idle-states : A phandle of an idle-state that shall be soaked into a
> +                generic domain power state. The idle state definitions are
> +                compatible with arm,idle-state specified in [1].
> +
>  Example:
>
>  	power: power-controller@12340000 {
> @@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
>  Domains created by the 'child' power controller are subdomains of '0' power
>  domain provided by the 'parent' power controller.
>
> +Example 3: ARM v7 style CPU PM domains (Linux domain controller)
> +
> +	cpus {
> +		#address-cells = <1>;
> +		#size-cells = <0>;
> +
> +		CPU0: cpu@0 {
> +			device_type = "cpu";
> +			compatible = "arm,cortex-a7", "arm,armv7";
> +			reg = <0x0>;
> +			power-domains = <&a7_pd>;
> +		};
> +
> +		CPU1: cpu@1 {
> +			device_type = "cpu";
> +			compatible = "arm,cortex-a15", "arm,armv7";
> +			reg = <0x0>;
> +			power-domains = <&a15_pd>;
> +		};
> +	};
> +
> +	pm-domains {
> +		a15_pd: a15_pd {
> +			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
> +			compatible = "arm,cortex-a15";
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_SLEEP_0>;
> +		};
> +
> +		a7_pd: a7_pd {
> +			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
> +			compatible = "arm,cortex-a7";
> +			#power-domain-cells = <0>;
> +			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
> +		};
> +
> +		CLUSTER_SLEEP_0: state0 {
> +			compatible = "arm,idle-state";
> +			entry-latency-us = <1000>;
> +			exit-latency-us = <2000>;
> +			min-residency-us = <10000>;
> +		};
> +
> +		CLUSTER_SLEEP_1: state1 {
> +			compatible = "arm,idle-state";
> +			entry-latency-us = <5000>;
> +			exit-latency-us = <5000>;
> +			min-residency-us = <100000>;
> +		};
> +	};
> +

This version is *not very descriptive*. Also the discussion we had on v3
version has not yet concluded IMO. So can I take that we agreed on what
was proposed there or not ?

We could have better example above *really* based on the discussions we
had so far. This example always makes me think it's well crafted to
avoid any sort of discussions. We need to consider different use-cases
e.g. what about CPU level states ?

IMO, we need to discuss this DT binding in detail and arrive at some
conclusion before you take all the troubles to respin the series.
Also it's better to keep the DT binding separate until we have some
conclusion instead of posting the implementation for each version.
That's just my opinion(I would be least bothered about implementation
until I know it will be accepted before I can peek into the code, others
may differ.

-- 
Regards,
Sudeep

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-02 14:21       ` Sudeep Holla
@ 2016-09-02 20:16         ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-09-02 20:16 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: rjw, linux-pm, linux-arm-kernel, ulf.hansson, khilman,
	andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree,
	Marc Titinger

On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>
>
>On 26/08/16 21:17, Lina Iyer wrote:
>>From: Axel Haslam <ahaslam+renesas@baylibre.com>
>>
>>Update DT bindings to describe idle states of PM domains.
>>
>>Cc: <devicetree@vger.kernel.org>
>>Signed-off-by: Marc Titinger <mtitinger+renesas@baylibre.com>
>>Signed-off-by: Lina Iyer <lina.iyer@linaro.org>
>>[Lina: Added state properties, removed state names, wakeup-latency,
>>added of_pm_genpd_init() API, pruned commit text]
>>Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
>>[Ulf: Moved around code to make it compile properly, rebased on top of multiple state support]
>>---
>> .../devicetree/bindings/power/power_domain.txt     | 57 ++++++++++++++++++++++
>> 1 file changed, 57 insertions(+)
>>
>>diff --git a/Documentation/devicetree/bindings/power/power_domain.txt b/Documentation/devicetree/bindings/power/power_domain.txt
>>index 025b5e7..4960486 100644
>>--- a/Documentation/devicetree/bindings/power/power_domain.txt
>>+++ b/Documentation/devicetree/bindings/power/power_domain.txt
>>@@ -29,6 +29,10 @@ Optional properties:
>>    specified by this binding. More details about power domain specifier are
>>    available in the next section.
>>
>>+- domain-idle-states : A phandle of an idle-state that shall be soaked into a
>>+                generic domain power state. The idle state definitions are
>>+                compatible with arm,idle-state specified in [1].
>>+
>> Example:
>>
>> 	power: power-controller@12340000 {
>>@@ -59,6 +63,57 @@ The nodes above define two power controllers: 'parent' and 'child'.
>> Domains created by the 'child' power controller are subdomains of '0' power
>> domain provided by the 'parent' power controller.
>>
>>+Example 3: ARM v7 style CPU PM domains (Linux domain controller)
>>+
>>+	cpus {
>>+		#address-cells = <1>;
>>+		#size-cells = <0>;
>>+
>>+		CPU0: cpu@0 {
>>+			device_type = "cpu";
>>+			compatible = "arm,cortex-a7", "arm,armv7";
>>+			reg = <0x0>;
>>+			power-domains = <&a7_pd>;
>>+		};
>>+
>>+		CPU1: cpu@1 {
>>+			device_type = "cpu";
>>+			compatible = "arm,cortex-a15", "arm,armv7";
>>+			reg = <0x0>;
>>+			power-domains = <&a15_pd>;
>>+		};
>>+	};
>>+
>>+	pm-domains {
>>+		a15_pd: a15_pd {
>>+			/* will have A15 platform ARM_PD_METHOD_OF_DECLARE*/
>>+			compatible = "arm,cortex-a15";
>>+			#power-domain-cells = <0>;
>>+			domain-idle-states = <&CLUSTER_SLEEP_0>;
>>+		};
>>+
>>+		a7_pd: a7_pd {
>>+			/* will have a A7 platform ARM_PD_METHOD_OF_DECLARE*/
>>+			compatible = "arm,cortex-a7";
>>+			#power-domain-cells = <0>;
>>+			domain-idle-states = <&CLUSTER_SLEEP_0>, <&CLUSTER_SLEEP_1>;
>>+		};
>>+
>>+		CLUSTER_SLEEP_0: state0 {
>>+			compatible = "arm,idle-state";
>>+			entry-latency-us = <1000>;
>>+			exit-latency-us = <2000>;
>>+			min-residency-us = <10000>;
>>+		};
>>+
>>+		CLUSTER_SLEEP_1: state1 {
>>+			compatible = "arm,idle-state";
>>+			entry-latency-us = <5000>;
>>+			exit-latency-us = <5000>;
>>+			min-residency-us = <100000>;
>>+		};
>>+	};
>>+
>
>This version is *not very descriptive*. Also the discussion we had on v3
>version has not yet concluded IMO. So can I take that we agreed on what
>was proposed there or not ?
>
Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
for the new changes in the following patches. Let me know if that makes
sense.

Thanks,
Lina

>We could have better example above *really* based on the discussions we
>had so far. This example always makes me think it's well crafted to
>avoid any sort of discussions. We need to consider different use-cases
>e.g. what about CPU level states ?
>
>IMO, we need to discuss this DT binding in detail and arrive at some
>conclusion before you take all the troubles to respin the series.
>Also it's better to keep the DT binding separate until we have some
>conclusion instead of posting the implementation for each version.
>That's just my opinion(I would be least bothered about implementation
>until I know it will be accepted before I can peek into the code, others
>may differ.
>
>-- 
>Regards,
>Sudeep

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-02 20:16         ` Lina Iyer
@ 2016-09-12 15:19             ` Brendan Jackman
  -1 siblings, 0 replies; 70+ messages in thread
From: Brendan Jackman @ 2016-09-12 15:19 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Sudeep Holla, rjw, linux-pm, linux-arm-kernel, ulf.hansson,
	khilman, andy.gross, sboyd, linux-arm-msm, brendan.jackman,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree,
	Marc Titinger


Hi Lina,

Sorry for the delay here, Sudeep and I were both on holiday last week.

On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
[...]
>>This version is *not very descriptive*. Also the discussion we had on v3
>>version has not yet concluded IMO. So can I take that we agreed on what
>>was proposed there or not ?
>>
> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
> for the new changes in the following patches. Let me know if that makes
> sense.

The not-yet-concluded discussion Sudeep is referring to is at [1].

In that thread we initially proposed the idea of, instead of splitting
state phandles between cpu-idle-states and domain-idle-states, putting
CPUs in their own domains and using domain-idle-states for _all_
phandles, deprecating cpu-idle-states. I've brought this up in other
threads [2] but discussion keeps petering out, and neither this example
nor the 8916 dtsi in this patch series reflect the idea.

It would be great if we could go back to the thread at [1] where Sudeep
has posted examples and come to a clear consensus on the binding design
before reviewing implementation patches. Ideally with input from Ulf,
Rob and Kevin.

[1] https://patchwork.kernel.org/patch/9264507
[2] http://www.spinics.net/lists/devicetree/msg141024.html
>
> Thanks,
> Lina
>
>>We could have better example above *really* based on the discussions we
>>had so far. This example always makes me think it's well crafted to
>>avoid any sort of discussions. We need to consider different use-cases
>>e.g. what about CPU level states ?
>>
>>IMO, we need to discuss this DT binding in detail and arrive at some
>>conclusion before you take all the troubles to respin the series.
>>Also it's better to keep the DT binding separate until we have some
>>conclusion instead of posting the implementation for each version.
>>That's just my opinion(I would be least bothered about implementation
>>until I know it will be accepted before I can peek into the code, others
>>may differ.
>>
>>--
>>Regards,
>>Sudeep

Cheers,
Brendan

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-12 15:19             ` Brendan Jackman
@ 2016-09-12 16:16               ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-09-12 16:16 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Sudeep Holla, rjw, linux-pm, linux-arm-kernel, ulf.hansson,
	khilman, andy.gross, sboyd, linux-arm-msm, lorenzo.pieralisi,
	Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>
>Hi Lina,
>
>Sorry for the delay here, Sudeep and I were both on holiday last week.
>
>On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>[...]
>>>This version is *not very descriptive*. Also the discussion we had on v3
>>>version has not yet concluded IMO. So can I take that we agreed on what
>>>was proposed there or not ?
>>>
>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>> for the new changes in the following patches. Let me know if that makes
>> sense.
>
>The not-yet-concluded discussion Sudeep is referring to is at [1].
>
>In that thread we initially proposed the idea of, instead of splitting
>state phandles between cpu-idle-states and domain-idle-states, putting
>CPUs in their own domains and using domain-idle-states for _all_
>phandles, deprecating cpu-idle-states. I've brought this up in other
>threads [2] but discussion keeps petering out, and neither this example
>nor the 8916 dtsi in this patch series reflect the idea.
>
Brendan, while your idea is good and will work for CPUs, I do not expect
other domains and possibly CPU domains on some architectures to follow
this model. There is nothing that prevents you from doing this today;
you can specify domains around CPUs in your devicetree and CPU PM will
handle the hierarchy. I don't think it's fair to force it on all SoCs
using CPU domains. This patchset does not restrict you from organizing
the idle states the way you want. This revision of the series clubs
CPU and domain idle states under the idle-states umbrella. So part of your
requirement is also satisfied.

You can follow up the series with your new additions, I don't see a
conflict with this change.

Thanks,
Lina


>It would be great if we could go back to the thread at [1] where Sudeep
>has posted examples and come to a clear consensus on the binding design
>before reviewing implementation patches. Ideally with input from Ulf,
>Rob and Kevin.
>
>[1] https://patchwork.kernel.org/patch/9264507
>[2] http://www.spinics.net/lists/devicetree/msg141024.html
>>
>> Thanks,
>> Lina
>>
>>>We could have better example above *really* based on the discussions we
>>>had so far. This example always makes me think it's well crafted to
>>>avoid any sort of discussions. We need to consider different use-cases
>>>e.g. what about CPU level states ?
>>>
>>>IMO, we need to discuss this DT binding in detail and arrive at some
>>>conclusion before you take all the troubles to respin the series.
>>>Also it's better to keep the DT binding separate until we have some
>>>conclusion instead of posting the implementation for each version.
>>>That's just my opinion(I would be least bothered about implementation
>>>until I know it will be accepted before I can peek into the code, others
>>>may differ.
>>>
>>>--
>>>Regards,
>>>Sudeep
>
>Cheers,
>Brendan

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-12 16:16               ` Lina Iyer
@ 2016-09-12 17:09                 ` Sudeep Holla
  -1 siblings, 0 replies; 70+ messages in thread
From: Sudeep Holla @ 2016-09-12 17:09 UTC (permalink / raw)
  To: Lina Iyer, Brendan Jackman
  Cc: devicetree, ulf.hansson, lorenzo.pieralisi, Juri.Lelli, khilman,
	sboyd, linux-arm-msm, linux-pm, rjw, Axel Haslam, Marc Titinger,
	Sudeep Holla, andy.gross, linux-arm-kernel



On 12/09/16 17:16, Lina Iyer wrote:
> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>
>> Hi Lina,
>>
>> Sorry for the delay here, Sudeep and I were both on holiday last
>> week.
>>
>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>> [...]
>>>> This version is *not very descriptive*. Also the discussion we had
>>>> on v3
>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>> was proposed there or not ?
>>>>
>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>> for the new changes in the following patches. Let me know if that makes
>>> sense.

Please add all possible use-cases to the bindings. Though one can refer
to the usage examples, they might not cover all usage descriptions. It
helps prevent people from defining their own when they don't see
examples. Again, DT bindings are like specifications; they should be
descriptive, especially generic ones like this.

>>
>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>
>> In that thread we initially proposed the idea of, instead of splitting
>> state phandles between cpu-idle-states and domain-idle-states, putting
>> CPUs in their own domains and using domain-idle-states for _all_
>> phandles, deprecating cpu-idle-states. I've brought this up in other
>> threads [2] but discussion keeps petering out, and neither this example
>> nor the 8916 dtsi in this patch series reflect the idea.
>>
> Brendan, while your idea is good and will work for CPUs, I do not expect
> other domains and possibly CPU domains on some architectures to follow
> this model. There is nothing that prevents you from doing this today,
> you can specify domains around CPUs in your devicetree and CPU PM will
> handle the hierarchy. I don't think its fair to force it on all SoCs
> using CPU domains.

I disagree. We are defining DT bindings here and they *should* be the
same for all SoCs unless there is a compelling reason not to. I am fine
if those reasons are stated and agreed upon.

> This patchset does not restrict you from organizing
> the idle states the way you want it. This revision of the series, clubs
> CPU and domain idle states under idle-states umbrella. So part of your
> requirement is also satisfied.
>

I will look at the DTS changes in the series. But we *must* have more
description with more examples in the binding document.

> You can follow up the series with your new additions, I don't see a
> conflict with this change.
>

If we just need additions, then it should be fine.

-- 
Regards,
Sudeep

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-12 17:09                 ` Sudeep Holla
@ 2016-09-13 17:50                     ` Brendan Jackman
  -1 siblings, 0 replies; 70+ messages in thread
From: Brendan Jackman @ 2016-09-13 17:50 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: Lina Iyer, Brendan Jackman, rjw, linux-pm, linux-arm-kernel,
	ulf.hansson, khilman, andy.gross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree, Marc Titinger


On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
> On 12/09/16 17:16, Lina Iyer wrote:
>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>
>>> Hi Lina,
>>>
>>> Sorry for the delay here, Sudeep and I were both on holiday last
>>> week.
>>>
>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>> [...]
>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>> on v3
>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>> was proposed there or not ?
>>>>>
>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>> for the new changes in the following patches. Let me know if that makes
>>>> sense.
>
> Please add all possible use-cases in the bindings. Though one can refer
> the usage examples, it might not cover all usage descriptions. It helps
> preventing people from defining their own when they don't see examples.
> Again DT bindings are like specifications, it should be descriptive
> especially this kind of generic ones.
>
>>>
>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>
>>> In that thread we initially proposed the idea of, instead of splitting
>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>> CPUs in their own domains and using domain-idle-states for _all_
>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>> threads [2] but discussion keeps petering out, and neither this example
>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>
>> Brendan, while your idea is good and will work for CPUs, I do not expect
>> other domains and possibly CPU domains on some architectures to follow
>> this model. There is nothing that prevents you from doing this today,

As I understand it, your opposition to this approach is this:

There may be devices/CPUs which have idle states which do not constitute
"power off". If we put those devices in their own power domain for the
purpose of putting their (non-power-off) idle state phandles in
domain-idle-states, we are "lying", because no true power domain exists
there.

Am I correct that that's your opposition?

If so, it seems we essentially disagree on the definition of a power
domain, i.e. you define it as a set of devices that are powered on/off
together, while I define it as a set of devices whose power states
(including idle states, not just on/off) are tied together. I said
something similar on another thread [2] which died out.

Do you agree that this is basically where we disagree, or am I missing
something else?

[2] http://www.spinics.net/lists/devicetree/msg141050.html
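
To make the contested case concrete, here is a hypothetical fragment
(invented names and numbers, using the proposed domain-idle-states
property) in which a single CPU gets its own domain purely so that a
non-power-off retention state can be expressed as a domain state:

	idle-states {
		entry-method = "psci";

		CPU_RETENTION: cpu-retention {
			compatible = "arm,idle-state";
			arm,psci-suspend-param = <0x0000001>;
			entry-latency-us = <20>;
			exit-latency-us = <40>;
			min-residency-us = <80>;
		};
	};

	CPU0_PD: cpu0-pd {
		#power-domain-cells = <0>;
		/* retention only: the CPU is never actually powered off */
		domain-idle-states = <&CPU_RETENTION>;
	};

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		enable-method = "psci";
		power-domains = <&CPU0_PD>;
		/* no cpu-idle-states: every state lives in the domain */
	};

Whether CPU0_PD here describes a real power island or is merely a
bookkeeping construct is exactly the point we seem to disagree on.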

>> you can specify domains around CPUs in your devicetree and CPU PM will
>> handle the hierarchy. I don't think its fair to force it on all SoCs
>> using CPU domains.
>
> I disagree. We are defining DT bindings here and it *should* be same for
> all the SoC unless there is a compelling reason not to. I am fine if
> those reasons are stated and agreed.
>
>> This patchset does not restrict you from organizing
>> the idle states the way you want it. This revision of the series, clubs
>> CPU and domain idle states under idle-states umbrella. So part of your
>> requirement is also satisfied.
>>
>
> I will look at the DTS changes in the series. But we *must* have more
> description with more examples in the binding document.
>
>> You can follow up the series with your new additions, I don't see a
>> conflict with this change.
>>
>
> If we just need additions, then it should be fine.

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-13 17:50                     ` Brendan Jackman
@ 2016-09-13 19:38                       ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-09-13 19:38 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Sudeep Holla, rjw, linux-pm, linux-arm-kernel, ulf.hansson,
	khilman, andy.gross, sboyd, linux-arm-msm, lorenzo.pieralisi,
	Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>
>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>> On 12/09/16 17:16, Lina Iyer wrote:
>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>
>>>> Hi Lina,
>>>>
>>>> Sorry for the delay here, Sudeep and I were both on holiday last
>>>> week.
>>>>
>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>> [...]
>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>> on v3
>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>> was proposed there or not ?
>>>>>>
>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>> for the new changes in the following patches. Let me know if that makes
>>>>> sense.
>>
>> Please add all possible use-cases in the bindings. Though one can refer
>> the usage examples, it might not cover all usage descriptions. It helps
>> preventing people from defining their own when they don't see examples.
>> Again DT bindings are like specifications, it should be descriptive
>> especially this kind of generic ones.
>>
>>>>
>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>
>>>> In that thread we initially proposed the idea of, instead of splitting
>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>> threads [2] but discussion keeps petering out, and neither this example
>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>
>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>> other domains and possibly CPU domains on some architectures to follow
>>> this model. There is nothing that prevents you from doing this today,
>
>As I understand it your opposition to this approach is this:
>
>There may be devices/CPUs which have idle states which do not constitute
>"power off". If we put those  devices in their own power domain for the
>purpose of putting their (non-power-off) idle state phandles in
>domain-idle-states, we are "lying" because no true power domain exists
>there.
>
>Am I correct that that's your opposition?
>
>If so, it seems we essentially disagree on the definition of a power
>domain, i.e. you define it as a set of devices that are powered on/off
>together while I define it as a set of devices whose power states
>(including idle states, not just on/off) are tied together. I said
>something similar on another thread [1] which died out.
>
>Do you agree that this is basically where we disagree, or am I missing
>something else?
>
>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>
Yes, you are right, I disagree with the definition of a domain around a
device. However, as long as you don't force SoCs to define devices in
the CPU PM domain to have their own virtual domains, I have no problem.
You are welcome to define it the way you want for Juno or any other
platform. I just don't want that to be forced on, and expected of, all
SoCs. All I am saying here is that the current implementation would
handle your case as well.

Thanks,
Lina

>>> you can specify domains around CPUs in your devicetree and CPU PM will
>>> handle the hierarchy. I don't think its fair to force it on all SoCs
>>> using CPU domains.
>>
>> I disagree. We are defining DT bindings here and it *should* be same for
>> all the SoC unless there is a compelling reason not to. I am fine if
>> those reasons are stated and agreed.
>>
>>> This patchset does not restrict you from organizing
>>> the idle states the way you want it. This revision of the series, clubs
>>> CPU and domain idle states under idle-states umbrella. So part of your
>>> requirement is also satisfied.
>>>
>>
>> I will look at the DTS changes in the series. But we *must* have more
>> description with more examples in the binding document.
>>
>>> You can follow up the series with your new additions, I don't see a
>>> conflict with this change.
>>>
>>
>> If we just need additions, then it should be fine.

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-13 19:38                       ` Lina Iyer
@ 2016-09-14 10:14                         ` Brendan Jackman
  -1 siblings, 0 replies; 70+ messages in thread
From: Brendan Jackman @ 2016-09-14 10:14 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Brendan Jackman, Sudeep Holla, rjw, linux-pm, linux-arm-kernel,
	ulf.hansson, khilman, andy.gross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree,
	Marc Titinger


On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>
>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>
>>>>> Hi Lina,
>>>>>
>>>>> Sorry for the delay here, Sudeep and I were both on holiday last
>>>>> week.
>>>>>
>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>> [...]
>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>> on v3
>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>> was proposed there or not ?
>>>>>>>
>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>> sense.
>>>
>>> Please add all possible use-cases in the bindings. Though one can refer
>>> the usage examples, it might not cover all usage descriptions. It helps
>>> preventing people from defining their own when they don't see examples.
>>> Again DT bindings are like specifications, it should be descriptive
>>> especially this kind of generic ones.
>>>
>>>>>
>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>
>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>
>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>> other domains and possibly CPU domains on some architectures to follow
>>>> this model. There is nothing that prevents you from doing this today,
>>
>>As I understand it your opposition to this approach is this:
>>
>>There may be devices/CPUs which have idle states which do not constitute
>>"power off". If we put those  devices in their own power domain for the
>>purpose of putting their (non-power-off) idle state phandles in
>>domain-idle-states, we are "lying" because no true power domain exists
>>there.
>>
>>Am I correct that that's your opposition?
>>
>>If so, it seems we essentially disagree on the definition of a power
>>domain, i.e. you define it as a set of devices that are powered on/off
>>together while I define it as a set of devices whose power states
>>(including idle states, not just on/off) are tied together. I said
>>something similar on another thread [1] which died out.
>>
>>Do you agree that this is basically where we disagree, or am I missing
>>something else?
>>
>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>
> Yes, you are right, I disagree with the definition of a domain around a
> device.
OK, great.
> However, as long as you don't force SoC's to define devices in
> the CPU PM domain to have their own virtual domains, I have no problem.
> You are welcome to define it the way you want for Juno or any other
> platform.
I don't think that's true; the bindings have to work the same way for
all platforms. If for Juno we put CPU idle state phandles in a
domain-idle-states property for per-CPU domains then, with the current
implementation, the CPU-level idle states would be duplicated between
cpuidle and the CPU PM domains.
> I don't want that to be the forced and expected out of all
> SoCs. All I am saying here is that the current implementation would
> handle your case as well.

The current implementation certainly does cover the work I want to
do. The suggestion of per-device power domains for devices/CPUs with
their own idle states is simply intended to minimise the binding design,
since we'd no longer need cpu-idle-states or device-idle-states
(the latter was proposed elsewhere).

I am fine with the bindings as they are implemented currently so long
as:

- The binding doc makes clear how idle state phandles should be split
  between cpu-idle-states and domain-idle-states. It should make it
  obvious that no phandle should ever appear in both properties (a
  possible split is sketched after this list). It would even be worth
  briefly going over the backward-compatibility implications (e.g. what
  happens with old-kernel/new-DT and new-kernel/old-DT combos if a
  platform has OSI and PC support and we move cluster-level idle state
  phandles out of cpu-idle-states and into domain-idle-states).

- We have a reason against the definition of power domains as "a set of
  devices bound by a common power (including idle) state", since that
  definition would simplify the bindings. In my view, "nobody thinks
  that's what a power domain is" _is_ a compelling reason, so if others
  on the list get involved I'm convinced. I think I speak for Sudeep
  here too.
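
On the first point, a sketch of the split meant there (hypothetical
labels, reusing the state names from the sketches earlier in this
thread, and assuming CPUs reference the cluster domain via
power-domains):

	cpu@0 {
		/* CPU-local states only */
		cpu-idle-states = <&CPU_RETENTION &CPU_SLEEP>;
		power-domains = <&CLUSTER_PD>;
	};

	CLUSTER_PD: cluster-pd {
		#power-domain-cells = <0>;
		/* cluster-level states only; nothing repeated from above */
		domain-idle-states = <&CLUSTER_SLEEP>;
	};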

Cheers,
Brendan

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-14 10:14                         ` Brendan Jackman
@ 2016-09-14 11:37                             ` Ulf Hansson
  -1 siblings, 0 replies; 70+ messages in thread
From: Ulf Hansson @ 2016-09-14 11:37 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Lina Iyer, Sudeep Holla, Rafael J. Wysocki, linux-pm,
	linux-arm-kernel, Kevin Hilman, Andy Gross, Stephen Boyd,
	linux-arm-msm, Lorenzo Pieralisi, Juri Lelli, Axel Haslam,
	devicetree, Marc Titinger

>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.

To fill in, I agree with Lina (and Kevin).

From my point of view, a domain by definition contains resources that
are shared among devices. Having one device per domain does, in
general, not make sense.

> OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
> I don't think that's true; the bindings have to work the same way for
> all platforms. If for Juno we put CPU idle state phandles in a
> domain-idle-states property for per-CPU domains then, with the current
> implementation, the CPU-level idle states would be duplicated between
> cpuidle and the CPU PM domains.
>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
> The current implementation certainly does cover the work I want to
> do. The suggestion of per-device power domains for devices/CPUs with
> their own idle states is simply intended to minimise the binding design,
> since we'd no longer need cpu-idle-states or device-idle-states
> (the latter was proposed elsewhere).

I see your point, but IMHO that would oversimplify the description
of the hardware. And I don't think it's sufficient to cover all
existing cases.

>
> I am fine with the bindings as they are implemented currently so long
> as:
>
> - The binding doc makes clear how idle state phandles should be split
>   between cpu-idle-states and domain-idle-states. It should make it
>   obvious that no phandle should ever appear in both properties. It
>   would even be worth briefly going over the backward-compatibility
>   implications (e.g. what happens with old-kernel/new-DT and
>   new-kernel/old-DT combos if a platform has OSI and PC support and we
>   move cluster-level idle state phandles out of cpu-idle-states and into
>   domain-idle-states).
>
> - We have a reason against the definition of power domains as "a set of
>   devices bound by a common power (including idle) state", since that
>   definition would simplify the bindings. In my view, "nobody thinks
>   that's what a power domain is" _is_ a compelling reason, so if others
>   on the list get involved I'm convinced. I think I speak for Sudeep
>   here too.
>

From a CPU point of view, I think it may very well be considered like
any other device. Yes, we have treated CPUs in a specific manner
regarding the idle state definitions we currently have - and we can
continue to do that.

Although, in the long run, I think we need something more flexible
that can be used for both domains and devices.

Kind regards
Uffe

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-14 10:14                         ` Brendan Jackman
@ 2016-09-14 14:55                             ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-09-14 14:55 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Sudeep Holla, rjw, linux-pm, linux-arm-kernel, ulf.hansson,
	khilman, andy.gross, sboyd, linux-arm-msm, lorenzo.pieralisi,
	Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

On Wed, Sep 14 2016 at 04:18 -0600, Brendan Jackman wrote:
>
>On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
>> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>>
>>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>>
>>>>>> Hi Lina,
>>>>>>
>>>>>> Sorry for the delay here, Sudeep and I were both on holiday last
>>>>>> week.
>>>>>>
>>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>>> [...]
>>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>>> on v3
>>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>>> was proposed there or not ?
>>>>>>>>
>>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>>> sense.
>>>>
>>>> Please add all possible use-cases in the bindings. Though one can refer
>>>> the usage examples, it might not cover all usage descriptions. It helps
>>>> preventing people from defining their own when they don't see examples.
>>>> Again DT bindings are like specifications, it should be descriptive
>>>> especially this kind of generic ones.
>>>>
>>>>>>
>>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>>
>>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>>
>>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>>> other domains and possibly CPU domains on some architectures to follow
>>>>> this model. There is nothing that prevents you from doing this today,
>>>
>>>As I understand it your opposition to this approach is this:
>>>
>>>There may be devices/CPUs which have idle states which do not constitute
>>>"power off". If we put those  devices in their own power domain for the
>>>purpose of putting their (non-power-off) idle state phandles in
>>>domain-idle-states, we are "lying" because no true power domain exists
>>>there.
>>>
>>>Am I correct that that's your opposition?
>>>
>>>If so, it seems we essentially disagree on the definition of a power
>>>domain, i.e. you define it as a set of devices that are powered on/off
>>>together while I define it as a set of devices whose power states
>>>(including idle states, not just on/off) are tied together. I said
>>>something similar on another thread [1] which died out.
>>>
>>>Do you agree that this is basically where we disagree, or am I missing
>>>something else?
>>>
>>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.
>OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
>I don't think that's true; the bindings have to work the same way for
>all platforms. If for Juno we put CPU idle state phandles in a
>domain-idle-states property for per-CPU domains then, with the current
>implementation, the CPU-level idle states would be duplicated between
>cpuidle and the CPU PM domains.

We don't have that code today. Your patches would add the functionality
of parsing domain idle states and attaching them to cpu-idle-states if
the firmware supports it and the mode is Platform-Coordinated. And that
functionality is an easy addition. Nobody is making this change to
platforms with PC to use the CPU PM domains yet.

What you are referring to is just a convergence of PC and OSI to use
the same domain hierarchy. That definition is not impacted by your
desire. I have my own doubts about defining PC domains this way, but I
would leave that to you to submit the relevant RFC and bring forth the
discussion. (The definition of PC domain states is already immutable,
given how it is defined in DT today. You have to be careful in breaking
it up.)

>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
>The current implementation certainly does cover the work I want to
>do. The suggestion of per-device power domains for devices/CPUs with
>their own idle states is simply intended to minimise the binding design,
>since we'd no longer need cpu-idle-states or device-idle-states
>(the latter was proposed elsewhere).
>
>I am fine with the bindings as they are implemented currently so long
>as:
>
>- The binding doc makes clear how idle state phandles should be split
>  between cpu-idle-states and domain-idle-states. It should make it
>  obvious that no phandle should ever appear in both properties. It
>  would even be worth briefly going over the backward-compatibility
>  implications (e.g. what happens with old-kernel/new-DT and
>  new-kernel/old-DT combos if a platform has OSI and PC support and we
>  move cluster-level idle state phandles out of cpu-idle-states and into
>  domain-idle-states).
>
Since I have been defining only OSI-initiated PM domains, this is not a
problem. I have clearly marked the explanation as OSI-specific, for
now.

>- We have a reason against the definition of power domains as "a set of
>  devices bound by a common power (including idle) state", since that
>  definition would simplify the bindings. In my view, "nobody thinks
>  that's what a power domain is" _is_ a compelling reason, so if others
>  on the list get involved I'm convinced. I think I speak for Sudeep
>  here too.
>

Look outside the context of the CPU - a generic PM domain is a
collective of generic devices that share the same power island. A PM
domain may also have other domains as sub-domains. So it is exactly
that. A CPU is just a specialized device.
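
As a purely hypothetical, non-CPU illustration (made-up compatibles and
addresses; the nesting is expressed with a power-domains reference in
the provider node, which the generic PM domain binding allows):

	multimedia_pd: power-controller@f8000000 {
		compatible = "vendor,hypothetical-power-controller";
		reg = <0xf8000000 0x1000>;
		#power-domain-cells = <0>;
		/* the shared power island */
	};

	gpu_pd: power-controller@f8001000 {
		compatible = "vendor,hypothetical-power-controller";
		reg = <0xf8001000 0x1000>;
		#power-domain-cells = <0>;
		/* sub-domain of the multimedia island */
		power-domains = <&multimedia_pd>;
	};

	gpu@f9000000 {
		compatible = "vendor,hypothetical-gpu";
		reg = <0xf9000000 0x10000>;
		power-domains = <&gpu_pd>;
	};

	camera@fa000000 {
		compatible = "vendor,hypothetical-camera";
		reg = <0xfa000000 0x1000>;
		power-domains = <&multimedia_pd>;
	};

The CPU case in this series is structurally the same, just with CPUs as
the devices in the domain.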

Hope this helps.

Thanks,
Lina

^ permalink raw reply	[flat|nested] 70+ messages in thread

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-14 10:14                         ` Brendan Jackman
@ 2016-09-16 17:13                           ` Kevin Hilman
  -1 siblings, 0 replies; 70+ messages in thread
From: Kevin Hilman @ 2016-09-16 17:13 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Lina Iyer, Sudeep Holla, rjw, linux-pm, linux-arm-kernel,
	ulf.hansson, andy.gross, sboyd, linux-arm-msm, lorenzo.pieralisi,
	Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

Brendan Jackman <brendan.jackman@arm.com> writes:

> On Tue, Sep 13 2016 at 20:38, Lina Iyer wrote:
>> On Tue, Sep 13 2016 at 11:50 -0600, Brendan Jackman wrote:
>>>
>>>On Mon, Sep 12 2016 at 18:09, Sudeep Holla wrote:
>>>> On 12/09/16 17:16, Lina Iyer wrote:
>>>>> On Mon, Sep 12 2016 at 09:19 -0600, Brendan Jackman wrote:
>>>>>>
>>>>>> Hi Lina,
>>>>>>
>>>>>> Sorry for the delay here, Sudeep and I were both on holiday last
>>>>>> week.
>>>>>>
>>>>>> On Fri, Sep 02 2016 at 21:16, Lina Iyer wrote:
>>>>>>> On Fri, Sep 02 2016 at 07:21 -0700, Sudeep Holla wrote:
>>>>>> [...]
>>>>>>>> This version is *not very descriptive*. Also the discussion we had
>>>>>>>> on v3
>>>>>>>> version has not yet concluded IMO. So can I take that we agreed on what
>>>>>>>> was proposed there or not ?
>>>>>>>>
>>>>>>> Sorry, this example is not very descriptive. Pls. check the 8916 dtsi
>>>>>>> for the new changes in the following patches. Let me know if that makes
>>>>>>> sense.
>>>>
>>>> Please add all possible use-cases in the bindings. Though one can refer
>>>> the usage examples, it might not cover all usage descriptions. It helps
>>>> preventing people from defining their own when they don't see examples.
>>>> Again DT bindings are like specifications, it should be descriptive
>>>> especially this kind of generic ones.
>>>>
>>>>>>
>>>>>> The not-yet-concluded discussion Sudeep is referring to is at [1].
>>>>>>
>>>>>> In that thread we initially proposed the idea of, instead of splitting
>>>>>> state phandles between cpu-idle-states and domain-idle-states, putting
>>>>>> CPUs in their own domains and using domain-idle-states for _all_
>>>>>> phandles, deprecating cpu-idle-states. I've brought this up in other
>>>>>> threads [2] but discussion keeps petering out, and neither this example
>>>>>> nor the 8916 dtsi in this patch series reflect the idea.
>>>>>>
>>>>> Brendan, while your idea is good and will work for CPUs, I do not expect
>>>>> other domains and possibly CPU domains on some architectures to follow
>>>>> this model. There is nothing that prevents you from doing this today,
>>>
>>>As I understand it your opposition to this approach is this:
>>>
>>>There may be devices/CPUs which have idle states which do not constitute
>>>"power off". If we put those  devices in their own power domain for the
>>>purpose of putting their (non-power-off) idle state phandles in
>>>domain-idle-states, we are "lying" because no true power domain exists
>>>there.
>>>
>>>Am I correct that that's your opposition?
>>>
>>>If so, it seems we essentially disagree on the definition of a power
>>>domain, i.e. you define it as a set of devices that are powered on/off
>>>together while I define it as a set of devices whose power states
>>>(including idle states, not just on/off) are tied together. I said
>>>something similar on another thread [1] which died out.
>>>
>>>Do you agree that this is basically where we disagree, or am I missing
>>>something else?
>>>
>>>[2] http://www.spinics.net/lists/devicetree/msg141050.html
>>>
>> Yes, you are right, I disagree with the definition of a domain around a
>> device.
> OK, great.
>> However, as long as you don't force SoC's to define devices in
>> the CPU PM domain to have their own virtual domains, I have no problem.
>> You are welcome to define it the way you want for Juno or any other
>> platform.
> I don't think that's true; the bindings have to work the same way for
> all platforms. If for Juno we put CPU idle state phandles in a
> domain-idle-states property for per-CPU domains then, with the current
> implementation, the CPU-level idle states would be duplicated between
> cpuidle and the CPU PM domains.
>> I don't want that to be the forced and expected out of all
>> SoCs. All I am saying here is that the current implementation would
>> handle your case as well.
>
> The current implementation certainly does cover the work I want to
> do. The suggestion of per-device power domains for devices/CPUs with
> their own idle states is simply intended to minimise the binding design,
> since we'd no longer need cpu-idle-states or device-idle-states
> (the latter was proposed elsewhere).
>
> I am fine with the bindings as they are implemented currently so long
> as:
>
> - The binding doc makes clear how idle state phandles should be split
>   between cpu-idle-states and domain-idle-states. It should make it
>   obvious that no phandle should ever appear in both properties. It
>   would even be worth briefly going over the backward-compatibility
>   implications (e.g. what happens with old-kernel/new-DT and
>   new-kernel/old-DT combos if a platform has OSI and PC support and we
>   move cluster-level idle state phandles out of cpu-idle-states and into
>   domai-idle-states).
>
> - We have a reason against the definition of power domains as "a set of
>   devices bound by a common power (including idle) state", since that
>   definition would simplify the bindings. In my view, "nobody thinks
>   that's what a power domain is" _is_ a compelling reason, so if others
>   on the list get involved I'm convinced. I think I speak for Sudeep
>   here too.

I think we're having some terminology issues...

FWIW, the kernel terminology is actually "PM domain", not power domain.
This was intentional because the goal of a PM domain was to group
devices that share some PM features.  To be very specific to the kernel,
they use the same set of PM callbacks.  Today, this is most commonly
used to model power domains, where a group of devices share a power
rail, but it does not need to be limited to that.

That being said, I'm having a hard time understanding the root of the
disagreement.

It seems that you and Sudeep would like to use domain-idle-states to
replace/supersede cpu-idle-states, with the primary goal (and benefit)
being that it simplifies the DT bindings.  Is that correct?

The objections have come in because that implies that CPUs become their
own domains, which may not be the case in hardware in the sense that
they share a power rail.

However, IMO, thinking of a CPU as its own "PM domain" may make some
sense based on the terminology above.

I think the other objection may be that using a genpd to model a domain
with only a single device in it may be overkill, and I agree with that.
But I'm not sure whether you are proposing that making CPUs use
domain-idle-states implies that they necessarily have to use genpd.
Maybe someone could clarify that?

Kevin




* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-16 17:13                           ` Kevin Hilman
@ 2016-09-16 17:39                               ` Sudeep Holla
  -1 siblings, 0 replies; 70+ messages in thread
From: Sudeep Holla @ 2016-09-16 17:39 UTC (permalink / raw)
  To: Kevin Hilman
  Cc: Brendan Jackman, Sudeep Holla, Lina Iyer, rjw, linux-pm,
	linux-arm-kernel, ulf.hansson, andy.gross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

Hi Kevin,

Thanks for looking at this and summarizing the various discussions we
have had so far. I was thinking of writing something very similar but
couldn't, due to lack of time.

On 16/09/16 18:13, Kevin Hilman wrote:

[...]

> I think we're having some terminology issues...
>
> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
> This was intentional because the goal of the PM domain was to group
> devices that some PM features.  To be very specific to the kernel, they
> us the same set of PM callbacks.  Today, this is most commonly used to
> model power domains, where a group of devices share a power rail, but it
> does not need to be limited to that.
>

Agreed/Understood.

> That being said, I'm having a hard time understanding the root of the
> disagreement.
>

Yes. I tried to convey the same earlier, but failed. The only
disagreement is about a small part of these DT bindings: we would like
to make them completely hierarchical, all the way up to the CPU nodes.
More comments on that below.

> It seems that you and Sudeep would like to use domain-idle-states to
> replace/superceed cpu-idle-states with the primary goal (and benefit)
> being that it simplifies the DT bindings.  Is that correct?
>

Correct, we want to deprecate cpu-idle-states with the introduction of
these hierarchical PM bindings. Yes, IMO it simplifies things and avoids
any ABI break we might trigger if we fail to consider some use-case now.

> The objections have come in because that means that implies that CPUs
> become their own domains, which may not be the case in hardware in the
> sense that they share a power rail.
>

Agreed.

> However, IMO, thinking of a CPU as it's own "PM domain" may make some
> sense based on the terminology above.
>

Thanks for that. We do understand that it may not be 100% correct when
we strictly consider hardware terminology instead of the one above.
As long as we see no issues with the above terminology, it should be fine.

> I think the other objection may be that using a genpd to model domain
> with only a single device in it may be overkill, and I agree with that.

I too agree with that. Just because we represent it that way in DT
doesn't mean we need to create a genpd to model the domain. We can
always skip that if it is not required. That's purely an implementation
detail, and I have tried to convey the same in my previous emails. I
must say you have summarized it very clearly in this email. Thanks again
for that.

> But, I'm not sure if making CPUs use domain-idle-states implies that
> they necessarily have to use genpd is what you are proposing.  Maybe
> someone could clarify that?
>

No, I have not proposed anything around the implementation in the whole
discussion so far; I have constrained myself to the DT bindings. That's
the main reason why I was opposed to mentions of OS- vs.
platform-coordinated modes of CPU suspend in this discussion. IMO that's
completely out of scope for the DT binding we are defining here.

Hope that helps/clarifies the misunderstanding/disagreement.

-- 
Regards,
Sudeep


* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-16 17:39                               ` Sudeep Holla
@ 2016-09-19 15:09                                 ` Brendan Jackman
  -1 siblings, 0 replies; 70+ messages in thread
From: Brendan Jackman @ 2016-09-19 15:09 UTC (permalink / raw)
  To: Sudeep Holla
  Cc: Kevin Hilman, Brendan Jackman, Lina Iyer, rjw, linux-pm,
	linux-arm-kernel, ulf.hansson, andy.gross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree,
	Marc Titinger


On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
> Hi Kevin,
>
> Thanks for looking at this and simplifying various discussions we had so
> far. I was thinking of summarizing something very similar. I couldn't
> due to lack of time.
>
> On 16/09/16 18:13, Kevin Hilman wrote:
>
> [...]
>
>> I think we're having some terminology issues...
>>
>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>> This was intentional because the goal of the PM domain was to group
>> devices that some PM features.  To be very specific to the kernel, they
>> us the same set of PM callbacks.  Today, this is most commonly used to
>> model power domains, where a group of devices share a power rail, but it
>> does not need to be limited to that.
>>
>
> Agreed/Understood.
>
>> That being said, I'm having a hard time understanding the root of the
>> disagreement.
>>
>
> Yes. I tried to convey the same earlier, but have failed. The only
> disagreement is about a small part of this DT bindings. We would like to
> make it completely hierarchical up to CPU nodes. More comments on that
> below.
>
>> It seems that you and Sudeep would like to use domain-idle-states to
>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>> being that it simplifies the DT bindings.  Is that correct?
>>
>
> Correct, we want to deprecate cpu-idle-states with the introduction of
> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
> any ABI break we might trigger if we miss to consider some use-case now.
>
>> The objections have come in because that means that implies that CPUs
>> become their own domains, which may not be the case in hardware in the
>> sense that they share a power rail.
>>
>
> Agreed.
>
>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>> sense based on the terminology above.
>>
>
> Thanks for that, we do understand that it may not be 100% correct when
> we strictly considers hardware terminologies instead of above ones.
> As along as we see no issues with the above terminologies it should be fine.
>
>> I think the other objection may be that using a genpd to model domain
>> with only a single device in it may be overkill, and I agree with that.
>
> I too agree with that. Just because we represent that in DT in that way
> doesn't mean we need to create a genpd to model domain. We can always
> skip that if not required. That's pure implementation specifics and I
> have tried to convey the same in my previous emails. I must say you have
> summarized it very clearly in this email. Thanks again for that.
>
>> But, I'm not sure if making CPUs use domain-idle-states implies that
>> they necessarily have to use genpd is what you are proposing.  Maybe
>> someone could clarify that?
>>
>
> No, I have not proposing anything around implementation in the whole
> discussion so far. I have constrained myself just to DT bindings so far.
> That's the main reason why I was opposed to mentions of OS vs platform
> co-ordinated modes of CPU suspend in this discussion. IMO that's
> completely out of scope of this DT binding we are defining here.
>
> Hope that helps/clarifies the misunderstanding/disagreement.

Indeed. My intention was that the proposal would result in the exact
same kernel behaviour as Lina's current patchset, i.e. there is one
genpd per cluster, and CPU-level idle states are still handled by
cpuidle.

The only change from the current patchset would be in initialisation
code: some coordination would need to be done to determine which idle
states go into cpuidle and which go into the genpds (whereas with the
current bindings, states from cpu-idle-states go into cpuidle and states
from domain-idle-states go into genpd). So you could say that this would
be a trade-off between binding simplicity and implementation simplicity.
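
A rough sketch of that split under the current bindings (the labels and
idle-state phandles are illustrative, not taken from the patches):

	cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		/* CPU-level states, picked up by cpuidle */
		cpu-idle-states = <&CPU_RET &CPU_PWRDN>;
	};

	CLUSTER_PD: cluster-pm-domain {
		#power-domain-cells = <0>;
		/* cluster-level states, picked up by genpd */
		domain-idle-states = <&CLUSTER_RET &CLUSTER_PWRDN>;
	};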

Cheers,
Brendan

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-19 15:09                                 ` Brendan Jackman
@ 2016-09-20 16:17                                   ` Lina Iyer
  -1 siblings, 0 replies; 70+ messages in thread
From: Lina Iyer @ 2016-09-20 16:17 UTC (permalink / raw)
  To: Brendan Jackman
  Cc: Sudeep Holla, Kevin Hilman, rjw, linux-pm, linux-arm-kernel,
	ulf.hansson, andy.gross, sboyd, linux-arm-msm, lorenzo.pieralisi,
	Juri.Lelli, Axel Haslam, devicetree, Marc Titinger

On Mon, Sep 19 2016 at 09:09 -0600, Brendan Jackman wrote:
>
>On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
>> Hi Kevin,
>>
>> Thanks for looking at this and simplifying various discussions we had so
>> far. I was thinking of summarizing something very similar. I couldn't
>> due to lack of time.
>>
>> On 16/09/16 18:13, Kevin Hilman wrote:
>>
>> [...]
>>
>>> I think we're having some terminology issues...
>>>
>>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>>> This was intentional because the goal of the PM domain was to group
>>> devices that some PM features.  To be very specific to the kernel, they
>>> us the same set of PM callbacks.  Today, this is most commonly used to
>>> model power domains, where a group of devices share a power rail, but it
>>> does not need to be limited to that.
>>>
>>
>> Agreed/Understood.
>>
>>> That being said, I'm having a hard time understanding the root of the
>>> disagreement.
>>>
>>
>> Yes. I tried to convey the same earlier, but have failed. The only
>> disagreement is about a small part of this DT bindings. We would like to
>> make it completely hierarchical up to CPU nodes. More comments on that
>> below.
>>
>>> It seems that you and Sudeep would like to use domain-idle-states to
>>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>>> being that it simplifies the DT bindings.  Is that correct?
>>>
>>
>> Correct, we want to deprecate cpu-idle-states with the introduction of
>> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
>> any ABI break we might trigger if we miss to consider some use-case now.
>>
>>> The objections have come in because that means that implies that CPUs
>>> become their own domains, which may not be the case in hardware in the
>>> sense that they share a power rail.
>>>
>>
>> Agreed.
>>
>>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>>> sense based on the terminology above.
>>>
>>
>> Thanks for that, we do understand that it may not be 100% correct when
>> we strictly considers hardware terminologies instead of above ones.
>> As along as we see no issues with the above terminologies it should be fine.
>>
>>> I think the other objection may be that using a genpd to model domain
>>> with only a single device in it may be overkill, and I agree with that.
>>
>> I too agree with that. Just because we represent that in DT in that way
>> doesn't mean we need to create a genpd to model domain. We can always
>> skip that if not required. That's pure implementation specifics and I
>> have tried to convey the same in my previous emails. I must say you have
>> summarized it very clearly in this email. Thanks again for that.
>>
>>> But, I'm not sure if making CPUs use domain-idle-states implies that
>>> they necessarily have to use genpd is what you are proposing.  Maybe
>>> someone could clarify that?
>>>
>>
>> No, I have not proposing anything around implementation in the whole
>> discussion so far. I have constrained myself just to DT bindings so far.
>> That's the main reason why I was opposed to mentions of OS vs platform
>> co-ordinated modes of CPU suspend in this discussion. IMO that's
>> completely out of scope of this DT binding we are defining here.
>>
Fair. But understand that the PM domain bindings do not impose any
hierarchy requirements. Domain idle states are defined by the
domain-idle-states property in the domain node. How the DT bindings are
organized is immaterial to the PM domain core.

It is a different exercise altogether to look at CPU PSCI modes and
have a unified way of representing them in DT. The current set of
patches does not dictate where the domain idle states should be located
(pardon my example in the patch, which was not updated to reflect that).
That said, I do require that domains controlled by the PSCI f/w be
defined under the 'psci' node in DT, which is fair. All the domain needs
are phandles to the idle state definitions; how the nodes are arranged
in DT is of no consequence to the driver.
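
For example, something along these lines (a hedged sketch only; apart
from domain-idle-states and the placement under the psci node, the node
name, label and idle-state phandles are illustrative, and the
authoritative form is the bindings patch later in this series):

	psci {
		compatible = "arm,psci-1.0";
		method = "smc";

		CPU_PD: cpu-pm-domain {
			#power-domain-cells = <0>;
			/* phandles to idle-state nodes defined elsewhere */
			domain-idle-states = <&CLUSTER_RET &CLUSTER_PWRDN>;
		};
	};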

In my mind, providing a structure for CPU PM domains that can be used
for both OSI and PC is a separate effort. It may also fold in what
Brendan mentions below as part of that effort. The hierarchy presented
in [1] is inherent in the PM domain hierarchy, and the idle states don't
have to duplicate that information.

>> Hope that helps/clarifies the misunderstanding/disagreement.
>
>Indeed. My intention was that the proposal would result in the exact
>same kernel behaviour as Lina's current patchset, i.e. there is one
>genpd per cluster, and CPU-level idle states are still handled by
>cpuidle.
>
>The only change from the current patchset would be in initialisation
>code: some coordination would need to be done to determine which idle
>states go into cpuidle and which go into the genpds (whereas with the
>current bindings, states from cpu-idle-states go into cpuidle and states
>from domain-idle-states go into genpd). So you could say that this would
>be a trade-off between binding simplicity and implementation simplicity.
>
I would not oppose the idea of virtual domains around CPUs (though I
admit I am not comfortable with the idea), if that is the right thing to
do. But the scope of that work is extensive and should not be lumped in
with this proposal. It is an extensive code rework spanning the cpuidle
drivers and PSCI, and there are hooks in this code to help you achieve
that.

Thanks,
Lina

[1]. https://patchwork.kernel.org/patch/9264507/

* Re: [PATCH v5 02/16] dt/bindings: Update binding for PM domain idle states
  2016-09-20 16:17                                   ` Lina Iyer
@ 2016-09-21  9:48                                     ` Brendan Jackman
  -1 siblings, 0 replies; 70+ messages in thread
From: Brendan Jackman @ 2016-09-21  9:48 UTC (permalink / raw)
  To: Lina Iyer
  Cc: Brendan Jackman, Sudeep Holla, Kevin Hilman, rjw, linux-pm,
	linux-arm-kernel, ulf.hansson, andy.gross, sboyd, linux-arm-msm,
	lorenzo.pieralisi, Juri.Lelli, Axel Haslam, devicetree,
	Marc Titinger


On Tue, Sep 20 2016 at 17:17, Lina Iyer <lina.iyer@linaro.org> wrote:
> On Mon, Sep 19 2016 at 09:09 -0600, Brendan Jackman wrote:
>>
>>On Fri, Sep 16 2016 at 18:39, Sudeep Holla <sudeep.holla@arm.com> wrote:
>>> Hi Kevin,
>>>
>>> Thanks for looking at this and simplifying various discussions we had so
>>> far. I was thinking of summarizing something very similar. I couldn't
>>> due to lack of time.
>>>
>>> On 16/09/16 18:13, Kevin Hilman wrote:
>>>
>>> [...]
>>>
>>>> I think we're having some terminology issues...
>>>>
>>>> FWIW, the kernel terminolgy is actually "PM domain", not power domain.
>>>> This was intentional because the goal of the PM domain was to group
>>>> devices that some PM features.  To be very specific to the kernel, they
>>>> us the same set of PM callbacks.  Today, this is most commonly used to
>>>> model power domains, where a group of devices share a power rail, but it
>>>> does not need to be limited to that.
>>>>
>>>
>>> Agreed/Understood.
>>>
>>>> That being said, I'm having a hard time understanding the root of the
>>>> disagreement.
>>>>
>>>
>>> Yes. I tried to convey the same earlier, but have failed. The only
>>> disagreement is about a small part of this DT bindings. We would like to
>>> make it completely hierarchical up to CPU nodes. More comments on that
>>> below.
>>>
>>>> It seems that you and Sudeep would like to use domain-idle-states to
>>>> replace/superceed cpu-idle-states with the primary goal (and benefit)
>>>> being that it simplifies the DT bindings.  Is that correct?
>>>>
>>>
>>> Correct, we want to deprecate cpu-idle-states with the introduction of
>>> this hierarchical PM bindings. Yes IMO, it simplifies things and avoids
>>> any ABI break we might trigger if we miss to consider some use-case now.
>>>
>>>> The objections have come in because that means that implies that CPUs
>>>> become their own domains, which may not be the case in hardware in the
>>>> sense that they share a power rail.
>>>>
>>>
>>> Agreed.
>>>
>>>> However, IMO, thinking of a CPU as it's own "PM domain" may make some
>>>> sense based on the terminology above.
>>>>
>>>
>>> Thanks for that, we do understand that it may not be 100% correct when
>>> we strictly considers hardware terminologies instead of above ones.
>>> As along as we see no issues with the above terminologies it should be fine.
>>>
>>>> I think the other objection may be that using a genpd to model domain
>>>> with only a single device in it may be overkill, and I agree with that.
>>>
>>> I too agree with that. Just because we represent that in DT in that way
>>> doesn't mean we need to create a genpd to model domain. We can always
>>> skip that if not required. That's pure implementation specifics and I
>>> have tried to convey the same in my previous emails. I must say you have
>>> summarized it very clearly in this email. Thanks again for that.
>>>
>>>> But, I'm not sure if making CPUs use domain-idle-states implies that
>>>> they necessarily have to use genpd is what you are proposing.  Maybe
>>>> someone could clarify that?
>>>>
>>>
>>> No, I have not proposing anything around implementation in the whole
>>> discussion so far. I have constrained myself just to DT bindings so far.
>>> That's the main reason why I was opposed to mentions of OS vs platform
>>> co-ordinated modes of CPU suspend in this discussion. IMO that's
>>> completely out of scope of this DT binding we are defining here.
>>>
> Fair. But understand the PM Domain bindings do not impose any
> requirements of hierarchy. Domain idle states are defined by the
> property domain-idle-states in the domain node. How the DT bindings are
> organized is immaterial to the PM Domain core.
>
> It is a different exercise all together to look at CPU PSCI modes and
> have a unified way of representing them in DT. The current set of
> patches does not dictate where the domain idle states be located (pardon
> my example in the patch, which was not updated to reflect that). That
> said, I do require that domains that are controlled by the PSCI f/w be
> defined under the 'psci' node in DT, which is fair. All the domain needs
> are phandles to the idle state definitions; how the nodes are arranged
> in DT is not of consequence to the driver.
>
> In my mind providing a structure to CPU PM domains that can be used for
> both OSI and PC is a separate effort.

Do you mean a structure in the kernel or in DT? If the former, I agree;
if the latter, I strongly disagree. I think DT bindings should be
totally unaware of PSCI suspend modes.

> It may also club what Brendan
> mentions below as part of the effort. The hierarchy that is presented in
> [1] is inherent in the PM domain hierarchy and idle states don't have to
> duplicate that information.
>
>>> Hope that helps/clarifies the misunderstanding/disagreement.
>>
>>Indeed. My intention was that the proposal would result in the exact
>>same kernel behaviour as Lina's current patchset, i.e. there is one
>>genpd per cluster, and CPU-level idle states are still handled by
>>cpuidle.
>>
>>The only change from the current patchset would be in initialisation
>>code: some coordination would need to be done to determine which idle
>>states go into cpuidle and which go into the genpds (whereas with the
>>current bindings, states from cpu-idle-states go into cpuidle and states
>>from domain-idle-states go into genpd). So you could say that this would
>>be a trade-off between binding simplicity and implementation simplicity.
>>
> I would not oppose the idea of virtual domains around CPUs (I admit I am
> not comfortable with the idea though), if that is the right thing to do.
> But the scope of that work is extensive and should not be clubbed as
> part of this proposal. It is an extensive code rework spanning cpuidle
> drivers and PSCI and there are hooks in this code to help you achieve
> that.

If we want to take the per-CPU domains approach, we _have_ to do it as
part of this proposal; it's a different set of semantics for the
cpu-idle-states/domain-idle-states properties. It would mean
cpu-idle-states is _superseded_ by domain idle states - implementing one
solution (where cpu-idle-states and domain idle states are both taken
into consideration by the implementation) then later switching to the
alternative (where cpu-idle-states is ignored when a CPU PM domain tree
is present) wouldn't make sense from a backward-compatibility
perspective.
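
To make the contrast concrete, here is a hedged sketch of that
alternative (names illustrative): per-CPU domains hang off the cluster
domain and domain-idle-states carries all of the states, so
cpu-idle-states disappears:

	CLUSTER_PD: cluster-pm-domain {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_RET &CLUSTER_PWRDN>;
	};

	CPU0_PD: cpu0-pm-domain {
		#power-domain-cells = <0>;
		/* per-CPU sub-domain of the cluster */
		power-domains = <&CLUSTER_PD>;
		/* states that would go in cpu-idle-states today */
		domain-idle-states = <&CPU_RET &CPU_PWRDN>;
	};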

You're right that implementing the alternative proposal in the Linux
kernel would mean quite a big rework. But, idealistically speaking,
Linux-specific implementation realities shouldn't be a factor in Device
Tree binding designs, right?

If people think that using both cpu-idle-states and domain-idle-states
is the pragmatic choice (or object fundamentally to the idea of devices
with idle states as being in their own PM domain) then that's fine IMO,
but it's a one-time decision and I think we should be clear about why
we're making it.

Cheers,
Brendan
