* [PATCH v1 0/2] rockchip: power-domain: support qos save and restore
@ 2016-03-18  7:17 ` Elaine Zhang
  0 siblings, 0 replies; 23+ messages in thread
From: Elaine Zhang @ 2016-03-18  7:17 UTC (permalink / raw)
  To: heiko, khilman, xf, wxt
  Cc: linux-arm-kernel, huangtao, zyw, xxx, jay.xu, linux-rockchip,
	linux-kernel, Elaine Zhang

Add qos documentation in dt-bindings.
Modify the power domain driver to support qos save and restore.

Elaine Zhang (2):
  dt-bindings: modify document of Rockchip power domains
  rockchip: power-domain: support qos save and restore

 .../bindings/soc/rockchip/power_domain.txt         |  8 ++
 drivers/soc/rockchip/pm_domains.c                  | 87 +++++++++++++++++++++-
 2 files changed, 92 insertions(+), 3 deletions(-)

-- 
1.9.1

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v1 1/2] dt-bindings: modify document of Rockchip power domains
  2016-03-18  7:17 ` Elaine Zhang
@ 2016-03-18  7:17   ` Elaine Zhang
  -1 siblings, 0 replies; 23+ messages in thread
From: Elaine Zhang @ 2016-03-18  7:17 UTC (permalink / raw)
  To: heiko, khilman, xf, wxt
  Cc: linux-arm-kernel, huangtao, zyw, xxx, jay.xu, linux-rockchip,
	linux-kernel, Elaine Zhang

Add a qos example for the power domains found on Rockchip SoCs.
The qos register descriptions in the TRMs
(rk3036, rk3228, rk3288, rk3366, rk3368, rk3399) look the same.

Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
---
 Documentation/devicetree/bindings/soc/rockchip/power_domain.txt | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt b/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
index 13dc6a3..5a61d36 100644
--- a/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
+++ b/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
@@ -19,6 +19,13 @@ Required properties for power domain sub nodes:
 - clocks (optional): phandles to clocks which need to be enabled while power domain
 	switches state.
 
+Qos Example:
+
+	qos_gpu: qos_gpu@ffaf0000 {
+		compatible = "syscon";
+		reg = <0x0 0xffaf0000 0x0 0x20>;
+	};
+
 Example:
 
 	power: power-controller {
@@ -30,6 +37,7 @@ Example:
 		pd_gpu {
 			reg = <RK3288_PD_GPU>;
 			clocks = <&cru ACLK_GPU>;
+			pm_qos = <&qos_gpu>;
 		};
 	};
 
-- 
1.9.1


* [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-03-18  7:17 ` Elaine Zhang
@ 2016-03-18  7:17   ` Elaine Zhang
  -1 siblings, 0 replies; 23+ messages in thread
From: Elaine Zhang @ 2016-03-18  7:17 UTC (permalink / raw)
  To: heiko, khilman, xf, wxt
  Cc: linux-arm-kernel, huangtao, zyw, xxx, jay.xu, linux-rockchip,
	linux-kernel, Elaine Zhang

Support qos save and restore when the power domain is powered on/off.

Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
---
 drivers/soc/rockchip/pm_domains.c | 87 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 84 insertions(+), 3 deletions(-)

diff --git a/drivers/soc/rockchip/pm_domains.c b/drivers/soc/rockchip/pm_domains.c
index 18aee6b..c5f4be6 100644
--- a/drivers/soc/rockchip/pm_domains.c
+++ b/drivers/soc/rockchip/pm_domains.c
@@ -45,10 +45,21 @@ struct rockchip_pmu_info {
 	const struct rockchip_domain_info *domain_info;
 };
 
+#define MAX_QOS_NODE_NUM	20
+#define MAX_QOS_REGS_NUM	5
+#define QOS_PRIORITY		0x08
+#define QOS_MODE		0x0c
+#define QOS_BANDWIDTH		0x10
+#define QOS_SATURATION		0x14
+#define QOS_EXTCONTROL		0x18
+
 struct rockchip_pm_domain {
 	struct generic_pm_domain genpd;
 	const struct rockchip_domain_info *info;
 	struct rockchip_pmu *pmu;
+	int num_qos;
+	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
+	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
 	int num_clks;
 	struct clk *clks[];
 };
@@ -111,6 +122,55 @@ static int rockchip_pmu_set_idle_request(struct rockchip_pm_domain *pd,
 	return 0;
 }
 
+static int rockchip_pmu_save_qos(struct rockchip_pm_domain *pd)
+{
+	int i;
+
+	for (i = 0; i < pd->num_qos; i++) {
+		regmap_read(pd->qos_regmap[i],
+			    QOS_PRIORITY,
+			    &pd->qos_save_regs[i][0]);
+		regmap_read(pd->qos_regmap[i],
+			    QOS_MODE,
+			    &pd->qos_save_regs[i][1]);
+		regmap_read(pd->qos_regmap[i],
+			    QOS_BANDWIDTH,
+			    &pd->qos_save_regs[i][2]);
+		regmap_read(pd->qos_regmap[i],
+			    QOS_SATURATION,
+			    &pd->qos_save_regs[i][3]);
+		regmap_read(pd->qos_regmap[i],
+			    QOS_EXTCONTROL,
+			    &pd->qos_save_regs[i][4]);
+	}
+	return 0;
+}
+
+static int rockchip_pmu_restore_qos(struct rockchip_pm_domain *pd)
+{
+	int i;
+
+	for (i = 0; i < pd->num_qos; i++) {
+		regmap_write(pd->qos_regmap[i],
+			     QOS_PRIORITY,
+			     pd->qos_save_regs[i][0]);
+		regmap_write(pd->qos_regmap[i],
+			     QOS_MODE,
+			     pd->qos_save_regs[i][1]);
+		regmap_write(pd->qos_regmap[i],
+			     QOS_BANDWIDTH,
+			     pd->qos_save_regs[i][2]);
+		regmap_write(pd->qos_regmap[i],
+			     QOS_SATURATION,
+			     pd->qos_save_regs[i][3]);
+		regmap_write(pd->qos_regmap[i],
+			     QOS_EXTCONTROL,
+			     pd->qos_save_regs[i][4]);
+	}
+
+	return 0;
+}
+
 static bool rockchip_pmu_domain_is_on(struct rockchip_pm_domain *pd)
 {
 	struct rockchip_pmu *pmu = pd->pmu;
@@ -147,7 +207,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on)
 			clk_enable(pd->clks[i]);
 
 		if (!power_on) {
-			/* FIXME: add code to save AXI_QOS */
+			rockchip_pmu_save_qos(pd);
 
 			/* if powering down, idle request to NIU first */
 			rockchip_pmu_set_idle_request(pd, true);
@@ -159,7 +219,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain *pd, bool power_on)
 			/* if powering up, leave idle mode */
 			rockchip_pmu_set_idle_request(pd, false);
 
-			/* FIXME: add code to restore AXI_QOS */
+			rockchip_pmu_restore_qos(pd);
 		}
 
 		for (i = pd->num_clks - 1; i >= 0; i--)
@@ -227,9 +287,10 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 {
 	const struct rockchip_domain_info *pd_info;
 	struct rockchip_pm_domain *pd;
+	struct device_node *qos_node;
 	struct clk *clk;
 	int clk_cnt;
-	int i;
+	int i, j;
 	u32 id;
 	int error;
 
@@ -289,6 +350,26 @@ static int rockchip_pm_add_one_domain(struct rockchip_pmu *pmu,
 			clk, node->name);
 	}
 
+	pd->num_qos = of_count_phandle_with_args(node, "pm_qos",
+						 NULL);
+	if (pd->num_qos > MAX_QOS_NODE_NUM) {
+		dev_err(pmu->dev,
+			"qos node num overflows the max: qos_num = %d\n",
+			pd->num_qos);
+		error = -EINVAL;
+		goto err_out;
+	}
+
+	for (j = 0; j < pd->num_qos; j++) {
+		qos_node = of_parse_phandle(node, "pm_qos", j);
+		if (!qos_node) {
+			error = -ENODEV;
+			goto err_out;
+		}
+		pd->qos_regmap[j] = syscon_node_to_regmap(qos_node);
+		of_node_put(qos_node);
+	}
+
 	error = rockchip_pd_power(pd, true);
 	if (error) {
 		dev_err(pmu->dev,
-- 
1.9.1


* Re: [PATCH v1 1/2] dt-bindings: modify document of Rockchip power domains
  2016-03-18  7:17   ` Elaine Zhang
@ 2016-03-18 16:18     ` Kevin Hilman
  -1 siblings, 0 replies; 23+ messages in thread
From: Kevin Hilman @ 2016-03-18 16:18 UTC (permalink / raw)
  To: Elaine Zhang
  Cc: heiko, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel

Elaine Zhang <zhangqing@rock-chips.com> writes:

> Add qos example for power domain which found on Rockchip SoCs.
> These qos register description in TRMs
> (rk3036, rk3228, rk3288, rk3366, rk3368, rk3399) looks the same.

This should describe in more detail what "qos" is in this context.  At
first glance, it's just a range of registers that lose context and need
to be saved/restored.

Kevin


* Re: [PATCH v1 1/2] dt-bindings: modify document of Rockchip power domains
  2016-03-18 16:18     ` Kevin Hilman
@ 2016-03-18 22:16       ` Heiko Stuebner
  -1 siblings, 0 replies; 23+ messages in thread
From: Heiko Stuebner @ 2016-03-18 22:16 UTC (permalink / raw)
  To: Kevin Hilman
  Cc: Elaine Zhang, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx,
	jay.xu, linux-rockchip, linux-kernel

Am Freitag, 18. März 2016, 09:18:51 schrieb Kevin Hilman:
> Elaine Zhang <zhangqing@rock-chips.com> writes:
> > Add qos example for power domain which found on Rockchip SoCs.
> > These qos register description in TRMs
> > (rk3036, rk3228, rk3288, rk3366, rk3368, rk3399) looks the same.
> 
> This should describe in more detail what "qos" is in this context.  At
> first glance, it's just a range of registers that lose context that need
> to be saved/restored.

I guess that should be something like

---- 8< ----
Rockchip SoCs contain quality-of-service (qos) blocks managing priority,
bandwidth, etc. of each domain's connection to the interconnect.
These blocks lose state when their domain gets disabled and therefore
need to be saved when disabling and restored when enabling a power domain.

These qos blocks are also similar across all currently available Rockchip
SoCs.
---- 8< ----


* Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-03-18  7:17   ` Elaine Zhang
@ 2016-03-31 16:31     ` Heiko Stuebner
  -1 siblings, 0 replies; 23+ messages in thread
From: Heiko Stuebner @ 2016-03-31 16:31 UTC (permalink / raw)
  To: Elaine Zhang
  Cc: khilman, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel

Hi Elaine,

Am Freitag, 18. März 2016, 15:17:24 schrieb Elaine Zhang:
> support qos save and restore when power domain on/off.
> 
> Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>

overall looks nice already ... some implementation-specific comments below.

> ---
>  drivers/soc/rockchip/pm_domains.c | 87
> +++++++++++++++++++++++++++++++++++++-- 1 file changed, 84 insertions(+),
> 3 deletions(-)
> 
> diff --git a/drivers/soc/rockchip/pm_domains.c
> b/drivers/soc/rockchip/pm_domains.c index 18aee6b..c5f4be6 100644
> --- a/drivers/soc/rockchip/pm_domains.c
> +++ b/drivers/soc/rockchip/pm_domains.c
> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
>  	const struct rockchip_domain_info *domain_info;
>  };
> 
> +#define MAX_QOS_NODE_NUM	20
> +#define MAX_QOS_REGS_NUM	5
> +#define QOS_PRIORITY		0x08
> +#define QOS_MODE		0x0c
> +#define QOS_BANDWIDTH		0x10
> +#define QOS_SATURATION		0x14
> +#define QOS_EXTCONTROL		0x18
> +
>  struct rockchip_pm_domain {
>  	struct generic_pm_domain genpd;
>  	const struct rockchip_domain_info *info;
>  	struct rockchip_pmu *pmu;
> +	int num_qos;
> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];

struct regmap **qos_regmap;
u32 *qos_save_regs;


>  	int num_clks;
>  	struct clk *clks[];
>  };
> @@ -111,6 +122,55 @@ static int rockchip_pmu_set_idle_request(struct
> rockchip_pm_domain *pd, return 0;
>  }
> 
> +static int rockchip_pmu_save_qos(struct rockchip_pm_domain *pd)
> +{
> +	int i;
> +
> +	for (i = 0; i < pd->num_qos; i++) {
> +		regmap_read(pd->qos_regmap[i],
> +			    QOS_PRIORITY,
> +			    &pd->qos_save_regs[i][0]);
> +		regmap_read(pd->qos_regmap[i],
> +			    QOS_MODE,
> +			    &pd->qos_save_regs[i][1]);
> +		regmap_read(pd->qos_regmap[i],
> +			    QOS_BANDWIDTH,
> +			    &pd->qos_save_regs[i][2]);
> +		regmap_read(pd->qos_regmap[i],
> +			    QOS_SATURATION,
> +			    &pd->qos_save_regs[i][3]);
> +		regmap_read(pd->qos_regmap[i],
> +			    QOS_EXTCONTROL,
> +			    &pd->qos_save_regs[i][4]);
> +	}
> +	return 0;
> +}
> +
> +static int rockchip_pmu_restore_qos(struct rockchip_pm_domain *pd)
> +{
> +	int i;
> +
> +	for (i = 0; i < pd->num_qos; i++) {
> +		regmap_write(pd->qos_regmap[i],
> +			     QOS_PRIORITY,
> +			     pd->qos_save_regs[i][0]);
> +		regmap_write(pd->qos_regmap[i],
> +			     QOS_MODE,
> +			     pd->qos_save_regs[i][1]);
> +		regmap_write(pd->qos_regmap[i],
> +			     QOS_BANDWIDTH,
> +			     pd->qos_save_regs[i][2]);
> +		regmap_write(pd->qos_regmap[i],
> +			     QOS_SATURATION,
> +			     pd->qos_save_regs[i][3]);
> +		regmap_write(pd->qos_regmap[i],
> +			     QOS_EXTCONTROL,
> +			     pd->qos_save_regs[i][4]);
> +	}
> +
> +	return 0;
> +}
> +
>  static bool rockchip_pmu_domain_is_on(struct rockchip_pm_domain *pd)
>  {
>  	struct rockchip_pmu *pmu = pd->pmu;
> @@ -147,7 +207,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain
> *pd, bool power_on) clk_enable(pd->clks[i]);
> 
>  		if (!power_on) {
> -			/* FIXME: add code to save AXI_QOS */
> +			rockchip_pmu_save_qos(pd);
> 
>  			/* if powering down, idle request to NIU first */
>  			rockchip_pmu_set_idle_request(pd, true);
> @@ -159,7 +219,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain
> *pd, bool power_on) /* if powering up, leave idle mode */
>  			rockchip_pmu_set_idle_request(pd, false);
> 
> -			/* FIXME: add code to restore AXI_QOS */
> +			rockchip_pmu_restore_qos(pd);
>  		}
> 
>  		for (i = pd->num_clks - 1; i >= 0; i--)
> @@ -227,9 +287,10 @@ static int rockchip_pm_add_one_domain(struct
> rockchip_pmu *pmu, {
>  	const struct rockchip_domain_info *pd_info;
>  	struct rockchip_pm_domain *pd;
> +	struct device_node *qos_node;
>  	struct clk *clk;
>  	int clk_cnt;
> -	int i;
> +	int i, j;
>  	u32 id;
>  	int error;
> 
> @@ -289,6 +350,26 @@ static int rockchip_pm_add_one_domain(struct
> rockchip_pmu *pmu, clk, node->name);
>  	}
> 
> +	pd->num_qos = of_count_phandle_with_args(node, "pm_qos",
> +						 NULL);

missing error handling here:

if (pd->num_qos < 0) {
	error = pd->num_qos;
	goto err_out;
}

Right now, you always allocate MAX_QOS_NODE_NUM entries for regmaps and 
registers for each domain - a bit of a waste over all domains, so maybe 
like:

pd->qos_regmap = kcalloc(pd->num_qos, sizeof(*pd->qos_regmap), GFP_KERNEL);

pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM, sizeof(u32),
GFP_KERNEL);

+ of course error handling for both + cleanup in rockchip_remove_one_domain

> +
> +	for (j = 0; j < pd->num_qos; j++) {
> +		qos_node = of_parse_phandle(node, "pm_qos", j);
> +		if (!qos_node) {
> +			error = -ENODEV;
> +			goto err_out;
> +		}
> +		pd->qos_regmap[j] = syscon_node_to_regmap(qos_node);

missing
if (IS_ERR(pd->qos_regmap[j])) { ...}

> +		of_node_put(qos_node);
> +	}
> +
>  	error = rockchip_pd_power(pd, true);
>  	if (error) {
>  		dev_err(pmu->dev,


* Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-03-31 16:31     ` Heiko Stuebner
@ 2016-04-01  2:33       ` Elaine Zhang
  -1 siblings, 0 replies; 23+ messages in thread
From: Elaine Zhang @ 2016-04-01  2:33 UTC (permalink / raw)
  To: Heiko Stuebner
  Cc: khilman, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel


Hi Heiko,

I agree with most of your modifications, except for the u32 *qos_save_regs
handling below.

On 04/01/2016 12:31 AM, Heiko Stuebner wrote:
> Hi Elaine,
>
> Am Freitag, 18. März 2016, 15:17:24 schrieb Elaine Zhang:
>> support qos save and restore when power domain on/off.
>>
>> Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
>
> overall looks nice already ... some implementation-specific comments below.
>
>> ---
>>   drivers/soc/rockchip/pm_domains.c | 87
>> +++++++++++++++++++++++++++++++++++++-- 1 file changed, 84 insertions(+),
>> 3 deletions(-)
>>
>> diff --git a/drivers/soc/rockchip/pm_domains.c
>> b/drivers/soc/rockchip/pm_domains.c index 18aee6b..c5f4be6 100644
>> --- a/drivers/soc/rockchip/pm_domains.c
>> +++ b/drivers/soc/rockchip/pm_domains.c
>> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
>>   	const struct rockchip_domain_info *domain_info;
>>   };
>>
>> +#define MAX_QOS_NODE_NUM	20
>> +#define MAX_QOS_REGS_NUM	5
>> +#define QOS_PRIORITY		0x08
>> +#define QOS_MODE		0x0c
>> +#define QOS_BANDWIDTH		0x10
>> +#define QOS_SATURATION		0x14
>> +#define QOS_EXTCONTROL		0x18
>> +
>>   struct rockchip_pm_domain {
>>   	struct generic_pm_domain genpd;
>>   	const struct rockchip_domain_info *info;
>>   	struct rockchip_pmu *pmu;
>> +	int num_qos;
>> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
>> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
>
> struct regmap **qos_regmap;
> u32 *qos_save_regs;
When we save and restore qos registers, we need to save five regs for every
qos, like this:
for (i = 0; i < pd->num_qos; i++) {
		regmap_read(pd->qos_regmap[i],
			    QOS_PRIORITY,
			    &pd->qos_save_regs[i][0]);
		regmap_read(pd->qos_regmap[i],
			    QOS_MODE,
			    &pd->qos_save_regs[i][1]);
		regmap_read(pd->qos_regmap[i],
			    QOS_BANDWIDTH,
			    &pd->qos_save_regs[i][2]);
		regmap_read(pd->qos_regmap[i],
			    QOS_SATURATION,
			    &pd->qos_save_regs[i][3]);
		regmap_read(pd->qos_regmap[i],
			    QOS_EXTCONTROL,
			    &pd->qos_save_regs[i][4]);
	}
so we cannot define qos_save_regs as a flat u32 *qos_save_regs; and
allocate the buffer like
pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM, sizeof(u32),
GFP_KERNEL);
>
>
>>   	int num_clks;
>>   	struct clk *clks[];
>>   };
>> @@ -111,6 +122,55 @@ static int rockchip_pmu_set_idle_request(struct
>> rockchip_pm_domain *pd, return 0;
>>   }
>>
>> +static int rockchip_pmu_save_qos(struct rockchip_pm_domain *pd)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < pd->num_qos; i++) {
>> +		regmap_read(pd->qos_regmap[i],
>> +			    QOS_PRIORITY,
>> +			    &pd->qos_save_regs[i][0]);
>> +		regmap_read(pd->qos_regmap[i],
>> +			    QOS_MODE,
>> +			    &pd->qos_save_regs[i][1]);
>> +		regmap_read(pd->qos_regmap[i],
>> +			    QOS_BANDWIDTH,
>> +			    &pd->qos_save_regs[i][2]);
>> +		regmap_read(pd->qos_regmap[i],
>> +			    QOS_SATURATION,
>> +			    &pd->qos_save_regs[i][3]);
>> +		regmap_read(pd->qos_regmap[i],
>> +			    QOS_EXTCONTROL,
>> +			    &pd->qos_save_regs[i][4]);
>> +	}
>> +	return 0;
>> +}
>> +
>> +static int rockchip_pmu_restore_qos(struct rockchip_pm_domain *pd)
>> +{
>> +	int i;
>> +
>> +	for (i = 0; i < pd->num_qos; i++) {
>> +		regmap_write(pd->qos_regmap[i],
>> +			     QOS_PRIORITY,
>> +			     pd->qos_save_regs[i][0]);
>> +		regmap_write(pd->qos_regmap[i],
>> +			     QOS_MODE,
>> +			     pd->qos_save_regs[i][1]);
>> +		regmap_write(pd->qos_regmap[i],
>> +			     QOS_BANDWIDTH,
>> +			     pd->qos_save_regs[i][2]);
>> +		regmap_write(pd->qos_regmap[i],
>> +			     QOS_SATURATION,
>> +			     pd->qos_save_regs[i][3]);
>> +		regmap_write(pd->qos_regmap[i],
>> +			     QOS_EXTCONTROL,
>> +			     pd->qos_save_regs[i][4]);
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>   static bool rockchip_pmu_domain_is_on(struct rockchip_pm_domain *pd)
>>   {
>>   	struct rockchip_pmu *pmu = pd->pmu;
>> @@ -147,7 +207,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain
>> *pd, bool power_on) clk_enable(pd->clks[i]);
>>
>>   		if (!power_on) {
>> -			/* FIXME: add code to save AXI_QOS */
>> +			rockchip_pmu_save_qos(pd);
>>
>>   			/* if powering down, idle request to NIU first */
>>   			rockchip_pmu_set_idle_request(pd, true);
>> @@ -159,7 +219,7 @@ static int rockchip_pd_power(struct rockchip_pm_domain
>> *pd, bool power_on) /* if powering up, leave idle mode */
>>   			rockchip_pmu_set_idle_request(pd, false);
>>
>> -			/* FIXME: add code to restore AXI_QOS */
>> +			rockchip_pmu_restore_qos(pd);
>>   		}
>>
>>   		for (i = pd->num_clks - 1; i >= 0; i--)
>> @@ -227,9 +287,10 @@ static int rockchip_pm_add_one_domain(struct
>> rockchip_pmu *pmu, {
>>   	const struct rockchip_domain_info *pd_info;
>>   	struct rockchip_pm_domain *pd;
>> +	struct device_node *qos_node;
>>   	struct clk *clk;
>>   	int clk_cnt;
>> -	int i;
>> +	int i, j;
>>   	u32 id;
>>   	int error;
>>
>> @@ -289,6 +350,26 @@ static int rockchip_pm_add_one_domain(struct
>> rockchip_pmu *pmu, clk, node->name);
>>   	}
>>
>> +	pd->num_qos = of_count_phandle_with_args(node, "pm_qos",
>> +						 NULL);
>
> missing error handling here:
>
> if (pd->num_qos < 0) {
> 	error = pd->num_qos;
> 	goto err_out;
> }
>
> Right now, you always allocate MAX_QOS_NODE_NUM entries for regmaps and
> registers for each domain - a bit of a waste over all domains, so maybe
> like:
>
> pd->qos_regmap = kcalloc(pd->num_qos, sizeof(*pd->qos_regmap), GFP_KERNEL);
>
> pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM, sizeof(u32),
> GFP_KERNEL);
>
> + of course error handling for both + cleanup in rockchip_remove_one_domain
>
>> +
>> +	for (j = 0; j < pd->num_qos; j++) {
>> +		qos_node = of_parse_phandle(node, "pm_qos", j);
>> +		if (!qos_node) {
>> +			error = -ENODEV;
>> +			goto err_out;
>> +		}
>> +		pd->qos_regmap[j] = syscon_node_to_regmap(qos_node);
>
> missing
> if (IS_ERR(pd->qos_regmap[j])) { ...}
>
>> +		of_node_put(qos_node);
>> +	}
>> +
>>   	error = rockchip_pd_power(pd, true);
>>   	if (error) {
>>   		dev_err(pmu->dev,
>
>
>
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-04-01  2:33       ` Elaine Zhang
@ 2016-04-01 16:19         ` Heiko Stuebner
  -1 siblings, 0 replies; 23+ messages in thread
From: Heiko Stuebner @ 2016-04-01 16:19 UTC (permalink / raw)
  To: Elaine Zhang
  Cc: khilman, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel

Hi Elaine,

Am Freitag, 1. April 2016, 10:33:45 schrieb Elaine Zhang:
> I agree with most of your modifications.
> Except, the u32 *qos_save_regs below

you're right. I didn't take that into account when open-coding my idea.
A bit more below:

> On 04/01/2016 12:31 AM, Heiko Stuebner wrote:
> > Hi Elaine,
> > 
> > Am Freitag, 18. März 2016, 15:17:24 schrieb Elaine Zhang:
> >> support qos save and restore when power domain on/off.
> >> 
> >> Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
> > 
> > overall looks nice already ... some implementation-specific comments
> > below.> 
> >> ---
> >> 
> >>   drivers/soc/rockchip/pm_domains.c | 87
> >> 
> >> +++++++++++++++++++++++++++++++++++++-- 1 file changed, 84
> >> insertions(+),
> >> 3 deletions(-)
> >> 
> >> diff --git a/drivers/soc/rockchip/pm_domains.c
> >> b/drivers/soc/rockchip/pm_domains.c index 18aee6b..c5f4be6 100644
> >> --- a/drivers/soc/rockchip/pm_domains.c
> >> +++ b/drivers/soc/rockchip/pm_domains.c
> >> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
> >> 
> >>   	const struct rockchip_domain_info *domain_info;
> >>   
> >>   };
> >> 
> >> +#define MAX_QOS_NODE_NUM	20
> >> +#define MAX_QOS_REGS_NUM	5
> >> +#define QOS_PRIORITY		0x08
> >> +#define QOS_MODE		0x0c
> >> +#define QOS_BANDWIDTH		0x10
> >> +#define QOS_SATURATION		0x14
> >> +#define QOS_EXTCONTROL		0x18
> >> +
> >> 
> >>   struct rockchip_pm_domain {
> >>   
> >>   	struct generic_pm_domain genpd;
> >>   	const struct rockchip_domain_info *info;
> >>   	struct rockchip_pmu *pmu;
> >> 
> >> +	int num_qos;
> >> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
> >> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
> > 
> > struct regmap **qos_regmap;
> > u32 *qos_save_regs;
> 
> when we save and restore qos registers we need save five regs for every
> qos. like this :
> for (i = 0; i < pd->num_qos; i++) {
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_PRIORITY,
> 			    &pd->qos_save_regs[i][0]);
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_MODE,
> 			    &pd->qos_save_regs[i][1]);
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_BANDWIDTH,
> 			    &pd->qos_save_regs[i][2]);
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_SATURATION,
> 			    &pd->qos_save_regs[i][3]);
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_EXTCONTROL,
> 			    &pd->qos_save_regs[i][4]);
> 	}
> so we can not define qos_save_regs like u32 *qos_save_regs;,
> and apply buff like
> pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM, sizeof(u32),
> GFP_KERNEL);

so how about simply swapping indices and doing it like

u32 *qos_save_regs[MAX_QOS_REGS_NUM];

for (i = 0; i < MAX_QOS_REGS_NUM; i++) {
	qos_save_regs[i] = kcalloc(pd->num_qos, sizeof(u32), GFP_KERNEL);
	/* error handling here */
}

...
		regmap_read(pd->qos_regmap[i],
			    QOS_SATURATION,
			    &pd->qos_save_regs[3][i]);
...
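This swapped layout can be exercised in a small userspace sketch (hypothetical names; calloc stands in for kcalloc and -1 for -ENOMEM), including the unwind path the kernel version would need:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_QOS_REGS_NUM 5

/* Allocate one per-domain array per saved QoS register; on failure,
 * free whatever was already allocated before reporting the error. */
static int alloc_save_arrays(uint32_t *save[MAX_QOS_REGS_NUM], int num_qos)
{
	int i;

	for (i = 0; i < MAX_QOS_REGS_NUM; i++) {
		save[i] = calloc((size_t)num_qos, sizeof(uint32_t));
		if (!save[i]) {
			while (--i >= 0)
				free(save[i]);
			return -1;	/* the kernel version would return -ENOMEM */
		}
	}
	return 0;
}
```

A save loop then writes save[reg_idx][qos_idx], i.e. with the indices swapped relative to the fixed-size v1 arrays.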


Asked the other way around, how did you arrive at setting MAX_QOS_NODE_NUM to 
20? From looking at the rk3399 TRM, it seems there are only 38 QoS 
generators on the SoC in general (24 on the rk3288, with PD_VIO having a 
maximum of 9 QoS generators), so preparing for 20 per domain seems a bit overkill ;-)


Heiko

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-04-01 16:19         ` Heiko Stuebner
  (?)
@ 2016-04-05  1:57           ` Elaine Zhang
  -1 siblings, 0 replies; 23+ messages in thread
From: Elaine Zhang @ 2016-04-05  1:57 UTC (permalink / raw)
  To: Heiko Stuebner
  Cc: khilman, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel

hi, Heiko:

Thanks for your reply.
For your questions, I share the same concerns.

On 04/02/2016 12:19 AM, Heiko Stuebner wrote:
> Hi Elaine,
>
> Am Freitag, 1. April 2016, 10:33:45 schrieb Elaine Zhang:
>> I agree with most of your modifications.
>> Except, the u32 *qos_save_regs below
>
> you're right. I didn't take that into account when my open-coding my idea.
> A bit more below:
>
>> On 04/01/2016 12:31 AM, Heiko Stuebner wrote:
>>> Hi Elaine,
>>>
>>> Am Freitag, 18. März 2016, 15:17:24 schrieb Elaine Zhang:
>>>> support qos save and restore when power domain on/off.
>>>>
>>>> Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
>>>
>>> overall looks nice already ... some implementation-specific comments
>>> below.>
>>>> ---
>>>>
>>>>    drivers/soc/rockchip/pm_domains.c | 87
>>>>
>>>> +++++++++++++++++++++++++++++++++++++-- 1 file changed, 84
>>>> insertions(+),
>>>> 3 deletions(-)
>>>>
>>>> diff --git a/drivers/soc/rockchip/pm_domains.c
>>>> b/drivers/soc/rockchip/pm_domains.c index 18aee6b..c5f4be6 100644
>>>> --- a/drivers/soc/rockchip/pm_domains.c
>>>> +++ b/drivers/soc/rockchip/pm_domains.c
>>>> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
>>>>
>>>>    	const struct rockchip_domain_info *domain_info;
>>>>
>>>>    };
>>>>
>>>> +#define MAX_QOS_NODE_NUM	20
>>>> +#define MAX_QOS_REGS_NUM	5
>>>> +#define QOS_PRIORITY		0x08
>>>> +#define QOS_MODE		0x0c
>>>> +#define QOS_BANDWIDTH		0x10
>>>> +#define QOS_SATURATION		0x14
>>>> +#define QOS_EXTCONTROL		0x18
>>>> +
>>>>
>>>>    struct rockchip_pm_domain {
>>>>
>>>>    	struct generic_pm_domain genpd;
>>>>    	const struct rockchip_domain_info *info;
>>>>    	struct rockchip_pmu *pmu;
>>>>
>>>> +	int num_qos;
>>>> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
>>>> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
>>>
>>> struct regmap **qos_regmap;
>>> u32 *qos_save_regs;
>>
>> when we save and restore qos registers we need save five regs for every
>> qos. like this :
>> for (i = 0; i < pd->num_qos; i++) {
>> 		regmap_read(pd->qos_regmap[i],
>> 			    QOS_PRIORITY,
>> 			    &pd->qos_save_regs[i][0]);
>> 		regmap_read(pd->qos_regmap[i],
>> 			    QOS_MODE,
>> 			    &pd->qos_save_regs[i][1]);
>> 		regmap_read(pd->qos_regmap[i],
>> 			    QOS_BANDWIDTH,
>> 			    &pd->qos_save_regs[i][2]);
>> 		regmap_read(pd->qos_regmap[i],
>> 			    QOS_SATURATION,
>> 			    &pd->qos_save_regs[i][3]);
>> 		regmap_read(pd->qos_regmap[i],
>> 			    QOS_EXTCONTROL,
>> 			    &pd->qos_save_regs[i][4]);
>> 	}
>> so we can not define qos_save_regs like u32 *qos_save_regs;,
>> and apply buff like
>> pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM, sizeof(u32),
>> GFP_KERNEL);
>
> so how about simply swapping indices and doing it like
>
> u32 *qos_save_regs[MAX_QOS_REGS_NUM];
>
> for (i = 0; i < MAX_QOS_REGS_NUM; i++) {
> 	qos_save_regs[i] = kcalloc(pd->num_qos, sizeof(u32));
> 	/* error handling here */
> }
>
> ...
> 		regmap_read(pd->qos_regmap[i],
> 			    QOS_SATURATION,
> 			    &pd->qos_save_regs[3][i]);
> ...

I agree with you on this modification.

>
>
> Asked the other way around, how did you measure to set MAX_QOS_REGS_NUM to
> 20? From looking at the rk3399 TRM, it seems there are only 38 QoS
> generators on the SoC in general (24 on the rk3288 with PD_VIO having a
> maximum of 9 qos generators), so preparing for 20 seems a bit overkill ;-)
>
About MAX_QOS_NODE_NUM I also have some uncertainty.
Although there are only 38 QoS generators on the rk3399 (24 on the rk3288),
not all of the power domains need to power on/off, so not all QoS generators
need save and restore.
So what value would you suggest for MAX_QOS_NODE_NUM?

MAX_QOS_REGS_NUM is 5 because only five QoS registers need to be saved and
restored:
#define QOS_PRIORITY		0x08
#define QOS_MODE		0x0c
#define QOS_BANDWIDTH		0x10
#define QOS_SATURATION		0x14
#define QOS_EXTCONTROL		0x18
>
> Heiko
>
>
>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v1 2/2] rockchip: power-domain: support qos save and restore
  2016-04-05  1:57           ` Elaine Zhang
@ 2016-04-05 17:26             ` Heiko Stuebner
  -1 siblings, 0 replies; 23+ messages in thread
From: Heiko Stuebner @ 2016-04-05 17:26 UTC (permalink / raw)
  To: Elaine Zhang
  Cc: khilman, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx, jay.xu,
	linux-rockchip, linux-kernel

Hi Elaine,

Am Dienstag, 5. April 2016, 09:57:20 schrieb Elaine Zhang:
> Thanks for your reply.
> For your questions, I also have the same concerns.
> 
> On 04/02/2016 12:19 AM, Heiko Stuebner wrote:
> > Am Freitag, 1. April 2016, 10:33:45 schrieb Elaine Zhang:
> >> I agree with most of your modifications.
> >> Except, the u32 *qos_save_regs below
> > 
> > you're right. I didn't take that into account when my open-coding my
> > idea.> 
> > A bit more below:
> >> On 04/01/2016 12:31 AM, Heiko Stuebner wrote:
> >>> Hi Elaine,
> >>> 
> >>> Am Freitag, 18. März 2016, 15:17:24 schrieb Elaine Zhang:
> >>>> support qos save and restore when power domain on/off.
> >>>> 
> >>>> Signed-off-by: Elaine Zhang <zhangqing@rock-chips.com>
> >>> 
> >>> overall looks nice already ... some implementation-specific comments
> >>> below.>
> >>> 
> >>>> ---
> >>>> 
> >>>>    drivers/soc/rockchip/pm_domains.c | 87
> >>>> 
> >>>> +++++++++++++++++++++++++++++++++++++-- 1 file changed, 84
> >>>> insertions(+),
> >>>> 3 deletions(-)
> >>>> 
> >>>> diff --git a/drivers/soc/rockchip/pm_domains.c
> >>>> b/drivers/soc/rockchip/pm_domains.c index 18aee6b..c5f4be6 100644
> >>>> --- a/drivers/soc/rockchip/pm_domains.c
> >>>> +++ b/drivers/soc/rockchip/pm_domains.c
> >>>> @@ -45,10 +45,21 @@ struct rockchip_pmu_info {
> >>>> 
> >>>>    	const struct rockchip_domain_info *domain_info;
> >>>>    
> >>>>    };
> >>>> 
> >>>> +#define MAX_QOS_NODE_NUM	20
> >>>> +#define MAX_QOS_REGS_NUM	5
> >>>> +#define QOS_PRIORITY		0x08
> >>>> +#define QOS_MODE		0x0c
> >>>> +#define QOS_BANDWIDTH		0x10
> >>>> +#define QOS_SATURATION		0x14
> >>>> +#define QOS_EXTCONTROL		0x18
> >>>> +
> >>>> 
> >>>>    struct rockchip_pm_domain {
> >>>>    
> >>>>    	struct generic_pm_domain genpd;
> >>>>    	const struct rockchip_domain_info *info;
> >>>>    	struct rockchip_pmu *pmu;
> >>>> 
> >>>> +	int num_qos;
> >>>> +	struct regmap *qos_regmap[MAX_QOS_NODE_NUM];
> >>>> +	u32 qos_save_regs[MAX_QOS_NODE_NUM][MAX_QOS_REGS_NUM];
> >>> 
> >>> struct regmap **qos_regmap;
> >>> u32 *qos_save_regs;
> >> 
> >> when we save and restore qos registers we need to save five regs for
> >> every qos, like this:
> >>
> >> 	for (i = 0; i < pd->num_qos; i++) {
> >> 		regmap_read(pd->qos_regmap[i], QOS_PRIORITY,
> >> 			    &pd->qos_save_regs[i][0]);
> >> 		regmap_read(pd->qos_regmap[i], QOS_MODE,
> >> 			    &pd->qos_save_regs[i][1]);
> >> 		regmap_read(pd->qos_regmap[i], QOS_BANDWIDTH,
> >> 			    &pd->qos_save_regs[i][2]);
> >> 		regmap_read(pd->qos_regmap[i], QOS_SATURATION,
> >> 			    &pd->qos_save_regs[i][3]);
> >> 		regmap_read(pd->qos_regmap[i], QOS_EXTCONTROL,
> >> 			    &pd->qos_save_regs[i][4]);
> >> 	}
> >> 
> >> so we cannot define qos_save_regs as a plain u32 *qos_save_regs;
> >> we would have to allocate the buffer like
> >>
> >> 	pd->qos_save_regs = kcalloc(pd->num_qos * MAX_QOS_REGS_NUM,
> >> 				    sizeof(u32), GFP_KERNEL);
> > 
> > so how about simply swapping indices and doing it like
> > 
> > 	u32 *qos_save_regs[MAX_QOS_REGS_NUM];
> > 
> > 	for (i = 0; i < MAX_QOS_REGS_NUM; i++) {
> > 		qos_save_regs[i] = kcalloc(pd->num_qos, sizeof(u32),
> > 					   GFP_KERNEL);
> > 		/* error handling here */
> > 	}
> > 
> > ...
> > 
> > 	regmap_read(pd->qos_regmap[i], QOS_SATURATION,
> > 		    &pd->qos_save_regs[3][i]);
> > 
> > ...
> 
> I agree with you on this modification.
> 
> > Asked the other way around, how did you measure to set MAX_QOS_NODE_NUM
> > to 20? From looking at the rk3399 TRM, it seems there are only 38 QoS
> > generators on the SoC in general (24 on the rk3288, with PD_VIO having a
> > maximum of 9 qos generators), so preparing for 20 seems a bit overkill
> > ;-)
> About MAX_QOS_NODE_NUM I also have some uncertainty.
> Although there are only 38 QoS generators on the RK3399 (24 on the
> RK3288), not all of the power domains need to power on/off, so not all
> QoS blocks need save and restore.
> So about MAX_QOS_NODE_NUM, what do you suggest?

if we go the way outlined above, we don't need MAX_QOS_NODE_NUM at all, as
the driver will count the real number of qos nodes and allocate the needed
structs dynamically - problem solved :-D


> MAX_QOS_REGS_NUM is 5 because there are just 5 QoS registers that need
> save and restore.

yep, that is no problem and expected.


Heiko

^ permalink raw reply	[flat|nested] 23+ messages in thread


* Re: [PATCH v1 1/2] dt-bindings: modify document of Rockchip power domains
  2016-03-18 22:16       ` Heiko Stuebner
@ 2016-04-12  2:00         ` Heiko Stuebner
  -1 siblings, 0 replies; 23+ messages in thread
From: Heiko Stuebner @ 2016-04-12  2:00 UTC (permalink / raw)
  To: Kevin Hilman
  Cc: Elaine Zhang, xf, wxt, linux-arm-kernel, huangtao, zyw, xxx,
	jay.xu, linux-rockchip, linux-kernel

Hi Kevin,

Am Freitag, 18. März 2016, 23:16:55 schrieb Heiko Stuebner:
> Am Freitag, 18. März 2016, 09:18:51 schrieb Kevin Hilman:
> > Elaine Zhang <zhangqing@rock-chips.com> writes:
> > > Add qos example for power domain which found on Rockchip SoCs.
> > > These qos register description in TRMs
> > > (rk3036, rk3228, rk3288, rk3366, rk3368, rk3399) looks the same.
> > 
> > This should describe in more detail what "qos" is in this context.  At
> > first glance, it's just a range of registers that lose context that need
> > to be saved/restored.
> 
> I guess that should be something like
> 
> ---- 8< ----
> Rockchip SoCs contain quality of service (qos) blocks managing priority,
> bandwidth, etc. of the connection of each domain to the interconnect.
> These blocks lose their state when their domain gets disabled and
> therefore need to be saved when disabling and restored when enabling a
> power-domain.
> 
> These qos blocks are also similar across all currently available Rockchip
> SoCs.
> ---- 8< ----

does this look sane to you in terms of description, or do we need something 
more?


Heiko

^ permalink raw reply	[flat|nested] 23+ messages in thread


end of thread, other threads:[~2016-04-12  2:01 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-18  7:17 [PATCH v1 0/2] rockchip: power-domain: support qos save and restore Elaine Zhang
2016-03-18  7:17 ` Elaine Zhang
2016-03-18  7:17 ` [PATCH v1 1/2] dt-bindings: modify document of Rockchip power domains Elaine Zhang
2016-03-18  7:17   ` Elaine Zhang
2016-03-18 16:18   ` Kevin Hilman
2016-03-18 16:18     ` Kevin Hilman
2016-03-18 22:16     ` Heiko Stuebner
2016-03-18 22:16       ` Heiko Stuebner
2016-04-12  2:00       ` Heiko Stuebner
2016-04-12  2:00         ` Heiko Stuebner
2016-03-18  7:17 ` [PATCH v1 2/2] rockchip: power-domain: support qos save and restore Elaine Zhang
2016-03-18  7:17   ` Elaine Zhang
2016-03-31 16:31   ` Heiko Stuebner
2016-03-31 16:31     ` Heiko Stuebner
2016-04-01  2:33     ` Elaine Zhang
2016-04-01  2:33       ` Elaine Zhang
2016-04-01 16:19       ` Heiko Stuebner
2016-04-01 16:19         ` Heiko Stuebner
2016-04-05  1:57         ` Elaine Zhang
2016-04-05  1:57           ` Elaine Zhang
2016-04-05  1:57           ` Elaine Zhang
2016-04-05 17:26           ` Heiko Stuebner
2016-04-05 17:26             ` Heiko Stuebner
