* [PATCH v2 0/6] Coordinated Clks
@ 2019-03-05  4:49 ` Derek Basehore
  0 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

v2 changes:
-Unrolled recursion in unprepare/disable for
"clk: Remove recursion in clk_core_{prepare,enable}()"
-Fixed issue with "clk: fix clk_calc_subtree compute duplications"
-Fixed bug with too few allocated clk_change structs in
"clk: add coordinated clk changes support"
-Further cleaned up patches

Here's the first set of patches that I'm working on for the Common
Clk Framework. Part of this patch series adds a new clk op,
pre_rate_req. It is designed to replace the clk notifier approach
that many clk drivers currently use to set up alternate parents or
temporary dividers. This should allow for the removal of the
CLK_RECALC_NEW_RATES flag and the implementation of a better locking
scheme for the prepare lock.
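For context, the idea behind the new op can be sketched with toy types. These
names, values, and the simplified signature are illustrative stand-ins only,
not the kernel's real API; the actual op is introduced later in the series:

```c
#include <assert.h>

/*
 * Toy model of the pre_rate_req idea: instead of reacting to a
 * PRE_RATE_CHANGE notifier, the clk driver tells the framework up
 * front what intermediate configuration it needs before the real
 * rate change happens.
 */
struct rate_request {
	unsigned long rate;
	int parent_index;
};

/*
 * A cpuclk-style callback: park on a stable alternate parent
 * (index 1) at a safe rate while the main PLL (index 0) relocks
 * to the requested rate. Values are made up for illustration.
 */
static int cpuclk_pre_rate_req(const struct rate_request *next,
			       struct rate_request *pre)
{
	(void)next;		/* a real driver would inspect this */
	pre->rate = 24000000;	/* e.g. park on the crystal rate */
	pre->parent_index = 1;	/* temporary alternate parent */
	return 1;		/* an intermediate step is needed */
}
```

With an op of this shape, the framework can fold the temporary reparenting
into the coordinated rate change itself instead of relying on notifiers.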

Derek Basehore (5):
  clk: fix clk_calc_subtree compute duplications
  clk: change rates via list iteration
  clk: add coordinated clk changes support
  docs: driver-api: add pre_rate_req to clk documentation
  clk: rockchip: use pre_rate_req for cpuclk

Stephen Boyd (1):
  clk: Remove recursion in clk_core_{prepare,enable}()

 Documentation/driver-api/clk.rst |   7 +-
 drivers/clk/clk.c                | 659 +++++++++++++++++++++++--------
 drivers/clk/rockchip/clk-cpu.c   | 256 ++++++------
 include/linux/clk-provider.h     |  10 +
 4 files changed, 642 insertions(+), 290 deletions(-)

-- 
2.21.0.352.gf09ad66450-goog



* [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare,enable}()
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Stephen Boyd, Derek Basehore

From: Stephen Boyd <sboyd@codeaurora.org>

Enabling and preparing clocks can be written quite naturally with
recursion. We start at some point in the tree and recurse up the
tree to find the oldest parent clk that needs to be enabled or
prepared. Then we enable/prepare and return to the caller, going
back to the clk we started at and enabling/preparing along the
way. This also unrolls the recursion in unprepare/disable, which can
simply be done in the order of walking up the clk tree.

The problem is recursion isn't great for kernel code where we
have a limited stack size. Furthermore, we may be calling this
code inside clk_set_rate() which also has recursion in it, so
we're really not looking good if we encounter a tall clk tree.

Let's create a stack instead by looping over the parent chain and
collecting clks of interest. Then the enable/prepare becomes as
simple as iterating over that list and calling enable.
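As a standalone illustration of that shape, here is a toy version of the
prepare walk using a plain pointer link instead of the kernel's list_head
(the types and function are illustrative, not the kernel's real code):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct clk_core, just enough to show the shape. */
struct node {
	struct node *parent;
	int prepare_count;
	struct node *next;	/* plays the role of prepare_list */
};

/*
 * Phase 1: walk up the parent chain, pushing each clk onto a list
 * until one is already prepared (or the root is passed). Phase 2:
 * walk the list from the oldest ancestor down, bumping refcounts
 * exactly where the recursive unwind used to do its work.
 */
static void prepare_iterative(struct node *core)
{
	struct node *head = NULL;

	while (core) {
		core->next = head;	/* push; head ends up topmost */
		head = core;
		if (core->prepare_count)
			break;
		core = core->parent;
	}

	for (core = head; core; core = core->next)
		core->prepare_count++;	/* refcount, top-down */
}
```

Note that the already-prepared clk found in phase 1 is also on the list, so
its refcount is bumped like the rest, mirroring the patch's error-handling
invariant.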

Modified version of https://lore.kernel.org/patchwork/patch/814369/
-Fixed kernel warning
-unrolled recursion in unprepare/disable too

Cc: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 191 ++++++++++++++++++++++++++--------------------
 1 file changed, 107 insertions(+), 84 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index d2477a5058ac..94b3ac783d90 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -68,6 +68,8 @@ struct clk_core {
 	struct hlist_head	children;
 	struct hlist_node	child_node;
 	struct hlist_head	clks;
+	struct list_head	prepare_list;
+	struct list_head	enable_list;
 	unsigned int		notifier_count;
 #ifdef CONFIG_DEBUG_FS
 	struct dentry		*dentry;
@@ -677,34 +679,34 @@ static void clk_core_unprepare(struct clk_core *core)
 {
 	lockdep_assert_held(&prepare_lock);
 
-	if (!core)
-		return;
-
-	if (WARN(core->prepare_count == 0,
-	    "%s already unprepared\n", core->name))
-		return;
-
-	if (WARN(core->prepare_count == 1 && core->flags & CLK_IS_CRITICAL,
-	    "Unpreparing critical %s\n", core->name))
-		return;
+	while (core) {
+		if (WARN(core->prepare_count == 0,
+		    "%s already unprepared\n", core->name))
+			return;
 
-	if (core->flags & CLK_SET_RATE_GATE)
-		clk_core_rate_unprotect(core);
+		if (WARN(core->prepare_count == 1 &&
+			 core->flags & CLK_IS_CRITICAL,
+			 "Unpreparing critical %s\n", core->name))
+			return;
 
-	if (--core->prepare_count > 0)
-		return;
+		if (core->flags & CLK_SET_RATE_GATE)
+			clk_core_rate_unprotect(core);
 
-	WARN(core->enable_count > 0, "Unpreparing enabled %s\n", core->name);
+		if (--core->prepare_count > 0)
+			return;
 
-	trace_clk_unprepare(core);
+		WARN(core->enable_count > 0, "Unpreparing enabled %s\n",
+		     core->name);
+		trace_clk_unprepare(core);
 
-	if (core->ops->unprepare)
-		core->ops->unprepare(core->hw);
+		if (core->ops->unprepare)
+			core->ops->unprepare(core->hw);
 
-	clk_pm_runtime_put(core);
+		clk_pm_runtime_put(core);
 
-	trace_clk_unprepare_complete(core);
-	clk_core_unprepare(core->parent);
+		trace_clk_unprepare_complete(core);
+		core = core->parent;
+	}
 }
 
 static void clk_core_unprepare_lock(struct clk_core *core)
@@ -737,49 +739,57 @@ EXPORT_SYMBOL_GPL(clk_unprepare);
 static int clk_core_prepare(struct clk_core *core)
 {
 	int ret = 0;
+	LIST_HEAD(head);
 
 	lockdep_assert_held(&prepare_lock);
 
-	if (!core)
-		return 0;
+	while (core) {
+		list_add(&core->prepare_list, &head);
+		/*
+		 * Stop once we see a clk that is already prepared. Adding a clk
+		 * to the list with a non-zero prepare count (or reaching NULL)
+		 * makes error handling work as implemented.
+		 */
+		if (core->prepare_count)
+			break;
+		core = core->parent;
+	}
 
-	if (core->prepare_count == 0) {
-		ret = clk_pm_runtime_get(core);
-		if (ret)
-			return ret;
+	/* First entry has either a prepare_count of 0 or a NULL parent. */
+	list_for_each_entry(core, &head, prepare_list) {
+		if (core->prepare_count == 0) {
+			ret = clk_pm_runtime_get(core);
+			if (ret)
+				goto unprepare_parent;
 
-		ret = clk_core_prepare(core->parent);
-		if (ret)
-			goto runtime_put;
+			trace_clk_prepare(core);
 
-		trace_clk_prepare(core);
+			if (core->ops->prepare)
+				ret = core->ops->prepare(core->hw);
 
-		if (core->ops->prepare)
-			ret = core->ops->prepare(core->hw);
+			trace_clk_prepare_complete(core);
 
-		trace_clk_prepare_complete(core);
+			if (ret)
+				goto runtime_put;
+		}
+		core->prepare_count++;
 
-		if (ret)
-			goto unprepare;
+		/*
+		 * CLK_SET_RATE_GATE is a special case of clock protection
+		 * Instead of a consumer claiming exclusive rate control, it is
+		 * actually the provider which prevents any consumer from making
+		 * any operation which could result in a rate change or rate
+		 * glitch while the clock is prepared.
+		 */
+		if (core->flags & CLK_SET_RATE_GATE)
+			clk_core_rate_protect(core);
 	}
 
-	core->prepare_count++;
-
-	/*
-	 * CLK_SET_RATE_GATE is a special case of clock protection
-	 * Instead of a consumer claiming exclusive rate control, it is
-	 * actually the provider which prevents any consumer from making any
-	 * operation which could result in a rate change or rate glitch while
-	 * the clock is prepared.
-	 */
-	if (core->flags & CLK_SET_RATE_GATE)
-		clk_core_rate_protect(core);
-
 	return 0;
-unprepare:
-	clk_core_unprepare(core->parent);
 runtime_put:
 	clk_pm_runtime_put(core);
+unprepare_parent:
+	clk_core_unprepare(core->parent);
 	return ret;
 }
 
@@ -819,27 +829,27 @@ static void clk_core_disable(struct clk_core *core)
 {
 	lockdep_assert_held(&enable_lock);
 
-	if (!core)
-		return;
-
-	if (WARN(core->enable_count == 0, "%s already disabled\n", core->name))
-		return;
-
-	if (WARN(core->enable_count == 1 && core->flags & CLK_IS_CRITICAL,
-	    "Disabling critical %s\n", core->name))
-		return;
+	while (core) {
+		if (WARN(core->enable_count == 0, "%s already disabled\n",
+			 core->name))
+			return;
 
-	if (--core->enable_count > 0)
-		return;
+		if (WARN(core->enable_count == 1 &&
+			 core->flags & CLK_IS_CRITICAL,
+			 "Disabling critical %s\n", core->name))
+			return;
 
-	trace_clk_disable_rcuidle(core);
+		if (--core->enable_count > 0)
+			return;
 
-	if (core->ops->disable)
-		core->ops->disable(core->hw);
+		trace_clk_disable_rcuidle(core);
 
-	trace_clk_disable_complete_rcuidle(core);
+		if (core->ops->disable)
+			core->ops->disable(core->hw);
 
-	clk_core_disable(core->parent);
+		trace_clk_disable_complete_rcuidle(core);
+		core = core->parent;
+	}
 }
 
 static void clk_core_disable_lock(struct clk_core *core)
@@ -875,37 +885,48 @@ EXPORT_SYMBOL_GPL(clk_disable);
 static int clk_core_enable(struct clk_core *core)
 {
 	int ret = 0;
+	LIST_HEAD(head);
 
 	lockdep_assert_held(&enable_lock);
 
-	if (!core)
-		return 0;
+	while (core) {
+		if (WARN(core->prepare_count == 0,
+			 "Enabling unprepared %s\n", core->name))
+			return -ESHUTDOWN;
 
-	if (WARN(core->prepare_count == 0,
-	    "Enabling unprepared %s\n", core->name))
-		return -ESHUTDOWN;
-
-	if (core->enable_count == 0) {
-		ret = clk_core_enable(core->parent);
+		list_add(&core->enable_list, &head);
+		/*
+		 * Stop once we see a clk that is already enabled. Adding a clk
+		 * to the list with a non-zero enable count (or reaching NULL)
+		 * makes error handling work as implemented.
+		 */
+		if (core->enable_count)
+			break;
 
-		if (ret)
-			return ret;
+		core = core->parent;
+	}
 
-		trace_clk_enable_rcuidle(core);
+	/* First entry has either an enable_count of 0 or a NULL parent. */
+	list_for_each_entry(core, &head, enable_list) {
+		if (core->enable_count == 0) {
+			trace_clk_enable_rcuidle(core);
 
-		if (core->ops->enable)
-			ret = core->ops->enable(core->hw);
+			if (core->ops->enable)
+				ret = core->ops->enable(core->hw);
 
-		trace_clk_enable_complete_rcuidle(core);
+			trace_clk_enable_complete_rcuidle(core);
 
-		if (ret) {
-			clk_core_disable(core->parent);
-			return ret;
+			if (ret)
+				goto err;
 		}
+
+		core->enable_count++;
 	}
 
-	core->enable_count++;
 	return 0;
+err:
+	clk_core_disable(core->parent);
+	return ret;
 }
 
 static int clk_core_enable_lock(struct clk_core *core)
@@ -3288,6 +3309,8 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
 	core->num_parents = hw->init->num_parents;
 	core->min_rate = 0;
 	core->max_rate = ULONG_MAX;
+	INIT_LIST_HEAD(&core->prepare_list);
+	INIT_LIST_HEAD(&core->enable_list);
 	hw->core = core;
 
 	/* allocate local copy in case parent_names is __initdata */
-- 
2.21.0.352.gf09ad66450-goog



* [PATCH v2 2/6] clk: fix clk_calc_subtree compute duplications
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

clk_calc_subtree was called at every step up the clk tree in
clk_calc_new_rates. Since it recursively calls itself for its
children, each clk in the subtree was recomputed once for every step
taken up the tree toward the top clk.

Fix this by breaking the subtree calculation into two parts. The
first part, which is not itself recursive, recalcs the rate for each
child of the parent clk in clk_calc_new_rates. The second part
recursively calls the new clk_calc_subtree on the clk_core that was
passed into clk_set_rate.
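A rough way to see the savings is a toy simulation that only counts
clk_recalc-style child visits (made-up types, and a deliberately simplified
model of both schemes; not the kernel's code):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_CHILDREN 4

struct node {
	struct node *parent;
	struct node *children[MAX_CHILDREN];
	int nchildren;
};

static int recalcs;	/* counts clk_recalc-style child visits */

static void calc_subtree(struct node *n)
{
	int i;

	for (i = 0; i < n->nchildren; i++) {
		recalcs++;
		calc_subtree(n->children[i]);
	}
}

/* Old scheme: each step up the tree recomputes the whole subtree. */
static void calc_new_rates_old(struct node *n)
{
	if (n->parent)
		calc_new_rates_old(n->parent);
	calc_subtree(n);
}

/* New scheme (simplified): one subtree pass from the top clk. */
static void calc_new_rates_new(struct node *n)
{
	while (n->parent)
		n = n->parent;
	calc_subtree(n);
}
```

For a four-clk chain with the rate set on the bottom clk, the old walk
performs 6 child recalcs while the single pass performs 3, and the gap grows
with the depth and fan-out of the tree.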

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 49 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 38 insertions(+), 11 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 94b3ac783d90..e20364812b54 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1732,11 +1732,19 @@ static int __clk_speculate_rates(struct clk_core *core,
 	return ret;
 }
 
-static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
-			     struct clk_core *new_parent, u8 p_index)
+static void clk_calc_subtree(struct clk_core *core)
 {
 	struct clk_core *child;
 
+	hlist_for_each_entry(child, &core->children, child_node) {
+		child->new_rate = clk_recalc(child, core->new_rate);
+		clk_calc_subtree(child);
+	}
+}
+
+static void clk_set_change(struct clk_core *core, unsigned long new_rate,
+			   struct clk_core *new_parent, u8 p_index)
+{
 	core->new_rate = new_rate;
 	core->new_parent = new_parent;
 	core->new_parent_index = p_index;
@@ -1744,11 +1752,6 @@ static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
 	core->new_child = NULL;
 	if (new_parent && new_parent != core->parent)
 		new_parent->new_child = core;
-
-	hlist_for_each_entry(child, &core->children, child_node) {
-		child->new_rate = clk_recalc(child, new_rate);
-		clk_calc_subtree(child, child->new_rate, NULL, 0);
-	}
 }
 
 /*
@@ -1759,7 +1762,7 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 					   unsigned long rate)
 {
 	struct clk_core *top = core;
-	struct clk_core *old_parent, *parent;
+	struct clk_core *old_parent, *parent, *child;
 	unsigned long best_parent_rate = 0;
 	unsigned long new_rate;
 	unsigned long min_rate;
@@ -1806,6 +1809,13 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 		/* pass-through clock with adjustable parent */
 		top = clk_calc_new_rates(parent, rate);
 		new_rate = parent->new_rate;
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, new_rate);
+			clk_calc_subtree(child);
+		}
 		goto out;
 	}
 
@@ -1828,11 +1838,19 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 	}
 
 	if ((core->flags & CLK_SET_RATE_PARENT) && parent &&
-	    best_parent_rate != parent->rate)
+	    best_parent_rate != parent->rate) {
 		top = clk_calc_new_rates(parent, best_parent_rate);
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, parent->new_rate);
+			clk_calc_subtree(child);
+		}
+	}
 
 out:
-	clk_calc_subtree(core, new_rate, parent, p_index);
+	clk_set_change(core, new_rate, parent, p_index);
 
 	return top;
 }
@@ -2007,7 +2025,7 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
 static int clk_core_set_rate_nolock(struct clk_core *core,
 				    unsigned long req_rate)
 {
-	struct clk_core *top, *fail_clk;
+	struct clk_core *top, *fail_clk, *child;
 	unsigned long rate;
 	int ret = 0;
 
@@ -2033,6 +2051,15 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	if (ret)
 		return ret;
 
+	if (top != core) {
+		/* new_parent cannot be NULL in this case */
+		hlist_for_each_entry(child, &core->new_parent->children,
+				child_node)
+			clk_calc_subtree(child);
+	} else {
+		clk_calc_subtree(core);
+	}
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
-- 
2.21.0.352.gf09ad66450-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 2/6] clk: fix clk_calc_subtree compute duplications
@ 2019-03-05  4:49   ` Derek Basehore
  0 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: aisheng.dong, Derek Basehore, heiko, linux-doc, sboyd,
	mturquette, corbet, linux-rockchip, mchehab+samsung, linux-clk,
	linux-arm-kernel, jbrunet

clk_calc_subtree was called at every step up the clk tree in
clk_calc_new_rates. Since it recursively calls itself for its
children, this means it would be called once on each clk for each
step above the top clk is.

This fixes this by breaking the subtree calculation into two parts.
The first part recalcs the rate for each child of the parent clk in
clk_calc_new_rates. This part is not recursive itself. The second part
recursively calls a new clk_calc_subtree on the clk_core that was
passed into clk_set_rate.

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 49 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 38 insertions(+), 11 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 94b3ac783d90..e20364812b54 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1732,11 +1732,19 @@ static int __clk_speculate_rates(struct clk_core *core,
 	return ret;
 }
 
-static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
-			     struct clk_core *new_parent, u8 p_index)
+static void clk_calc_subtree(struct clk_core *core)
 {
 	struct clk_core *child;
 
+	hlist_for_each_entry(child, &core->children, child_node) {
+		child->new_rate = clk_recalc(child, core->new_rate);
+		clk_calc_subtree(child);
+	}
+}
+
+static void clk_set_change(struct clk_core *core, unsigned long new_rate,
+			   struct clk_core *new_parent, u8 p_index)
+{
 	core->new_rate = new_rate;
 	core->new_parent = new_parent;
 	core->new_parent_index = p_index;
@@ -1744,11 +1752,6 @@ static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
 	core->new_child = NULL;
 	if (new_parent && new_parent != core->parent)
 		new_parent->new_child = core;
-
-	hlist_for_each_entry(child, &core->children, child_node) {
-		child->new_rate = clk_recalc(child, new_rate);
-		clk_calc_subtree(child, child->new_rate, NULL, 0);
-	}
 }
 
 /*
@@ -1759,7 +1762,7 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 					   unsigned long rate)
 {
 	struct clk_core *top = core;
-	struct clk_core *old_parent, *parent;
+	struct clk_core *old_parent, *parent, *child;
 	unsigned long best_parent_rate = 0;
 	unsigned long new_rate;
 	unsigned long min_rate;
@@ -1806,6 +1809,13 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 		/* pass-through clock with adjustable parent */
 		top = clk_calc_new_rates(parent, rate);
 		new_rate = parent->new_rate;
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, new_rate);
+			clk_calc_subtree(child);
+		}
 		goto out;
 	}
 
@@ -1828,11 +1838,19 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 	}
 
 	if ((core->flags & CLK_SET_RATE_PARENT) && parent &&
-	    best_parent_rate != parent->rate)
+	    best_parent_rate != parent->rate) {
 		top = clk_calc_new_rates(parent, best_parent_rate);
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, parent->new_rate);
+			clk_calc_subtree(child);
+		}
+	}
 
 out:
-	clk_calc_subtree(core, new_rate, parent, p_index);
+	clk_set_change(core, new_rate, parent, p_index);
 
 	return top;
 }
@@ -2007,7 +2025,7 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
 static int clk_core_set_rate_nolock(struct clk_core *core,
 				    unsigned long req_rate)
 {
-	struct clk_core *top, *fail_clk;
+	struct clk_core *top, *fail_clk, *child;
 	unsigned long rate;
 	int ret = 0;
 
@@ -2033,6 +2051,15 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	if (ret)
 		return ret;
 
+	if (top != core) {
+		/* new_parent cannot be NULL in this case */
+		hlist_for_each_entry(child, &core->new_parent->children,
+				child_node)
+			clk_calc_subtree(child);
+	} else {
+		clk_calc_subtree(core);
+	}
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
-- 
2.21.0.352.gf09ad66450-goog

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 2/6] clk: fix clk_calc_subtree compute duplications
@ 2019-03-05  4:49   ` Derek Basehore
  0 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: aisheng.dong, Derek Basehore, heiko, linux-doc, sboyd,
	mturquette, corbet, linux-rockchip, mchehab+samsung, linux-clk,
	linux-arm-kernel, jbrunet

clk_calc_subtree was called at every step up the clk tree in
clk_calc_new_rates. Since it recursively calls itself for its
children, each clk in the subtree was recalculated once for every
step taken up the tree towards the top clk.

Fix this by breaking the subtree calculation into two parts. The
first part recalcs the rate for each child of the parent clk in
clk_calc_new_rates; this part is not recursive. The second part
recursively calls a new clk_calc_subtree on the clk_core that was
passed into clk_set_rate.
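For illustration, here is a standalone toy model of that split (not the
kernel code: the struct, the fixed-divider rate model, and the array of
child pointers are all made up for the example). The bookkeeping step is
non-recursive; the rate propagation recurses over the subtree exactly once:

```c
#include <assert.h>
#include <stddef.h>

/* Toy clk: each child's rate is its parent's rate divided by 'div'. */
struct clk {
	unsigned long new_rate;
	unsigned long div;
	struct clk *children[4];
	size_t n_children;
};

/* Part 1: non-recursive — just record the requested change. */
static void clk_set_change(struct clk *c, unsigned long new_rate)
{
	c->new_rate = new_rate;
}

/* Part 2: recursive — propagate new_rate down the subtree once. */
static void clk_calc_subtree(struct clk *c)
{
	for (size_t i = 0; i < c->n_children; i++) {
		c->children[i]->new_rate = c->new_rate / c->children[i]->div;
		clk_calc_subtree(c->children[i]);
	}
}
```

Because part 2 is invoked a single time from the clk passed to
clk_set_rate (rather than from every level walked up the tree), each clk
in the subtree is recalculated only once.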

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 49 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 38 insertions(+), 11 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 94b3ac783d90..e20364812b54 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1732,11 +1732,19 @@ static int __clk_speculate_rates(struct clk_core *core,
 	return ret;
 }
 
-static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
-			     struct clk_core *new_parent, u8 p_index)
+static void clk_calc_subtree(struct clk_core *core)
 {
 	struct clk_core *child;
 
+	hlist_for_each_entry(child, &core->children, child_node) {
+		child->new_rate = clk_recalc(child, core->new_rate);
+		clk_calc_subtree(child);
+	}
+}
+
+static void clk_set_change(struct clk_core *core, unsigned long new_rate,
+			   struct clk_core *new_parent, u8 p_index)
+{
 	core->new_rate = new_rate;
 	core->new_parent = new_parent;
 	core->new_parent_index = p_index;
@@ -1744,11 +1752,6 @@ static void clk_calc_subtree(struct clk_core *core, unsigned long new_rate,
 	core->new_child = NULL;
 	if (new_parent && new_parent != core->parent)
 		new_parent->new_child = core;
-
-	hlist_for_each_entry(child, &core->children, child_node) {
-		child->new_rate = clk_recalc(child, new_rate);
-		clk_calc_subtree(child, child->new_rate, NULL, 0);
-	}
 }
 
 /*
@@ -1759,7 +1762,7 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 					   unsigned long rate)
 {
 	struct clk_core *top = core;
-	struct clk_core *old_parent, *parent;
+	struct clk_core *old_parent, *parent, *child;
 	unsigned long best_parent_rate = 0;
 	unsigned long new_rate;
 	unsigned long min_rate;
@@ -1806,6 +1809,13 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 		/* pass-through clock with adjustable parent */
 		top = clk_calc_new_rates(parent, rate);
 		new_rate = parent->new_rate;
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, new_rate);
+			clk_calc_subtree(child);
+		}
 		goto out;
 	}
 
@@ -1828,11 +1838,19 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 	}
 
 	if ((core->flags & CLK_SET_RATE_PARENT) && parent &&
-	    best_parent_rate != parent->rate)
+	    best_parent_rate != parent->rate) {
 		top = clk_calc_new_rates(parent, best_parent_rate);
+		hlist_for_each_entry(child, &parent->children, child_node) {
+			if (child == core)
+				continue;
+
+			child->new_rate = clk_recalc(child, parent->new_rate);
+			clk_calc_subtree(child);
+		}
+	}
 
 out:
-	clk_calc_subtree(core, new_rate, parent, p_index);
+	clk_set_change(core, new_rate, parent, p_index);
 
 	return top;
 }
@@ -2007,7 +2025,7 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
 static int clk_core_set_rate_nolock(struct clk_core *core,
 				    unsigned long req_rate)
 {
-	struct clk_core *top, *fail_clk;
+	struct clk_core *top, *fail_clk, *child;
 	unsigned long rate;
 	int ret = 0;
 
@@ -2033,6 +2051,15 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	if (ret)
 		return ret;
 
+	if (top != core) {
+		/* new_parent cannot be NULL in this case */
+		hlist_for_each_entry(child, &core->new_parent->children,
+				child_node)
+			clk_calc_subtree(child);
+	} else {
+		clk_calc_subtree(core);
+	}
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
-- 
2.21.0.352.gf09ad66450-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v3 3/6] clk: change rates via list iteration
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

This changes the clk_set_rate code to use lists instead of recursion.
While making this change, also add error handling for clk_set_rate.
This means that errors in the set_rate/set_parent/set_rate_and_parent
functions will no longer be ignored. When an error occurs, the clk
rates and parents are reset, unless an error occurs here too, in which
case we bail and cross our fingers.
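As a rough sketch of the recursion-to-list conversion (a standalone toy
model, not the CCF structures; the node type and fixed-size queue are
invented for the example), a subtree can be flattened into a
parent-before-child array with a breadth-first worklist, after which the
changes can be applied, and unwound on error, by plain list iteration:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int id;
	struct node *children[4];
	size_t n_children;
};

/*
 * Flatten the tree rooted at 'root' into out[] so that every parent
 * appears before any of its children. Returns the count written; stops
 * early if 'cap' is too small.
 */
static size_t flatten(struct node *root, struct node **out, size_t cap)
{
	size_t head = 0, tail = 0;

	if (cap == 0)
		return 0;
	out[tail++] = root;
	while (head < tail) {
		struct node *n = out[head++];

		for (size_t i = 0; i < n->n_children && tail < cap; i++)
			out[tail++] = n->children[i];
	}
	return tail;
}
```

Iterating the flattened array forwards applies changes top-down; iterating
it in reverse from a failure point undoes them, which is what makes the
error handling in this patch tractable.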

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c | 256 +++++++++++++++++++++++++++++++---------------
 1 file changed, 176 insertions(+), 80 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index e20364812b54..1637dc262884 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -39,6 +39,13 @@ static LIST_HEAD(clk_notifier_list);
 
 /***    private data structures    ***/
 
+struct clk_change {
+	struct list_head	change_list;
+	unsigned long		rate;
+	struct clk_core		*core;
+	struct clk_core		*parent;
+};
+
 struct clk_core {
 	const char		*name;
 	const struct clk_ops	*ops;
@@ -49,11 +56,9 @@ struct clk_core {
 	const char		**parent_names;
 	struct clk_core		**parents;
 	u8			num_parents;
-	u8			new_parent_index;
 	unsigned long		rate;
 	unsigned long		req_rate;
-	unsigned long		new_rate;
-	struct clk_core		*new_parent;
+	struct clk_change	change;
 	struct clk_core		*new_child;
 	unsigned long		flags;
 	bool			orphan;
@@ -1735,19 +1740,52 @@ static int __clk_speculate_rates(struct clk_core *core,
 static void clk_calc_subtree(struct clk_core *core)
 {
 	struct clk_core *child;
+	LIST_HEAD(tmp_list);
 
-	hlist_for_each_entry(child, &core->children, child_node) {
-		child->new_rate = clk_recalc(child, core->new_rate);
-		clk_calc_subtree(child);
+	list_add(&core->prepare_list, &tmp_list);
+	while (!list_empty(&tmp_list)) {
+		core = list_first_entry(&tmp_list, struct clk_core,
+					prepare_list);
+
+		hlist_for_each_entry(child, &core->children, child_node) {
+			child->change.rate = clk_recalc(child,
+							core->change.rate);
+			list_add_tail(&child->prepare_list, &tmp_list);
+		}
+
+		list_del(&core->prepare_list);
+	}
+}
+
+static void clk_prepare_changes(struct list_head *change_list,
+				struct clk_core *core)
+{
+	struct clk_change *change;
+	struct clk_core *tmp, *child;
+	LIST_HEAD(tmp_list);
+
+	list_add(&core->change.change_list, &tmp_list);
+	while (!list_empty(&tmp_list)) {
+		change = list_first_entry(&tmp_list, struct clk_change,
+					  change_list);
+		tmp = change->core;
+
+		hlist_for_each_entry(child, &tmp->children, child_node)
+			list_add_tail(&child->change.change_list, &tmp_list);
+
+		child = tmp->new_child;
+		if (child)
+			list_add_tail(&child->change.change_list, &tmp_list);
+
+		list_move_tail(&tmp->change.change_list, change_list);
 	}
 }
 
 static void clk_set_change(struct clk_core *core, unsigned long new_rate,
-			   struct clk_core *new_parent, u8 p_index)
+			   struct clk_core *new_parent)
 {
-	core->new_rate = new_rate;
-	core->new_parent = new_parent;
-	core->new_parent_index = p_index;
+	core->change.rate = new_rate;
+	core->change.parent = new_parent;
 	/* include clk in new parent's PRE_RATE_CHANGE notifications */
 	core->new_child = NULL;
 	if (new_parent && new_parent != core->parent)
@@ -1767,7 +1805,6 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 	unsigned long new_rate;
 	unsigned long min_rate;
 	unsigned long max_rate;
-	int p_index = 0;
 	long ret;
 
 	/* sanity */
@@ -1803,17 +1840,15 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 			return NULL;
 	} else if (!parent || !(core->flags & CLK_SET_RATE_PARENT)) {
 		/* pass-through clock without adjustable parent */
-		core->new_rate = core->rate;
 		return NULL;
 	} else {
 		/* pass-through clock with adjustable parent */
 		top = clk_calc_new_rates(parent, rate);
-		new_rate = parent->new_rate;
+		new_rate = parent->change.rate;
 		hlist_for_each_entry(child, &parent->children, child_node) {
 			if (child == core)
 				continue;
-
-			child->new_rate = clk_recalc(child, new_rate);
+			child->change.rate = clk_recalc(child, new_rate);
 			clk_calc_subtree(child);
 		}
 		goto out;
@@ -1827,16 +1862,6 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 		return NULL;
 	}
 
-	/* try finding the new parent index */
-	if (parent && core->num_parents > 1) {
-		p_index = clk_fetch_parent_index(core, parent);
-		if (p_index < 0) {
-			pr_debug("%s: clk %s can not be parent of clk %s\n",
-				 __func__, parent->name, core->name);
-			return NULL;
-		}
-	}
-
 	if ((core->flags & CLK_SET_RATE_PARENT) && parent &&
 	    best_parent_rate != parent->rate) {
 		top = clk_calc_new_rates(parent, best_parent_rate);
@@ -1844,13 +1869,14 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
 			if (child == core)
 				continue;
 
-			child->new_rate = clk_recalc(child, parent->new_rate);
+			child->change.rate = clk_recalc(child,
+					parent->change.rate);
 			clk_calc_subtree(child);
 		}
 	}
 
 out:
-	clk_set_change(core, new_rate, parent, p_index);
+	clk_set_change(core, new_rate, parent);
 
 	return top;
 }
@@ -1866,18 +1892,18 @@ static struct clk_core *clk_propagate_rate_change(struct clk_core *core,
 	struct clk_core *child, *tmp_clk, *fail_clk = NULL;
 	int ret = NOTIFY_DONE;
 
-	if (core->rate == core->new_rate)
+	if (core->rate == core->change.rate)
 		return NULL;
 
 	if (core->notifier_count) {
-		ret = __clk_notify(core, event, core->rate, core->new_rate);
+		ret = __clk_notify(core, event, core->rate, core->change.rate);
 		if (ret & NOTIFY_STOP_MASK)
 			fail_clk = core;
 	}
 
 	hlist_for_each_entry(child, &core->children, child_node) {
 		/* Skip children who will be reparented to another clock */
-		if (child->new_parent && child->new_parent != core)
+		if (child->change.parent && child->change.parent != core)
 			continue;
 		tmp_clk = clk_propagate_rate_change(child, event);
 		if (tmp_clk)
@@ -1898,101 +1924,152 @@ static struct clk_core *clk_propagate_rate_change(struct clk_core *core,
  * walk down a subtree and set the new rates notifying the rate
  * change on the way
  */
-static void clk_change_rate(struct clk_core *core)
+static int clk_change_rate(struct clk_change *change)
 {
-	struct clk_core *child;
-	struct hlist_node *tmp;
-	unsigned long old_rate;
+	struct clk_core *core = change->core;
+	unsigned long old_rate, flags;
 	unsigned long best_parent_rate = 0;
 	bool skip_set_rate = false;
-	struct clk_core *old_parent;
+	struct clk_core *old_parent = NULL;
 	struct clk_core *parent = NULL;
+	int p_index;
+	int ret = 0;
 
 	old_rate = core->rate;
 
-	if (core->new_parent) {
-		parent = core->new_parent;
-		best_parent_rate = core->new_parent->rate;
+	if (change->parent) {
+		parent = change->parent;
+		best_parent_rate = parent->rate;
 	} else if (core->parent) {
 		parent = core->parent;
-		best_parent_rate = core->parent->rate;
+		best_parent_rate = parent->rate;
 	}
 
-	if (clk_pm_runtime_get(core))
-		return;
-
 	if (core->flags & CLK_SET_RATE_UNGATE) {
-		unsigned long flags;
-
 		clk_core_prepare(core);
 		flags = clk_enable_lock();
 		clk_core_enable(core);
 		clk_enable_unlock(flags);
 	}
 
-	if (core->new_parent && core->new_parent != core->parent) {
-		old_parent = __clk_set_parent_before(core, core->new_parent);
-		trace_clk_set_parent(core, core->new_parent);
+	if (core->flags & CLK_OPS_PARENT_ENABLE)
+		clk_core_prepare_enable(parent);
+
+	if (parent != core->parent) {
+		p_index = clk_fetch_parent_index(core, parent);
+		if (p_index < 0) {
+			pr_debug("%s: clk %s can not be parent of clk %s\n",
+				 __func__, parent->name, core->name);
+			ret = p_index;
+			goto out;
+		}
+		old_parent = __clk_set_parent_before(core, parent);
+
+		trace_clk_set_parent(core, change->parent);
 
 		if (core->ops->set_rate_and_parent) {
 			skip_set_rate = true;
-			core->ops->set_rate_and_parent(core->hw, core->new_rate,
+			ret = core->ops->set_rate_and_parent(core->hw,
+					change->rate,
 					best_parent_rate,
-					core->new_parent_index);
+					p_index);
 		} else if (core->ops->set_parent) {
-			core->ops->set_parent(core->hw, core->new_parent_index);
+			ret = core->ops->set_parent(core->hw, p_index);
 		}
 
-		trace_clk_set_parent_complete(core, core->new_parent);
-		__clk_set_parent_after(core, core->new_parent, old_parent);
-	}
+		trace_clk_set_parent_complete(core, change->parent);
+		if (ret) {
+			flags = clk_enable_lock();
+			clk_reparent(core, old_parent);
+			clk_enable_unlock(flags);
+			__clk_set_parent_after(core, old_parent, parent);
 
-	if (core->flags & CLK_OPS_PARENT_ENABLE)
-		clk_core_prepare_enable(parent);
+			goto out;
+		}
+		__clk_set_parent_after(core, parent, old_parent);
+
+	}
 
-	trace_clk_set_rate(core, core->new_rate);
+	trace_clk_set_rate(core, change->rate);
 
 	if (!skip_set_rate && core->ops->set_rate)
-		core->ops->set_rate(core->hw, core->new_rate, best_parent_rate);
+		ret = core->ops->set_rate(core->hw, change->rate,
+				best_parent_rate);
 
-	trace_clk_set_rate_complete(core, core->new_rate);
+	trace_clk_set_rate_complete(core, change->rate);
 
 	core->rate = clk_recalc(core, best_parent_rate);
 
-	if (core->flags & CLK_SET_RATE_UNGATE) {
-		unsigned long flags;
+out:
+	if (core->flags & CLK_OPS_PARENT_ENABLE)
+		clk_core_disable_unprepare(parent);
+
+	if (core->notifier_count && old_rate != core->rate)
+		__clk_notify(core, POST_RATE_CHANGE, old_rate, core->rate);
 
+	if (core->flags & CLK_SET_RATE_UNGATE) {
 		flags = clk_enable_lock();
 		clk_core_disable(core);
 		clk_enable_unlock(flags);
 		clk_core_unprepare(core);
 	}
 
-	if (core->flags & CLK_OPS_PARENT_ENABLE)
-		clk_core_disable_unprepare(parent);
+	if (core->flags & CLK_RECALC_NEW_RATES)
+		(void)clk_calc_new_rates(core, change->rate);
 
-	if (core->notifier_count && old_rate != core->rate)
-		__clk_notify(core, POST_RATE_CHANGE, old_rate, core->rate);
+	/*
+	 * Keep track of old parent and requested rate in case we have
+	 * to undo the change due to an error.
+	 */
+	change->parent = old_parent;
+	change->rate = old_rate;
+	return ret;
+}
 
-	if (core->flags & CLK_RECALC_NEW_RATES)
-		(void)clk_calc_new_rates(core, core->new_rate);
+static int clk_change_rates(struct list_head *list)
+{
+	struct clk_change *change, *tmp;
+	int ret = 0;
 
 	/*
-	 * Use safe iteration, as change_rate can actually swap parents
-	 * for certain clock types.
+	 * Make pm runtime get/put calls outside of clk_change_rate to avoid
+	 * clks bouncing back and forth between runtime_resume/suspend.
 	 */
-	hlist_for_each_entry_safe(child, tmp, &core->children, child_node) {
-		/* Skip children who will be reparented to another clock */
-		if (child->new_parent && child->new_parent != core)
-			continue;
-		clk_change_rate(child);
+	list_for_each_entry(change, list, change_list) {
+		ret = clk_pm_runtime_get(change->core);
+		if (ret) {
+			list_for_each_entry_continue_reverse(change, list,
+							     change_list)
+				clk_pm_runtime_put(change->core);
+
+			return ret;
+		}
 	}
 
-	/* handle the new child who might not be in core->children yet */
-	if (core->new_child)
-		clk_change_rate(core->new_child);
+	list_for_each_entry(change, list, change_list) {
+		ret = clk_change_rate(change);
+		clk_pm_runtime_put(change->core);
+		if (ret)
+			goto err;
+	}
 
-	clk_pm_runtime_put(core);
+	return 0;
+err:
+	/* Unwind the changes on an error. */
+	list_for_each_entry_continue_reverse(change, list, change_list) {
+		/* Just give up on an error when undoing changes. */
+		ret = clk_pm_runtime_get(change->core);
+		if (WARN_ON(ret))
+			return ret;
+
+		ret = clk_change_rate(change);
+		if (WARN_ON(ret))
+			return ret;
+
+		clk_pm_runtime_put(change->core);
+	}
+
+	return ret;
 }
 
 static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
@@ -2026,7 +2103,9 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 				    unsigned long req_rate)
 {
 	struct clk_core *top, *fail_clk, *child;
+	struct clk_change *change, *tmp;
 	unsigned long rate;
+	LIST_HEAD(changes);
 	int ret = 0;
 
 	if (!core)
@@ -2052,14 +2131,17 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 		return ret;
 
 	if (top != core) {
-		/* new_parent cannot be NULL in this case */
-		hlist_for_each_entry(child, &core->new_parent->children,
+		/* change.parent cannot be NULL in this case */
+		hlist_for_each_entry(child, &core->change.parent->children,
 				child_node)
 			clk_calc_subtree(child);
 	} else {
 		clk_calc_subtree(core);
 	}
 
+	/* Construct the list of changes */
+	clk_prepare_changes(&changes, top);
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
@@ -2071,7 +2153,19 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	}
 
 	/* change the rates */
-	clk_change_rate(top);
+	ret = clk_change_rates(&changes);
+	list_for_each_entry_safe(change, tmp, &changes, change_list) {
+		change->rate = 0;
+		change->parent = NULL;
+		list_del_init(&change->change_list);
+	}
+
+	if (ret) {
+		pr_debug("%s: failed to set %s rate via top clk %s\n", __func__,
+				core->name, top->name);
+		clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
+		goto err;
+	}
 
 	core->req_rate = req_rate;
 err:
@@ -3338,6 +3432,8 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
 	core->max_rate = ULONG_MAX;
 	INIT_LIST_HEAD(&core->prepare_list);
 	INIT_LIST_HEAD(&core->enable_list);
+	INIT_LIST_HEAD(&core->change.change_list);
+	core->change.core = core;
 	hw->core = core;
 
 	/* allocate local copy in case parent_names is __initdata */
-- 
2.21.0.352.gf09ad66450-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread


* [PATCH v2 4/6] clk: add coordinated clk changes support
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

This adds a new clk_op, pre_rate_req. It allows clks to set up an
intermediate state when clk rates are changed. One use case for this
is when a clk needs to switch to a safe parent when its PLL ancestor
changes rates. This is needed when a PLL cannot guarantee that it will
not exceed the new rate before it locks. The set_rate, set_parent, and
set_rate_and_parent callbacks are used with the pre_rate_req callback.

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/clk.c            | 207 ++++++++++++++++++++++++++++++++---
 include/linux/clk-provider.h |  10 ++
 2 files changed, 200 insertions(+), 17 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 1637dc262884..b86940ca3c81 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -59,6 +59,7 @@ struct clk_core {
 	unsigned long		rate;
 	unsigned long		req_rate;
 	struct clk_change	change;
+	struct clk_change	pre_change;
 	struct clk_core		*new_child;
 	unsigned long		flags;
 	bool			orphan;
@@ -1920,6 +1921,141 @@ static struct clk_core *clk_propagate_rate_change(struct clk_core *core,
 	return fail_clk;
 }
 
+static void clk_add_change(struct list_head *changes,
+			       struct clk_change *change,
+			       struct clk_core *parent,
+			       unsigned long rate)
+{
+	change->parent = parent;
+	change->rate = rate;
+	list_add(&change->change_list, changes);
+}
+
+static int clk_prepare_pre_changes(struct list_head *pre_changes,
+				   struct list_head *post_changes,
+				   struct clk_core *core,
+				   unsigned long rate)
+{
+	while (core) {
+		struct clk_core *parent = core->parent;
+		unsigned long new_rate, min_rate, max_rate, best_parent_rate;
+		int ret;
+
+		clk_core_get_boundaries(core, &min_rate, &max_rate);
+		if (clk_core_can_round(core)) {
+			struct clk_rate_request req;
+
+			req.rate = rate;
+			req.min_rate = min_rate;
+			req.max_rate = max_rate;
+
+			clk_core_init_rate_req(core, &req);
+
+			ret = clk_core_determine_round_nolock(core, &req);
+			if (ret < 0)
+				return -EINVAL;
+
+			best_parent_rate = req.best_parent_rate;
+			new_rate = req.rate;
+			parent = req.best_parent_hw ? req.best_parent_hw->core : NULL;
+
+			if (new_rate < min_rate || new_rate > max_rate)
+				return -EINVAL;
+		} else if (!parent || !(core->flags & CLK_SET_RATE_PARENT)) {
+			/* pass-through clock without adjustable parent */
+			return -EINVAL;
+		} else {
+			core = parent;
+			continue;
+		}
+
+		if (parent != core->parent || new_rate != core->rate) {
+			clk_add_change(pre_changes, &core->pre_change, parent,
+				       new_rate);
+			if (list_empty(&core->change.change_list))
+				clk_add_change(post_changes, &core->change,
+					       core->parent, core->rate);
+		}
+
+		core = parent;
+	}
+
+	return -EINVAL;
+}
+
+static int clk_pre_rate_req(struct list_head *pre_changes,
+			    struct list_head *post_changes,
+			    struct clk_core *core)
+{
+	struct clk_core *child, *parent = core->parent;
+	struct clk_rate_request next, pre;
+	struct clk_change *change, *tmp;
+	LIST_HEAD(tmp_list);
+	int ret;
+
+	/* The change list needs to be prepared already. */
+	/* The change list must already be prepared. */
+
+	if (!list_empty(&core->pre_change.change_list))
+		return -EINVAL;
+
+	list_add(&core->pre_change.change_list, &tmp_list);
+	while (!list_empty(&tmp_list)) {
+		change = list_first_entry(&tmp_list, struct clk_change,
+					  change_list);
+		core = change->core;
+		list_del_init(&core->pre_change.change_list);
+		hlist_for_each_entry(child, &core->children, child_node) {
+			if (!list_empty(&core->pre_change.change_list)) {
+				ret = -EINVAL;
+				goto err;
+			}
+
+			list_add(&child->pre_change.change_list, &tmp_list);
+		}
+
+		if (!core->ops->pre_rate_req)
+			continue;
+
+		parent = core->change.parent ? core->change.parent :
+			core->parent;
+		if (parent) {
+			next.best_parent_hw = parent->hw;
+			next.best_parent_rate = parent->change.rate;
+		}
+
+		next.rate = core->change.rate;
+		clk_core_get_boundaries(core, &next.min_rate, &next.max_rate);
+		ret = core->ops->pre_rate_req(core->hw, &next, &pre);
+		if (ret < 0)
+			goto err;
+
+		/*
+		 * A return value of 0 means that no pre_rate_req change is
+		 * needed.
+		 */
+		if (ret == 0)
+			continue;
+
+		parent = pre.best_parent_hw ? pre.best_parent_hw->core : NULL;
+		clk_add_change(pre_changes, &core->pre_change, parent,
+			       pre.rate);
+		if (parent != core->parent &&
+		    pre.best_parent_rate != parent->rate)
+			ret = clk_prepare_pre_changes(pre_changes, post_changes,
+						      parent,
+						      pre.best_parent_rate);
+	}
+
+	return 0;
+err:
+	list_for_each_entry_safe(change, tmp, &tmp_list, change_list)
+		list_del_init(&change->core->pre_change.change_list);
+
+	return ret;
+}
+
 /*
  * walk down a subtree and set the new rates notifying the rate
  * change on the way
@@ -2028,7 +2164,7 @@ static int clk_change_rate(struct clk_change *change)
 
 static int clk_change_rates(struct list_head *list)
 {
-	struct clk_change *change, *tmp;
+	struct clk_change *change;
 	int ret = 0;
 
 	/*
@@ -2099,13 +2235,25 @@ static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
 	return ret ? 0 : req.rate;
 }
 
+static void clk_del_change_list_entries(struct list_head *changes)
+{
+	struct clk_change *change, *tmp;
+
+	list_for_each_entry_safe(change, tmp, changes, change_list) {
+		change->rate = 0;
+		change->parent = NULL;
+		list_del_init(&change->change_list);
+	}
+}
+
 static int clk_core_set_rate_nolock(struct clk_core *core,
 				    unsigned long req_rate)
 {
 	struct clk_core *top, *fail_clk, *child;
-	struct clk_change *change, *tmp;
 	unsigned long rate;
+	LIST_HEAD(pre_changes);
 	LIST_HEAD(changes);
+	LIST_HEAD(post_changes);
 	int ret = 0;
 
 	if (!core)
@@ -2133,7 +2281,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	if (top != core) {
 		/* change.parent cannot be NULL in this case */
 		hlist_for_each_entry(child, &core->change.parent->children,
-				child_node)
+				     child_node)
 			clk_calc_subtree(child);
 	} else {
 		clk_calc_subtree(core);
@@ -2142,33 +2290,54 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
 	/* Construct the list of changes */
 	clk_prepare_changes(&changes, top);
 
+	/* We need a separate list for these changes due to error handling. */
+	ret = clk_pre_rate_req(&pre_changes, &post_changes, top);
+	if (ret) {
+		pr_debug("%s: failed pre_rate_req via top clk %s: %d\n",
+			 __func__, top->name, ret);
+		goto pre_rate_req;
+	}
+
 	/* notify that we are about to change rates */
 	fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
 	if (fail_clk) {
 		pr_debug("%s: failed to set %s rate\n", __func__,
-				fail_clk->name);
-		clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
+			 fail_clk->name);
 		ret = -EBUSY;
-		goto err;
+		goto prop_rate;
 	}
 
-	/* change the rates */
-	ret = clk_change_rates(&changes);
-	list_for_each_entry_safe(change, tmp, &changes, change_list) {
-		change->rate = 0;
-		change->parent = NULL;
-		list_del_init(&change->change_list);
+	ret = clk_change_rates(&pre_changes);
+	if (ret) {
+		pr_debug("%s: pre rate changes failed via top clk %s: %d\n",
+			 __func__, top->name, ret);
+		goto pre_rate_req;
 	}
 
+	/* change the rates */
+	ret = clk_change_rates(&changes);
+	clk_del_change_list_entries(&changes);
 	if (ret) {
 		pr_debug("%s: failed to set %s rate via top clk %s\n", __func__,
-				core->name, top->name);
-		clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
-		goto err;
+			 core->name, top->name);
+		goto change_rates;
 	}
 
+	ret = clk_change_rates(&post_changes);
+	clk_del_change_list_entries(&post_changes);
+
+	clk_del_change_list_entries(&pre_changes);
 	core->req_rate = req_rate;
-err:
+
+	return 0;
+
+change_rates:
+	WARN_ON(clk_change_rates(&pre_changes));
+prop_rate:
+	clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
+pre_rate_req:
+	clk_del_change_list_entries(&pre_changes);
+	clk_del_change_list_entries(&changes);
 	clk_pm_runtime_put(core);
 
 	return ret;
@@ -3186,7 +3355,9 @@ static int __clk_core_init(struct clk_core *core)
 
 	/* check that clk_ops are sane.  See Documentation/driver-api/clk.rst */
 	if (core->ops->set_rate &&
-	    !((core->ops->round_rate || core->ops->determine_rate) &&
+	    !((core->ops->round_rate ||
+	       core->ops->determine_rate ||
+	       core->ops->pre_rate_req) &&
 	      core->ops->recalc_rate)) {
 		pr_err("%s: %s must implement .round_rate or .determine_rate in addition to .recalc_rate\n",
 		       __func__, core->name);
@@ -3433,7 +3604,9 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
 	INIT_LIST_HEAD(&core->prepare_list);
 	INIT_LIST_HEAD(&core->enable_list);
 	INIT_LIST_HEAD(&core->change.change_list);
+	INIT_LIST_HEAD(&core->pre_change.change_list);
 	core->change.core = core;
+	core->pre_change.core = core;
 	hw->core = core;
 
 	/* allocate local copy in case parent_names is __initdata */
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index e443fa9fa859..c11ca22e2089 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -133,6 +133,13 @@ struct clk_duty {
  *		actually supported by the clock, and optionally the parent clock
  *		that should be used to provide the clock rate.
  *
+ * @pre_rate_req: Given the next state that the clk will enter via a
+ * 		clk_rate_request struct, next, fill in another clk_rate_request
+ * 		struct, pre, with any desired intermediate state to change to
+ * 		before the state in next is applied. Returns positive to request
+ * 		an intermediate state transition, 0 for no transition, and
+ * 		a negative errno on failure.
+ *
  * @set_parent:	Change the input source of this clock; for clocks with multiple
  *		possible parents specify a new parent by passing in the index
  *		as a u8 corresponding to the parent in either the .parent_names
@@ -231,6 +238,9 @@ struct clk_ops {
 					unsigned long *parent_rate);
 	int		(*determine_rate)(struct clk_hw *hw,
 					  struct clk_rate_request *req);
+	int		(*pre_rate_req)(struct clk_hw *hw,
+					const struct clk_rate_request *next,
+					struct clk_rate_request *pre);
 	int		(*set_parent)(struct clk_hw *hw, u8 index);
 	u8		(*get_parent)(struct clk_hw *hw);
 	int		(*set_rate)(struct clk_hw *hw, unsigned long rate,
-- 
2.21.0.352.gf09ad66450-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 5/6] docs: driver-api: add pre_rate_req to clk documentation
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

This adds documentation for the new clk op pre_rate_req.

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 Documentation/driver-api/clk.rst | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/Documentation/driver-api/clk.rst b/Documentation/driver-api/clk.rst
index 593cca5058b1..917f6ac29645 100644
--- a/Documentation/driver-api/clk.rst
+++ b/Documentation/driver-api/clk.rst
@@ -82,6 +82,9 @@ the operations defined in clk-provider.h::
 						unsigned long *parent_rate);
 		int		(*determine_rate)(struct clk_hw *hw,
 						  struct clk_rate_request *req);
+		int		(*pre_rate_req)(struct clk_hw *hw,
+						const struct clk_rate_request *next,
+						struct clk_rate_request *pre);
 		int		(*set_parent)(struct clk_hw *hw, u8 index);
 		u8		(*get_parent)(struct clk_hw *hw);
 		int		(*set_rate)(struct clk_hw *hw,
@@ -224,6 +227,8 @@ optional or must be evaluated on a case-by-case basis.
    +----------------+------+-------------+---------------+-------------+------+
    |.determine_rate |      | y [1]_      |               |             |      |
    +----------------+------+-------------+---------------+-------------+------+
+   |.pre_rate_req   |      | y [1]_      |               |             |      |
+   +----------------+------+-------------+---------------+-------------+------+
    |.set_rate       |      | y           |               |             |      |
    +----------------+------+-------------+---------------+-------------+------+
    +----------------+------+-------------+---------------+-------------+------+
@@ -238,7 +243,7 @@ optional or must be evaluated on a case-by-case basis.
    |.init           |      |             |               |             |      |
    +----------------+------+-------------+---------------+-------------+------+
 
-.. [1] either one of round_rate or determine_rate is required.
+.. [1] one of round_rate, determine_rate, or pre_rate_req is required.
 
 Finally, register your clock at run-time with a hardware-specific
 registration function.  This function simply populates struct clk_foo's
-- 
2.21.0.352.gf09ad66450-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 6/6] clk: rockchip: use pre_rate_req for cpuclk
  2019-03-05  4:49 ` Derek Basehore
@ 2019-03-05  4:49   ` Derek Basehore
  -1 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc, sboyd,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Derek Basehore

This makes the rockchip cpuclk use the pre_rate_req op to change to
the alt parent instead of the clk notifier. This has the benefit of
the clk not changing parents behind the back of the common clk
framework. It also changes the divider setting for the alt parent to
only divide when the alt parent rate is higher than both the old and
new rates.

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/rockchip/clk-cpu.c | 256 ++++++++++++++++++---------------
 1 file changed, 137 insertions(+), 119 deletions(-)

diff --git a/drivers/clk/rockchip/clk-cpu.c b/drivers/clk/rockchip/clk-cpu.c
index 32c19c0f1e14..3829e8e75c9e 100644
--- a/drivers/clk/rockchip/clk-cpu.c
+++ b/drivers/clk/rockchip/clk-cpu.c
@@ -45,8 +45,6 @@
  * @alt_parent:	alternate parent clock to use when switching the speed
  *		of the primary parent clock.
  * @reg_base:	base register for cpu-clock values.
- * @clk_nb:	clock notifier registered for changes in clock speed of the
- *		primary parent clock.
  * @rate_count:	number of rates in the rate_table
  * @rate_table:	pll-rates and their associated dividers
  * @reg_data:	cpu-specific register settings
@@ -60,7 +58,6 @@ struct rockchip_cpuclk {
 
 	struct clk				*alt_parent;
 	void __iomem				*reg_base;
-	struct notifier_block			clk_nb;
 	unsigned int				rate_count;
 	struct rockchip_cpuclk_rate_table	*rate_table;
 	const struct rockchip_cpuclk_reg_data	*reg_data;
@@ -78,12 +75,21 @@ static const struct rockchip_cpuclk_rate_table *rockchip_get_cpuclk_settings(
 							cpuclk->rate_table;
 	int i;
 
+	/*
+	 * Find the lowest rate settings for which the prate is greater than or
+	 * equal to the rate. Final rates should match exactly, but some
+	 * intermediate rates from pre_rate_req will not match exactly; the
+	 * settings for the next higher prate still work.
+	 */
 	for (i = 0; i < cpuclk->rate_count; i++) {
-		if (rate == rate_table[i].prate)
-			return &rate_table[i];
+		if (rate > rate_table[i].prate)
+			break;
 	}
 
-	return NULL;
+	if (i == 0 || i == cpuclk->rate_count)
+		return NULL;
+
+	return &rate_table[i - 1];
 }
 
 static unsigned long rockchip_cpuclk_recalc_rate(struct clk_hw *hw,
@@ -98,9 +104,70 @@ static unsigned long rockchip_cpuclk_recalc_rate(struct clk_hw *hw,
 	return parent_rate / (clksel0 + 1);
 }
 
-static const struct clk_ops rockchip_cpuclk_ops = {
-	.recalc_rate = rockchip_cpuclk_recalc_rate,
-};
+static int rockchip_cpuclk_pre_rate_req(struct clk_hw *hw,
+					const struct clk_rate_request *next,
+					struct clk_rate_request *pre)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long alt_prate, alt_div, hi_rate;
+
+	pre->best_parent_hw = __clk_get_hw(cpuclk->alt_parent);
+	alt_prate = clk_get_rate(cpuclk->alt_parent);
+	pre->best_parent_rate = alt_prate;
+	hi_rate = max_t(unsigned long, next->rate, clk_hw_get_rate(hw));
+
+	/* Set dividers if we would go above the current or next rate. */
+	if (alt_prate > hi_rate) {
+		alt_div = DIV_ROUND_UP(alt_prate, hi_rate);
+		if (alt_div > reg_data->div_core_mask) {
+			pr_warn("%s: limiting alt-divider %lu to %d\n",
+				__func__, alt_div, reg_data->div_core_mask);
+			alt_div = reg_data->div_core_mask;
+		}
+
+		pre->rate = alt_prate / alt_div;
+	} else {
+		pre->rate = alt_prate;
+	}
+
+	return 1;
+}
+
+static int rockchip_cpuclk_set_parent(struct clk_hw *hw, u8 index)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(cpuclk->lock, flags);
+	writel(HIWORD_UPDATE(index,
+			     reg_data->mux_core_mask,
+			     reg_data->mux_core_shift),
+	       cpuclk->reg_base + reg_data->core_reg);
+	spin_unlock_irqrestore(cpuclk->lock, flags);
+
+	return 0;
+}
+
+static u8 rockchip_cpuclk_get_parent(struct clk_hw *hw)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long flags;
+	u32 val;
+
+	spin_lock_irqsave(cpuclk->lock, flags);
+	val = readl_relaxed(cpuclk->reg_base + reg_data->core_reg);
+	val >>= reg_data->mux_core_shift;
+	val &= reg_data->mux_core_mask;
+	spin_unlock_irqrestore(cpuclk->lock, flags);
+
+	return val;
+}
 
 static void rockchip_cpuclk_set_dividers(struct rockchip_cpuclk *cpuclk,
 				const struct rockchip_cpuclk_rate_table *rate)
@@ -120,131 +187,92 @@ static void rockchip_cpuclk_set_dividers(struct rockchip_cpuclk *cpuclk,
 	}
 }
 
-static int rockchip_cpuclk_pre_rate_change(struct rockchip_cpuclk *cpuclk,
-					   struct clk_notifier_data *ndata)
+static int rockchip_cpuclk_set_rate(struct clk_hw *hw, unsigned long rate,
+					unsigned long parent_rate)
 {
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
 	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
-	const struct rockchip_cpuclk_rate_table *rate;
-	unsigned long alt_prate, alt_div;
-	unsigned long flags;
+	const struct rockchip_cpuclk_rate_table *rate_divs;
+	unsigned long div = (parent_rate / rate) - 1;
+	unsigned long old_rate, flags;
 
-	/* check validity of the new rate */
-	rate = rockchip_get_cpuclk_settings(cpuclk, ndata->new_rate);
-	if (!rate) {
-		pr_err("%s: Invalid rate : %lu for cpuclk\n",
-		       __func__, ndata->new_rate);
+	if (div > reg_data->div_core_mask || rate > parent_rate) {
+		pr_err("%s: Invalid rate : %lu %lu for cpuclk\n", __func__,
+				rate, parent_rate);
 		return -EINVAL;
 	}
 
-	alt_prate = clk_get_rate(cpuclk->alt_parent);
-
+	old_rate = clk_hw_get_rate(hw);
+	rate_divs = rockchip_get_cpuclk_settings(cpuclk, rate);
 	spin_lock_irqsave(cpuclk->lock, flags);
+	if (old_rate < rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
-	/*
-	 * If the old parent clock speed is less than the clock speed
-	 * of the alternate parent, then it should be ensured that at no point
-	 * the armclk speed is more than the old_rate until the dividers are
-	 * set.
-	 */
-	if (alt_prate > ndata->old_rate) {
-		/* calculate dividers */
-		alt_div =  DIV_ROUND_UP(alt_prate, ndata->old_rate) - 1;
-		if (alt_div > reg_data->div_core_mask) {
-			pr_warn("%s: limiting alt-divider %lu to %d\n",
-				__func__, alt_div, reg_data->div_core_mask);
-			alt_div = reg_data->div_core_mask;
-		}
-
-		/*
-		 * Change parents and add dividers in a single transaction.
-		 *
-		 * NOTE: we do this in a single transaction so we're never
-		 * dividing the primary parent by the extra dividers that were
-		 * needed for the alt.
-		 */
-		pr_debug("%s: setting div %lu as alt-rate %lu > old-rate %lu\n",
-			 __func__, alt_div, alt_prate, ndata->old_rate);
-
-		writel(HIWORD_UPDATE(alt_div, reg_data->div_core_mask,
-					      reg_data->div_core_shift) |
-		       HIWORD_UPDATE(reg_data->mux_core_alt,
-				     reg_data->mux_core_mask,
-				     reg_data->mux_core_shift),
-		       cpuclk->reg_base + reg_data->core_reg);
-	} else {
-		/* select alternate parent */
-		writel(HIWORD_UPDATE(reg_data->mux_core_alt,
-				     reg_data->mux_core_mask,
-				     reg_data->mux_core_shift),
-		       cpuclk->reg_base + reg_data->core_reg);
-	}
+	writel(HIWORD_UPDATE(div,
+			     reg_data->div_core_mask,
+			     reg_data->div_core_shift),
+	       cpuclk->reg_base + reg_data->core_reg);
+	if (old_rate > rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
 	spin_unlock_irqrestore(cpuclk->lock, flags);
+
 	return 0;
 }
 
-static int rockchip_cpuclk_post_rate_change(struct rockchip_cpuclk *cpuclk,
-					    struct clk_notifier_data *ndata)
+static int rockchip_cpuclk_set_rate_and_parent(struct clk_hw *hw,
+					unsigned long rate,
+					unsigned long parent_rate,
+					u8 index)
 {
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
 	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
-	const struct rockchip_cpuclk_rate_table *rate;
-	unsigned long flags;
+	const struct rockchip_cpuclk_rate_table *rate_divs;
+	unsigned long div = (parent_rate / rate) - 1;
+	unsigned long old_rate, flags;
 
-	rate = rockchip_get_cpuclk_settings(cpuclk, ndata->new_rate);
-	if (!rate) {
-		pr_err("%s: Invalid rate : %lu for cpuclk\n",
-		       __func__, ndata->new_rate);
+	if (div > reg_data->div_core_mask || rate > parent_rate) {
+		pr_err("%s: Invalid rate : %lu %lu for cpuclk\n", __func__,
+				rate, parent_rate);
 		return -EINVAL;
 	}
 
+	old_rate = clk_hw_get_rate(hw);
+	rate_divs = rockchip_get_cpuclk_settings(cpuclk, rate);
 	spin_lock_irqsave(cpuclk->lock, flags);
-
-	if (ndata->old_rate < ndata->new_rate)
-		rockchip_cpuclk_set_dividers(cpuclk, rate);
-
 	/*
-	 * post-rate change event, re-mux to primary parent and remove dividers.
-	 *
-	 * NOTE: we do this in a single transaction so we're never dividing the
-	 * primary parent by the extra dividers that were needed for the alt.
+	 * TODO: This ain't great... Should change the get_cpuclk_settings code
+	 * to work with inexact matches to work with alt parent rates.
 	 */
-
-	writel(HIWORD_UPDATE(0, reg_data->div_core_mask,
-				reg_data->div_core_shift) |
-	       HIWORD_UPDATE(reg_data->mux_core_main,
-				reg_data->mux_core_mask,
-				reg_data->mux_core_shift),
+	if (old_rate < rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
+
+	writel(HIWORD_UPDATE(div,
+			     reg_data->div_core_mask,
+			     reg_data->div_core_shift) |
+	       HIWORD_UPDATE(index,
+			     reg_data->mux_core_mask,
+			     reg_data->mux_core_shift),
 	       cpuclk->reg_base + reg_data->core_reg);
-
-	if (ndata->old_rate > ndata->new_rate)
-		rockchip_cpuclk_set_dividers(cpuclk, rate);
+	/* Not technically correct */
+	if (old_rate > rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
 	spin_unlock_irqrestore(cpuclk->lock, flags);
+
 	return 0;
 }
 
-/*
- * This clock notifier is called when the frequency of the parent clock
- * of cpuclk is to be changed. This notifier handles the setting up all
- * the divider clocks, remux to temporary parent and handling the safe
- * frequency levels when using temporary parent.
- */
-static int rockchip_cpuclk_notifier_cb(struct notifier_block *nb,
-					unsigned long event, void *data)
-{
-	struct clk_notifier_data *ndata = data;
-	struct rockchip_cpuclk *cpuclk = to_rockchip_cpuclk_nb(nb);
-	int ret = 0;
-
-	pr_debug("%s: event %lu, old_rate %lu, new_rate: %lu\n",
-		 __func__, event, ndata->old_rate, ndata->new_rate);
-	if (event == PRE_RATE_CHANGE)
-		ret = rockchip_cpuclk_pre_rate_change(cpuclk, ndata);
-	else if (event == POST_RATE_CHANGE)
-		ret = rockchip_cpuclk_post_rate_change(cpuclk, ndata);
-
-	return notifier_from_errno(ret);
-}
+static const struct clk_ops rockchip_cpuclk_ops = {
+	.recalc_rate = rockchip_cpuclk_recalc_rate,
+	.pre_rate_req = rockchip_cpuclk_pre_rate_req,
+	.set_parent = rockchip_cpuclk_set_parent,
+	.get_parent = rockchip_cpuclk_get_parent,
+	.set_rate = rockchip_cpuclk_set_rate,
+	.set_rate_and_parent = rockchip_cpuclk_set_rate_and_parent,
+};
 
 struct clk *rockchip_clk_register_cpuclk(const char *name,
 			const char *const *parent_names, u8 num_parents,
@@ -267,8 +295,8 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 		return ERR_PTR(-ENOMEM);
 
 	init.name = name;
-	init.parent_names = &parent_names[reg_data->mux_core_main];
-	init.num_parents = 1;
+	init.parent_names = parent_names;
+	init.num_parents = num_parents;
 	init.ops = &rockchip_cpuclk_ops;
 
 	/* only allow rate changes when we have a rate table */
@@ -282,7 +310,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 	cpuclk->reg_base = reg_base;
 	cpuclk->lock = lock;
 	cpuclk->reg_data = reg_data;
-	cpuclk->clk_nb.notifier_call = rockchip_cpuclk_notifier_cb;
 	cpuclk->hw.init = &init;
 
 	cpuclk->alt_parent = __clk_lookup(parent_names[reg_data->mux_core_alt]);
@@ -309,13 +336,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 		goto free_alt_parent;
 	}
 
-	ret = clk_notifier_register(clk, &cpuclk->clk_nb);
-	if (ret) {
-		pr_err("%s: failed to register clock notifier for %s\n",
-				__func__, name);
-		goto free_alt_parent;
-	}
-
 	if (nrates > 0) {
 		cpuclk->rate_count = nrates;
 		cpuclk->rate_table = kmemdup(rates,
@@ -323,7 +343,7 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 					     GFP_KERNEL);
 		if (!cpuclk->rate_table) {
 			ret = -ENOMEM;
-			goto unregister_notifier;
+			goto free_alt_parent;
 		}
 	}
 
@@ -338,8 +358,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 
 free_rate_table:
 	kfree(cpuclk->rate_table);
-unregister_notifier:
-	clk_notifier_unregister(clk, &cpuclk->clk_nb);
 free_alt_parent:
 	clk_disable_unprepare(cpuclk->alt_parent);
 free_cpuclk:
-- 
2.21.0.352.gf09ad66450-goog


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 6/6] clk: rockchip: use pre_rate_req for cpuclk
@ 2019-03-05  4:49   ` Derek Basehore
  0 siblings, 0 replies; 27+ messages in thread
From: Derek Basehore @ 2019-03-05  4:49 UTC (permalink / raw)
  To: linux-kernel
  Cc: aisheng.dong, Derek Basehore, heiko, linux-doc, sboyd,
	mturquette, corbet, linux-rockchip, mchehab+samsung, linux-clk,
	linux-arm-kernel, jbrunet

This makes the rockchip cpuclk use the pre_rate_req op to change to
the alt parent instead of the clk notifier. This has the benefit of
the clk not changing parents behind the back of the common clk
framework. It also changes the divider setting for the alt parent to
only divide when the alt parent rate is higher than both the old and
new rates.
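For illustration, the divider policy described above might be sketched as a small helper (a hedged sketch: the function name is hypothetical, though the clamping mirrors the patch's pre_rate_req logic):

```c
#include <assert.h>

/*
 * Sketch of the alt-parent divider choice: only divide when the alt
 * parent's rate is above both the old and the new rate, and clamp the
 * divider to what the register field can hold. Hypothetical helper,
 * not actual driver code.
 */
unsigned long alt_intermediate_rate(unsigned long alt_prate,
				    unsigned long old_rate,
				    unsigned long new_rate,
				    unsigned long div_mask)
{
	unsigned long hi_rate = old_rate > new_rate ? old_rate : new_rate;
	unsigned long div;

	if (alt_prate <= hi_rate)
		return alt_prate;	/* safe to run undivided */

	div = (alt_prate + hi_rate - 1) / hi_rate;	/* DIV_ROUND_UP */
	if (div > div_mask)
		div = div_mask;		/* clamp to the register field */

	return alt_prate / div;
}
```

With a 1.2 GHz alt parent and a transition between 600 MHz and 408 MHz, this picks a divider of 2, keeping the CPU at or below 600 MHz for the duration of the switch.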

Signed-off-by: Derek Basehore <dbasehore@chromium.org>
---
 drivers/clk/rockchip/clk-cpu.c | 256 ++++++++++++++++++---------------
 1 file changed, 137 insertions(+), 119 deletions(-)

diff --git a/drivers/clk/rockchip/clk-cpu.c b/drivers/clk/rockchip/clk-cpu.c
index 32c19c0f1e14..3829e8e75c9e 100644
--- a/drivers/clk/rockchip/clk-cpu.c
+++ b/drivers/clk/rockchip/clk-cpu.c
@@ -45,8 +45,6 @@
  * @alt_parent:	alternate parent clock to use when switching the speed
  *		of the primary parent clock.
  * @reg_base:	base register for cpu-clock values.
- * @clk_nb:	clock notifier registered for changes in clock speed of the
- *		primary parent clock.
  * @rate_count:	number of rates in the rate_table
  * @rate_table:	pll-rates and their associated dividers
  * @reg_data:	cpu-specific register settings
@@ -60,7 +58,6 @@ struct rockchip_cpuclk {
 
 	struct clk				*alt_parent;
 	void __iomem				*reg_base;
-	struct notifier_block			clk_nb;
 	unsigned int				rate_count;
 	struct rockchip_cpuclk_rate_table	*rate_table;
 	const struct rockchip_cpuclk_reg_data	*reg_data;
@@ -78,12 +75,21 @@ static const struct rockchip_cpuclk_rate_table *rockchip_get_cpuclk_settings(
 							cpuclk->rate_table;
 	int i;
 
+	/*
+	 * Find the lowest-rate entry whose prate is greater than or equal to
+	 * the requested rate. Final rates should match an entry exactly;
+	 * intermediate rates from pre_rate_req may not, but the settings for
+	 * the next higher prate will still work.
+	 */
 	for (i = 0; i < cpuclk->rate_count; i++) {
-		if (rate == rate_table[i].prate)
-			return &rate_table[i];
+		if (rate > rate_table[i].prate)
+			break;
 	}
 
-	return NULL;
+	if (i == 0 || i == cpuclk->rate_count)
+		return NULL;
+
+	return &rate_table[i - 1];
 }
 
 static unsigned long rockchip_cpuclk_recalc_rate(struct clk_hw *hw,
@@ -98,9 +104,70 @@ static unsigned long rockchip_cpuclk_recalc_rate(struct clk_hw *hw,
 	return parent_rate / (clksel0 + 1);
 }
 
-static const struct clk_ops rockchip_cpuclk_ops = {
-	.recalc_rate = rockchip_cpuclk_recalc_rate,
-};
+static int rockchip_cpuclk_pre_rate_req(struct clk_hw *hw,
+					const struct clk_rate_request *next,
+					struct clk_rate_request *pre)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long alt_prate, alt_div, hi_rate;
+
+	pre->best_parent_hw = __clk_get_hw(cpuclk->alt_parent);
+	alt_prate = clk_get_rate(cpuclk->alt_parent);
+	pre->best_parent_rate = alt_prate;
+	hi_rate = max_t(unsigned long, next->rate, clk_hw_get_rate(hw));
+
+	/* Set dividers if we would go above the current or next rate. */
+	if (alt_prate > hi_rate) {
+		alt_div =  DIV_ROUND_UP(alt_prate, hi_rate);
+		if (alt_div > reg_data->div_core_mask) {
+			pr_warn("%s: limiting alt-divider %lu to %d\n",
+				__func__, alt_div, reg_data->div_core_mask);
+			alt_div = reg_data->div_core_mask;
+		}
+
+		pre->rate = alt_prate / alt_div;
+	} else {
+		pre->rate = alt_prate;
+	}
+
+	return 1;
+}
+
+static int rockchip_cpuclk_set_parent(struct clk_hw *hw, u8 index)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long flags;
+
+	spin_lock_irqsave(cpuclk->lock, flags);
+	writel(HIWORD_UPDATE(index,
+			     reg_data->mux_core_mask,
+			     reg_data->mux_core_shift),
+	       cpuclk->reg_base + reg_data->core_reg);
+	spin_unlock_irqrestore(cpuclk->lock, flags);
+
+	return 0;
+}
+
+static u8 rockchip_cpuclk_get_parent(struct clk_hw *hw)
+{
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
+	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
+	unsigned long flags;
+	u32 val;
+
+	spin_lock_irqsave(cpuclk->lock, flags);
+	val = readl_relaxed(cpuclk->reg_base + reg_data->core_reg);
+	val >>= reg_data->mux_core_shift;
+	val &= reg_data->mux_core_mask;
+	spin_unlock_irqrestore(cpuclk->lock, flags);
+
+	return val;
+}
 
 static void rockchip_cpuclk_set_dividers(struct rockchip_cpuclk *cpuclk,
 				const struct rockchip_cpuclk_rate_table *rate)
@@ -120,131 +187,92 @@ static void rockchip_cpuclk_set_dividers(struct rockchip_cpuclk *cpuclk,
 	}
 }
 
-static int rockchip_cpuclk_pre_rate_change(struct rockchip_cpuclk *cpuclk,
-					   struct clk_notifier_data *ndata)
+static int rockchip_cpuclk_set_rate(struct clk_hw *hw, unsigned long rate,
+					unsigned long parent_rate)
 {
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
 	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
-	const struct rockchip_cpuclk_rate_table *rate;
-	unsigned long alt_prate, alt_div;
-	unsigned long flags;
+	const struct rockchip_cpuclk_rate_table *rate_divs;
+	unsigned long div = (parent_rate / rate) - 1;
+	unsigned long old_rate, flags;
 
-	/* check validity of the new rate */
-	rate = rockchip_get_cpuclk_settings(cpuclk, ndata->new_rate);
-	if (!rate) {
-		pr_err("%s: Invalid rate : %lu for cpuclk\n",
-		       __func__, ndata->new_rate);
+	if (div > reg_data->div_core_mask || rate > parent_rate) {
+		pr_err("%s: Invalid rate : %lu %lu for cpuclk\n", __func__,
+				rate, parent_rate);
 		return -EINVAL;
 	}
 
-	alt_prate = clk_get_rate(cpuclk->alt_parent);
-
+	old_rate = clk_hw_get_rate(hw);
+	rate_divs = rockchip_get_cpuclk_settings(cpuclk, rate);
 	spin_lock_irqsave(cpuclk->lock, flags);
+	if (old_rate < rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
-	/*
-	 * If the old parent clock speed is less than the clock speed
-	 * of the alternate parent, then it should be ensured that at no point
-	 * the armclk speed is more than the old_rate until the dividers are
-	 * set.
-	 */
-	if (alt_prate > ndata->old_rate) {
-		/* calculate dividers */
-		alt_div =  DIV_ROUND_UP(alt_prate, ndata->old_rate) - 1;
-		if (alt_div > reg_data->div_core_mask) {
-			pr_warn("%s: limiting alt-divider %lu to %d\n",
-				__func__, alt_div, reg_data->div_core_mask);
-			alt_div = reg_data->div_core_mask;
-		}
-
-		/*
-		 * Change parents and add dividers in a single transaction.
-		 *
-		 * NOTE: we do this in a single transaction so we're never
-		 * dividing the primary parent by the extra dividers that were
-		 * needed for the alt.
-		 */
-		pr_debug("%s: setting div %lu as alt-rate %lu > old-rate %lu\n",
-			 __func__, alt_div, alt_prate, ndata->old_rate);
-
-		writel(HIWORD_UPDATE(alt_div, reg_data->div_core_mask,
-					      reg_data->div_core_shift) |
-		       HIWORD_UPDATE(reg_data->mux_core_alt,
-				     reg_data->mux_core_mask,
-				     reg_data->mux_core_shift),
-		       cpuclk->reg_base + reg_data->core_reg);
-	} else {
-		/* select alternate parent */
-		writel(HIWORD_UPDATE(reg_data->mux_core_alt,
-				     reg_data->mux_core_mask,
-				     reg_data->mux_core_shift),
-		       cpuclk->reg_base + reg_data->core_reg);
-	}
+	writel(HIWORD_UPDATE(div,
+			     reg_data->div_core_mask,
+			     reg_data->div_core_shift),
+	       cpuclk->reg_base + reg_data->core_reg);
+	if (old_rate > rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
 	spin_unlock_irqrestore(cpuclk->lock, flags);
+
 	return 0;
 }
 
-static int rockchip_cpuclk_post_rate_change(struct rockchip_cpuclk *cpuclk,
-					    struct clk_notifier_data *ndata)
+static int rockchip_cpuclk_set_rate_and_parent(struct clk_hw *hw,
+					unsigned long rate,
+					unsigned long parent_rate,
+					u8 index)
 {
+	struct rockchip_cpuclk *cpuclk = container_of(hw,
+			struct rockchip_cpuclk, hw);
 	const struct rockchip_cpuclk_reg_data *reg_data = cpuclk->reg_data;
-	const struct rockchip_cpuclk_rate_table *rate;
-	unsigned long flags;
+	const struct rockchip_cpuclk_rate_table *rate_divs;
+	unsigned long div = (parent_rate / rate) - 1;
+	unsigned long old_rate, flags;
 
-	rate = rockchip_get_cpuclk_settings(cpuclk, ndata->new_rate);
-	if (!rate) {
-		pr_err("%s: Invalid rate : %lu for cpuclk\n",
-		       __func__, ndata->new_rate);
+	if (div > reg_data->div_core_mask || rate > parent_rate) {
+		pr_err("%s: Invalid rate : %lu %lu for cpuclk\n", __func__,
+				rate, parent_rate);
 		return -EINVAL;
 	}
 
+	old_rate = clk_hw_get_rate(hw);
+	rate_divs = rockchip_get_cpuclk_settings(cpuclk, rate);
 	spin_lock_irqsave(cpuclk->lock, flags);
-
-	if (ndata->old_rate < ndata->new_rate)
-		rockchip_cpuclk_set_dividers(cpuclk, rate);
-
 	/*
-	 * post-rate change event, re-mux to primary parent and remove dividers.
-	 *
-	 * NOTE: we do this in a single transaction so we're never dividing the
-	 * primary parent by the extra dividers that were needed for the alt.
+	 * TODO: rework get_cpuclk_settings to accept inexact matches so that
+	 * alt parent rates are handled properly.
 	 */
-
-	writel(HIWORD_UPDATE(0, reg_data->div_core_mask,
-				reg_data->div_core_shift) |
-	       HIWORD_UPDATE(reg_data->mux_core_main,
-				reg_data->mux_core_mask,
-				reg_data->mux_core_shift),
+	if (old_rate < rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
+
+	writel(HIWORD_UPDATE(div,
+			     reg_data->div_core_mask,
+			     reg_data->div_core_shift) |
+	       HIWORD_UPDATE(index,
+			     reg_data->mux_core_mask,
+			     reg_data->mux_core_shift),
 	       cpuclk->reg_base + reg_data->core_reg);
-
-	if (ndata->old_rate > ndata->new_rate)
-		rockchip_cpuclk_set_dividers(cpuclk, rate);
+	/* Not technically correct */
+	if (old_rate > rate)
+		rockchip_cpuclk_set_dividers(cpuclk, rate_divs);
 
 	spin_unlock_irqrestore(cpuclk->lock, flags);
+
 	return 0;
 }
 
-/*
- * This clock notifier is called when the frequency of the parent clock
- * of cpuclk is to be changed. This notifier handles the setting up all
- * the divider clocks, remux to temporary parent and handling the safe
- * frequency levels when using temporary parent.
- */
-static int rockchip_cpuclk_notifier_cb(struct notifier_block *nb,
-					unsigned long event, void *data)
-{
-	struct clk_notifier_data *ndata = data;
-	struct rockchip_cpuclk *cpuclk = to_rockchip_cpuclk_nb(nb);
-	int ret = 0;
-
-	pr_debug("%s: event %lu, old_rate %lu, new_rate: %lu\n",
-		 __func__, event, ndata->old_rate, ndata->new_rate);
-	if (event == PRE_RATE_CHANGE)
-		ret = rockchip_cpuclk_pre_rate_change(cpuclk, ndata);
-	else if (event == POST_RATE_CHANGE)
-		ret = rockchip_cpuclk_post_rate_change(cpuclk, ndata);
-
-	return notifier_from_errno(ret);
-}
+static const struct clk_ops rockchip_cpuclk_ops = {
+	.recalc_rate = rockchip_cpuclk_recalc_rate,
+	.pre_rate_req = rockchip_cpuclk_pre_rate_req,
+	.set_parent = rockchip_cpuclk_set_parent,
+	.get_parent = rockchip_cpuclk_get_parent,
+	.set_rate = rockchip_cpuclk_set_rate,
+	.set_rate_and_parent = rockchip_cpuclk_set_rate_and_parent,
+};
 
 struct clk *rockchip_clk_register_cpuclk(const char *name,
 			const char *const *parent_names, u8 num_parents,
@@ -267,8 +295,8 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 		return ERR_PTR(-ENOMEM);
 
 	init.name = name;
-	init.parent_names = &parent_names[reg_data->mux_core_main];
-	init.num_parents = 1;
+	init.parent_names = parent_names;
+	init.num_parents = num_parents;
 	init.ops = &rockchip_cpuclk_ops;
 
 	/* only allow rate changes when we have a rate table */
@@ -282,7 +310,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 	cpuclk->reg_base = reg_base;
 	cpuclk->lock = lock;
 	cpuclk->reg_data = reg_data;
-	cpuclk->clk_nb.notifier_call = rockchip_cpuclk_notifier_cb;
 	cpuclk->hw.init = &init;
 
 	cpuclk->alt_parent = __clk_lookup(parent_names[reg_data->mux_core_alt]);
@@ -309,13 +336,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 		goto free_alt_parent;
 	}
 
-	ret = clk_notifier_register(clk, &cpuclk->clk_nb);
-	if (ret) {
-		pr_err("%s: failed to register clock notifier for %s\n",
-				__func__, name);
-		goto free_alt_parent;
-	}
-
 	if (nrates > 0) {
 		cpuclk->rate_count = nrates;
 		cpuclk->rate_table = kmemdup(rates,
@@ -323,7 +343,7 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 					     GFP_KERNEL);
 		if (!cpuclk->rate_table) {
 			ret = -ENOMEM;
-			goto unregister_notifier;
+			goto free_alt_parent;
 		}
 	}
 
@@ -338,8 +358,6 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
 
 free_rate_table:
 	kfree(cpuclk->rate_table);
-unregister_notifier:
-	clk_notifier_unregister(clk, &cpuclk->clk_nb);
 free_alt_parent:
 	clk_disable_unprepare(cpuclk->alt_parent);
 free_cpuclk:
-- 
2.21.0.352.gf09ad66450-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare,enable}()
  2019-03-05  4:49   ` Derek Basehore
  (?)
@ 2019-03-05 18:49     ` Stephen Boyd
  -1 siblings, 0 replies; 27+ messages in thread
From: Stephen Boyd @ 2019-03-05 18:49 UTC (permalink / raw)
  To: Derek Basehore, linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc,
	mturquette, heiko, aisheng.dong, mchehab+samsung, corbet,
	jbrunet, Stephen Boyd, Derek Basehore

Quoting Derek Basehore (2019-03-04 20:49:31)
> From: Stephen Boyd <sboyd@codeaurora.org>
> 
> Enabling and preparing clocks can be written quite naturally with
> recursion. We start at some point in the tree and recurse up the
> tree to find the oldest parent clk that needs to be enabled or
> prepared. Then we enable/prepare and return to the caller, going
> back to the clk we started at and enabling/preparing along the
> way. This also unrolls the recursion in unprepare/disable, which can
> just be done in the order of walking up the clk tree.
> 
> The problem is recursion isn't great for kernel code where we
> have a limited stack size. Furthermore, we may be calling this
> code inside clk_set_rate() which also has recursion in it, so
> we're really not looking good if we encounter a tall clk tree.
> 
> Let's create a stack instead by looping over the parent chain and
> collecting clks of interest. Then the enable/prepare becomes as
> simple as iterating over that list and calling enable.
> 
> Modified version of https://lore.kernel.org/patchwork/patch/814369/
> -Fixed kernel warning
> -unrolled recursion in unprepare/disable too
> 
> Cc: Jerome Brunet <jbrunet@baylibre.com>
> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> ---

From the original post:

"I have some vague fear that this may not work if a clk op is framework 
reentrant and attempts to call consumer clk APIs from within the clk ops.
If the reentrant call tries to add a clk that's already in the list then
we'll corrupt the list. Ugh."

Do we have this sort of problem here? Or are you certain that we don't
have clks that prepare or enable something that is already in the
process of being prepared or enabled?
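As an illustration of the approach described in the quoted commit message, the recursion-free walk might look roughly like this (a simplified sketch with hypothetical types; the real framework code tracks considerably more state):

```c
#include <stdio.h>

struct clk {
	const char *name;
	struct clk *parent;
	int enable_count;
};

#define MAX_DEPTH 32	/* assumed bound for this sketch */

/*
 * Instead of recursing, walk up the parent chain collecting every clk
 * that still needs to be turned on, then enable them root-first so a
 * parent is always running before its child.
 */
int clk_enable_iter(struct clk *clk)
{
	struct clk *pending[MAX_DEPTH];
	int top = 0;

	for (; clk; clk = clk->parent) {
		if (clk->enable_count++)	/* already on: just count it */
			break;
		if (top == MAX_DEPTH)
			return -1;		/* deeper than the sketch allows */
		pending[top++] = clk;
	}

	while (top--)
		printf("enable %s\n", pending[top]->name);

	return 0;
}
```

Enabling a leaf whose ancestors are all off walks the chain once and enables from the root down; a second enable of the same leaf only bumps its count.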


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare,enable}()
  2019-03-05 18:49     ` Stephen Boyd
@ 2019-03-06  1:35       ` dbasehore .
  -1 siblings, 0 replies; 27+ messages in thread
From: dbasehore . @ 2019-03-06  1:35 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: linux-kernel, linux-clk, linux-arm-kernel, linux-rockchip,
	linux-doc, Michael Turquette, Heiko Stübner, aisheng.dong,
	mchehab+samsung, Jonathan Corbet, jbrunet, Stephen Boyd

On Tue, Mar 5, 2019 at 10:49 AM Stephen Boyd <sboyd@kernel.org> wrote:
>
> Quoting Derek Basehore (2019-03-04 20:49:31)
> > From: Stephen Boyd <sboyd@codeaurora.org>
> >
> > Enabling and preparing clocks can be written quite naturally with
> > recursion. We start at some point in the tree and recurse up the
> > tree to find the oldest parent clk that needs to be enabled or
> > prepared. Then we enable/prepare and return to the caller, going
> > back to the clk we started at and enabling/preparing along the
> > way. This also unrolls the recursion in unprepare/disable, which can
> > just be done in the order of walking up the clk tree.
> >
> > The problem is recursion isn't great for kernel code where we
> > have a limited stack size. Furthermore, we may be calling this
> > code inside clk_set_rate() which also has recursion in it, so
> > we're really not looking good if we encounter a tall clk tree.
> >
> > Let's create a stack instead by looping over the parent chain and
> > collecting clks of interest. Then the enable/prepare becomes as
> > simple as iterating over that list and calling enable.
> >
> > Modified version of https://lore.kernel.org/patchwork/patch/814369/
> > -Fixed kernel warning
> > -unrolled recursion in unprepare/disable too
> >
> > Cc: Jerome Brunet <jbrunet@baylibre.com>
> > Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> > Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> > ---
>
> From the original post:
>
> "I have some vague fear that this may not work if a clk op is framework
> reentrant and attempts to call consumer clk APIs from within the clk ops.
> If the reentrant call tries to add a clk that's already in the list then
> we'll corrupt the list. Ugh."
>
> Do we have this sort of problem here? Or are you certain that we don't
> have clks that prepare or enable something that is already in the
> process of being prepared or enabled?

I can look into whether anything's doing this and add a WARN_ON which
returns an error if we ever hit that case. If this is happening on
some platform, we'd want to correct that anyways.
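The guard being proposed might look something like the following (a hypothetical sketch; the actual list implementation and WARN_ON plumbing in the framework would differ):

```c
#include <stddef.h>

#define EBUSY 16	/* stand-in so the sketch is self-contained */

struct clk {
	struct clk *next_pending;
	int on_list;
};

/*
 * Refuse to add a clk that is already on the traversal list: that can
 * only happen if a clk op reentered the framework, which would
 * otherwise corrupt the list.
 */
int clk_push_pending(struct clk **head, struct clk *clk)
{
	if (clk->on_list)
		return -EBUSY;	/* caller would WARN_ON and unwind here */

	clk->on_list = 1;
	clk->next_pending = *head;
	*head = clk;
	return 0;
}
```

A reentrant attempt to queue the same clk twice fails cleanly instead of silently corrupting the list.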

>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare,enable}()
  2019-03-06  1:35       ` [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare, enable}() dbasehore .
  (?)
@ 2019-03-06  4:11         ` dbasehore .
  -1 siblings, 0 replies; 27+ messages in thread
From: dbasehore . @ 2019-03-06  4:11 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: linux-kernel, linux-clk, linux-arm-kernel, linux-rockchip,
	linux-doc, Michael Turquette, Heiko Stübner, aisheng.dong,
	mchehab+samsung, Jonathan Corbet, jbrunet, Stephen Boyd

On Tue, Mar 5, 2019 at 5:35 PM dbasehore . <dbasehore@chromium.org> wrote:
>
> On Tue, Mar 5, 2019 at 10:49 AM Stephen Boyd <sboyd@kernel.org> wrote:
> >
> > Quoting Derek Basehore (2019-03-04 20:49:31)
> > > From: Stephen Boyd <sboyd@codeaurora.org>
> > >
> > > Enabling and preparing clocks can be written quite naturally with
> > > recursion. We start at some point in the tree and recurse up the
> > > tree to find the oldest parent clk that needs to be enabled or
> > > prepared. Then we enable/prepare and return to the caller, going
> > > back to the clk we started at and enabling/preparing along the
> > > way. This also unrolls the recursion in unprepare/disable, which can
> > > just be done in the order of walking up the clk tree.
> > >
> > > The problem is recursion isn't great for kernel code where we
> > > have a limited stack size. Furthermore, we may be calling this
> > > code inside clk_set_rate() which also has recursion in it, so
> > > we're really not looking good if we encounter a tall clk tree.
> > >
> > > Let's create a stack instead by looping over the parent chain and
> > > collecting clks of interest. Then the enable/prepare becomes as
> > > simple as iterating over that list and calling enable.
> > >
> > > Modified version of https://lore.kernel.org/patchwork/patch/814369/
> > > -Fixed kernel warning
> > > -unrolled recursion in unprepare/disable too
> > >
> > > Cc: Jerome Brunet <jbrunet@baylibre.com>
> > > Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> > > Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> > > ---
> >
> > From the original post:
> >
> > "I have some vague fear that this may not work if a clk op is framework
> > reentrant and attempts to call consumer clk APIs from within the clk ops.
> > If the reentrant call tries to add a clk that's already in the list then
> > we'll corrupt the list. Ugh."
> >
> > Do we have this sort of problem here? Or are you certain that we don't
> > have clks that prepare or enable something that is already in the
> > process of being prepared or enabled?
>
> I can look into whether anything's doing this and add a WARN_ON that
> returns an error if we ever hit that case. If this is happening on
> some platform, we'd want to correct it anyway.
>

Also, if we're ever able to move to another locking scheme (hopefully
soon...), we can make the prepare/enable locks non-reentrant. Then if
anyone recursively calls back into the framework for another
prepare/enable, they will deadlock. I guess that's one way of making
sure no one does that.

> >

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 1/6] clk: Remove recursion in clk_core_{prepare, enable}()
       [not found]         ` <CAGAzgsp0fWbH1f7gRKvhTotvdHMAL8gWw1bTKpVHfW9hJddXAw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2019-03-06 21:16           ` Stephen Boyd
  0 siblings, 0 replies; 27+ messages in thread
From: Stephen Boyd @ 2019-03-06 21:16 UTC (permalink / raw)
  To: dbasehore .

Quoting dbasehore . (2019-03-05 20:11:57)
> On Tue, Mar 5, 2019 at 5:35 PM dbasehore . <dbasehore@chromium.org> wrote:
> >
> > On Tue, Mar 5, 2019 at 10:49 AM Stephen Boyd <sboyd@kernel.org> wrote:
> > >
> > > Quoting Derek Basehore (2019-03-04 20:49:31)
> > > > From: Stephen Boyd <sboyd@codeaurora.org>
> > > >
> > > > Enabling and preparing clocks can be written quite naturally with
> > > > recursion. We start at some point in the tree and recurse up the
> > > > tree to find the oldest parent clk that needs to be enabled or
> > > > prepared. Then we enable/prepare and return to the caller, going
> > > > back to the clk we started at and enabling/preparing along the
> > > > way. This also unrolls the recursion in unprepare/disable, which can
> > > > just be done in the order of walking up the clk tree.
> > > >
> > > > The problem is recursion isn't great for kernel code where we
> > > > have a limited stack size. Furthermore, we may be calling this
> > > > code inside clk_set_rate() which also has recursion in it, so
> > > > we're really not looking good if we encounter a tall clk tree.
> > > >
> > > > Let's create a stack instead by looping over the parent chain and
> > > > collecting clks of interest. Then the enable/prepare becomes as
> > > > simple as iterating over that list and calling enable.
> > > >
> > > > Modified version of https://lore.kernel.org/patchwork/patch/814369/
> > > > -Fixed kernel warning
> > > > -unrolled recursion in unprepare/disable too
> > > >
> > > > Cc: Jerome Brunet <jbrunet@baylibre.com>
> > > > Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
> > > > Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> > > > ---
> > >
> > > From the original post:
> > >
> > > "I have some vague fear that this may not work if a clk op is framework
> > > reentrant and attempts to call consumer clk APIs from within the clk ops.
> > > If the reentrant call tries to add a clk that's already in the list then
> > > we'll corrupt the list. Ugh."
> > >
> > > Do we have this sort of problem here? Or are you certain that we don't
> > > have clks that prepare or enable something that is already in the
> > > process of being prepared or enabled?
> >
> > I can look into whether anything's doing this and add a WARN_ON that
> > returns an error if we ever hit that case. If this is happening on
> > some platform, we'd want to correct it anyway.
> >
> 
> Also, if we're ever able to move to another locking scheme (hopefully
> soon...), we can make the prepare/enable locks non-reentrant. Then if
> anyone recursively calls back into the framework for another
> prepare/enable, they will deadlock. I guess that's one way of making
> sure no one does that.
> 

Sure, but we can't regress the system by making prepare and enable
non-reentrant. I was thinking we could write a Coccinelle script that
looks for suspects by matching against a clk_ops structure and then
picks out the prepare and enable ops from them and looks in those
functions for calls to clk_prepare_enable() or clk_prepare() or
clk_enable(). I don't know if or how it's possible to descend into the
call graph from the clk_ops function to check for clk API calls, so we
probably need to add the WARN_ON to help us find these issues at runtime
too.

You can use this cocci script to start poking at it though:

<smpl>
@ prepare_enabler @
identifier func;
identifier hw;
position p;
expression E;
@@
int func@p(struct clk_hw *hw)
{
...
(
clk_prepare_enable(E);
|
clk_prepare(E);
|
clk_enable(E);
)
...
}

@ has_preparer depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@
struct clk_ops ops = {
...,
.prepare = func@p,
...
};

@ has_enabler depends on prepare_enabler @
identifier ops;
identifier prepare_enabler.func;
position p;
@@
struct clk_ops ops = {
...,
.enable = func@p,
...
};

@script:python@
pf << prepare_enabler.p;
@@

coccilib.report.print_report(pf[0],"WARNING something bad called from clk op")

</smpl>

I ran it and found one hit in the davinci clk driver where the driver
manually turns a clk on to ensure a PLL locks. Hopefully that's a
different clk tree than the current one so that it's not an issue.

We already have quite a few grep hits for clk_prepare_enable() in
drivers/clk/ too but I think those are mostly drivers that haven't
converted to using critical clks so they have setup code to do that for
them. I suppose it would also be good to dig through all those drivers
and move them to the critical clk flag. For example, here's a patch for
the highbank driver that should definitely be moved to critical clks.

-----8<-----
diff --git a/drivers/clk/clk-highbank.c b/drivers/clk/clk-highbank.c
index 8e4581004695..bd328b0eb243 100644
--- a/drivers/clk/clk-highbank.c
+++ b/drivers/clk/clk-highbank.c
@@ -17,7 +17,6 @@
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/err.h>
-#include <linux/clk.h>
 #include <linux/clk-provider.h>
 #include <linux/io.h>
 #include <linux/of.h>
@@ -272,7 +271,7 @@ static const struct clk_ops periclk_ops = {
 	.set_rate = clk_periclk_set_rate,
 };
 
-static __init struct clk *hb_clk_init(struct device_node *node, const struct clk_ops *ops)
+static void __init hb_clk_init(struct device_node *node, const struct clk_ops *ops, unsigned long clkflags)
 {
 	u32 reg;
 	struct hb_clk *hb_clk;
@@ -284,11 +283,11 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
 	rc = of_property_read_u32(node, "reg", &reg);
 	if (WARN_ON(rc))
-		return NULL;
+		return;
 
 	hb_clk = kzalloc(sizeof(*hb_clk), GFP_KERNEL);
 	if (WARN_ON(!hb_clk))
-		return NULL;
+		return;
 
 	/* Map system registers */
 	srnp = of_find_compatible_node(NULL, NULL, "calxeda,hb-sregs");
@@ -301,7 +300,7 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 
 	init.name = clk_name;
 	init.ops = ops;
-	init.flags = 0;
+	init.flags = clkflags;
 	parent_name = of_clk_get_parent_name(node, 0);
 	init.parent_names = &parent_name;
 	init.num_parents = 1;
@@ -311,33 +310,31 @@ static __init struct clk *hb_clk_init(struct device_node *node, const struct clk
 	rc = clk_hw_register(NULL, &hb_clk->hw);
 	if (WARN_ON(rc)) {
 		kfree(hb_clk);
-		return NULL;
+		return;
 	}
-	rc = of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
-	return hb_clk->hw.clk;
+	of_clk_add_hw_provider(node, of_clk_hw_simple_get, &hb_clk->hw);
 }
 
 static void __init hb_pll_init(struct device_node *node)
 {
-	hb_clk_init(node, &clk_pll_ops);
+	hb_clk_init(node, &clk_pll_ops, 0);
 }
 CLK_OF_DECLARE(hb_pll, "calxeda,hb-pll-clock", hb_pll_init);
 
 static void __init hb_a9periph_init(struct device_node *node)
 {
-	hb_clk_init(node, &a9periphclk_ops);
+	hb_clk_init(node, &a9periphclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_a9periph, "calxeda,hb-a9periph-clock", hb_a9periph_init);
 
 static void __init hb_a9bus_init(struct device_node *node)
 {
-	struct clk *clk = hb_clk_init(node, &a9bclk_ops);
-	clk_prepare_enable(clk);
+	hb_clk_init(node, &a9bclk_ops, CLK_IS_CRITICAL);
 }
 CLK_OF_DECLARE(hb_a9bus, "calxeda,hb-a9bus-clock", hb_a9bus_init);
 
 static void __init hb_emmc_init(struct device_node *node)
 {
-	hb_clk_init(node, &periclk_ops);
+	hb_clk_init(node, &periclk_ops, 0);
 }
 CLK_OF_DECLARE(hb_emmc, "calxeda,hb-emmc-clock", hb_emmc_init);

^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v3 3/6] clk: change rates via list iteration
  2019-03-05  4:49   ` Derek Basehore
@ 2019-03-09  0:07     ` dbasehore .
  -1 siblings, 0 replies; 27+ messages in thread
From: dbasehore . @ 2019-03-09  0:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-clk, linux-arm-kernel, linux-rockchip, linux-doc,
	Stephen Boyd, Michael Turquette, Heiko Stübner,
	aisheng.dong, mchehab+samsung, Jonathan Corbet, jbrunet

On Mon, Mar 4, 2019 at 8:49 PM Derek Basehore <dbasehore@chromium.org> wrote:
>
> This changes the clk_set_rate code to use lists instead of recursion.
> While making this change, also add error handling for clk_set_rate.
> This means that errors in the set_rate/set_parent/set_rate_and_parent
> functions will no longer be ignored. When an error occurs, the clk
> rates and parents are reset, unless an error occurs here, in which
> case we bail and cross our fingers.
>
> Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> ---
>  drivers/clk/clk.c | 256 +++++++++++++++++++++++++++++++---------------
>  1 file changed, 176 insertions(+), 80 deletions(-)
>
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index e20364812b54..1637dc262884 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -39,6 +39,13 @@ static LIST_HEAD(clk_notifier_list);
>
>  /***    private data structures    ***/
>
> +struct clk_change {
> +       struct list_head        change_list;
> +       unsigned long           rate;
> +       struct clk_core         *core;
> +       struct clk_core         *parent;
> +};
> +
>  struct clk_core {
>         const char              *name;
>         const struct clk_ops    *ops;
> @@ -49,11 +56,9 @@ struct clk_core {
>         const char              **parent_names;
>         struct clk_core         **parents;
>         u8                      num_parents;
> -       u8                      new_parent_index;
>         unsigned long           rate;
>         unsigned long           req_rate;
> -       unsigned long           new_rate;
> -       struct clk_core         *new_parent;
* Re: [PATCH v3 3/6] clk: change rates via list iteration
@ 2019-03-09  0:07     ` dbasehore .
  0 siblings, 0 replies; 27+ messages in thread
From: dbasehore . @ 2019-03-09  0:07 UTC (permalink / raw)
  To: linux-kernel
  Cc: aisheng.dong, Heiko Stübner, linux-doc, Stephen Boyd,
	Michael Turquette, Jonathan Corbet, linux-rockchip,
	mchehab+samsung, linux-clk, linux-arm-kernel, jbrunet

On Mon, Mar 4, 2019 at 8:49 PM Derek Basehore <dbasehore@chromium.org> wrote:
>
> This changes the clk_set_rate code to use lists instead of recursion,
> and adds error handling for clk_set_rate while doing so. This means
> that errors from the set_rate/set_parent/set_rate_and_parent
> callbacks will no longer be ignored. When an error occurs, the clk
> rates and parents are reset, unless an error occurs during the undo
> itself, in which case we bail and cross our fingers.
>
> Signed-off-by: Derek Basehore <dbasehore@chromium.org>
> ---
>  drivers/clk/clk.c | 256 +++++++++++++++++++++++++++++++---------------
>  1 file changed, 176 insertions(+), 80 deletions(-)
>
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index e20364812b54..1637dc262884 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -39,6 +39,13 @@ static LIST_HEAD(clk_notifier_list);
>
>  /***    private data structures    ***/
>
> +struct clk_change {
> +       struct list_head        change_list;
> +       unsigned long           rate;
> +       struct clk_core         *core;
> +       struct clk_core         *parent;
> +};
> +
>  struct clk_core {
>         const char              *name;
>         const struct clk_ops    *ops;
> @@ -49,11 +56,9 @@ struct clk_core {
>         const char              **parent_names;
>         struct clk_core         **parents;
>         u8                      num_parents;
> -       u8                      new_parent_index;
>         unsigned long           rate;
>         unsigned long           req_rate;
> -       unsigned long           new_rate;
> -       struct clk_core         *new_parent;
> +       struct clk_change       change;
>         struct clk_core         *new_child;
>         unsigned long           flags;
>         bool                    orphan;
> @@ -1735,19 +1740,52 @@ static int __clk_speculate_rates(struct clk_core *core,
>  static void clk_calc_subtree(struct clk_core *core)
>  {
>         struct clk_core *child;
> +       LIST_HEAD(tmp_list);
>
> -       hlist_for_each_entry(child, &core->children, child_node) {
> -               child->new_rate = clk_recalc(child, core->new_rate);
> -               clk_calc_subtree(child);
> +       list_add(&core->prepare_list, &tmp_list);
> +       while (!list_empty(&tmp_list)) {
> +               core = list_first_entry(&tmp_list, struct clk_core,
> +                                       prepare_list);
> +
> +               hlist_for_each_entry(child, &core->children, child_node) {
> +                       child->change.rate = clk_recalc(child,
> +                                                       core->change.rate);
> +                       list_add_tail(&child->prepare_list, &tmp_list);
> +               }
> +
> +               list_del(&core->prepare_list);
> +       }
> +}
> +
> +static void clk_prepare_changes(struct list_head *change_list,
> +                               struct clk_core *core)
> +{
> +       struct clk_change *change;
> +       struct clk_core *tmp, *child;
> +       LIST_HEAD(tmp_list);
> +
> +       list_add(&core->change.change_list, &tmp_list);
> +       while (!list_empty(&tmp_list)) {
> +               change = list_first_entry(&tmp_list, struct clk_change,
> +                                         change_list);
> +               tmp = change->core;
> +
> +               hlist_for_each_entry(child, &tmp->children, child_node)
> +                       list_add_tail(&child->change.change_list, &tmp_list);
> +
> +               child = tmp->new_child;
> +               if (child)
> +                       list_add_tail(&child->change.change_list, &tmp_list);
> +
> +               list_move_tail(&tmp->change.change_list, change_list);
>         }
>  }
>
>  static void clk_set_change(struct clk_core *core, unsigned long new_rate,
> -                          struct clk_core *new_parent, u8 p_index)
> +                          struct clk_core *new_parent)
>  {
> -       core->new_rate = new_rate;
> -       core->new_parent = new_parent;
> -       core->new_parent_index = p_index;
> +       core->change.rate = new_rate;
> +       core->change.parent = new_parent;
>         /* include clk in new parent's PRE_RATE_CHANGE notifications */
>         core->new_child = NULL;
>         if (new_parent && new_parent != core->parent)
> @@ -1767,7 +1805,6 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
>         unsigned long new_rate;
>         unsigned long min_rate;
>         unsigned long max_rate;
> -       int p_index = 0;
>         long ret;
>
>         /* sanity */
> @@ -1803,17 +1840,15 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
>                         return NULL;
>         } else if (!parent || !(core->flags & CLK_SET_RATE_PARENT)) {
>                 /* pass-through clock without adjustable parent */
> -               core->new_rate = core->rate;
>                 return NULL;
>         } else {
>                 /* pass-through clock with adjustable parent */
>                 top = clk_calc_new_rates(parent, rate);
> -               new_rate = parent->new_rate;
> +               new_rate = parent->change.rate;
>                 hlist_for_each_entry(child, &parent->children, child_node) {
>                         if (child == core)
>                                 continue;
> -
> -                       child->new_rate = clk_recalc(child, new_rate);
> +                       child->change.rate = clk_recalc(child, new_rate);
>                         clk_calc_subtree(child);
>                 }
>                 goto out;
> @@ -1827,16 +1862,6 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
>                 return NULL;
>         }
>
> -       /* try finding the new parent index */
> -       if (parent && core->num_parents > 1) {
> -               p_index = clk_fetch_parent_index(core, parent);
> -               if (p_index < 0) {
> -                       pr_debug("%s: clk %s can not be parent of clk %s\n",
> -                                __func__, parent->name, core->name);
> -                       return NULL;
> -               }
> -       }
> -
>         if ((core->flags & CLK_SET_RATE_PARENT) && parent &&
>             best_parent_rate != parent->rate) {
>                 top = clk_calc_new_rates(parent, best_parent_rate);
> @@ -1844,13 +1869,14 @@ static struct clk_core *clk_calc_new_rates(struct clk_core *core,
>                         if (child == core)
>                                 continue;
>
> -                       child->new_rate = clk_recalc(child, parent->new_rate);
> +                       child->change.rate = clk_recalc(child,
> +                                       parent->change.rate);
>                         clk_calc_subtree(child);
>                 }
>         }
>
>  out:
> -       clk_set_change(core, new_rate, parent, p_index);
> +       clk_set_change(core, new_rate, parent);
>
>         return top;
>  }
> @@ -1866,18 +1892,18 @@ static struct clk_core *clk_propagate_rate_change(struct clk_core *core,
>         struct clk_core *child, *tmp_clk, *fail_clk = NULL;
>         int ret = NOTIFY_DONE;
>
> -       if (core->rate == core->new_rate)
> +       if (core->rate == core->change.rate)
>                 return NULL;
>
>         if (core->notifier_count) {
> -               ret = __clk_notify(core, event, core->rate, core->new_rate);
> +               ret = __clk_notify(core, event, core->rate, core->change.rate);
>                 if (ret & NOTIFY_STOP_MASK)
>                         fail_clk = core;
>         }
>
>         hlist_for_each_entry(child, &core->children, child_node) {
>                 /* Skip children who will be reparented to another clock */
> -               if (child->new_parent && child->new_parent != core)
> +               if (child->change.parent && child->change.parent != core)
>                         continue;
>                 tmp_clk = clk_propagate_rate_change(child, event);
>                 if (tmp_clk)
> @@ -1898,101 +1924,152 @@ static struct clk_core *clk_propagate_rate_change(struct clk_core *core,
>   * walk down a subtree and set the new rates notifying the rate
>   * change on the way
>   */
> -static void clk_change_rate(struct clk_core *core)
> +static int clk_change_rate(struct clk_change *change)
>  {
> -       struct clk_core *child;
> -       struct hlist_node *tmp;
> -       unsigned long old_rate;
> +       struct clk_core *core = change->core;
> +       unsigned long old_rate, flags;
>         unsigned long best_parent_rate = 0;
>         bool skip_set_rate = false;
> -       struct clk_core *old_parent;
> +       struct clk_core *old_parent = NULL;
>         struct clk_core *parent = NULL;
> +       int p_index;
> +       int ret = 0;
>
>         old_rate = core->rate;
>
> -       if (core->new_parent) {
> -               parent = core->new_parent;
> -               best_parent_rate = core->new_parent->rate;
> +       if (change->parent) {
> +               parent = change->parent;
> +               best_parent_rate = parent->rate;
>         } else if (core->parent) {
>                 parent = core->parent;
> -               best_parent_rate = core->parent->rate;
> +               best_parent_rate = parent->rate;
>         }
>
> -       if (clk_pm_runtime_get(core))
> -               return;
> -
>         if (core->flags & CLK_SET_RATE_UNGATE) {
> -               unsigned long flags;
> -
>                 clk_core_prepare(core);
>                 flags = clk_enable_lock();
>                 clk_core_enable(core);
>                 clk_enable_unlock(flags);
>         }
>
> -       if (core->new_parent && core->new_parent != core->parent) {
> -               old_parent = __clk_set_parent_before(core, core->new_parent);
> -               trace_clk_set_parent(core, core->new_parent);
> +       if (core->flags & CLK_OPS_PARENT_ENABLE)
> +               clk_core_prepare_enable(parent);
> +
> +       if (parent != core->parent) {
> +               p_index = clk_fetch_parent_index(core, parent);
> +               if (p_index < 0) {
> +                       pr_debug("%s: clk %s can not be parent of clk %s\n",
> +                                __func__, parent->name, core->name);
> +                       ret = p_index;
> +                       goto out;
> +               }
> +               old_parent = __clk_set_parent_before(core, parent);
> +
> +               trace_clk_set_parent(core, change->parent);
>
>                 if (core->ops->set_rate_and_parent) {
>                         skip_set_rate = true;
> -                       core->ops->set_rate_and_parent(core->hw, core->new_rate,
> +                       ret = core->ops->set_rate_and_parent(core->hw,
> +                                       change->rate,
>                                         best_parent_rate,
> -                                       core->new_parent_index);
> +                                       p_index);
>                 } else if (core->ops->set_parent) {
> -                       core->ops->set_parent(core->hw, core->new_parent_index);
> +                       ret = core->ops->set_parent(core->hw, p_index);
>                 }
>
> -               trace_clk_set_parent_complete(core, core->new_parent);
> -               __clk_set_parent_after(core, core->new_parent, old_parent);
> -       }
> +               trace_clk_set_parent_complete(core, change->parent);
> +               if (ret) {
> +                       flags = clk_enable_lock();
> +                       clk_reparent(core, old_parent);
> +                       clk_enable_unlock(flags);
> +                       __clk_set_parent_after(core, old_parent, parent);
>
> -       if (core->flags & CLK_OPS_PARENT_ENABLE)
> -               clk_core_prepare_enable(parent);
> +                       goto out;
> +               }
> +               __clk_set_parent_after(core, parent, old_parent);
> +
> +       }
>
> -       trace_clk_set_rate(core, core->new_rate);
> +       trace_clk_set_rate(core, change->rate);
>
>         if (!skip_set_rate && core->ops->set_rate)
> -               core->ops->set_rate(core->hw, core->new_rate, best_parent_rate);
> +               ret = core->ops->set_rate(core->hw, change->rate,
> +                               best_parent_rate);
>
> -       trace_clk_set_rate_complete(core, core->new_rate);
> +       trace_clk_set_rate_complete(core, change->rate);
>
>         core->rate = clk_recalc(core, best_parent_rate);
>
> -       if (core->flags & CLK_SET_RATE_UNGATE) {
> -               unsigned long flags;
> +out:
> +       if (core->flags & CLK_OPS_PARENT_ENABLE)
> +               clk_core_disable_unprepare(parent);
> +
> +       if (core->notifier_count && old_rate != core->rate)
> +               __clk_notify(core, POST_RATE_CHANGE, old_rate, core->rate);
>
> +       if (core->flags & CLK_SET_RATE_UNGATE) {
>                 flags = clk_enable_lock();
>                 clk_core_disable(core);
>                 clk_enable_unlock(flags);
>                 clk_core_unprepare(core);
>         }
>
> -       if (core->flags & CLK_OPS_PARENT_ENABLE)
> -               clk_core_disable_unprepare(parent);
> +       if (core->flags & CLK_RECALC_NEW_RATES)
> +               (void)clk_calc_new_rates(core, change->rate);
>
> -       if (core->notifier_count && old_rate != core->rate)
> -               __clk_notify(core, POST_RATE_CHANGE, old_rate, core->rate);
> +       /*
> +        * Keep track of old parent and requested rate in case we have
> +        * to undo the change due to an error.
> +        */
> +       change->parent = old_parent;
> +       change->rate = old_rate;
> +       return ret;
> +}
>
> -       if (core->flags & CLK_RECALC_NEW_RATES)
> -               (void)clk_calc_new_rates(core, core->new_rate);
> +static int clk_change_rates(struct list_head *list)
> +{
> +       struct clk_change *change, *tmp;
> +       int ret = 0;
>
>         /*
> -        * Use safe iteration, as change_rate can actually swap parents
> -        * for certain clock types.
> +        * Make pm runtime get/put calls outside of clk_change_rate to avoid
> +        * clks bouncing back and forth between runtime_resume/suspend.
>          */
> -       hlist_for_each_entry_safe(child, tmp, &core->children, child_node) {
> -               /* Skip children who will be reparented to another clock */
> -               if (child->new_parent && child->new_parent != core)
> -                       continue;
> -               clk_change_rate(child);
> +       list_for_each_entry(change, list, change_list) {
> +               ret = clk_pm_runtime_get(change->core);
> +               if (ret) {
> +                       list_for_each_entry_continue_reverse(change, list,
> +                                                            change_list)
> +                               clk_pm_runtime_put(change->core);
> +
> +                       return ret;
> +               }
>         }
>
> -       /* handle the new child who might not be in core->children yet */
> -       if (core->new_child)
> -               clk_change_rate(core->new_child);
> +       list_for_each_entry(change, list, change_list) {
> +               ret = clk_change_rate(change);
> +               clk_pm_runtime_put(change->core);
> +               if (ret)
> +                       goto err;
> +       }
>
> -       clk_pm_runtime_put(core);
> +       return 0;
> +err:
> +       /* Unwind the changes on an error. */
> +       list_for_each_entry_continue_reverse(change, list, change_list) {

I thought about this, and I think this should go back to the way I did
things in v1, undoing the changes in their original (top-down) order.
Since clk set_rate callbacks can rely on the parent's current rate,
undoing changes in reverse order might program incorrect rates.

> +               /* Just give up on an error when undoing changes. */
> +               ret = clk_pm_runtime_get(change->core);
> +               if (WARN_ON(ret))
> +                       return ret;
> +
> +               ret = clk_change_rate(change);
> +               if (WARN_ON(ret))
> +                       return ret;
> +
> +               clk_pm_runtime_put(change->core);
> +       }
> +
> +       return ret;
>  }
>
>  static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
> @@ -2026,7 +2103,9 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
>                                     unsigned long req_rate)
>  {
>         struct clk_core *top, *fail_clk, *child;
> +       struct clk_change *change, *tmp;
>         unsigned long rate;
> +       LIST_HEAD(changes);
>         int ret = 0;
>
>         if (!core)
> @@ -2052,14 +2131,17 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
>                 return ret;
>
>         if (top != core) {
> -               /* new_parent cannot be NULL in this case */
> -               hlist_for_each_entry(child, &core->new_parent->children,
> +               /* change.parent cannot be NULL in this case */
> +               hlist_for_each_entry(child, &core->change.parent->children,
>                                 child_node)
>                         clk_calc_subtree(child);
>         } else {
>                 clk_calc_subtree(core);
>         }
>
> +       /* Construct the list of changes */
> +       clk_prepare_changes(&changes, top);
> +
>         /* notify that we are about to change rates */
>         fail_clk = clk_propagate_rate_change(top, PRE_RATE_CHANGE);
>         if (fail_clk) {
> @@ -2071,7 +2153,19 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
>         }
>
>         /* change the rates */
> -       clk_change_rate(top);
> +       ret = clk_change_rates(&changes);
> +       list_for_each_entry_safe(change, tmp, &changes, change_list) {
> +               change->rate = 0;
> +               change->parent = NULL;
> +               list_del_init(&change->change_list);
> +       }
> +
> +       if (ret) {
> +               pr_debug("%s: failed to set %s rate via top clk %s\n", __func__,
> +                               core->name, top->name);
> +               clk_propagate_rate_change(top, ABORT_RATE_CHANGE);
> +               goto err;
> +       }
>
>         core->req_rate = req_rate;
>  err:
> @@ -3338,6 +3432,8 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
>         core->max_rate = ULONG_MAX;
>         INIT_LIST_HEAD(&core->prepare_list);
>         INIT_LIST_HEAD(&core->enable_list);
> +       INIT_LIST_HEAD(&core->change.change_list);
> +       core->change.core = core;
>         hw->core = core;
>
>         /* allocate local copy in case parent_names is __initdata */
> --
> 2.21.0.352.gf09ad66450-goog
>
