linux-kernel.vger.kernel.org archive mirror
* [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
@ 2021-01-05  3:17 Nicolas Pitre
  2021-01-17 23:49 ` Nicolas Pitre
                   ` (2 more replies)
  0 siblings, 3 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-05  3:17 UTC (permalink / raw)
  To: Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King
  Cc: linux-pm, linux-clk, linux-kernel

The clock API splits its interface into sleepable and atomic contexts:

- clk_prepare/clk_unprepare for operations that might sleep

- clk_enable/clk_disable for operations that may be done in atomic context

The code handling runtime PM for clocks only calls clk_disable() on
suspend requests and clk_enable() on resume requests. This means that
runtime PM with clock providers that only have the prepare/unprepare
methods implemented is basically useless.

Many clock implementations can't accommodate atomic contexts. This is
often the case when communication with the clock happens through another
subsystem like I2C or SCMI.

Let's make the clock PM code useful with such clocks by safely invoking
clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
such clocks are registered with the PM layer, pm_runtime_irq_safe()
can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
may be invoked in atomic context.

For clocks that do implement the enable and disable methods, everything
just works as before.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
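
As an aside, not part of the patch itself: a minimal sketch of a consumer
driver relying on the PM clock layer with such a sleepable clock. All names
("foo", the "fck" con_id) are made up for the example; the point is that
pm_runtime_irq_safe() is never called, so the runtime PM callbacks are only
ever invoked from sleepable context.

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	ret = pm_clk_create(&pdev->dev);
	if (ret)
		return ret;

	/* "fck" may be backed by a provider that can only prepare/unprepare */
	ret = pm_clk_add(&pdev->dev, "fck");
	if (ret) {
		pm_clk_destroy(&pdev->dev);
		return ret;
	}

	/*
	 * Do NOT call pm_runtime_irq_safe() here: with a sleepable clock,
	 * pm_clk_suspend()/pm_clk_resume() may themselves sleep.
	 */
	pm_runtime_enable(&pdev->dev);
	return 0;
}

static int foo_remove(struct platform_device *pdev)
{
	pm_runtime_disable(&pdev->dev);
	pm_clk_destroy(&pdev->dev);
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(pm_clk_suspend, pm_clk_resume, NULL)
};

static struct platform_driver foo_driver = {
	.probe = foo_probe,
	.remove = foo_remove,
	.driver = {
		.name = "foo",
		.pm = &foo_pm_ops,
	},
};
module_platform_driver(foo_driver);
MODULE_LICENSE("GPL");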

diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index ced6863a16..a62fb0f9b1 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -23,6 +23,7 @@
 enum pce_status {
 	PCE_STATUS_NONE = 0,
 	PCE_STATUS_ACQUIRED,
+	PCE_STATUS_PREPARED,
 	PCE_STATUS_ENABLED,
 	PCE_STATUS_ERROR,
 };
@@ -32,8 +33,102 @@ struct pm_clock_entry {
 	char *con_id;
 	struct clk *clk;
 	enum pce_status status;
+	bool enabled_when_prepared;
 };
 
+/**
+ * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
+ *		      entry list.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count to be modified.
+ *
+ * Get exclusive access before modifying the PM clock entry list and the
+ * clock_op_might_sleep count to guard against concurrent modifications.
+ * This also protects against a concurrent clock_op_might_sleep and PM clock
+ * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
+ * happen in atomic context, hence both the mutex and the spinlock must be
+ * taken here.
+ */
+static void pm_clk_list_lock(struct pm_subsys_data *psd)
+{
+	mutex_lock(&psd->clock_mutex);
+	spin_lock_irq(&psd->lock);
+}
+
+/**
+ * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_list_lock().
+ */
+static void pm_clk_list_unlock(struct pm_subsys_data *psd)
+{
+	spin_unlock_irq(&psd->lock);
+	mutex_unlock(&psd->clock_mutex);
+}
+
+/**
+ * pm_clk_op_lock - ensure exclusive access for performing clock operations.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count being used.
+ * @flags: stored irq flags.
+ * @fn: string for the caller function's name.
+ *
+ * This is used by pm_clk_suspend() and pm_clk_resume() to guard
+ * against concurrent modifications to the clock entry list and the
+ * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
+ * only the mutex can be locked and those functions can only be used in
+ * non atomic context. If clock_op_might_sleep == 0 then these functions
+ * may be used in any context and only the spinlock can be locked.
+ * Returns -EPERM if called in atomic context when clock ops might sleep.
+ */
+static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
+			  const char *fn)
+{
+	bool atomic_context = in_atomic() || irqs_disabled();
+
+try_again:
+	spin_lock_irqsave(&psd->lock, *flags);
+	if (!psd->clock_op_might_sleep)
+		return 0;
+
+	/* bail out if in atomic context */
+	if (atomic_context) {
+		pr_err("%s: atomic context with clock_op_might_sleep = %d",
+		       fn, psd->clock_op_might_sleep);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+		might_sleep();
+		return -EPERM;
+	}
+
+	/* we must switch to the mutex */
+	spin_unlock_irqrestore(&psd->lock, *flags);
+	mutex_lock(&psd->clock_mutex);
+
+	/*
+	 * There was a possibility for psd->clock_op_might_sleep
+	 * to become 0 above. Keep the mutex only if not the case.
+	 */
+	if (likely(psd->clock_op_might_sleep))
+		return 0;
+
+	mutex_unlock(&psd->clock_mutex);
+	goto try_again;
+}
+
+/**
+ * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_op_lock().
+ * @flags: irq flags provided by pm_clk_op_lock().
+ */
+static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
+{
+	if (psd->clock_op_might_sleep)
+		mutex_unlock(&psd->clock_mutex);
+	else
+		spin_unlock_irqrestore(&psd->lock, *flags);
+}
+
 /**
  * pm_clk_enable - Enable a clock, reporting any errors
  * @dev: The device for the given clock
@@ -43,14 +138,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
 {
 	int ret;
 
-	if (ce->status < PCE_STATUS_ERROR) {
+	switch (ce->status) {
+	case PCE_STATUS_ACQUIRED:
+		ret = clk_prepare_enable(ce->clk);
+		break;
+	case PCE_STATUS_PREPARED:
 		ret = clk_enable(ce->clk);
-		if (!ret)
-			ce->status = PCE_STATUS_ENABLED;
-		else
-			dev_err(dev, "%s: failed to enable clk %p, error %d\n",
-				__func__, ce->clk, ret);
+		break;
+	default:
+		return;
 	}
+	if (!ret)
+		ce->status = PCE_STATUS_ENABLED;
+	else
+		dev_err(dev, "%s: failed to enable clk %p, error %d\n",
+			__func__, ce->clk, ret);
 }
 
 /**
@@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
 		ce->clk = clk_get(dev, ce->con_id);
 	if (IS_ERR(ce->clk)) {
 		ce->status = PCE_STATUS_ERROR;
+		return;
+	} else if (clk_is_enabled_when_prepared(ce->clk)) {
+		/* we defer preparing the clock in that case */
+		ce->status = PCE_STATUS_ACQUIRED;
+		ce->enabled_when_prepared = true;
+	} else if (clk_prepare(ce->clk)) {
+		ce->status = PCE_STATUS_ERROR;
+		dev_err(dev, "clk_prepare() failed\n");
+		return;
 	} else {
-		if (clk_prepare(ce->clk)) {
-			ce->status = PCE_STATUS_ERROR;
-			dev_err(dev, "clk_prepare() failed\n");
-		} else {
-			ce->status = PCE_STATUS_ACQUIRED;
-			dev_dbg(dev,
-				"Clock %pC con_id %s managed by runtime PM.\n",
-				ce->clk, ce->con_id);
-		}
+		ce->status = PCE_STATUS_PREPARED;
 	}
+	dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
+		ce->clk, ce->con_id);
 }
 
 static int __pm_clk_add(struct device *dev, const char *con_id,
@@ -106,9 +211,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
 
 	pm_clk_acquire(dev, ce);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 	list_add_tail(&ce->node, &psd->clock_list);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep++;
+	pm_clk_list_unlock(psd);
 	return 0;
 }
 
@@ -239,14 +346,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
 	if (!ce)
 		return;
 
-	if (ce->status < PCE_STATUS_ERROR) {
-		if (ce->status == PCE_STATUS_ENABLED)
-			clk_disable(ce->clk);
-
-		if (ce->status >= PCE_STATUS_ACQUIRED) {
-			clk_unprepare(ce->clk);
+	switch (ce->status) {
+	case PCE_STATUS_ENABLED:
+		clk_disable(ce->clk);
+		fallthrough;
+	case PCE_STATUS_PREPARED:
+		clk_unprepare(ce->clk);
+		fallthrough;
+	case PCE_STATUS_ACQUIRED:
+	case PCE_STATUS_ERROR:
+		if (!IS_ERR(ce->clk))
 			clk_put(ce->clk);
-		}
+		break;
+	default:
+		break;
 	}
 
 	kfree(ce->con_id);
@@ -269,7 +382,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 	if (!psd)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (!con_id && !ce->con_id)
@@ -280,12 +393,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -307,19 +422,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
 	if (!psd || !clk)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (clk == ce->clk)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -330,13 +447,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
  * @dev: Device to initialize the list of PM clocks for.
  *
  * Initialize the lock and clock_list members of the device's pm_subsys_data
- * object.
+ * object, and set the count of clocks that might sleep to 0.
  */
 void pm_clk_init(struct device *dev)
 {
 	struct pm_subsys_data *psd = dev_to_psd(dev);
-	if (psd)
+	if (psd) {
 		INIT_LIST_HEAD(&psd->clock_list);
+		mutex_init(&psd->clock_mutex);
+		psd->clock_op_might_sleep = 0;
+	}
 }
 EXPORT_SYMBOL_GPL(pm_clk_init);
 
@@ -372,12 +492,13 @@ void pm_clk_destroy(struct device *dev)
 
 	INIT_LIST_HEAD(&list);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
 		list_move(&ce->node, &list);
+	psd->clock_op_might_sleep = 0;
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 
 	dev_pm_put_subsys_data(dev);
 
@@ -397,23 +518,30 @@ int pm_clk_suspend(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry_reverse(ce, &psd->clock_list, node) {
-		if (ce->status < PCE_STATUS_ERROR) {
-			if (ce->status == PCE_STATUS_ENABLED)
+		if (ce->status == PCE_STATUS_ENABLED) {
+			if (ce->enabled_when_prepared) {
+				clk_disable_unprepare(ce->clk);
+				ce->status = PCE_STATUS_ACQUIRED;
+			} else {
 				clk_disable(ce->clk);
-			ce->status = PCE_STATUS_ACQUIRED;
+				ce->status = PCE_STATUS_PREPARED;
+			}
 		}
 	}
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
@@ -428,18 +556,21 @@ int pm_clk_resume(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry(ce, &psd->clock_list, node)
 		__pm_clk_enable(dev, ce);
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 8c1d04db99..3d751ae5bc 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_enable);
 
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return clk && !(clk->core->ops->enable && clk->core->ops->disable);
+}
+EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
+
 static int clk_core_prepare_enable(struct clk_core *core)
 {
 	int ret;
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 31ff1bf1b7..71295906a2 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -554,6 +554,23 @@ void clk_disable(struct clk *clk);
  */
 void clk_bulk_disable(int num_clks, const struct clk_bulk_data *clks);
 
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to the power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk);
+
 /**
  * clk_get_rate - obtain the current clock rate (in Hz) for a clock source.
  *		  This is only valid once the clock source has been enabled.
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 47aca6bac1..482313a8cc 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -537,6 +537,8 @@ struct pm_subsys_data {
 	spinlock_t lock;
 	unsigned int refcount;
 #ifdef CONFIG_PM_CLK
+	unsigned int clock_op_might_sleep;
+	struct mutex clock_mutex;
 	struct list_head clock_list;
 #endif
 #ifdef CONFIG_PM_GENERIC_DOMAINS
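
For reference, not part of the patch: a sketch of the kind of clock provider
this change is aimed at, i.e. one whose hardware access sits behind a
sleepable bus such as I2C and which therefore implements only
.prepare/.unprepare. For such a clock, clk_is_enabled_when_prepared()
returns true, and the PM clock layer now drives it through
clk_prepare_enable()/clk_disable_unprepare() on resume/suspend instead of
issuing clk_enable()/clk_disable() calls that would have no effect. The
driver name and register offset below are made up.

#include <linux/bits.h>
#include <linux/clk-provider.h>
#include <linux/regmap.h>

struct foo_i2c_gate {
	struct clk_hw hw;
	struct regmap *regmap;	/* regmap backed by I2C: accesses may sleep */
};

#define to_foo_i2c_gate(_hw) container_of(_hw, struct foo_i2c_gate, hw)

static int foo_i2c_gate_prepare(struct clk_hw *hw)
{
	struct foo_i2c_gate *gate = to_foo_i2c_gate(hw);

	/* sleepable I2C transfer, hence done in .prepare rather than .enable */
	return regmap_update_bits(gate->regmap, 0x01, BIT(0), BIT(0));
}

static void foo_i2c_gate_unprepare(struct clk_hw *hw)
{
	struct foo_i2c_gate *gate = to_foo_i2c_gate(hw);

	regmap_update_bits(gate->regmap, 0x01, BIT(0), 0);
}

static const struct clk_ops foo_i2c_gate_ops = {
	.prepare = foo_i2c_gate_prepare,
	.unprepare = foo_i2c_gate_unprepare,
	/* no .enable/.disable: clk_is_enabled_when_prepared() returns true */
};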


* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-05  3:17 [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep Nicolas Pitre
@ 2021-01-17 23:49 ` Nicolas Pitre
  2021-01-19 20:15   ` Rafael J. Wysocki
  2021-01-19 18:45 ` Kevin Hilman
  2021-01-21  9:11 ` Naresh Kamboju
  2 siblings, 1 reply; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-17 23:49 UTC (permalink / raw)
  To: Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King
  Cc: linux-pm, linux-clk, linux-kernel

Ping.

On Mon, 4 Jan 2021, Nicolas Pitre wrote:

> [ full patch quoted - snipped ]

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-05  3:17 [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep Nicolas Pitre
  2021-01-17 23:49 ` Nicolas Pitre
@ 2021-01-19 18:45 ` Kevin Hilman
  2021-01-19 20:38   ` Geert Uytterhoeven
  2021-01-21  9:11 ` Naresh Kamboju
  2 siblings, 1 reply; 26+ messages in thread
From: Kevin Hilman @ 2021-01-19 18:45 UTC (permalink / raw)
  To: Nicolas Pitre, Rafael J. Wysocki, Greg Kroah-Hartman,
	Michael Turquette, Stephen Boyd, Russell King,
	Geert Uytterhoeven
  Cc: linux-pm, linux-clk, linux-kernel

[ + Geert.. renesas SoCs are the primary user of PM clk ]

Nicolas Pitre <npitre@baylibre.com> writes:

> [ full patch quoted - snipped ]

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-17 23:49 ` Nicolas Pitre
@ 2021-01-19 20:15   ` Rafael J. Wysocki
  0 siblings, 0 replies; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-19 20:15 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk,
	Linux Kernel Mailing List

On Mon, Jan 18, 2021 at 12:50 AM Nicolas Pitre <npitre@baylibre.com> wrote:
>
> Ping.

Applied as 5.12 material, sorry for the delay.

Thanks!

> [ full patch quoted - snipped ]
> > + * disabling the clock also requires unpreparing it to have any material
> > + * effect.
> > + *
> > + * Regardless of the value returned here, the caller must always invoke
> > + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> > + * to be right.
> > + */
> > +bool clk_is_enabled_when_prepared(struct clk *clk);
> > +
> >  /**
> >   * clk_get_rate - obtain the current clock rate (in Hz) for a clock source.
> >   *             This is only valid once the clock source has been enabled.
> > diff --git a/include/linux/pm.h b/include/linux/pm.h
> > index 47aca6bac1..482313a8cc 100644
> > --- a/include/linux/pm.h
> > +++ b/include/linux/pm.h
> > @@ -537,6 +537,8 @@ struct pm_subsys_data {
> >       spinlock_t lock;
> >       unsigned int refcount;
> >  #ifdef CONFIG_PM_CLK
> > +     unsigned int clock_op_might_sleep;
> > +     struct mutex clock_mutex;
> >       struct list_head clock_list;
> >  #endif
> >  #ifdef CONFIG_PM_GENERIC_DOMAINS
> >

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-19 18:45 ` Kevin Hilman
@ 2021-01-19 20:38   ` Geert Uytterhoeven
  2021-01-19 21:22     ` Nicolas Pitre
  0 siblings, 1 reply; 26+ messages in thread
From: Geert Uytterhoeven @ 2021-01-19 20:38 UTC (permalink / raw)
  To: Kevin Hilman, Nicolas Pitre
  Cc: Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM list, linux-clk,
	Linux Kernel Mailing List

Hi Kevin, Nicolas,

On Tue, Jan 19, 2021 at 7:45 PM Kevin Hilman <khilman@baylibre.com> wrote:
> [ + Geert.. renesas SoCs are the primary user of PM clk ]

Thanks!

> Nicolas Pitre <npitre@baylibre.com> writes:
> > The clock API splits its interface into sleepable ant atomic contexts:
> >
> > - clk_prepare/clk_unprepare for stuff that might sleep
> >
> > - clk_enable_clk_disable for anything that may be done in atomic context
> >
> > The code handling runtime PM for clocks only calls clk_disable() on
> > suspend requests, and clk_enable on resume requests. This means that
> > runtime PM with clock providers that only have the prepare/unprepare
> > methods implemented is basically useless.
> >
> > Many clock implementations can't accommodate atomic contexts. This is
> > often the case when communication with the clock happens through another
> > subsystem like I2C or SCMI.
> >
> > Let's make the clock PM code useful with such clocks by safely invoking
> > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > may be invoked in atomic context.
> >
> > For clocks that do implement the enable and disable methods then
> > everything just works as before.
> >
> > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>

Thanks for your patch!

> > --- a/drivers/base/power/clock_ops.c
> > +++ b/drivers/base/power/clock_ops.c

> > +/**
> > + * pm_clk_op_lock - ensure exclusive access for performing clock operations.
> > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > + *    and clk_op_might_sleep count being used.
> > + * @flags: stored irq flags.
> > + * @fn: string for the caller function's name.
> > + *
> > + * This is used by pm_clk_suspend() and pm_clk_resume() to guard
> > + * against concurrent modifications to the clock entry list and the
> > + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
> > + * only the mutex can be locked and those functions can only be used in
> > + * non atomic context. If clock_op_might_sleep == 0 then these functions
> > + * may be used in any context and only the spinlock can be locked.
> > + * Returns -EINVAL if called in atomic context when clock ops might sleep.
> > + */
> > +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
> > +                       const char *fn)
> > +{
> > +     bool atomic_context = in_atomic() || irqs_disabled();

Is this safe? Cfr.
https://lore.kernel.org/dri-devel/20200914204209.256266093@linutronix.de/

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-19 20:38   ` Geert Uytterhoeven
@ 2021-01-19 21:22     ` Nicolas Pitre
  0 siblings, 0 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-19 21:22 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Kevin Hilman, Rafael J. Wysocki, Greg Kroah-Hartman,
	Michael Turquette, Stephen Boyd, Russell King, Linux PM list,
	linux-clk, Linux Kernel Mailing List

On Tue, 19 Jan 2021, Geert Uytterhoeven wrote:

> Hi Kevin, Nicolas,
> 
> On Tue, Jan 19, 2021 at 7:45 PM Kevin Hilman <khilman@baylibre.com> wrote:
> > [ + Geert.. renesas SoCs are the primary user of PM clk ]
> 
> Thanks!
> 
> > Nicolas Pitre <npitre@baylibre.com> writes:
> > > The clock API splits its interface into sleepable ant atomic contexts:
> > >
> > > - clk_prepare/clk_unprepare for stuff that might sleep
> > >
> > > - clk_enable_clk_disable for anything that may be done in atomic context
> > >
> > > The code handling runtime PM for clocks only calls clk_disable() on
> > > suspend requests, and clk_enable on resume requests. This means that
> > > runtime PM with clock providers that only have the prepare/unprepare
> > > methods implemented is basically useless.
> > >
> > > Many clock implementations can't accommodate atomic contexts. This is
> > > often the case when communication with the clock happens through another
> > > subsystem like I2C or SCMI.
> > >
> > > Let's make the clock PM code useful with such clocks by safely invoking
> > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > may be invoked in atomic context.
> > >
> > > For clocks that do implement the enable and disable methods then
> > > everything just works as before.
> > >
> > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> 
> Thanks for your patch!
> 
> > > --- a/drivers/base/power/clock_ops.c
> > > +++ b/drivers/base/power/clock_ops.c
> 
> > > +/**
> > > + * pm_clk_op_lock - ensure exclusive access for performing clock operations.
> > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > + *    and clk_op_might_sleep count being used.
> > > + * @flags: stored irq flags.
> > > + * @fn: string for the caller function's name.
> > > + *
> > > + * This is used by pm_clk_suspend() and pm_clk_resume() to guard
> > > + * against concurrent modifications to the clock entry list and the
> > > + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
> > > + * only the mutex can be locked and those functions can only be used in
> > > + * non atomic context. If clock_op_might_sleep == 0 then these functions
> > > + * may be used in any context and only the spinlock can be locked.
> > > + * Returns -EINVAL if called in atomic context when clock ops might sleep.
> > > + */
> > > +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
> > > +                       const char *fn)
> > > +{
> > > +     bool atomic_context = in_atomic() || irqs_disabled();
> 
> Is this safe? Cfr.
> https://lore.kernel.org/dri-devel/20200914204209.256266093@linutronix.de/

I noticed this topic is a mess. This is why I'm not relying on 
in_atomic() alone as it turned out not to be sufficient in all cases 
during testing.

What's there now is safe at least in the context from which it is called,
i.e. the runtime PM core code. If not, then hopefully the might_sleep()
that follows will catch misuses.

It should be noted that we assume an atomic context by default. However,
if you rely on clocks that must sleep, then you must not invoke runtime
PM facilities in atomic context from your driver in the first place. The
atomic_context variable above is only used further down as a validation
check to catch programming mistakes; it is not an operational parameter.
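
In other words, with sleeping clocks the driver is expected to issue all
of its runtime PM calls from process context, e.g. something along these
lines (purely illustrative; foo_dev and the surrounding driver are made up):

#include <linux/pm_runtime.h>

struct foo_dev {
	struct device *dev;	/* device whose PM clocks may sleep */
};

static int foo_do_transfer(struct foo_dev *foo)
{
	int ret;

	/* process context only: the clocks sit behind I2C/SCMI */
	ret = pm_runtime_get_sync(foo->dev);
	if (ret < 0) {
		pm_runtime_put_noidle(foo->dev);
		return ret;
	}

	/* ... talk to the hardware ... */

	pm_runtime_mark_last_busy(foo->dev);
	pm_runtime_put_autosuspend(foo->dev);
	return 0;
}

and never from an interrupt handler or under a spinlock, nor after
pm_runtime_irq_safe() has been called on that device.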


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-05  3:17 [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep Nicolas Pitre
  2021-01-17 23:49 ` Nicolas Pitre
  2021-01-19 18:45 ` Kevin Hilman
@ 2021-01-21  9:11 ` Naresh Kamboju
  2021-01-21 10:58   ` Geert Uytterhoeven
  2 siblings, 1 reply; 26+ messages in thread
From: Naresh Kamboju @ 2021-01-21  9:11 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk, open list,
	Mark Brown, Arnd Bergmann

Hi Nicolas,

On Tue, 5 Jan 2021 at 08:48, Nicolas Pitre <npitre@baylibre.com> wrote:
>
> The clock API splits its interface into sleepable ant atomic contexts:
>
> - clk_prepare/clk_unprepare for stuff that might sleep
>
> - clk_enable_clk_disable for anything that may be done in atomic context
>

<trim>

>
> Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
>
> diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> index ced6863a16..a62fb0f9b1 100644
> --- a/drivers/base/power/clock_ops.c
> +++ b/drivers/base/power/clock_ops.c

<trim>

> @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
>                 ce->clk = clk_get(dev, ce->con_id);
>         if (IS_ERR(ce->clk)) {
>                 ce->status = PCE_STATUS_ERROR;
> +               return;
> +       } else if (clk_is_enabled_when_prepared(ce->clk)) {

arm-linux-gnueabihf-ld: drivers/base/power/clock_ops.o: in function `pm_clk_acquire':
drivers/base/power/clock_ops.c:170: undefined reference to `clk_is_enabled_when_prepared'

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>

This build error was noticed on the arm architecture with the linux-next 20210121 tag.
The following builds failed:
 - arm (omap1_defconfig) with clang-10 - FAILED
 - arm (omap1_defconfig) with clang-11 - FAILED

 - arm (omap1_defconfig) with gcc-8 - FAILED
 - arm (omap1_defconfig) with gcc-9 - FAILED
 - arm (omap1_defconfig) with gcc-10 - FAILED

Steps to reproduce:
---------------------------
1)

# TuxMake is a command line tool and Python library that provides
# portable and repeatable Linux kernel builds across a variety of
# architectures, toolchains, kernel configurations, and make targets.
#
# TuxMake supports the concept of runtimes.
# See https://docs.tuxmake.org/runtimes/, for that to work it requires
# that you install podman or docker on your system.
#
# To install tuxmake on your system globally:
# sudo pip3 install -U tuxmake
#
# See https://docs.tuxmake.org/ for complete documentation.

tuxmake --runtime docker --target-arch arm --toolchain gcc-8 \
    --kconfig omap1_defconfig

2)
tuxbuild build \
    --git-repo https://gitlab.com/Linaro/lkft/mirrors/next/linux-next \
    --git-sha bc085f8fc88fc16796c9f2364e2bfb3fef305cad \
    --target-arch arm --toolchain gcc-8 --kconfig omap1_defconfig

-- 
Linaro LKFT
https://lkft.linaro.org

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21  9:11 ` Naresh Kamboju
@ 2021-01-21 10:58   ` Geert Uytterhoeven
  2021-01-21 12:09     ` Naresh Kamboju
  0 siblings, 1 reply; 26+ messages in thread
From: Geert Uytterhoeven @ 2021-01-21 10:58 UTC (permalink / raw)
  To: Naresh Kamboju
  Cc: Nicolas Pitre, Rafael J. Wysocki, Greg Kroah-Hartman,
	Michael Turquette, Stephen Boyd, Russell King, Linux PM,
	linux-clk, open list, Mark Brown, Arnd Bergmann

On Thu, Jan 21, 2021 at 10:13 AM Naresh Kamboju
<naresh.kamboju@linaro.org> wrote:
> On Tue, 5 Jan 2021 at 08:48, Nicolas Pitre <npitre@baylibre.com> wrote:
> >
> > The clock API splits its interface into sleepable ant atomic contexts:
> >
> > - clk_prepare/clk_unprepare for stuff that might sleep
> >
> > - clk_enable_clk_disable for anything that may be done in atomic context
> >
>
> <trim>
>
> >
> > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> >
> > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > index ced6863a16..a62fb0f9b1 100644
> > --- a/drivers/base/power/clock_ops.c
> > +++ b/drivers/base/power/clock_ops.c
>
> <trim>
>
> > @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
> >                 ce->clk = clk_get(dev, ce->con_id);
> >         if (IS_ERR(ce->clk)) {
> >                 ce->status = PCE_STATUS_ERROR;
> > +               return;
> > +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
>
> arm-linux-gnueabihf-ld: drivers/base/power/clock_ops.o: in function
> `pm_clk_acquire':
> drivers/base/power/clock_ops.c:170: undefined reference to
> `clk_is_enabled_when_prepared'
>
> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
>
> This build error was noticed on arm architecture on linux next 20210121 tag.
> Following builds failed.
>  - arm (omap1_defconfig) with clang-10 - FAILED
>  - arm (omap1_defconfig) with clang-11 - FAILED
>
>  - arm (omap1_defconfig) with gcc-8 - FAILED
>  - arm (omap1_defconfig) with gcc-9 - FAILED
>  - arm (omap1_defconfig) with gcc-10 - FAILED

Missing dummy clk_is_enabled_when_prepared() for the
!CONFIG_HAVE_CLK case?
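
Something along these lines, perhaps (untested sketch, next to the other
!CONFIG_HAVE_CLK stubs in include/linux/clk.h):

static inline bool clk_is_enabled_when_prepared(struct clk *clk)
{
	/* without a clk framework nothing is ever prepared or enabled */
	return false;
}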

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 10:58   ` Geert Uytterhoeven
@ 2021-01-21 12:09     ` Naresh Kamboju
  2021-01-21 14:44       ` Rafael J. Wysocki
  0 siblings, 1 reply; 26+ messages in thread
From: Naresh Kamboju @ 2021-01-21 12:09 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Nicolas Pitre, Rafael J. Wysocki, Greg Kroah-Hartman,
	Michael Turquette, Stephen Boyd, Russell King, Linux PM,
	linux-clk, open list, Mark Brown, Arnd Bergmann

On Thu, 21 Jan 2021 at 16:28, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> On Thu, Jan 21, 2021 at 10:13 AM Naresh Kamboju
> <naresh.kamboju@linaro.org> wrote:
> > On Tue, 5 Jan 2021 at 08:48, Nicolas Pitre <npitre@baylibre.com> wrote:
> > >
> > > The clock API splits its interface into sleepable ant atomic contexts:
> > >
> > > - clk_prepare/clk_unprepare for stuff that might sleep
> > >
> > > - clk_enable_clk_disable for anything that may be done in atomic context
> > >
> >
> > <trim>
> >
> > >
> > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > >
> > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > index ced6863a16..a62fb0f9b1 100644
> > > --- a/drivers/base/power/clock_ops.c
> > > +++ b/drivers/base/power/clock_ops.c
> >
> > <trim>
> >
> > > @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
> > >                 ce->clk = clk_get(dev, ce->con_id);
> > >         if (IS_ERR(ce->clk)) {
> > >                 ce->status = PCE_STATUS_ERROR;
> > > +               return;
> > > +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
> >
> > arm-linux-gnueabihf-ld: drivers/base/power/clock_ops.o: in function
> > `pm_clk_acquire':
> > drivers/base/power/clock_ops.c:170: undefined reference to
> > `clk_is_enabled_when_prepared'
> >
> > Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> >
> > This build error was noticed on arm architecture on linux next 20210121 tag.
> > Following builds failed.
> >  - arm (omap1_defconfig) with clang-10 - FAILED
> >  - arm (omap1_defconfig) with clang-11 - FAILED
> >
> >  - arm (omap1_defconfig) with gcc-8 - FAILED
> >  - arm (omap1_defconfig) with gcc-9 - FAILED
> >  - arm (omap1_defconfig) with gcc-10 - FAILED
>
> Missing dummy clk_is_enabled_when_prepared() for the
> !CONFIG_HAVE_CLK case?

I see these configs enabled in the failed builds' config file:

CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_LEGACY_CLK=y

ref:
https://builds.tuxbuild.com/1nN0vkpNP4qhvIuIJN12j7tTpQs/

- Naresh

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 12:09     ` Naresh Kamboju
@ 2021-01-21 14:44       ` Rafael J. Wysocki
  2021-01-21 16:18         ` Nicolas Pitre
  2021-01-21 17:23         ` [PATCH v2] " Nicolas Pitre
  0 siblings, 2 replies; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-21 14:44 UTC (permalink / raw)
  To: Naresh Kamboju
  Cc: Geert Uytterhoeven, Nicolas Pitre, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Thu, Jan 21, 2021 at 1:11 PM Naresh Kamboju
<naresh.kamboju@linaro.org> wrote:
>
> On Thu, 21 Jan 2021 at 16:28, Geert Uytterhoeven <geert@linux-m68k.org> wrote:
> >
> > On Thu, Jan 21, 2021 at 10:13 AM Naresh Kamboju
> > <naresh.kamboju@linaro.org> wrote:
> > > On Tue, 5 Jan 2021 at 08:48, Nicolas Pitre <npitre@baylibre.com> wrote:
> > > >
> > > > The clock API splits its interface into sleepable ant atomic contexts:
> > > >
> > > > - clk_prepare/clk_unprepare for stuff that might sleep
> > > >
> > > > - clk_enable_clk_disable for anything that may be done in atomic context
> > > >
> > >
> > > <trim>
> > >
> > > >
> > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > > >
> > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > > index ced6863a16..a62fb0f9b1 100644
> > > > --- a/drivers/base/power/clock_ops.c
> > > > +++ b/drivers/base/power/clock_ops.c
> > >
> > > <trim>
> > >
> > > > @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
> > > >                 ce->clk = clk_get(dev, ce->con_id);
> > > >         if (IS_ERR(ce->clk)) {
> > > >                 ce->status = PCE_STATUS_ERROR;
> > > > +               return;
> > > > +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
> > >
> > > arm-linux-gnueabihf-ld: drivers/base/power/clock_ops.o: in function
> > > `pm_clk_acquire':
> > > drivers/base/power/clock_ops.c:170: undefined reference to
> > > `clk_is_enabled_when_prepared'
> > >
> > > Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> > >
> > > This build error was noticed on arm architecture on linux next 20210121 tag.
> > > Following builds failed.
> > >  - arm (omap1_defconfig) with clang-10 - FAILED
> > >  - arm (omap1_defconfig) with clang-11 - FAILED
> > >
> > >  - arm (omap1_defconfig) with gcc-8 - FAILED
> > >  - arm (omap1_defconfig) with gcc-9 - FAILED
> > >  - arm (omap1_defconfig) with gcc-10 - FAILED
> >
> > Missing dummy clk_is_enabled_when_prepared() for the
> > !CONFIG_HAVE_CLK case?
>
> I see these configs enabled in failed builds config file,
>
> CONFIG_HAVE_CLK=y
> CONFIG_CLKDEV_LOOKUP=y
> CONFIG_HAVE_LEGACY_CLK=y
>
> ref:
> https://builds.tuxbuild.com/1nN0vkpNP4qhvIuIJN12j7tTpQs/

So I'm going to drop this patch from linux-next until the issue is
resolved, thanks!

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 14:44       ` Rafael J. Wysocki
@ 2021-01-21 16:18         ` Nicolas Pitre
  2021-01-21 17:23         ` [PATCH v2] " Nicolas Pitre
  1 sibling, 0 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-21 16:18 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:

> On Thu, Jan 21, 2021 at 1:11 PM Naresh Kamboju
> <naresh.kamboju@linaro.org> wrote:
> >
> > ref:
> > https://builds.tuxbuild.com/1nN0vkpNP4qhvIuIJN12j7tTpQs/
> 
> So I'm going to drop this patch from linux-next until the issue is
> resolved, thanks!

No problem - I'm on it.

Thanks Naresh for reporting the issue.


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 14:44       ` Rafael J. Wysocki
  2021-01-21 16:18         ` Nicolas Pitre
@ 2021-01-21 17:23         ` Nicolas Pitre
  2021-01-21 19:01           ` Rafael J. Wysocki
  1 sibling, 1 reply; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-21 17:23 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

The clock API splits its interface into sleepable and atomic contexts:

- clk_prepare/clk_unprepare for stuff that might sleep

- clk_enable/clk_disable for anything that may be done in atomic context

The code handling runtime PM for clocks only calls clk_disable() on
suspend requests, and clk_enable on resume requests. This means that
runtime PM with clock providers that only have the prepare/unprepare
methods implemented is basically useless.

Many clock implementations can't accommodate atomic contexts. This is
often the case when communication with the clock happens through another
subsystem like I2C or SCMI.

Let's make the clock PM code useful with such clocks by safely invoking
clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
such clocks are registered with the PM layer then pm_runtime_irq_safe()
can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
may be invoked in atomic context.

For clocks that do implement the enable and disable methods then
everything just works as before.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>

---

On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:

> So I'm going to drop this patch from linux-next until the issue is
> resolved, thanks!

Here's the fixed version.

Changes from v1:

- Moved clk_is_enabled_when_prepared() declaration under 
  CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that 
  config option is unset.

diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index ced6863a16..a62fb0f9b1 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -23,6 +23,7 @@
 enum pce_status {
 	PCE_STATUS_NONE = 0,
 	PCE_STATUS_ACQUIRED,
+	PCE_STATUS_PREPARED,
 	PCE_STATUS_ENABLED,
 	PCE_STATUS_ERROR,
 };
@@ -32,8 +33,102 @@ struct pm_clock_entry {
 	char *con_id;
 	struct clk *clk;
 	enum pce_status status;
+	bool enabled_when_prepared;
 };
 
+/**
+ * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
+ *		      entry list.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clk_op_might_sleep count to be modified.
+ *
+ * Get exclusive access before modifying the PM clock entry list and the
+ * clock_op_might_sleep count to guard against concurrent modifications.
+ * This also protects against a concurrent clock_op_might_sleep and PM clock
+ * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
+ * happen in atomic context, hence both the mutex and the spinlock must be
+ * taken here.
+ */
+static void pm_clk_list_lock(struct pm_subsys_data *psd)
+{
+	mutex_lock(&psd->clock_mutex);
+	spin_lock_irq(&psd->lock);
+}
+
+/**
+ * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_list_lock().
+ */
+static void pm_clk_list_unlock(struct pm_subsys_data *psd)
+{
+	spin_unlock_irq(&psd->lock);
+	mutex_unlock(&psd->clock_mutex);
+}
+
+/**
+ * pm_clk_op_lock - ensure exclusive access for performing clock operations.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clk_op_might_sleep count being used.
+ * @flags: stored irq flags.
+ * @fn: string for the caller function's name.
+ *
+ * This is used by pm_clk_suspend() and pm_clk_resume() to guard
+ * against concurrent modifications to the clock entry list and the
+ * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
+ * only the mutex can be locked and those functions can only be used in
+ * non atomic context. If clock_op_might_sleep == 0 then these functions
+ * may be used in any context and only the spinlock can be locked.
+ * Returns -EPERM if called in atomic context when clock ops might sleep.
+ */
+static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
+			  const char *fn)
+{
+	bool atomic_context = in_atomic() || irqs_disabled();
+
+try_again:
+	spin_lock_irqsave(&psd->lock, *flags);
+	if (!psd->clock_op_might_sleep)
+		return 0;
+
+	/* bail out if in atomic context */
+	if (atomic_context) {
+		pr_err("%s: atomic context with clock_ops_might_sleep = %d",
+		       fn, psd->clock_op_might_sleep);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+		might_sleep();
+		return -EPERM;
+	}
+
+	/* we must switch to the mutex */
+	spin_unlock_irqrestore(&psd->lock, *flags);
+	mutex_lock(&psd->clock_mutex);
+
+	/*
+	 * There was a possibility for psd->clock_op_might_sleep
+	 * to become 0 above. Keep the mutex only if not the case.
+	 */
+	if (likely(psd->clock_op_might_sleep))
+		return 0;
+
+	mutex_unlock(&psd->clock_mutex);
+	goto try_again;
+}
+
+/**
+ * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_op_lock().
+ * @flags: irq flags provided by pm_clk_op_lock().
+ */
+static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
+{
+	if (psd->clock_op_might_sleep)
+		mutex_unlock(&psd->clock_mutex);
+	else
+		spin_unlock_irqrestore(&psd->lock, *flags);
+}
+
 /**
  * pm_clk_enable - Enable a clock, reporting any errors
  * @dev: The device for the given clock
@@ -43,14 +138,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
 {
 	int ret;
 
-	if (ce->status < PCE_STATUS_ERROR) {
+	switch (ce->status) {
+	case PCE_STATUS_ACQUIRED:
+		ret = clk_prepare_enable(ce->clk);
+		break;
+	case PCE_STATUS_PREPARED:
 		ret = clk_enable(ce->clk);
-		if (!ret)
-			ce->status = PCE_STATUS_ENABLED;
-		else
-			dev_err(dev, "%s: failed to enable clk %p, error %d\n",
-				__func__, ce->clk, ret);
+		break;
+	default:
+		return;
 	}
+	if (!ret)
+		ce->status = PCE_STATUS_ENABLED;
+	else
+		dev_err(dev, "%s: failed to enable clk %p, error %d\n",
+			__func__, ce->clk, ret);
 }
 
 /**
@@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
 		ce->clk = clk_get(dev, ce->con_id);
 	if (IS_ERR(ce->clk)) {
 		ce->status = PCE_STATUS_ERROR;
+		return;
+	} else if (clk_is_enabled_when_prepared(ce->clk)) {
+		/* we defer preparing the clock in that case */
+		ce->status = PCE_STATUS_ACQUIRED;
+		ce->enabled_when_prepared = true;
+	} else if (clk_prepare(ce->clk)) {
+		ce->status = PCE_STATUS_ERROR;
+		dev_err(dev, "clk_prepare() failed\n");
+		return;
 	} else {
-		if (clk_prepare(ce->clk)) {
-			ce->status = PCE_STATUS_ERROR;
-			dev_err(dev, "clk_prepare() failed\n");
-		} else {
-			ce->status = PCE_STATUS_ACQUIRED;
-			dev_dbg(dev,
-				"Clock %pC con_id %s managed by runtime PM.\n",
-				ce->clk, ce->con_id);
-		}
+		ce->status = PCE_STATUS_PREPARED;
 	}
+	dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
+		ce->clk, ce->con_id);
 }
 
 static int __pm_clk_add(struct device *dev, const char *con_id,
@@ -106,9 +211,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
 
 	pm_clk_acquire(dev, ce);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 	list_add_tail(&ce->node, &psd->clock_list);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep++;
+	pm_clk_list_unlock(psd);
 	return 0;
 }
 
@@ -239,14 +346,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
 	if (!ce)
 		return;
 
-	if (ce->status < PCE_STATUS_ERROR) {
-		if (ce->status == PCE_STATUS_ENABLED)
-			clk_disable(ce->clk);
-
-		if (ce->status >= PCE_STATUS_ACQUIRED) {
-			clk_unprepare(ce->clk);
+	switch (ce->status) {
+	case PCE_STATUS_ENABLED:
+		clk_disable(ce->clk);
+		fallthrough;
+	case PCE_STATUS_PREPARED:
+		clk_unprepare(ce->clk);
+		fallthrough;
+	case PCE_STATUS_ACQUIRED:
+	case PCE_STATUS_ERROR:
+		if (!IS_ERR(ce->clk))
 			clk_put(ce->clk);
-		}
+		break;
+	default:
+		break;
 	}
 
 	kfree(ce->con_id);
@@ -269,7 +382,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 	if (!psd)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (!con_id && !ce->con_id)
@@ -280,12 +393,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -307,19 +422,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
 	if (!psd || !clk)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (clk == ce->clk)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -330,13 +447,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
  * @dev: Device to initialize the list of PM clocks for.
  *
  * Initialize the lock and clock_list members of the device's pm_subsys_data
- * object.
+ * object, set the count of clocks that might sleep to 0.
  */
 void pm_clk_init(struct device *dev)
 {
 	struct pm_subsys_data *psd = dev_to_psd(dev);
-	if (psd)
+	if (psd) {
 		INIT_LIST_HEAD(&psd->clock_list);
+		mutex_init(&psd->clock_mutex);
+		psd->clock_op_might_sleep = 0;
+	}
 }
 EXPORT_SYMBOL_GPL(pm_clk_init);
 
@@ -372,12 +492,13 @@ void pm_clk_destroy(struct device *dev)
 
 	INIT_LIST_HEAD(&list);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
 		list_move(&ce->node, &list);
+	psd->clock_op_might_sleep = 0;
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 
 	dev_pm_put_subsys_data(dev);
 
@@ -397,23 +518,30 @@ int pm_clk_suspend(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry_reverse(ce, &psd->clock_list, node) {
-		if (ce->status < PCE_STATUS_ERROR) {
-			if (ce->status == PCE_STATUS_ENABLED)
+		if (ce->status == PCE_STATUS_ENABLED) {
+			if (ce->enabled_when_prepared) {
+				clk_disable_unprepare(ce->clk);
+				ce->status = PCE_STATUS_ACQUIRED;
+			} else {
 				clk_disable(ce->clk);
-			ce->status = PCE_STATUS_ACQUIRED;
+				ce->status = PCE_STATUS_PREPARED;
+			}
 		}
 	}
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
@@ -428,18 +556,21 @@ int pm_clk_resume(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry(ce, &psd->clock_list, node)
 		__pm_clk_enable(dev, ce);
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 8c1d04db99..3d751ae5bc 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_enable);
 
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return clk && !(clk->core->ops->enable && clk->core->ops->disable);
+}
+EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
+
 static int clk_core_prepare_enable(struct clk_core *core)
 {
 	int ret;
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 31ff1bf1b7..a4a86aa8b1 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
 
 #endif
 
+#ifdef CONFIG_HAVE_CLK_PREPARE
 /**
  * clk_prepare - prepare a clock source
  * @clk: clock source
@@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
  *
  * Must not be called from within atomic context.
  */
-#ifdef CONFIG_HAVE_CLK_PREPARE
 int clk_prepare(struct clk *clk);
 int __must_check clk_bulk_prepare(int num_clks,
 				  const struct clk_bulk_data *clks);
+
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to the power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk);
 #else
 static inline int clk_prepare(struct clk *clk)
 {
@@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
 	might_sleep();
 	return 0;
 }
+
+static inline bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return false;
+}
 #endif
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 47aca6bac1..482313a8cc 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -537,6 +537,8 @@ struct pm_subsys_data {
 	spinlock_t lock;
 	unsigned int refcount;
 #ifdef CONFIG_PM_CLK
+	unsigned int clock_op_might_sleep;
+	struct mutex clock_mutex;
 	struct list_head clock_list;
 #endif
 #ifdef CONFIG_PM_GENERIC_DOMAINS

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 17:23         ` [PATCH v2] " Nicolas Pitre
@ 2021-01-21 19:01           ` Rafael J. Wysocki
  2021-01-22 15:09             ` Rafael J. Wysocki
  0 siblings, 1 reply; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-21 19:01 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Naresh Kamboju, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk, open list,
	Mark Brown, Arnd Bergmann

On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
>
> The clock API splits its interface into sleepable and atomic contexts:
>
> - clk_prepare/clk_unprepare for stuff that might sleep
>
> - clk_enable/clk_disable for anything that may be done in atomic context
>
> The code handling runtime PM for clocks only calls clk_disable() on
> suspend requests, and clk_enable on resume requests. This means that
> runtime PM with clock providers that only have the prepare/unprepare
> methods implemented is basically useless.
>
> Many clock implementations can't accommodate atomic contexts. This is
> often the case when communication with the clock happens through another
> subsystem like I2C or SCMI.
>
> Let's make the clock PM code useful with such clocks by safely invoking
> clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> such clocks are registered with the PM layer then pm_runtime_irq_safe()
> can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> may be invoked in atomic context.
>
> For clocks that do implement the enable and disable methods then
> everything just works as before.
>
> Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
>
> ---
>
> On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
>
> > So I'm going to drop this patch from linux-next until the issue is
> > resolved, thanks!
>
> Here's the fixed version.

Applied instead of the v1, thanks!

> Changes from v1:
>
> - Moved clk_is_enabled_when_prepared() declaration under
>   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
>   config option is unset.
>
> diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> index ced6863a16..a62fb0f9b1 100644
> --- a/drivers/base/power/clock_ops.c
> +++ b/drivers/base/power/clock_ops.c
> @@ -23,6 +23,7 @@
>  enum pce_status {
>         PCE_STATUS_NONE = 0,
>         PCE_STATUS_ACQUIRED,
> +       PCE_STATUS_PREPARED,
>         PCE_STATUS_ENABLED,
>         PCE_STATUS_ERROR,
>  };
> @@ -32,8 +33,102 @@ struct pm_clock_entry {
>         char *con_id;
>         struct clk *clk;
>         enum pce_status status;
> +       bool enabled_when_prepared;
>  };
>
> +/**
> + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> + *                   entry list.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count to be modified.
> + *
> + * Get exclusive access before modifying the PM clock entry list and the
> + * clock_op_might_sleep count to guard against concurrent modifications.
> + * This also protects against a concurrent clock_op_might_sleep and PM clock
> + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> + * happen in atomic context, hence both the mutex and the spinlock must be
> + * taken here.
> + */
> +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> +{
> +       mutex_lock(&psd->clock_mutex);
> +       spin_lock_irq(&psd->lock);
> +}
> +
> +/**
> + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_list_lock().
> + */
> +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> +{
> +       spin_unlock_irq(&psd->lock);
> +       mutex_unlock(&psd->clock_mutex);
> +}
> +
> +/**
> + * pm_clk_op_lock - ensure exclusive access for performing clock operations.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count being used.
> + * @flags: stored irq flags.
> + * @fn: string for the caller function's name.
> + *
> + * This is used by pm_clk_suspend() and pm_clk_resume() to guard
> + * against concurrent modifications to the clock entry list and the
> + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
> + * only the mutex can be locked and those functions can only be used in
> + * non atomic context. If clock_op_might_sleep == 0 then these functions
> + * may be used in any context and only the spinlock can be locked.
> + * Returns -EPERM if called in atomic context when clock ops might sleep.
> + */
> +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
> +                         const char *fn)
> +{
> +       bool atomic_context = in_atomic() || irqs_disabled();
> +
> +try_again:
> +       spin_lock_irqsave(&psd->lock, *flags);
> +       if (!psd->clock_op_might_sleep)
> +               return 0;
> +
> +       /* bail out if in atomic context */
> +       if (atomic_context) {
> +               pr_err("%s: atomic context with clock_ops_might_sleep = %d",
> +                      fn, psd->clock_op_might_sleep);
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +               might_sleep();
> +               return -EPERM;
> +       }
> +
> +       /* we must switch to the mutex */
> +       spin_unlock_irqrestore(&psd->lock, *flags);
> +       mutex_lock(&psd->clock_mutex);
> +
> +       /*
> +        * There was a possibility for psd->clock_op_might_sleep
> +        * to become 0 above. Keep the mutex only if not the case.
> +        */
> +       if (likely(psd->clock_op_might_sleep))
> +               return 0;
> +
> +       mutex_unlock(&psd->clock_mutex);
> +       goto try_again;
> +}
> +
> +/**
> + * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_op_lock().
> + * @flags: irq flags provided by pm_clk_op_lock().
> + */
> +static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
> +{
> +       if (psd->clock_op_might_sleep)
> +               mutex_unlock(&psd->clock_mutex);
> +       else
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +}
> +
>  /**
>   * pm_clk_enable - Enable a clock, reporting any errors
>   * @dev: The device for the given clock
> @@ -43,14 +138,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
>  {
>         int ret;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> +       switch (ce->status) {
> +       case PCE_STATUS_ACQUIRED:
> +               ret = clk_prepare_enable(ce->clk);
> +               break;
> +       case PCE_STATUS_PREPARED:
>                 ret = clk_enable(ce->clk);
> -               if (!ret)
> -                       ce->status = PCE_STATUS_ENABLED;
> -               else
> -                       dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> -                               __func__, ce->clk, ret);
> +               break;
> +       default:
> +               return;
>         }
> +       if (!ret)
> +               ce->status = PCE_STATUS_ENABLED;
> +       else
> +               dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> +                       __func__, ce->clk, ret);
>  }
>
>  /**
> @@ -64,17 +166,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
>                 ce->clk = clk_get(dev, ce->con_id);
>         if (IS_ERR(ce->clk)) {
>                 ce->status = PCE_STATUS_ERROR;
> +               return;
> +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
> +               /* we defer preparing the clock in that case */
> +               ce->status = PCE_STATUS_ACQUIRED;
> +               ce->enabled_when_prepared = true;
> +       } else if (clk_prepare(ce->clk)) {
> +               ce->status = PCE_STATUS_ERROR;
> +               dev_err(dev, "clk_prepare() failed\n");
> +               return;
>         } else {
> -               if (clk_prepare(ce->clk)) {
> -                       ce->status = PCE_STATUS_ERROR;
> -                       dev_err(dev, "clk_prepare() failed\n");
> -               } else {
> -                       ce->status = PCE_STATUS_ACQUIRED;
> -                       dev_dbg(dev,
> -                               "Clock %pC con_id %s managed by runtime PM.\n",
> -                               ce->clk, ce->con_id);
> -               }
> +               ce->status = PCE_STATUS_PREPARED;
>         }
> +       dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
> +               ce->clk, ce->con_id);
>  }
>
>  static int __pm_clk_add(struct device *dev, const char *con_id,
> @@ -106,9 +211,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
>
>         pm_clk_acquire(dev, ce);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>         list_add_tail(&ce->node, &psd->clock_list);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep++;
> +       pm_clk_list_unlock(psd);
>         return 0;
>  }
>
> @@ -239,14 +346,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
>         if (!ce)
>                 return;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> -               if (ce->status == PCE_STATUS_ENABLED)
> -                       clk_disable(ce->clk);
> -
> -               if (ce->status >= PCE_STATUS_ACQUIRED) {
> -                       clk_unprepare(ce->clk);
> +       switch (ce->status) {
> +       case PCE_STATUS_ENABLED:
> +               clk_disable(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_PREPARED:
> +               clk_unprepare(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_ACQUIRED:
> +       case PCE_STATUS_ERROR:
> +               if (!IS_ERR(ce->clk))
>                         clk_put(ce->clk);
> -               }
> +               break;
> +       default:
> +               break;
>         }
>
>         kfree(ce->con_id);
> @@ -269,7 +382,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>         if (!psd)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (!con_id && !ce->con_id)
> @@ -280,12 +393,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -307,19 +422,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
>         if (!psd || !clk)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (clk == ce->clk)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -330,13 +447,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
>   * @dev: Device to initialize the list of PM clocks for.
>   *
>   * Initialize the lock and clock_list members of the device's pm_subsys_data
> - * object.
> + * object, set the count of clocks that might sleep to 0.
>   */
>  void pm_clk_init(struct device *dev)
>  {
>         struct pm_subsys_data *psd = dev_to_psd(dev);
> -       if (psd)
> +       if (psd) {
>                 INIT_LIST_HEAD(&psd->clock_list);
> +               mutex_init(&psd->clock_mutex);
> +               psd->clock_op_might_sleep = 0;
> +       }
>  }
>  EXPORT_SYMBOL_GPL(pm_clk_init);
>
> @@ -372,12 +492,13 @@ void pm_clk_destroy(struct device *dev)
>
>         INIT_LIST_HEAD(&list);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
>                 list_move(&ce->node, &list);
> +       psd->clock_op_might_sleep = 0;
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>
>         dev_pm_put_subsys_data(dev);
>
> @@ -397,23 +518,30 @@ int pm_clk_suspend(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry_reverse(ce, &psd->clock_list, node) {
> -               if (ce->status < PCE_STATUS_ERROR) {
> -                       if (ce->status == PCE_STATUS_ENABLED)
> +               if (ce->status == PCE_STATUS_ENABLED) {
> +                       if (ce->enabled_when_prepared) {
> +                               clk_disable_unprepare(ce->clk);
> +                               ce->status = PCE_STATUS_ACQUIRED;
> +                       } else {
>                                 clk_disable(ce->clk);
> -                       ce->status = PCE_STATUS_ACQUIRED;
> +                               ce->status = PCE_STATUS_PREPARED;
> +                       }
>                 }
>         }
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> @@ -428,18 +556,21 @@ int pm_clk_resume(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry(ce, &psd->clock_list, node)
>                 __pm_clk_enable(dev, ce);
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index 8c1d04db99..3d751ae5bc 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
>  }
>  EXPORT_SYMBOL_GPL(clk_enable);
>
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return clk && !(clk->core->ops->enable && clk->core->ops->disable);
> +}
> +EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
> +
>  static int clk_core_prepare_enable(struct clk_core *core)
>  {
>         int ret;
> diff --git a/include/linux/clk.h b/include/linux/clk.h
> index 31ff1bf1b7..a4a86aa8b1 100644
> --- a/include/linux/clk.h
> +++ b/include/linux/clk.h
> @@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>
>  #endif
>
> +#ifdef CONFIG_HAVE_CLK_PREPARE
>  /**
>   * clk_prepare - prepare a clock source
>   * @clk: clock source
> @@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>   *
>   * Must not be called from within atomic context.
>   */
> -#ifdef CONFIG_HAVE_CLK_PREPARE
>  int clk_prepare(struct clk *clk);
>  int __must_check clk_bulk_prepare(int num_clks,
>                                   const struct clk_bulk_data *clks);
> +
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to the power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk);
>  #else
>  static inline int clk_prepare(struct clk *clk)
>  {
> @@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
>         might_sleep();
>         return 0;
>  }
> +
> +static inline bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return false;
> +}
>  #endif
>
>  /**
> diff --git a/include/linux/pm.h b/include/linux/pm.h
> index 47aca6bac1..482313a8cc 100644
> --- a/include/linux/pm.h
> +++ b/include/linux/pm.h
> @@ -537,6 +537,8 @@ struct pm_subsys_data {
>         spinlock_t lock;
>         unsigned int refcount;
>  #ifdef CONFIG_PM_CLK
> +       unsigned int clock_op_might_sleep;
> +       struct mutex clock_mutex;
>         struct list_head clock_list;
>  #endif
>  #ifdef CONFIG_PM_GENERIC_DOMAINS

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-21 19:01           ` Rafael J. Wysocki
@ 2021-01-22 15:09             ` Rafael J. Wysocki
  2021-01-22 15:48               ` Naresh Kamboju
  2021-01-22 15:52               ` [PATCH v2] " Nicolas Pitre
  0 siblings, 2 replies; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-22 15:09 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Nicolas Pitre, Naresh Kamboju, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk, open list,
	Mark Brown, Arnd Bergmann

On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> >
> > The clock API splits its interface into sleepable ant atomic contexts:
> >
> > - clk_prepare/clk_unprepare for stuff that might sleep
> >
> > - clk_enable_clk_disable for anything that may be done in atomic context
> >
> > The code handling runtime PM for clocks only calls clk_disable() on
> > suspend requests, and clk_enable on resume requests. This means that
> > runtime PM with clock providers that only have the prepare/unprepare
> > methods implemented is basically useless.
> >
> > Many clock implementations can't accommodate atomic contexts. This is
> > often the case when communication with the clock happens through another
> > subsystem like I2C or SCMI.
> >
> > Let's make the clock PM code useful with such clocks by safely invoking
> > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > may be invoked in atomic context.
> >
> > For clocks that do implement the enable and disable methods then
> > everything just works as before.
> >
> > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> >
> > ---
> >
> > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> >
> > > So I'm going to drop this patch from linux-next until the issue is
> > > resolved, thanks!
> >
> > Here's the fixed version.
>
> Applied instead of the v1, thanks!
>
> > Changes from v1:
> >
> > - Moved clk_is_enabled_when_prepared() declaration under
> >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> >   config option is unset.
> >
> > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > index ced6863a16..a62fb0f9b1 100644
> > --- a/drivers/base/power/clock_ops.c
> > +++ b/drivers/base/power/clock_ops.c
> > @@ -23,6 +23,7 @@
> >  enum pce_status {
> >         PCE_STATUS_NONE = 0,
> >         PCE_STATUS_ACQUIRED,
> > +       PCE_STATUS_PREPARED,
> >         PCE_STATUS_ENABLED,
> >         PCE_STATUS_ERROR,
> >  };
> > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> >         char *con_id;
> >         struct clk *clk;
> >         enum pce_status status;
> > +       bool enabled_when_prepared;
> >  };
> >
> > +/**
> > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > + *                   entry list.
> > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > + *      and clk_op_might_sleep count to be modified.
> > + *
> > + * Get exclusive access before modifying the PM clock entry list and the
> > + * clock_op_might_sleep count to guard against concurrent modifications.
> > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > + * happen in atomic context, hence both the mutex and the spinlock must be
> > + * taken here.
> > + */
> > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > +{
> > +       mutex_lock(&psd->clock_mutex);
> > +       spin_lock_irq(&psd->lock);
> > +}
> > +
> > +/**
> > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > + * @psd: the same pm_subsys_data instance previously passed to
> > + *      pm_clk_list_lock().
> > + */
> > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)

Locking annotations for sparse were missing here and above, so I've
added them by hand.

Please double check the result in my linux-next branch (just pushed).

Thanks!

> > +{
> > +       spin_unlock_irq(&psd->lock);
> > +       mutex_unlock(&psd->clock_mutex);
> > +}

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:09             ` Rafael J. Wysocki
@ 2021-01-22 15:48               ` Naresh Kamboju
  2021-01-22 15:56                 ` Rafael J. Wysocki
  2021-01-22 15:52               ` [PATCH v2] " Nicolas Pitre
  1 sibling, 1 reply; 26+ messages in thread
From: Naresh Kamboju @ 2021-01-22 15:48 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Nicolas Pitre, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> > >
> > > The clock API splits its interface into sleepable ant atomic contexts:
> > >
> > > - clk_prepare/clk_unprepare for stuff that might sleep
> > >
> > > - clk_enable_clk_disable for anything that may be done in atomic context
> > >
> > > The code handling runtime PM for clocks only calls clk_disable() on
> > > suspend requests, and clk_enable on resume requests. This means that
> > > runtime PM with clock providers that only have the prepare/unprepare
> > > methods implemented is basically useless.
> > >
> > > Many clock implementations can't accommodate atomic contexts. This is
> > > often the case when communication with the clock happens through another
> > > subsystem like I2C or SCMI.
> > >
> > > Let's make the clock PM code useful with such clocks by safely invoking
> > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > may be invoked in atomic context.
> > >
> > > For clocks that do implement the enable and disable methods then
> > > everything just works as before.
> > >
> > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > >
> > > ---
> > >
> > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > >
> > > > So I'm going to drop this patch from linux-next until the issue is
> > > > resolved, thanks!
> > >
> > > Here's the fixed version.
> >
> > Applied instead of the v1, thanks!
> >
> > > Changes from v1:
> > >
> > > - Moved clk_is_enabled_when_prepared() declaration under
> > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > >   config option is unset.
> > >
> > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > index ced6863a16..a62fb0f9b1 100644
> > > --- a/drivers/base/power/clock_ops.c
> > > +++ b/drivers/base/power/clock_ops.c
> > > @@ -23,6 +23,7 @@
> > >  enum pce_status {
> > >         PCE_STATUS_NONE = 0,
> > >         PCE_STATUS_ACQUIRED,
> > > +       PCE_STATUS_PREPARED,
> > >         PCE_STATUS_ENABLED,
> > >         PCE_STATUS_ERROR,
> > >  };
> > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > >         char *con_id;
> > >         struct clk *clk;
> > >         enum pce_status status;
> > > +       bool enabled_when_prepared;
> > >  };
> > >
> > > +/**
> > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > + *                   entry list.
> > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > + *      and clk_op_might_sleep count to be modified.
> > > + *
> > > + * Get exclusive access before modifying the PM clock entry list and the
> > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > + * taken here.
> > > + */
> > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > +{
> > > +       mutex_lock(&psd->clock_mutex);
> > > +       spin_lock_irq(&psd->lock);
> > > +}
> > > +
> > > +/**
> > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > + * @psd: the same pm_subsys_data instance previously passed to
> > > + *      pm_clk_list_lock().
> > > + */
> > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
>
> Locking annotations for sparse were missing here and above, so I've
> added them by hand.
>
> Please double check the result in my linux-next branch (just pushed).

May I request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>?

- Naresh

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:09             ` Rafael J. Wysocki
  2021-01-22 15:48               ` Naresh Kamboju
@ 2021-01-22 15:52               ` Nicolas Pitre
  1 sibling, 0 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-22 15:52 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Fri, 22 Jan 2021, Rafael J. Wysocki wrote:

> > > +/**
> > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > + * @psd: the same pm_subsys_data instance previously passed to
> > > + *      pm_clk_list_lock().
> > > + */
> > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> 
> Locking annotations for sparse were missing here and above, so I've
> added them by hand.

Thanks.

> Please double check the result in my linux-next branch (just pushed).

There are still the following warnings:

drivers/base/power/clock_ops.c:52:13: warning: context imbalance in 'pm_clk_list_lock' - wrong count at exit
drivers/base/power/clock_ops.c:64:13: warning: context imbalance in 'pm_clk_list_unlock' - wrong count at exit

I guess this can be silenced (still need to investigate how those 
annotations work).

But I'm more worried about these:

drivers/base/power/clock_ops.c:86:12: warning: context imbalance in 'pm_clk_op_lock' - different lock contexts for basic block
drivers/base/power/clock_ops.c:131:39: warning: context imbalance in 'pm_clk_op_unlock' - unexpected unlock

Those are special locking helpers indeed and I don't know if that can be 
dealt with.
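
For reference, the form sparse normally wants for the unconditional
helpers is roughly the following (a sketch only, reusing the helper
shape from the patch above; at best it addresses the first two
warnings, not the conditional pm_clk_op_lock()/pm_clk_op_unlock() case):

static void pm_clk_list_lock(struct pm_subsys_data *psd)
	__acquires(&psd->lock)
{
	/* lock order is the same everywhere: mutex first, then spinlock */
	mutex_lock(&psd->clock_mutex);
	spin_lock_irq(&psd->lock);
}

static void pm_clk_list_unlock(struct pm_subsys_data *psd)
	__releases(&psd->lock)
{
	spin_unlock_irq(&psd->lock);
	mutex_unlock(&psd->clock_mutex);
}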


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:48               ` Naresh Kamboju
@ 2021-01-22 15:56                 ` Rafael J. Wysocki
  2021-01-22 15:59                   ` Nicolas Pitre
  0 siblings, 1 reply; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-22 15:56 UTC (permalink / raw)
  To: Naresh Kamboju
  Cc: Rafael J. Wysocki, Nicolas Pitre, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk, open list,
	Mark Brown, Arnd Bergmann

On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju
<naresh.kamboju@linaro.org> wrote:
>
> On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> > > >
> > > > The clock API splits its interface into sleepable ant atomic contexts:
> > > >
> > > > - clk_prepare/clk_unprepare for stuff that might sleep
> > > >
> > > > - clk_enable_clk_disable for anything that may be done in atomic context
> > > >
> > > > The code handling runtime PM for clocks only calls clk_disable() on
> > > > suspend requests, and clk_enable on resume requests. This means that
> > > > runtime PM with clock providers that only have the prepare/unprepare
> > > > methods implemented is basically useless.
> > > >
> > > > Many clock implementations can't accommodate atomic contexts. This is
> > > > often the case when communication with the clock happens through another
> > > > subsystem like I2C or SCMI.
> > > >
> > > > Let's make the clock PM code useful with such clocks by safely invoking
> > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > > may be invoked in atomic context.
> > > >
> > > > For clocks that do implement the enable and disable methods then
> > > > everything just works as before.
> > > >
> > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > > >
> > > > ---
> > > >
> > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > > >
> > > > > So I'm going to drop this patch from linux-next until the issue is
> > > > > resolved, thanks!
> > > >
> > > > Here's the fixed version.
> > >
> > > Applied instead of the v1, thanks!
> > >
> > > > Changes from v1:
> > > >
> > > > - Moved clk_is_enabled_when_prepared() declaration under
> > > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > > >   config option is unset.
> > > >
> > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > > index ced6863a16..a62fb0f9b1 100644
> > > > --- a/drivers/base/power/clock_ops.c
> > > > +++ b/drivers/base/power/clock_ops.c
> > > > @@ -23,6 +23,7 @@
> > > >  enum pce_status {
> > > >         PCE_STATUS_NONE = 0,
> > > >         PCE_STATUS_ACQUIRED,
> > > > +       PCE_STATUS_PREPARED,
> > > >         PCE_STATUS_ENABLED,
> > > >         PCE_STATUS_ERROR,
> > > >  };
> > > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > > >         char *con_id;
> > > >         struct clk *clk;
> > > >         enum pce_status status;
> > > > +       bool enabled_when_prepared;
> > > >  };
> > > >
> > > > +/**
> > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > > + *                   entry list.
> > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > > + *      and clk_op_might_sleep count to be modified.
> > > > + *
> > > > + * Get exclusive access before modifying the PM clock entry list and the
> > > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > > + * taken here.
> > > > + */
> > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > > +{
> > > > +       mutex_lock(&psd->clock_mutex);
> > > > +       spin_lock_irq(&psd->lock);
> > > > +}
> > > > +
> > > > +/**
> > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > > + * @psd: the same pm_subsys_data instance previously passed to
> > > > + *      pm_clk_list_lock().
> > > > + */
> > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> >
> > Locking annotations for sparse were missing here and above, so I've
> > added them by hand.
> >
> > Please double check the result in my linux-next branch (just pushed).
>
> May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>

If this had been a patch fixing a problem reported by you, there would
have been a reason to add a Reported-by.

In this case, it is just a new version of a patch taking your testing
feedback into account.

I can add a Tested-by for you to it if desired, though.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:56                 ` Rafael J. Wysocki
@ 2021-01-22 15:59                   ` Nicolas Pitre
  2021-01-22 16:00                     ` Nicolas Pitre
  2021-01-22 16:02                     ` Rafael J. Wysocki
  0 siblings, 2 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-22 15:59 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Fri, 22 Jan 2021, Rafael J. Wysocki wrote:

> On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju
> <naresh.kamboju@linaro.org> wrote:
> >
> > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > >
> > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> > > > >
> > > > > The clock API splits its interface into sleepable ant atomic contexts:
> > > > >
> > > > > - clk_prepare/clk_unprepare for stuff that might sleep
> > > > >
> > > > > - clk_enable_clk_disable for anything that may be done in atomic context
> > > > >
> > > > > The code handling runtime PM for clocks only calls clk_disable() on
> > > > > suspend requests, and clk_enable on resume requests. This means that
> > > > > runtime PM with clock providers that only have the prepare/unprepare
> > > > > methods implemented is basically useless.
> > > > >
> > > > > Many clock implementations can't accommodate atomic contexts. This is
> > > > > often the case when communication with the clock happens through another
> > > > > subsystem like I2C or SCMI.
> > > > >
> > > > > Let's make the clock PM code useful with such clocks by safely invoking
> > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > > > may be invoked in atomic context.
> > > > >
> > > > > For clocks that do implement the enable and disable methods then
> > > > > everything just works as before.
> > > > >
> > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > > > >
> > > > > ---
> > > > >
> > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > > > >
> > > > > > So I'm going to drop this patch from linux-next until the issue is
> > > > > > resolved, thanks!
> > > > >
> > > > > Here's the fixed version.
> > > >
> > > > Applied instead of the v1, thanks!
> > > >
> > > > > Changes from v1:
> > > > >
> > > > > - Moved clk_is_enabled_when_prepared() declaration under
> > > > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > > > >   config option is unset.
> > > > >
> > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > > > index ced6863a16..a62fb0f9b1 100644
> > > > > --- a/drivers/base/power/clock_ops.c
> > > > > +++ b/drivers/base/power/clock_ops.c
> > > > > @@ -23,6 +23,7 @@
> > > > >  enum pce_status {
> > > > >         PCE_STATUS_NONE = 0,
> > > > >         PCE_STATUS_ACQUIRED,
> > > > > +       PCE_STATUS_PREPARED,
> > > > >         PCE_STATUS_ENABLED,
> > > > >         PCE_STATUS_ERROR,
> > > > >  };
> > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > > > >         char *con_id;
> > > > >         struct clk *clk;
> > > > >         enum pce_status status;
> > > > > +       bool enabled_when_prepared;
> > > > >  };
> > > > >
> > > > > +/**
> > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > > > + *                   entry list.
> > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > > > + *      and clk_op_might_sleep count to be modified.
> > > > > + *
> > > > > + * Get exclusive access before modifying the PM clock entry list and the
> > > > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > > > + * taken here.
> > > > > + */
> > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > > > +{
> > > > > +       mutex_lock(&psd->clock_mutex);
> > > > > +       spin_lock_irq(&psd->lock);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > > > + * @psd: the same pm_subsys_data instance previously passed to
> > > > > + *      pm_clk_list_lock().
> > > > > + */
> > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> > >
> > > Locking annotations for sparse were missing here and above, so I've
> > > added them by hand.
> > >
> > > Please double check the result in my linux-next branch (just pushed).
> >
> > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> 
> If this had been a patch fixing a problem reported by you, there would
> have been a reason to add a Reported-by,
> 
> In this case, it is just a new version of a patch taking your testing
> feedback into account.
> 
> I can add a Tested-by for you to it if desired, though.

It is probably fair to mention that Naresh reported the issue too.
My bad, I should have added the tag myself in v2.


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:59                   ` Nicolas Pitre
@ 2021-01-22 16:00                     ` Nicolas Pitre
  2021-01-22 16:02                     ` Rafael J. Wysocki
  1 sibling, 0 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-22 16:00 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, open list, Mark Brown,
	Arnd Bergmann

On Fri, 22 Jan 2021, Nicolas Pitre wrote:

> On Fri, 22 Jan 2021, Rafael J. Wysocki wrote:
> 
> > On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju
> > <naresh.kamboju@linaro.org> wrote:
> > >
> > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> > > > > >
> > > > > > The clock API splits its interface into sleepable ant atomic contexts:
> > > > > >
> > > > > > - clk_prepare/clk_unprepare for stuff that might sleep
> > > > > >
> > > > > > - clk_enable_clk_disable for anything that may be done in atomic context
> > > > > >
> > > > > > The code handling runtime PM for clocks only calls clk_disable() on
> > > > > > suspend requests, and clk_enable on resume requests. This means that
> > > > > > runtime PM with clock providers that only have the prepare/unprepare
> > > > > > methods implemented is basically useless.
> > > > > >
> > > > > > Many clock implementations can't accommodate atomic contexts. This is
> > > > > > often the case when communication with the clock happens through another
> > > > > > subsystem like I2C or SCMI.
> > > > > >
> > > > > > Let's make the clock PM code useful with such clocks by safely invoking
> > > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > > > > may be invoked in atomic context.
> > > > > >
> > > > > > For clocks that do implement the enable and disable methods then
> > > > > > everything just works as before.
> > > > > >
> > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > > > > >
> > > > > > ---
> > > > > >
> > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > > > > >
> > > > > > > So I'm going to drop this patch from linux-next until the issue is
> > > > > > > resolved, thanks!
> > > > > >
> > > > > > Here's the fixed version.
> > > > >
> > > > > Applied instead of the v1, thanks!
> > > > >
> > > > > > Changes from v1:
> > > > > >
> > > > > > - Moved clk_is_enabled_when_prepared() declaration under
> > > > > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > > > > >   config option is unset.
> > > > > >
> > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > > > > index ced6863a16..a62fb0f9b1 100644
> > > > > > --- a/drivers/base/power/clock_ops.c
> > > > > > +++ b/drivers/base/power/clock_ops.c
> > > > > > @@ -23,6 +23,7 @@
> > > > > >  enum pce_status {
> > > > > >         PCE_STATUS_NONE = 0,
> > > > > >         PCE_STATUS_ACQUIRED,
> > > > > > +       PCE_STATUS_PREPARED,
> > > > > >         PCE_STATUS_ENABLED,
> > > > > >         PCE_STATUS_ERROR,
> > > > > >  };
> > > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > > > > >         char *con_id;
> > > > > >         struct clk *clk;
> > > > > >         enum pce_status status;
> > > > > > +       bool enabled_when_prepared;
> > > > > >  };
> > > > > >
> > > > > > +/**
> > > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > > > > + *                   entry list.
> > > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > > > > + *      and clk_op_might_sleep count to be modified.
> > > > > > + *
> > > > > > + * Get exclusive access before modifying the PM clock entry list and the
> > > > > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > > > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > > > > + * taken here.
> > > > > > + */
> > > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > > > > +{
> > > > > > +       mutex_lock(&psd->clock_mutex);
> > > > > > +       spin_lock_irq(&psd->lock);
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > > > > + * @psd: the same pm_subsys_data instance previously passed to
> > > > > > + *      pm_clk_list_lock().
> > > > > > + */
> > > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> > > >
> > > > Locking annotations for sparse were missing here and above, so I've
> > > > added them by hand.
> > > >
> > > > Please double check the result in my linux-next branch (just pushed).
> > >
> > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> > 
> > If this had been a patch fixing a problem reported by you, there would
> > have been a reason to add a Reported-by,
> > 
> > In this case, it is just a new version of a patch taking your testing
> > feedback into account.
> > 
> > I can add a Tested-by for you to it if desired, though.
> 
> It is probably fair to mention that Naresh reported the issue too.
> My bad, I should have added the tag myself in v2.

That being said, this isn't a fix but a whole new patch, so I agree
that Tested-by is probably more appropriate.


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 15:59                   ` Nicolas Pitre
  2021-01-22 16:00                     ` Nicolas Pitre
@ 2021-01-22 16:02                     ` Rafael J. Wysocki
  2021-01-23 23:07                       ` [PATCH v3] " Nicolas Pitre
  1 sibling, 1 reply; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-22 16:02 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Naresh Kamboju, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk, open list,
	Mark Brown, Arnd Bergmann

On Fri, Jan 22, 2021 at 4:59 PM Nicolas Pitre <nico@fluxnic.net> wrote:
>
> On Fri, 22 Jan 2021, Rafael J. Wysocki wrote:
>
> > On Fri, Jan 22, 2021 at 4:48 PM Naresh Kamboju
> > <naresh.kamboju@linaro.org> wrote:
> > >
> > > On Fri, 22 Jan 2021 at 20:39, Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > >
> > > > On Thu, Jan 21, 2021 at 8:01 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > > >
> > > > > On Thu, Jan 21, 2021 at 6:23 PM Nicolas Pitre <npitre@baylibre.com> wrote:
> > > > > >
> > > > > > The clock API splits its interface into sleepable ant atomic contexts:
> > > > > >
> > > > > > - clk_prepare/clk_unprepare for stuff that might sleep
> > > > > >
> > > > > > - clk_enable_clk_disable for anything that may be done in atomic context
> > > > > >
> > > > > > The code handling runtime PM for clocks only calls clk_disable() on
> > > > > > suspend requests, and clk_enable on resume requests. This means that
> > > > > > runtime PM with clock providers that only have the prepare/unprepare
> > > > > > methods implemented is basically useless.
> > > > > >
> > > > > > Many clock implementations can't accommodate atomic contexts. This is
> > > > > > often the case when communication with the clock happens through another
> > > > > > subsystem like I2C or SCMI.
> > > > > >
> > > > > > Let's make the clock PM code useful with such clocks by safely invoking
> > > > > > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > > > > > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > > > > > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > > > > > may be invoked in atomic context.
> > > > > >
> > > > > > For clocks that do implement the enable and disable methods then
> > > > > > everything just works as before.
> > > > > >
> > > > > > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > > > > >
> > > > > > ---
> > > > > >
> > > > > > On Thu, 21 Jan 2021, Rafael J. Wysocki wrote:
> > > > > >
> > > > > > > So I'm going to drop this patch from linux-next until the issue is
> > > > > > > resolved, thanks!
> > > > > >
> > > > > > Here's the fixed version.
> > > > >
> > > > > Applied instead of the v1, thanks!
> > > > >
> > > > > > Changes from v1:
> > > > > >
> > > > > > - Moved clk_is_enabled_when_prepared() declaration under
> > > > > >   CONFIG_HAVE_CLK_PREPARE and provided a dummy definition when that
> > > > > >   config option is unset.
> > > > > >
> > > > > > diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> > > > > > index ced6863a16..a62fb0f9b1 100644
> > > > > > --- a/drivers/base/power/clock_ops.c
> > > > > > +++ b/drivers/base/power/clock_ops.c
> > > > > > @@ -23,6 +23,7 @@
> > > > > >  enum pce_status {
> > > > > >         PCE_STATUS_NONE = 0,
> > > > > >         PCE_STATUS_ACQUIRED,
> > > > > > +       PCE_STATUS_PREPARED,
> > > > > >         PCE_STATUS_ENABLED,
> > > > > >         PCE_STATUS_ERROR,
> > > > > >  };
> > > > > > @@ -32,8 +33,102 @@ struct pm_clock_entry {
> > > > > >         char *con_id;
> > > > > >         struct clk *clk;
> > > > > >         enum pce_status status;
> > > > > > +       bool enabled_when_prepared;
> > > > > >  };
> > > > > >
> > > > > > +/**
> > > > > > + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> > > > > > + *                   entry list.
> > > > > > + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> > > > > > + *      and clk_op_might_sleep count to be modified.
> > > > > > + *
> > > > > > + * Get exclusive access before modifying the PM clock entry list and the
> > > > > > + * clock_op_might_sleep count to guard against concurrent modifications.
> > > > > > + * This also protects against a concurrent clock_op_might_sleep and PM clock
> > > > > > + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> > > > > > + * happen in atomic context, hence both the mutex and the spinlock must be
> > > > > > + * taken here.
> > > > > > + */
> > > > > > +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> > > > > > +{
> > > > > > +       mutex_lock(&psd->clock_mutex);
> > > > > > +       spin_lock_irq(&psd->lock);
> > > > > > +}
> > > > > > +
> > > > > > +/**
> > > > > > + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> > > > > > + * @psd: the same pm_subsys_data instance previously passed to
> > > > > > + *      pm_clk_list_lock().
> > > > > > + */
> > > > > > +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> > > >
> > > > Locking annotations for sparse were missing here and above, so I've
> > > > added them by hand.
> > > >
> > > > Please double check the result in my linux-next branch (just pushed).
> > >
> > > May i request to add Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> >
> > If this had been a patch fixing a problem reported by you, there would
> > have been a reason to add a Reported-by,
> >
> > In this case, it is just a new version of a patch taking your testing
> > feedback into account.
> >
> > I can add a Tested-by for you to it if desired, though.
>
> It is probably fair to mention that Naresh reported the issue too.
> My bad, I should have added the tag myself in v2.

OK

I'm assuming that there will be a v3 because of the sparse warnings,
so please add the tags as needed when posting it.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-22 16:02                     ` Rafael J. Wysocki
@ 2021-01-23 23:07                       ` Nicolas Pitre
  2021-01-25 18:15                         ` Rafael J. Wysocki
  0 siblings, 1 reply; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-23 23:07 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, linux-kernel, Mark Brown,
	Arnd Bergmann

The clock API splits its interface into sleepable and atomic contexts:

- clk_prepare/clk_unprepare for stuff that might sleep

- clk_enable/clk_disable for anything that may be done in atomic context

The code handling runtime PM for clocks only calls clk_disable() on
suspend requests, and clk_enable() on resume requests. This means that
runtime PM with clock providers that only have the prepare/unprepare
methods implemented is basically useless.

Many clock implementations can't accommodate atomic contexts. This is
often the case when communication with the clock happens through another
subsystem like I2C or SCMI.

Let's make the clock PM code useful with such clocks by safely invoking
clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
such clocks are registered with the PM layer then pm_runtime_irq_safe()
can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
may be invoked in atomic context.

For clocks that do implement the enable and disable methods, everything
just works as before.

A note on sparse:
According to https://lwn.net/Articles/109066/ there are things
that sparse can't cope with. In particular, pm_clk_op_lock() and
pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
some runtime condition. To work around that we tell sparse the lock
is always untaken for the purpose of static analysis.

Thanks to Naresh Kamboju for reporting issues with the initial patch.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>

---

Changes from v2:

- added workarounds to cope with sparse limitations (see above).

Changes from v1:

- made dummy clk_is_enabled_when_prepared() dependent on 
  CONFIG_HAVE_CLK_PREPARE instead of CONFIG_HAVE_CLK.

diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index ced6863a16..e6956ce301 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -23,6 +23,7 @@
 enum pce_status {
 	PCE_STATUS_NONE = 0,
 	PCE_STATUS_ACQUIRED,
+	PCE_STATUS_PREPARED,
 	PCE_STATUS_ENABLED,
 	PCE_STATUS_ERROR,
 };
@@ -32,8 +33,108 @@ struct pm_clock_entry {
 	char *con_id;
 	struct clk *clk;
 	enum pce_status status;
+	bool enabled_when_prepared;
 };
 
+/**
+ * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
+ *		      entry list.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count to be modified.
+ *
+ * Get exclusive access before modifying the PM clock entry list and the
+ * clock_op_might_sleep count to guard against concurrent modifications.
+ * This also protects against a concurrent clock_op_might_sleep and PM clock
+ * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
+ * happen in atomic context, hence both the mutex and the spinlock must be
+ * taken here.
+ */
+static inline void pm_clk_list_lock(struct pm_subsys_data *psd)
+{
+	mutex_lock(&psd->clock_mutex);
+	spin_lock_irq(&psd->lock);
+}
+
+/**
+ * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_list_lock().
+ */
+static inline void pm_clk_list_unlock(struct pm_subsys_data *psd)
+{
+	spin_unlock_irq(&psd->lock);
+	mutex_unlock(&psd->clock_mutex);
+}
+
+/**
+ * pm_clk_op_lock - ensure exclusive access for performing clock operations.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count being used.
+ * @flags: stored irq flags.
+ * @fn: string for the caller function's name.
+ *
+ * This is used by pm_clk_suspend() and pm_clk_resume() to guard
+ * against concurrent modifications to the clock entry list and the
+ * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
+ * only the mutex can be locked and those functions can only be used in
+ * non-atomic context. If clock_op_might_sleep == 0 then these functions
+ * may be used in any context and only the spinlock can be locked.
+ * Returns -EPERM if called in atomic context when clock ops might sleep.
+ */
+static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
+			  const char *fn)
+{
+	bool atomic_context = in_atomic() || irqs_disabled();
+
+try_again:
+	spin_lock_irqsave(&psd->lock, *flags);
+	if (!psd->clock_op_might_sleep) {
+		/* the __release is there to work around sparse limitations */
+		__release(&psd->lock);
+		return 0;
+	}
+
+	/* bail out if in atomic context */
+	if (atomic_context) {
+		pr_err("%s: atomic context with clock_op_might_sleep = %d",
+		       fn, psd->clock_op_might_sleep);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+		might_sleep();
+		return -EPERM;
+	}
+
+	/* we must switch to the mutex */
+	spin_unlock_irqrestore(&psd->lock, *flags);
+	mutex_lock(&psd->clock_mutex);
+
+	/*
+	 * There was a possibility for psd->clock_op_might_sleep
+	 * to become 0 above. Keep the mutex only if not the case.
+	 */
+	if (likely(psd->clock_op_might_sleep))
+		return 0;
+
+	mutex_unlock(&psd->clock_mutex);
+	goto try_again;
+}
+
+/**
+ * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_op_lock().
+ * @flags: irq flags provided by pm_clk_op_lock().
+ */
+static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
+{
+	if (psd->clock_op_might_sleep) {
+		mutex_unlock(&psd->clock_mutex);
+	} else {
+		/* the __acquire is there to work around sparse limitations */
+		__acquire(&psd->lock);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+	}
+}
+
 /**
  * pm_clk_enable - Enable a clock, reporting any errors
  * @dev: The device for the given clock
@@ -43,14 +144,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
 {
 	int ret;
 
-	if (ce->status < PCE_STATUS_ERROR) {
+	switch (ce->status) {
+	case PCE_STATUS_ACQUIRED:
+		ret = clk_prepare_enable(ce->clk);
+		break;
+	case PCE_STATUS_PREPARED:
 		ret = clk_enable(ce->clk);
-		if (!ret)
-			ce->status = PCE_STATUS_ENABLED;
-		else
-			dev_err(dev, "%s: failed to enable clk %p, error %d\n",
-				__func__, ce->clk, ret);
+		break;
+	default:
+		return;
 	}
+	if (!ret)
+		ce->status = PCE_STATUS_ENABLED;
+	else
+		dev_err(dev, "%s: failed to enable clk %p, error %d\n",
+			__func__, ce->clk, ret);
 }
 
 /**
@@ -64,17 +172,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
 		ce->clk = clk_get(dev, ce->con_id);
 	if (IS_ERR(ce->clk)) {
 		ce->status = PCE_STATUS_ERROR;
+		return;
+	} else if (clk_is_enabled_when_prepared(ce->clk)) {
+		/* we defer preparing the clock in that case */
+		ce->status = PCE_STATUS_ACQUIRED;
+		ce->enabled_when_prepared = true;
+	} else if (clk_prepare(ce->clk)) {
+		ce->status = PCE_STATUS_ERROR;
+		dev_err(dev, "clk_prepare() failed\n");
+		return;
 	} else {
-		if (clk_prepare(ce->clk)) {
-			ce->status = PCE_STATUS_ERROR;
-			dev_err(dev, "clk_prepare() failed\n");
-		} else {
-			ce->status = PCE_STATUS_ACQUIRED;
-			dev_dbg(dev,
-				"Clock %pC con_id %s managed by runtime PM.\n",
-				ce->clk, ce->con_id);
-		}
+		ce->status = PCE_STATUS_PREPARED;
 	}
+	dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
+		ce->clk, ce->con_id);
 }
 
 static int __pm_clk_add(struct device *dev, const char *con_id,
@@ -106,9 +217,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
 
 	pm_clk_acquire(dev, ce);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 	list_add_tail(&ce->node, &psd->clock_list);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep++;
+	pm_clk_list_unlock(psd);
 	return 0;
 }
 
@@ -239,14 +352,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
 	if (!ce)
 		return;
 
-	if (ce->status < PCE_STATUS_ERROR) {
-		if (ce->status == PCE_STATUS_ENABLED)
-			clk_disable(ce->clk);
-
-		if (ce->status >= PCE_STATUS_ACQUIRED) {
-			clk_unprepare(ce->clk);
+	switch (ce->status) {
+	case PCE_STATUS_ENABLED:
+		clk_disable(ce->clk);
+		fallthrough;
+	case PCE_STATUS_PREPARED:
+		clk_unprepare(ce->clk);
+		fallthrough;
+	case PCE_STATUS_ACQUIRED:
+	case PCE_STATUS_ERROR:
+		if (!IS_ERR(ce->clk))
 			clk_put(ce->clk);
-		}
+		break;
+	default:
+		break;
 	}
 
 	kfree(ce->con_id);
@@ -269,7 +388,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 	if (!psd)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (!con_id && !ce->con_id)
@@ -280,12 +399,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -307,19 +428,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
 	if (!psd || !clk)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (clk == ce->clk)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -330,13 +453,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
  * @dev: Device to initialize the list of PM clocks for.
  *
  * Initialize the lock and clock_list members of the device's pm_subsys_data
- * object.
+ * object, and set the count of clocks that might sleep to 0.
  */
 void pm_clk_init(struct device *dev)
 {
 	struct pm_subsys_data *psd = dev_to_psd(dev);
-	if (psd)
+	if (psd) {
 		INIT_LIST_HEAD(&psd->clock_list);
+		mutex_init(&psd->clock_mutex);
+		psd->clock_op_might_sleep = 0;
+	}
 }
 EXPORT_SYMBOL_GPL(pm_clk_init);
 
@@ -372,12 +498,13 @@ void pm_clk_destroy(struct device *dev)
 
 	INIT_LIST_HEAD(&list);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
 		list_move(&ce->node, &list);
+	psd->clock_op_might_sleep = 0;
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 
 	dev_pm_put_subsys_data(dev);
 
@@ -397,23 +524,30 @@ int pm_clk_suspend(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry_reverse(ce, &psd->clock_list, node) {
-		if (ce->status < PCE_STATUS_ERROR) {
-			if (ce->status == PCE_STATUS_ENABLED)
+		if (ce->status == PCE_STATUS_ENABLED) {
+			if (ce->enabled_when_prepared) {
+				clk_disable_unprepare(ce->clk);
+				ce->status = PCE_STATUS_ACQUIRED;
+			} else {
 				clk_disable(ce->clk);
-			ce->status = PCE_STATUS_ACQUIRED;
+				ce->status = PCE_STATUS_PREPARED;
+			}
 		}
 	}
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
@@ -428,18 +562,21 @@ int pm_clk_resume(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry(ce, &psd->clock_list, node)
 		__pm_clk_enable(dev, ce);
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 8c1d04db99..3d751ae5bc 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_enable);
 
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return clk && !(clk->core->ops->enable && clk->core->ops->disable);
+}
+EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
+
 static int clk_core_prepare_enable(struct clk_core *core)
 {
 	int ret;
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 31ff1bf1b7..a4a86aa8b1 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
 
 #endif
 
+#ifdef CONFIG_HAVE_CLK_PREPARE
 /**
  * clk_prepare - prepare a clock source
  * @clk: clock source
@@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
  *
  * Must not be called from within atomic context.
  */
-#ifdef CONFIG_HAVE_CLK_PREPARE
 int clk_prepare(struct clk *clk);
 int __must_check clk_bulk_prepare(int num_clks,
 				  const struct clk_bulk_data *clks);
+
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to the power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk);
 #else
 static inline int clk_prepare(struct clk *clk)
 {
@@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
 	might_sleep();
 	return 0;
 }
+
+static inline bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return false;
+}
 #endif
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 47aca6bac1..482313a8cc 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -537,6 +537,8 @@ struct pm_subsys_data {
 	spinlock_t lock;
 	unsigned int refcount;
 #ifdef CONFIG_PM_CLK
+	unsigned int clock_op_might_sleep;
+	struct mutex clock_mutex;
 	struct list_head clock_list;
 #endif
 #ifdef CONFIG_PM_GENERIC_DOMAINS
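
To make the usage constraint from the changelog concrete, a hypothetical
consumer registering a prepare-only (sleeping) clock with the PM clock
layer could look roughly like the sketch below; foo_probe() and the "bus"
con_id are invented for illustration and are not part of this patch:

#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	int ret;

	ret = pm_clk_create(dev);
	if (ret)
		return ret;

	/* the provider behind "bus" may now sit behind I2C or SCMI */
	ret = pm_clk_add(dev, "bus");
	if (ret) {
		pm_clk_destroy(dev);
		return ret;
	}

	/*
	 * With such a clock on the list, pm_runtime_irq_safe(dev) must not
	 * be called, and runtime suspend/resume of this device may only
	 * happen in sleepable context.
	 */
	pm_runtime_enable(dev);

	return 0;
}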

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-23 23:07                       ` [PATCH v3] " Nicolas Pitre
@ 2021-01-25 18:15                         ` Rafael J. Wysocki
  2021-01-25 18:24                           ` Nicolas Pitre
  0 siblings, 1 reply; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-25 18:15 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Naresh Kamboju, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk,
	Linux Kernel Mailing List, Mark Brown, Arnd Bergmann

On Sun, Jan 24, 2021 at 12:07 AM Nicolas Pitre <npitre@baylibre.com> wrote:
>
> The clock API splits its interface into sleepable ant atomic contexts:
>
> - clk_prepare/clk_unprepare for stuff that might sleep
>
> - clk_enable_clk_disable for anything that may be done in atomic context
>
> The code handling runtime PM for clocks only calls clk_disable() on
> suspend requests, and clk_enable on resume requests. This means that
> runtime PM with clock providers that only have the prepare/unprepare
> methods implemented is basically useless.
>
> Many clock implementations can't accommodate atomic contexts. This is
> often the case when communication with the clock happens through another
> subsystem like I2C or SCMI.
>
> Let's make the clock PM code useful with such clocks by safely invoking
> clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> such clocks are registered with the PM layer then pm_runtime_irq_safe()
> can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> may be invoked in atomic context.
>
> For clocks that do implement the enable and disable methods then
> everything just works as before.
>
> A note on sparse:
> According to https://lwn.net/Articles/109066/ there are things
> that sparse can't cope with. In particular, pm_clk_op_lock() and
> pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
> some runtime condition. To work around that we tell sparse the lock
> is always untaken for the purpose of static analisys.
>
> Thanks to Naresh Kamboju for reporting issues with the initial patch.
>
> Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>

It looks like sparse is still complaining:

https://lore.kernel.org/linux-acpi/600dc681.3mAl9WQXnragfNZk%25lkp@intel.com/




>
> ---
>
> Changes from v2:
>
> - added workarounds to cope with sparse limitations (see above).
>
> Changes from v1:
>
> - made dummy clk_is_enabled_when_prepared() dependent on
>   CONFIG_HAVE_CLK_PREPARE instead of CONFIG_HAVE_CLK.
>
> diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> index ced6863a16..e6956ce301 100644
> --- a/drivers/base/power/clock_ops.c
> +++ b/drivers/base/power/clock_ops.c
> @@ -23,6 +23,7 @@
>  enum pce_status {
>         PCE_STATUS_NONE = 0,
>         PCE_STATUS_ACQUIRED,
> +       PCE_STATUS_PREPARED,
>         PCE_STATUS_ENABLED,
>         PCE_STATUS_ERROR,
>  };
> @@ -32,8 +33,108 @@ struct pm_clock_entry {
>         char *con_id;
>         struct clk *clk;
>         enum pce_status status;
> +       bool enabled_when_prepared;
>  };
>
> +/**
> + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> + *                   entry list.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count to be modified.
> + *
> + * Get exclusive access before modifying the PM clock entry list and the
> + * clock_op_might_sleep count to guard against concurrent modifications.
> + * This also protects against a concurrent clock_op_might_sleep and PM clock
> + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> + * happen in atomic context, hence both the mutex and the spinlock must be
> + * taken here.
> + */
> +static inline void pm_clk_list_lock(struct pm_subsys_data *psd)
> +{
> +       mutex_lock(&psd->clock_mutex);
> +       spin_lock_irq(&psd->lock);
> +}
> +
> +/**
> + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_list_lock().
> + */
> +static inline void pm_clk_list_unlock(struct pm_subsys_data *psd)
> +{
> +       spin_unlock_irq(&psd->lock);
> +       mutex_unlock(&psd->clock_mutex);
> +}
> +
> +/**
> + * pm_clk_op_lock - ensure exclusive access for performing clock operations.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count being used.
> + * @flags: stored irq flags.
> + * @fn: string for the caller function's name.
> + *
> + * This is used by pm_clk_suspend() and pm_clk_resume() to guard
> + * against concurrent modifications to the clock entry list and the
> + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
> + * only the mutex can be locked and those functions can only be used in
> + * non atomic context. If clock_op_might_sleep == 0 then these functions
> + * may be used in any context and only the spinlock can be locked.
> + * Returns -EINVAL if called in atomic context when clock ops might sleep.
> + */
> +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
> +                         const char *fn)
> +{
> +       bool atomic_context = in_atomic() || irqs_disabled();
> +
> +try_again:
> +       spin_lock_irqsave(&psd->lock, *flags);
> +       if (!psd->clock_op_might_sleep) {
> +               /* the __release is there to work around sparse limitations */
> +               __release(&psd->lock);
> +               return 0;
> +       }
> +
> +       /* bail out if in atomic context */
> +       if (atomic_context) {
> +               pr_err("%s: atomic context with clock_ops_might_sleep = %d",
> +                      fn, psd->clock_op_might_sleep);
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +               might_sleep();
> +               return -EPERM;
> +       }
> +
> +       /* we must switch to the mutex */
> +       spin_unlock_irqrestore(&psd->lock, *flags);
> +       mutex_lock(&psd->clock_mutex);
> +
> +       /*
> +        * There was a possibility for psd->clock_op_might_sleep
> +        * to become 0 above. Keep the mutex only if not the case.
> +        */
> +       if (likely(psd->clock_op_might_sleep))
> +               return 0;
> +
> +       mutex_unlock(&psd->clock_mutex);
> +       goto try_again;
> +}
> +
> +/**
> + * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_op_lock().
> + * @flags: irq flags provided by pm_clk_op_lock().
> + */
> +static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
> +{
> +       if (psd->clock_op_might_sleep) {
> +               mutex_unlock(&psd->clock_mutex);
> +       } else {
> +               /* the __acquire is there to work around sparse limitations */
> +               __acquire(&psd->lock);
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +       }
> +}
> +
>  /**
>   * pm_clk_enable - Enable a clock, reporting any errors
>   * @dev: The device for the given clock
> @@ -43,14 +144,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
>  {
>         int ret;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> +       switch (ce->status) {
> +       case PCE_STATUS_ACQUIRED:
> +               ret = clk_prepare_enable(ce->clk);
> +               break;
> +       case PCE_STATUS_PREPARED:
>                 ret = clk_enable(ce->clk);
> -               if (!ret)
> -                       ce->status = PCE_STATUS_ENABLED;
> -               else
> -                       dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> -                               __func__, ce->clk, ret);
> +               break;
> +       default:
> +               return;
>         }
> +       if (!ret)
> +               ce->status = PCE_STATUS_ENABLED;
> +       else
> +               dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> +                       __func__, ce->clk, ret);
>  }
>
>  /**
> @@ -64,17 +172,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
>                 ce->clk = clk_get(dev, ce->con_id);
>         if (IS_ERR(ce->clk)) {
>                 ce->status = PCE_STATUS_ERROR;
> +               return;
> +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
> +               /* we defer preparing the clock in that case */
> +               ce->status = PCE_STATUS_ACQUIRED;
> +               ce->enabled_when_prepared = true;
> +       } else if (clk_prepare(ce->clk)) {
> +               ce->status = PCE_STATUS_ERROR;
> +               dev_err(dev, "clk_prepare() failed\n");
> +               return;
>         } else {
> -               if (clk_prepare(ce->clk)) {
> -                       ce->status = PCE_STATUS_ERROR;
> -                       dev_err(dev, "clk_prepare() failed\n");
> -               } else {
> -                       ce->status = PCE_STATUS_ACQUIRED;
> -                       dev_dbg(dev,
> -                               "Clock %pC con_id %s managed by runtime PM.\n",
> -                               ce->clk, ce->con_id);
> -               }
> +               ce->status = PCE_STATUS_PREPARED;
>         }
> +       dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
> +               ce->clk, ce->con_id);
>  }
>
>  static int __pm_clk_add(struct device *dev, const char *con_id,
> @@ -106,9 +217,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
>
>         pm_clk_acquire(dev, ce);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>         list_add_tail(&ce->node, &psd->clock_list);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep++;
> +       pm_clk_list_unlock(psd);
>         return 0;
>  }
>
> @@ -239,14 +352,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
>         if (!ce)
>                 return;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> -               if (ce->status == PCE_STATUS_ENABLED)
> -                       clk_disable(ce->clk);
> -
> -               if (ce->status >= PCE_STATUS_ACQUIRED) {
> -                       clk_unprepare(ce->clk);
> +       switch (ce->status) {
> +       case PCE_STATUS_ENABLED:
> +               clk_disable(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_PREPARED:
> +               clk_unprepare(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_ACQUIRED:
> +       case PCE_STATUS_ERROR:
> +               if (!IS_ERR(ce->clk))
>                         clk_put(ce->clk);
> -               }
> +               break;
> +       default:
> +               break;
>         }
>
>         kfree(ce->con_id);
> @@ -269,7 +388,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>         if (!psd)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (!con_id && !ce->con_id)
> @@ -280,12 +399,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -307,19 +428,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
>         if (!psd || !clk)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (clk == ce->clk)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -330,13 +453,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
>   * @dev: Device to initialize the list of PM clocks for.
>   *
>   * Initialize the lock and clock_list members of the device's pm_subsys_data
> - * object.
> + * object, set the count of clocks that might sleep to 0.
>   */
>  void pm_clk_init(struct device *dev)
>  {
>         struct pm_subsys_data *psd = dev_to_psd(dev);
> -       if (psd)
> +       if (psd) {
>                 INIT_LIST_HEAD(&psd->clock_list);
> +               mutex_init(&psd->clock_mutex);
> +               psd->clock_op_might_sleep = 0;
> +       }
>  }
>  EXPORT_SYMBOL_GPL(pm_clk_init);
>
> @@ -372,12 +498,13 @@ void pm_clk_destroy(struct device *dev)
>
>         INIT_LIST_HEAD(&list);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
>                 list_move(&ce->node, &list);
> +       psd->clock_op_might_sleep = 0;
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>
>         dev_pm_put_subsys_data(dev);
>
> @@ -397,23 +524,30 @@ int pm_clk_suspend(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry_reverse(ce, &psd->clock_list, node) {
> -               if (ce->status < PCE_STATUS_ERROR) {
> -                       if (ce->status == PCE_STATUS_ENABLED)
> +               if (ce->status == PCE_STATUS_ENABLED) {
> +                       if (ce->enabled_when_prepared) {
> +                               clk_disable_unprepare(ce->clk);
> +                               ce->status = PCE_STATUS_ACQUIRED;
> +                       } else {
>                                 clk_disable(ce->clk);
> -                       ce->status = PCE_STATUS_ACQUIRED;
> +                               ce->status = PCE_STATUS_PREPARED;
> +                       }
>                 }
>         }
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> @@ -428,18 +562,21 @@ int pm_clk_resume(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry(ce, &psd->clock_list, node)
>                 __pm_clk_enable(dev, ce);
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index 8c1d04db99..3d751ae5bc 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
>  }
>  EXPORT_SYMBOL_GPL(clk_enable);
>
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return clk && !(clk->core->ops->enable && clk->core->ops->disable);
> +}
> +EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
> +
>  static int clk_core_prepare_enable(struct clk_core *core)
>  {
>         int ret;
> diff --git a/include/linux/clk.h b/include/linux/clk.h
> index 31ff1bf1b7..a4a86aa8b1 100644
> --- a/include/linux/clk.h
> +++ b/include/linux/clk.h
> @@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>
>  #endif
>
> +#ifdef CONFIG_HAVE_CLK_PREPARE
>  /**
>   * clk_prepare - prepare a clock source
>   * @clk: clock source
> @@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>   *
>   * Must not be called from within atomic context.
>   */
> -#ifdef CONFIG_HAVE_CLK_PREPARE
>  int clk_prepare(struct clk *clk);
>  int __must_check clk_bulk_prepare(int num_clks,
>                                   const struct clk_bulk_data *clks);
> +
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to the power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk);
>  #else
>  static inline int clk_prepare(struct clk *clk)
>  {
> @@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
>         might_sleep();
>         return 0;
>  }
> +
> +static inline bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return false;
> +}
>  #endif
>
>  /**
> diff --git a/include/linux/pm.h b/include/linux/pm.h
> index 47aca6bac1..482313a8cc 100644
> --- a/include/linux/pm.h
> +++ b/include/linux/pm.h
> @@ -537,6 +537,8 @@ struct pm_subsys_data {
>         spinlock_t lock;
>         unsigned int refcount;
>  #ifdef CONFIG_PM_CLK
> +       unsigned int clock_op_might_sleep;
> +       struct mutex clock_mutex;
>         struct list_head clock_list;
>  #endif
>  #ifdef CONFIG_PM_GENERIC_DOMAINS

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-25 18:15                         ` Rafael J. Wysocki
@ 2021-01-25 18:24                           ` Nicolas Pitre
  2021-01-25 19:29                             ` [PATCH v4] " Nicolas Pitre
  0 siblings, 1 reply; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-25 18:24 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, Linux Kernel Mailing List,
	Mark Brown, Arnd Bergmann

On Mon, 25 Jan 2021, Rafael J. Wysocki wrote:

> On Sun, Jan 24, 2021 at 12:07 AM Nicolas Pitre <npitre@baylibre.com> wrote:
> > A note on sparse:
> > According to https://lwn.net/Articles/109066/ there are things
> > that sparse can't cope with. In particular, pm_clk_op_lock() and
> > pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
> > some runtime condition. To work around that we tell sparse the lock
> > is always untaken for the purpose of static analysis.
> 
> It looks like sparse is still complaining:
> 
> https://lore.kernel.org/linux-acpi/600dc681.3mAl9WQXnragfNZk%25lkp@intel.com/

Would you happen to still have one of those randconfig configurations?

I'd like to know why sparse complains about 3 out of 93 configs.


Nicolas

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v4] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-25 18:24                           ` Nicolas Pitre
@ 2021-01-25 19:29                             ` Nicolas Pitre
  2021-01-27 18:31                               ` Rafael J. Wysocki
       [not found]                               ` <161301451636.1254594.7473642352348913785@swboyd.mtv.corp.google.com>
  0 siblings, 2 replies; 26+ messages in thread
From: Nicolas Pitre @ 2021-01-25 19:29 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Naresh Kamboju, Geert Uytterhoeven, Rafael J. Wysocki,
	Greg Kroah-Hartman, Michael Turquette, Stephen Boyd,
	Russell King, Linux PM, linux-clk, Linux Kernel Mailing List,
	Mark Brown, Arnd Bergmann

The clock API splits its interface into sleepable and atomic contexts:

- clk_prepare/clk_unprepare for stuff that might sleep

- clk_enable/clk_disable for anything that may be done in atomic context

The code handling runtime PM for clocks only calls clk_disable() on
suspend requests, and clk_enable() on resume requests. This means that
runtime PM with clock providers that only have the prepare/unprepare
methods implemented is basically useless.

Many clock implementations can't accommodate atomic contexts. This is
often the case when communication with the clock happens through another
subsystem like I2C or SCMI.

Let's make the clock PM code useful with such clocks by safely invoking
clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
such clocks are registered with the PM layer then pm_runtime_irq_safe()
can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
may be invoked in atomic context.

For clocks that do implement the enable and disable methods, everything
just works as before.
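
For illustration only (not part of this patch), here is a minimal,
hypothetical consumer-side sketch of what using the PM clock layer with
such a sleeping clock could look like; the foo_probe() function is made
up, and pm_clk_suspend()/pm_clk_resume() are assumed to be wired into
the device's runtime PM callbacks (for instance by its PM domain):

#include <linux/platform_device.h>
#include <linux/pm_clock.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	int ret;

	/* Create the per-device PM clock list and register the clock. */
	ret = pm_clk_create(&pdev->dev);
	if (ret)
		return ret;

	ret = pm_clk_add(&pdev->dev, NULL);
	if (ret) {
		pm_clk_destroy(&pdev->dev);
		return ret;
	}

	/*
	 * The clock provider may sleep, so pm_runtime_irq_safe() must not
	 * be called for this device, and runtime PM transitions may only
	 * be triggered from process context.
	 */
	pm_runtime_enable(&pdev->dev);
	return 0;
}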

A note on sparse:
According to https://lwn.net/Articles/109066/ there are things
that sparse can't cope with. In particular, pm_clk_op_lock() and
pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
some runtime condition. To work around that we tell it the lock is
always untaken for the purpose of static analysis.
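
In generic form (again just a sketch, not part of this patch, assuming
the usual sparse context annotations), the workaround amounts to
rebalancing sparse's lock context by hand whenever the lock is taken or
released under a condition sparse cannot evaluate:

#include <linux/spinlock.h>

/* Sketch: lock taken or not depending on a runtime condition. */
static void cond_lock(spinlock_t *lock, bool take)
{
	if (take) {
		spin_lock(lock);
		/* tell sparse the lock is untaken on this exit path */
		__release(lock);
	}
}

static void cond_unlock(spinlock_t *lock, bool take)
{
	if (take) {
		/* tell sparse the lock is taken before unlocking it */
		__acquire(lock);
		spin_unlock(lock);
	}
}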

Thanks to Naresh Kamboju for reporting issues with the initial patch.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>

---

On Mon, 25 Jan 2021, Nicolas Pitre wrote:

> On Mon, 25 Jan 2021, Rafael J. Wysocki wrote:
> 
> > It looks like sparse is still complaining:
> > 
> > https://lore.kernel.org/linux-acpi/600dc681.3mAl9WQXnragfNZk%25lkp@intel.com/
> 
> Would you happen to still have one of those randconfig configurations?
> I'd like to know why sparse complains about 3 out of 93 configs.

Well... never mind. Although I'm not able to reproduce, the only 
explanation I can guess is that, for some configs, the inline attribute
was inhibited somehow.

Let's hope this one will do. If not please keep the problematic config.

Changes from v3:

- more sparse annotation as inlining isn't always enough.

Changes from v2:

- added workarounds to cope with sparse limitations (see above).

Changes from v1:

- made dummy clk_is_enabled_when_prepared() dependent on
  CONFIG_HAVE_CLK_PREPARE instead of CONFIG_HAVE_CLK.

diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index ced6863a16..84d5acb630 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -23,6 +23,7 @@
 enum pce_status {
 	PCE_STATUS_NONE = 0,
 	PCE_STATUS_ACQUIRED,
+	PCE_STATUS_PREPARED,
 	PCE_STATUS_ENABLED,
 	PCE_STATUS_ERROR,
 };
@@ -32,8 +33,112 @@ struct pm_clock_entry {
 	char *con_id;
 	struct clk *clk;
 	enum pce_status status;
+	bool enabled_when_prepared;
 };
 
+/**
+ * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
+ *		      entry list.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count to be modified.
+ *
+ * Get exclusive access before modifying the PM clock entry list and the
+ * clock_op_might_sleep count to guard against concurrent modifications.
+ * This also protects against a concurrent clock_op_might_sleep and PM clock
+ * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
+ * happen in atomic context, hence both the mutex and the spinlock must be
+ * taken here.
+ */
+static void pm_clk_list_lock(struct pm_subsys_data *psd)
+	__acquires(&psd->lock)
+{
+	mutex_lock(&psd->clock_mutex);
+	spin_lock_irq(&psd->lock);
+}
+
+/**
+ * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_list_lock().
+ */
+static void pm_clk_list_unlock(struct pm_subsys_data *psd)
+	__releases(&psd->lock)
+{
+	spin_unlock_irq(&psd->lock);
+	mutex_unlock(&psd->clock_mutex);
+}
+
+/**
+ * pm_clk_op_lock - ensure exclusive access for performing clock operations.
+ * @psd: pm_subsys_data instance corresponding to the PM clock entry list
+ *	 and clock_op_might_sleep count being used.
+ * @flags: stored irq flags.
+ * @fn: string for the caller function's name.
+ *
+ * This is used by pm_clk_suspend() and pm_clk_resume() to guard
+ * against concurrent modifications to the clock entry list and the
+ * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
+ * only the mutex can be locked and those functions can only be used in
+ * non atomic context. If clock_op_might_sleep == 0 then these functions
+ * may be used in any context and only the spinlock can be locked.
+ * Returns -EPERM if called in atomic context when clock ops might sleep.
+ */
+static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
+			  const char *fn)
+	/* sparse annotations don't work here as exit state isn't static */
+{
+	bool atomic_context = in_atomic() || irqs_disabled();
+
+try_again:
+	spin_lock_irqsave(&psd->lock, *flags);
+	if (!psd->clock_op_might_sleep) {
+		/* the __release is there to work around sparse limitations */
+		__release(&psd->lock);
+		return 0;
+	}
+
+	/* bail out if in atomic context */
+	if (atomic_context) {
+		pr_err("%s: atomic context with clock_op_might_sleep = %d\n",
+		       fn, psd->clock_op_might_sleep);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+		might_sleep();
+		return -EPERM;
+	}
+
+	/* we must switch to the mutex */
+	spin_unlock_irqrestore(&psd->lock, *flags);
+	mutex_lock(&psd->clock_mutex);
+
+	/*
+	 * There was a possibility for psd->clock_op_might_sleep
+	 * to become 0 above. Keep the mutex only if not the case.
+	 */
+	if (likely(psd->clock_op_might_sleep))
+		return 0;
+
+	mutex_unlock(&psd->clock_mutex);
+	goto try_again;
+}
+
+/**
+ * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
+ * @psd: the same pm_subsys_data instance previously passed to
+ *	 pm_clk_op_lock().
+ * @flags: irq flags provided by pm_clk_op_lock().
+ */
+static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
+	/* sparse annotations don't work here as entry state isn't static */
+{
+	if (psd->clock_op_might_sleep) {
+		mutex_unlock(&psd->clock_mutex);
+	} else {
+		/* the __acquire is there to work around sparse limitations */
+		__acquire(&psd->lock);
+		spin_unlock_irqrestore(&psd->lock, *flags);
+	}
+}
+
 /**
  * pm_clk_enable - Enable a clock, reporting any errors
  * @dev: The device for the given clock
@@ -43,14 +148,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
 {
 	int ret;
 
-	if (ce->status < PCE_STATUS_ERROR) {
+	switch (ce->status) {
+	case PCE_STATUS_ACQUIRED:
+		ret = clk_prepare_enable(ce->clk);
+		break;
+	case PCE_STATUS_PREPARED:
 		ret = clk_enable(ce->clk);
-		if (!ret)
-			ce->status = PCE_STATUS_ENABLED;
-		else
-			dev_err(dev, "%s: failed to enable clk %p, error %d\n",
-				__func__, ce->clk, ret);
+		break;
+	default:
+		return;
 	}
+	if (!ret)
+		ce->status = PCE_STATUS_ENABLED;
+	else
+		dev_err(dev, "%s: failed to enable clk %p, error %d\n",
+			__func__, ce->clk, ret);
 }
 
 /**
@@ -64,17 +176,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
 		ce->clk = clk_get(dev, ce->con_id);
 	if (IS_ERR(ce->clk)) {
 		ce->status = PCE_STATUS_ERROR;
+		return;
+	} else if (clk_is_enabled_when_prepared(ce->clk)) {
+		/* we defer preparing the clock in that case */
+		ce->status = PCE_STATUS_ACQUIRED;
+		ce->enabled_when_prepared = true;
+	} else if (clk_prepare(ce->clk)) {
+		ce->status = PCE_STATUS_ERROR;
+		dev_err(dev, "clk_prepare() failed\n");
+		return;
 	} else {
-		if (clk_prepare(ce->clk)) {
-			ce->status = PCE_STATUS_ERROR;
-			dev_err(dev, "clk_prepare() failed\n");
-		} else {
-			ce->status = PCE_STATUS_ACQUIRED;
-			dev_dbg(dev,
-				"Clock %pC con_id %s managed by runtime PM.\n",
-				ce->clk, ce->con_id);
-		}
+		ce->status = PCE_STATUS_PREPARED;
 	}
+	dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
+		ce->clk, ce->con_id);
 }
 
 static int __pm_clk_add(struct device *dev, const char *con_id,
@@ -106,9 +221,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
 
 	pm_clk_acquire(dev, ce);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 	list_add_tail(&ce->node, &psd->clock_list);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep++;
+	pm_clk_list_unlock(psd);
 	return 0;
 }
 
@@ -239,14 +356,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
 	if (!ce)
 		return;
 
-	if (ce->status < PCE_STATUS_ERROR) {
-		if (ce->status == PCE_STATUS_ENABLED)
-			clk_disable(ce->clk);
-
-		if (ce->status >= PCE_STATUS_ACQUIRED) {
-			clk_unprepare(ce->clk);
+	switch (ce->status) {
+	case PCE_STATUS_ENABLED:
+		clk_disable(ce->clk);
+		fallthrough;
+	case PCE_STATUS_PREPARED:
+		clk_unprepare(ce->clk);
+		fallthrough;
+	case PCE_STATUS_ACQUIRED:
+	case PCE_STATUS_ERROR:
+		if (!IS_ERR(ce->clk))
 			clk_put(ce->clk);
-		}
+		break;
+	default:
+		break;
 	}
 
 	kfree(ce->con_id);
@@ -269,7 +392,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 	if (!psd)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (!con_id && !ce->con_id)
@@ -280,12 +403,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -307,19 +432,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
 	if (!psd || !clk)
 		return;
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry(ce, &psd->clock_list, node) {
 		if (clk == ce->clk)
 			goto remove;
 	}
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 	return;
 
  remove:
 	list_del(&ce->node);
-	spin_unlock_irq(&psd->lock);
+	if (ce->enabled_when_prepared)
+		psd->clock_op_might_sleep--;
+	pm_clk_list_unlock(psd);
 
 	__pm_clk_remove(ce);
 }
@@ -330,13 +457,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
  * @dev: Device to initialize the list of PM clocks for.
  *
  * Initialize the lock and clock_list members of the device's pm_subsys_data
- * object.
+ * object, set the count of clocks that might sleep to 0.
  */
 void pm_clk_init(struct device *dev)
 {
 	struct pm_subsys_data *psd = dev_to_psd(dev);
-	if (psd)
+	if (psd) {
 		INIT_LIST_HEAD(&psd->clock_list);
+		mutex_init(&psd->clock_mutex);
+		psd->clock_op_might_sleep = 0;
+	}
 }
 EXPORT_SYMBOL_GPL(pm_clk_init);
 
@@ -372,12 +502,13 @@ void pm_clk_destroy(struct device *dev)
 
 	INIT_LIST_HEAD(&list);
 
-	spin_lock_irq(&psd->lock);
+	pm_clk_list_lock(psd);
 
 	list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
 		list_move(&ce->node, &list);
+	psd->clock_op_might_sleep = 0;
 
-	spin_unlock_irq(&psd->lock);
+	pm_clk_list_unlock(psd);
 
 	dev_pm_put_subsys_data(dev);
 
@@ -397,23 +528,30 @@ int pm_clk_suspend(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry_reverse(ce, &psd->clock_list, node) {
-		if (ce->status < PCE_STATUS_ERROR) {
-			if (ce->status == PCE_STATUS_ENABLED)
+		if (ce->status == PCE_STATUS_ENABLED) {
+			if (ce->enabled_when_prepared) {
+				clk_disable_unprepare(ce->clk);
+				ce->status = PCE_STATUS_ACQUIRED;
+			} else {
 				clk_disable(ce->clk);
-			ce->status = PCE_STATUS_ACQUIRED;
+				ce->status = PCE_STATUS_PREPARED;
+			}
 		}
 	}
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
@@ -428,18 +566,21 @@ int pm_clk_resume(struct device *dev)
 	struct pm_subsys_data *psd = dev_to_psd(dev);
 	struct pm_clock_entry *ce;
 	unsigned long flags;
+	int ret;
 
 	dev_dbg(dev, "%s()\n", __func__);
 
 	if (!psd)
 		return 0;
 
-	spin_lock_irqsave(&psd->lock, flags);
+	ret = pm_clk_op_lock(psd, &flags, __func__);
+	if (ret)
+		return ret;
 
 	list_for_each_entry(ce, &psd->clock_list, node)
 		__pm_clk_enable(dev, ce);
 
-	spin_unlock_irqrestore(&psd->lock, flags);
+	pm_clk_op_unlock(psd, &flags);
 
 	return 0;
 }
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 8c1d04db99..3d751ae5bc 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
 }
 EXPORT_SYMBOL_GPL(clk_enable);
 
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return clk && !(clk->core->ops->enable && clk->core->ops->disable);
+}
+EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
+
 static int clk_core_prepare_enable(struct clk_core *core)
 {
 	int ret;
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 31ff1bf1b7..a4a86aa8b1 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
 
 #endif
 
+#ifdef CONFIG_HAVE_CLK_PREPARE
 /**
  * clk_prepare - prepare a clock source
  * @clk: clock source
@@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
  *
  * Must not be called from within atomic context.
  */
-#ifdef CONFIG_HAVE_CLK_PREPARE
 int clk_prepare(struct clk *clk);
 int __must_check clk_bulk_prepare(int num_clks,
 				  const struct clk_bulk_data *clks);
+
+/**
+ * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
+ * @clk: clock source
+ *
+ * Returns true if clk_prepare() implicitly enables the clock, effectively
+ * making clk_enable()/clk_disable() no-ops, false otherwise.
+ *
+ * This is of interest mainly to the power management code where actually
+ * disabling the clock also requires unpreparing it to have any material
+ * effect.
+ *
+ * Regardless of the value returned here, the caller must always invoke
+ * clk_enable() or clk_prepare_enable() and counterparts for usage counts
+ * to be right.
+ */
+bool clk_is_enabled_when_prepared(struct clk *clk);
 #else
 static inline int clk_prepare(struct clk *clk)
 {
@@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
 	might_sleep();
 	return 0;
 }
+
+static inline bool clk_is_enabled_when_prepared(struct clk *clk)
+{
+	return false;
+}
 #endif
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 47aca6bac1..482313a8cc 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -537,6 +537,8 @@ struct pm_subsys_data {
 	spinlock_t lock;
 	unsigned int refcount;
 #ifdef CONFIG_PM_CLK
+	unsigned int clock_op_might_sleep;
+	struct mutex clock_mutex;
 	struct list_head clock_list;
 #endif
 #ifdef CONFIG_PM_GENERIC_DOMAINS

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] PM / clk: make PM clock layer compatible with clocks that must sleep
  2021-01-25 19:29                             ` [PATCH v4] " Nicolas Pitre
@ 2021-01-27 18:31                               ` Rafael J. Wysocki
       [not found]                               ` <161301451636.1254594.7473642352348913785@swboyd.mtv.corp.google.com>
  1 sibling, 0 replies; 26+ messages in thread
From: Rafael J. Wysocki @ 2021-01-27 18:31 UTC (permalink / raw)
  To: Nicolas Pitre
  Cc: Rafael J. Wysocki, Naresh Kamboju, Geert Uytterhoeven,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Stephen Boyd, Russell King, Linux PM, linux-clk,
	Linux Kernel Mailing List, Mark Brown, Arnd Bergmann

On Mon, Jan 25, 2021 at 8:29 PM Nicolas Pitre <npitre@baylibre.com> wrote:
>
> > The clock API splits its interface into sleepable and atomic contexts:
>
> - clk_prepare/clk_unprepare for stuff that might sleep
>
> > - clk_enable/clk_disable for anything that may be done in atomic context
>
> The code handling runtime PM for clocks only calls clk_disable() on
> suspend requests, and clk_enable on resume requests. This means that
> runtime PM with clock providers that only have the prepare/unprepare
> methods implemented is basically useless.
>
> Many clock implementations can't accommodate atomic contexts. This is
> often the case when communication with the clock happens through another
> subsystem like I2C or SCMI.
>
> Let's make the clock PM code useful with such clocks by safely invoking
> clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> such clocks are registered with the PM layer then pm_runtime_irq_safe()
> can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> may be invoked in atomic context.
>
> For clocks that do implement the enable and disable methods then
> everything just works as before.
>
> A note on sparse:
> According to https://lwn.net/Articles/109066/ there are things
> that sparse can't cope with. In particular, pm_clk_op_lock() and
> pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
> some runtime condition. To work around that we tell it the lock is
> > always untaken for the purpose of static analysis.
>
> Thanks to Naresh Kamboju for reporting issues with the initial patch.
>
> Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
>
> ---
>
> On Mon, 25 Jan 2021, Nicolas Pitre wrote:
>
> > On Mon, 25 Jan 2021, Rafael J. Wysocki wrote:
> >
> > > It looks like sparse is still complaining:
> > >
> > > https://lore.kernel.org/linux-acpi/600dc681.3mAl9WQXnragfNZk%25lkp@intel.com/
> >
> > Would you happen to still have one of those randconfig configurations?
> > I'd like to know why sparse complains about 3 out of 93 configs.
>
> Well... never mind. Although I'm not able to reproduce, the only
> > explanation I can guess is that, for some configs, the inline attribute
> was inhibited somehow.
>
> Let's hope this one will do. If not please keep the problematic config.

OK, applied and let's see.

Thanks!

> Changes from v3:
>
> - more sparse annotation as inlining isn't always enough.
>
> Changes from v2:
>
> - added workarounds to cope with sparse limitations (see above).
>
> Changes from v1:
>
> - made dummy clk_is_enabled_when_prepared() dependent on
>   CONFIG_HAVE_CLK_PREPARE instead of CONFIG_HAVE_CLK.
>
> diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
> index ced6863a16..84d5acb630 100644
> --- a/drivers/base/power/clock_ops.c
> +++ b/drivers/base/power/clock_ops.c
> @@ -23,6 +23,7 @@
>  enum pce_status {
>         PCE_STATUS_NONE = 0,
>         PCE_STATUS_ACQUIRED,
> +       PCE_STATUS_PREPARED,
>         PCE_STATUS_ENABLED,
>         PCE_STATUS_ERROR,
>  };
> @@ -32,8 +33,112 @@ struct pm_clock_entry {
>         char *con_id;
>         struct clk *clk;
>         enum pce_status status;
> +       bool enabled_when_prepared;
>  };
>
> +/**
> + * pm_clk_list_lock - ensure exclusive access for modifying the PM clock
> + *                   entry list.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count to be modified.
> + *
> + * Get exclusive access before modifying the PM clock entry list and the
> + * clock_op_might_sleep count to guard against concurrent modifications.
> + * This also protects against a concurrent clock_op_might_sleep and PM clock
> + * entry list usage in pm_clk_suspend()/pm_clk_resume() that may or may not
> + * happen in atomic context, hence both the mutex and the spinlock must be
> + * taken here.
> + */
> +static void pm_clk_list_lock(struct pm_subsys_data *psd)
> +       __acquires(&psd->lock)
> +{
> +       mutex_lock(&psd->clock_mutex);
> +       spin_lock_irq(&psd->lock);
> +}
> +
> +/**
> + * pm_clk_list_unlock - counterpart to pm_clk_list_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_list_lock().
> + */
> +static void pm_clk_list_unlock(struct pm_subsys_data *psd)
> +       __releases(&psd->lock)
> +{
> +       spin_unlock_irq(&psd->lock);
> +       mutex_unlock(&psd->clock_mutex);
> +}
> +
> +/**
> + * pm_clk_op_lock - ensure exclusive access for performing clock operations.
> + * @psd: pm_subsys_data instance corresponding to the PM clock entry list
> + *      and clk_op_might_sleep count being used.
> + * @flags: stored irq flags.
> + * @fn: string for the caller function's name.
> + *
> + * This is used by pm_clk_suspend() and pm_clk_resume() to guard
> + * against concurrent modifications to the clock entry list and the
> + * clock_op_might_sleep count. If clock_op_might_sleep is != 0 then
> + * only the mutex can be locked and those functions can only be used in
> + * non atomic context. If clock_op_might_sleep == 0 then these functions
> + * may be used in any context and only the spinlock can be locked.
> + * Returns -EPERM if called in atomic context when clock ops might sleep.
> + */
> +static int pm_clk_op_lock(struct pm_subsys_data *psd, unsigned long *flags,
> +                         const char *fn)
> +       /* sparse annotations don't work here as exit state isn't static */
> +{
> +       bool atomic_context = in_atomic() || irqs_disabled();
> +
> +try_again:
> +       spin_lock_irqsave(&psd->lock, *flags);
> +       if (!psd->clock_op_might_sleep) {
> +               /* the __release is there to work around sparse limitations */
> +               __release(&psd->lock);
> +               return 0;
> +       }
> +
> +       /* bail out if in atomic context */
> +       if (atomic_context) {
> +               pr_err("%s: atomic context with clock_op_might_sleep = %d\n",
> +                      fn, psd->clock_op_might_sleep);
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +               might_sleep();
> +               return -EPERM;
> +       }
> +
> +       /* we must switch to the mutex */
> +       spin_unlock_irqrestore(&psd->lock, *flags);
> +       mutex_lock(&psd->clock_mutex);
> +
> +       /*
> +        * There was a possibility for psd->clock_op_might_sleep
> +        * to become 0 above. Keep the mutex only if not the case.
> +        */
> +       if (likely(psd->clock_op_might_sleep))
> +               return 0;
> +
> +       mutex_unlock(&psd->clock_mutex);
> +       goto try_again;
> +}
> +
> +/**
> + * pm_clk_op_unlock - counterpart to pm_clk_op_lock().
> + * @psd: the same pm_subsys_data instance previously passed to
> + *      pm_clk_op_lock().
> + * @flags: irq flags provided by pm_clk_op_lock().
> + */
> +static void pm_clk_op_unlock(struct pm_subsys_data *psd, unsigned long *flags)
> +       /* sparse annotations don't work here as entry state isn't static */
> +{
> +       if (psd->clock_op_might_sleep) {
> +               mutex_unlock(&psd->clock_mutex);
> +       } else {
> +               /* the __acquire is there to work around sparse limitations */
> +               __acquire(&psd->lock);
> +               spin_unlock_irqrestore(&psd->lock, *flags);
> +       }
> +}
> +
>  /**
>   * pm_clk_enable - Enable a clock, reporting any errors
>   * @dev: The device for the given clock
> @@ -43,14 +148,21 @@ static inline void __pm_clk_enable(struct device *dev, struct pm_clock_entry *ce
>  {
>         int ret;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> +       switch (ce->status) {
> +       case PCE_STATUS_ACQUIRED:
> +               ret = clk_prepare_enable(ce->clk);
> +               break;
> +       case PCE_STATUS_PREPARED:
>                 ret = clk_enable(ce->clk);
> -               if (!ret)
> -                       ce->status = PCE_STATUS_ENABLED;
> -               else
> -                       dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> -                               __func__, ce->clk, ret);
> +               break;
> +       default:
> +               return;
>         }
> +       if (!ret)
> +               ce->status = PCE_STATUS_ENABLED;
> +       else
> +               dev_err(dev, "%s: failed to enable clk %p, error %d\n",
> +                       __func__, ce->clk, ret);
>  }
>
>  /**
> @@ -64,17 +176,20 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
>                 ce->clk = clk_get(dev, ce->con_id);
>         if (IS_ERR(ce->clk)) {
>                 ce->status = PCE_STATUS_ERROR;
> +               return;
> +       } else if (clk_is_enabled_when_prepared(ce->clk)) {
> +               /* we defer preparing the clock in that case */
> +               ce->status = PCE_STATUS_ACQUIRED;
> +               ce->enabled_when_prepared = true;
> +       } else if (clk_prepare(ce->clk)) {
> +               ce->status = PCE_STATUS_ERROR;
> +               dev_err(dev, "clk_prepare() failed\n");
> +               return;
>         } else {
> -               if (clk_prepare(ce->clk)) {
> -                       ce->status = PCE_STATUS_ERROR;
> -                       dev_err(dev, "clk_prepare() failed\n");
> -               } else {
> -                       ce->status = PCE_STATUS_ACQUIRED;
> -                       dev_dbg(dev,
> -                               "Clock %pC con_id %s managed by runtime PM.\n",
> -                               ce->clk, ce->con_id);
> -               }
> +               ce->status = PCE_STATUS_PREPARED;
>         }
> +       dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
> +               ce->clk, ce->con_id);
>  }
>
>  static int __pm_clk_add(struct device *dev, const char *con_id,
> @@ -106,9 +221,11 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
>
>         pm_clk_acquire(dev, ce);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>         list_add_tail(&ce->node, &psd->clock_list);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep++;
> +       pm_clk_list_unlock(psd);
>         return 0;
>  }
>
> @@ -239,14 +356,20 @@ static void __pm_clk_remove(struct pm_clock_entry *ce)
>         if (!ce)
>                 return;
>
> -       if (ce->status < PCE_STATUS_ERROR) {
> -               if (ce->status == PCE_STATUS_ENABLED)
> -                       clk_disable(ce->clk);
> -
> -               if (ce->status >= PCE_STATUS_ACQUIRED) {
> -                       clk_unprepare(ce->clk);
> +       switch (ce->status) {
> +       case PCE_STATUS_ENABLED:
> +               clk_disable(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_PREPARED:
> +               clk_unprepare(ce->clk);
> +               fallthrough;
> +       case PCE_STATUS_ACQUIRED:
> +       case PCE_STATUS_ERROR:
> +               if (!IS_ERR(ce->clk))
>                         clk_put(ce->clk);
> -               }
> +               break;
> +       default:
> +               break;
>         }
>
>         kfree(ce->con_id);
> @@ -269,7 +392,7 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>         if (!psd)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (!con_id && !ce->con_id)
> @@ -280,12 +403,14 @@ void pm_clk_remove(struct device *dev, const char *con_id)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -307,19 +432,21 @@ void pm_clk_remove_clk(struct device *dev, struct clk *clk)
>         if (!psd || !clk)
>                 return;
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry(ce, &psd->clock_list, node) {
>                 if (clk == ce->clk)
>                         goto remove;
>         }
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>         return;
>
>   remove:
>         list_del(&ce->node);
> -       spin_unlock_irq(&psd->lock);
> +       if (ce->enabled_when_prepared)
> +               psd->clock_op_might_sleep--;
> +       pm_clk_list_unlock(psd);
>
>         __pm_clk_remove(ce);
>  }
> @@ -330,13 +457,16 @@ EXPORT_SYMBOL_GPL(pm_clk_remove_clk);
>   * @dev: Device to initialize the list of PM clocks for.
>   *
>   * Initialize the lock and clock_list members of the device's pm_subsys_data
> - * object.
> + * object, set the count of clocks that might sleep to 0.
>   */
>  void pm_clk_init(struct device *dev)
>  {
>         struct pm_subsys_data *psd = dev_to_psd(dev);
> -       if (psd)
> +       if (psd) {
>                 INIT_LIST_HEAD(&psd->clock_list);
> +               mutex_init(&psd->clock_mutex);
> +               psd->clock_op_might_sleep = 0;
> +       }
>  }
>  EXPORT_SYMBOL_GPL(pm_clk_init);
>
> @@ -372,12 +502,13 @@ void pm_clk_destroy(struct device *dev)
>
>         INIT_LIST_HEAD(&list);
>
> -       spin_lock_irq(&psd->lock);
> +       pm_clk_list_lock(psd);
>
>         list_for_each_entry_safe_reverse(ce, c, &psd->clock_list, node)
>                 list_move(&ce->node, &list);
> +       psd->clock_op_might_sleep = 0;
>
> -       spin_unlock_irq(&psd->lock);
> +       pm_clk_list_unlock(psd);
>
>         dev_pm_put_subsys_data(dev);
>
> @@ -397,23 +528,30 @@ int pm_clk_suspend(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry_reverse(ce, &psd->clock_list, node) {
> -               if (ce->status < PCE_STATUS_ERROR) {
> -                       if (ce->status == PCE_STATUS_ENABLED)
> +               if (ce->status == PCE_STATUS_ENABLED) {
> +                       if (ce->enabled_when_prepared) {
> +                               clk_disable_unprepare(ce->clk);
> +                               ce->status = PCE_STATUS_ACQUIRED;
> +                       } else {
>                                 clk_disable(ce->clk);
> -                       ce->status = PCE_STATUS_ACQUIRED;
> +                               ce->status = PCE_STATUS_PREPARED;
> +                       }
>                 }
>         }
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> @@ -428,18 +566,21 @@ int pm_clk_resume(struct device *dev)
>         struct pm_subsys_data *psd = dev_to_psd(dev);
>         struct pm_clock_entry *ce;
>         unsigned long flags;
> +       int ret;
>
>         dev_dbg(dev, "%s()\n", __func__);
>
>         if (!psd)
>                 return 0;
>
> -       spin_lock_irqsave(&psd->lock, flags);
> +       ret = pm_clk_op_lock(psd, &flags, __func__);
> +       if (ret)
> +               return ret;
>
>         list_for_each_entry(ce, &psd->clock_list, node)
>                 __pm_clk_enable(dev, ce);
>
> -       spin_unlock_irqrestore(&psd->lock, flags);
> +       pm_clk_op_unlock(psd, &flags);
>
>         return 0;
>  }
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index 8c1d04db99..3d751ae5bc 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -1164,6 +1164,27 @@ int clk_enable(struct clk *clk)
>  }
>  EXPORT_SYMBOL_GPL(clk_enable);
>
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return clk && !(clk->core->ops->enable && clk->core->ops->disable);
> +}
> +EXPORT_SYMBOL_GPL(clk_is_enabled_when_prepared);
> +
>  static int clk_core_prepare_enable(struct clk_core *core)
>  {
>         int ret;
> diff --git a/include/linux/clk.h b/include/linux/clk.h
> index 31ff1bf1b7..a4a86aa8b1 100644
> --- a/include/linux/clk.h
> +++ b/include/linux/clk.h
> @@ -238,6 +238,7 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>
>  #endif
>
> +#ifdef CONFIG_HAVE_CLK_PREPARE
>  /**
>   * clk_prepare - prepare a clock source
>   * @clk: clock source
> @@ -246,10 +247,26 @@ static inline bool clk_is_match(const struct clk *p, const struct clk *q)
>   *
>   * Must not be called from within atomic context.
>   */
> -#ifdef CONFIG_HAVE_CLK_PREPARE
>  int clk_prepare(struct clk *clk);
>  int __must_check clk_bulk_prepare(int num_clks,
>                                   const struct clk_bulk_data *clks);
> +
> +/**
> + * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it.
> + * @clk: clock source
> + *
> + * Returns true if clk_prepare() implicitly enables the clock, effectively
> + * making clk_enable()/clk_disable() no-ops, false otherwise.
> + *
> + * This is of interest mainly to the power management code where actually
> + * disabling the clock also requires unpreparing it to have any material
> + * effect.
> + *
> + * Regardless of the value returned here, the caller must always invoke
> + * clk_enable() or clk_prepare_enable()  and counterparts for usage counts
> + * to be right.
> + */
> +bool clk_is_enabled_when_prepared(struct clk *clk);
>  #else
>  static inline int clk_prepare(struct clk *clk)
>  {
> @@ -263,6 +280,11 @@ clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks)
>         might_sleep();
>         return 0;
>  }
> +
> +static inline bool clk_is_enabled_when_prepared(struct clk *clk)
> +{
> +       return false;
> +}
>  #endif
>
>  /**
> diff --git a/include/linux/pm.h b/include/linux/pm.h
> index 47aca6bac1..482313a8cc 100644
> --- a/include/linux/pm.h
> +++ b/include/linux/pm.h
> @@ -537,6 +537,8 @@ struct pm_subsys_data {
>         spinlock_t lock;
>         unsigned int refcount;
>  #ifdef CONFIG_PM_CLK
> +       unsigned int clock_op_might_sleep;
> +       struct mutex clock_mutex;
>         struct list_head clock_list;
>  #endif
>  #ifdef CONFIG_PM_GENERIC_DOMAINS

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v4] PM / clk: make PM clock layer compatible with clocks that must sleep
       [not found]                               ` <161301451636.1254594.7473642352348913785@swboyd.mtv.corp.google.com>
@ 2021-02-11  7:55                                 ` Geert Uytterhoeven
  0 siblings, 0 replies; 26+ messages in thread
From: Geert Uytterhoeven @ 2021-02-11  7:55 UTC (permalink / raw)
  To: Stephen Boyd
  Cc: Nicolas Pitre, Rafael J. Wysocki, Naresh Kamboju,
	Rafael J. Wysocki, Greg Kroah-Hartman, Michael Turquette,
	Russell King, Linux PM, linux-clk, Linux Kernel Mailing List,
	Mark Brown, Arnd Bergmann

Hi Stephen,

On Thu, Feb 11, 2021 at 4:35 AM Stephen Boyd <sboyd@kernel.org> wrote:
> Quoting Nicolas Pitre (2021-01-25 11:29:18)
> > The clock API splits its interface into sleepable and atomic contexts:
> >
> > - clk_prepare/clk_unprepare for stuff that might sleep
> >
> > - clk_enable/clk_disable for anything that may be done in atomic context
> >
> > The code handling runtime PM for clocks only calls clk_disable() on
> > suspend requests, and clk_enable on resume requests. This means that
> > runtime PM with clock providers that only have the prepare/unprepare
> > methods implemented is basically useless.
> >
> > Many clock implementations can't accommodate atomic contexts. This is
> > often the case when communication with the clock happens through another
> > subsystem like I2C or SCMI.
> >
> > Let's make the clock PM code useful with such clocks by safely invoking
> > clk_prepare/clk_unprepare upon resume/suspend requests. Of course, when
> > such clocks are registered with the PM layer then pm_runtime_irq_safe()
> > can't be used, and neither pm_runtime_suspend() nor pm_runtime_resume()
> > may be invoked in atomic context.
> >
> > For clocks that do implement the enable and disable methods then
> > everything just works as before.
> >
> > A note on sparse:
> > According to https://lwn.net/Articles/109066/ there are things
> > that sparse can't cope with. In particular, pm_clk_op_lock() and
> > pm_clk_op_unlock() may or may not lock/unlock psd->lock depending on
> > some runtime condition. To work around that we tell it the lock is
> > always untaken for the purpose of static analysis.
> >
> > Thanks to Naresh Kamboju for reporting issues with the initial patch.
> >
> > Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
> > Tested-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> >
> > ---
>
> Thanks for doing this. I think it's the main reason why nobody uses the
> PM clock code so far.

"git grep pm_clk_add" tells you otherwise?

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2021-02-11  7:59 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-05  3:17 [PATCH] PM / clk: make PM clock layer compatible with clocks that must sleep Nicolas Pitre
2021-01-17 23:49 ` Nicolas Pitre
2021-01-19 20:15   ` Rafael J. Wysocki
2021-01-19 18:45 ` Kevin Hilman
2021-01-19 20:38   ` Geert Uytterhoeven
2021-01-19 21:22     ` Nicolas Pitre
2021-01-21  9:11 ` Naresh Kamboju
2021-01-21 10:58   ` Geert Uytterhoeven
2021-01-21 12:09     ` Naresh Kamboju
2021-01-21 14:44       ` Rafael J. Wysocki
2021-01-21 16:18         ` Nicolas Pitre
2021-01-21 17:23         ` [PATCH v2] " Nicolas Pitre
2021-01-21 19:01           ` Rafael J. Wysocki
2021-01-22 15:09             ` Rafael J. Wysocki
2021-01-22 15:48               ` Naresh Kamboju
2021-01-22 15:56                 ` Rafael J. Wysocki
2021-01-22 15:59                   ` Nicolas Pitre
2021-01-22 16:00                     ` Nicolas Pitre
2021-01-22 16:02                     ` Rafael J. Wysocki
2021-01-23 23:07                       ` [PATCH v3] " Nicolas Pitre
2021-01-25 18:15                         ` Rafael J. Wysocki
2021-01-25 18:24                           ` Nicolas Pitre
2021-01-25 19:29                             ` [PATCH v4] " Nicolas Pitre
2021-01-27 18:31                               ` Rafael J. Wysocki
     [not found]                               ` <161301451636.1254594.7473642352348913785@swboyd.mtv.corp.google.com>
2021-02-11  7:55                                 ` Geert Uytterhoeven
2021-01-22 15:52               ` [PATCH v2] " Nicolas Pitre

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).