* [PATCH] regulator: core: Clean up on enable failure
@ 2022-08-19 19:43 Andrew Halaney
  2022-08-19 20:03 ` Doug Anderson
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Andrew Halaney @ 2022-08-19 19:43 UTC (permalink / raw)
  To: lgirdwood, broonie; +Cc: linux-kernel, bmasney, dianders, Andrew Halaney

If regulator_enable() fails, enable_count is still incremented.
A consumer, assuming no matching regulator_disable() is necessary on
failure, will then get this error message upon regulator_put(),
since enable_count is non-zero:

    [    1.277418] WARNING: CPU: 3 PID: 1 at drivers/regulator/core.c:2304 _regulator_put.part.0+0x168/0x170

The consumer could try to fix this in their driver by cleaning up on
error from regulator_enable() (i.e. calling regulator_disable()), but
that results in the following, since regulator_enable() failed and
didn't increment use_count:

    [    1.258112] unbalanced disables for vreg_l17c
    [    1.262606] WARNING: CPU: 4 PID: 1 at drivers/regulator/core.c:2899 _regulator_disable+0xd4/0x190

Fix this by decrementing enable_count upon failure to enable.

With this in place, only the reason for the failure to enable is
printed, as expected, and developers can focus on the root cause of
their issue instead of suspecting that their usage of the regulator
consumer API is incorrect. For example, in my case:

    [    1.240426] vreg_l17c: invalid input voltage found

Fixes: 5451781dadf8 ("regulator: core: Only count load for enabled consumers")
Signed-off-by: Andrew Halaney <ahalaney@redhat.com>
---

I'm new to using the regulator framework, but I _believe_ this is a
cosmetic bug that's fixed by this patch. I went down a bit of a rabbit
hole because of the original WARN() message, so I'm trying to prevent
others from doing the same :)

Please let me know what you think. I tested on both the misconfigured
system and a working system, and things worked as expected.

Thanks,
Andrew

 drivers/regulator/core.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index d8373cb04f90..d3e8dc32832d 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -2733,13 +2733,18 @@ static int _regulator_do_enable(struct regulator_dev *rdev)
  */
 static int _regulator_handle_consumer_enable(struct regulator *regulator)
 {
+	int ret;
 	struct regulator_dev *rdev = regulator->rdev;
 
 	lockdep_assert_held_once(&rdev->mutex.base);
 
 	regulator->enable_count++;
-	if (regulator->uA_load && regulator->enable_count == 1)
-		return drms_uA_update(rdev);
+	if (regulator->uA_load && regulator->enable_count == 1) {
+		ret = drms_uA_update(rdev);
+		if (ret)
+			regulator->enable_count--;
+		return ret;
+	}
 
 	return 0;
 }
-- 
2.37.1


* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 19:43 [PATCH] regulator: core: Clean up on enable failure Andrew Halaney
@ 2022-08-19 20:03 ` Doug Anderson
  2022-08-19 20:46 ` Brian Masney
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Doug Anderson @ 2022-08-19 20:03 UTC (permalink / raw)
  To: Andrew Halaney; +Cc: Liam Girdwood, Mark Brown, LKML, bmasney

Hi,

On Fri, Aug 19, 2022 at 12:44 PM Andrew Halaney <ahalaney@redhat.com> wrote:
>
> If regulator_enable() fails, enable_count is still incremented.
>
> [...]
>
>  drivers/regulator/core.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)

This looks right to me. Thanks!

Reviewed-by: Douglas Anderson <dianders@chromium.org>

* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 19:43 [PATCH] regulator: core: Clean up on enable failure Andrew Halaney
  2022-08-19 20:03 ` Doug Anderson
@ 2022-08-19 20:46 ` Brian Masney
  2022-08-19 22:48 ` Mark Brown
  2022-08-22 16:16 ` Mark Brown
  3 siblings, 0 replies; 7+ messages in thread
From: Brian Masney @ 2022-08-19 20:46 UTC (permalink / raw)
  To: Andrew Halaney; +Cc: lgirdwood, broonie, linux-kernel, dianders

On Fri, Aug 19, 2022 at 02:43:36PM -0500, Andrew Halaney wrote:
> If regulator_enable() fails, enable_count is still incremented.
> 
> [...]
> 
> Fixes: 5451781dadf8 ("regulator: core: Only count load for enabled consumers")
> Signed-off-by: Andrew Halaney <ahalaney@redhat.com>

Reviewed-by: Brian Masney <bmasney@redhat.com>


* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 19:43 [PATCH] regulator: core: Clean up on enable failure Andrew Halaney
  2022-08-19 20:03 ` Doug Anderson
  2022-08-19 20:46 ` Brian Masney
@ 2022-08-19 22:48 ` Mark Brown
  2022-08-19 23:03   ` Doug Anderson
  2022-08-22 16:16 ` Mark Brown
  3 siblings, 1 reply; 7+ messages in thread
From: Mark Brown @ 2022-08-19 22:48 UTC (permalink / raw)
  To: Andrew Halaney; +Cc: lgirdwood, linux-kernel, bmasney, dianders

On Fri, Aug 19, 2022 at 02:43:36PM -0500, Andrew Halaney wrote:

> -	if (regulator->uA_load && regulator->enable_count == 1)
> -		return drms_uA_update(rdev);
> +	if (regulator->uA_load && regulator->enable_count == 1) {
> +		ret = drms_uA_update(rdev);

I am vaguely surprised and a bit concerned that drms_uA_update() can
fail...

* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 22:48 ` Mark Brown
@ 2022-08-19 23:03   ` Doug Anderson
  2022-08-22 19:42     ` Andrew Halaney
  0 siblings, 1 reply; 7+ messages in thread
From: Doug Anderson @ 2022-08-19 23:03 UTC (permalink / raw)
  To: Mark Brown; +Cc: Andrew Halaney, Liam Girdwood, LKML, bmasney

Hi,

On Fri, Aug 19, 2022 at 3:48 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Fri, Aug 19, 2022 at 02:43:36PM -0500, Andrew Halaney wrote:
>
> > -     if (regulator->uA_load && regulator->enable_count == 1)
> > -             return drms_uA_update(rdev);
> > +     if (regulator->uA_load && regulator->enable_count == 1) {
> > +             ret = drms_uA_update(rdev);
>
> I am vaguely surprised and a bit concerned that drms_uA_update() can
> fail...

In his original email Andrew implied that his system was misconfigured
when he was seeing this error. I presume this isn't common, which is
why nobody had noticed it before now, but given that drms_uA_update()
does return an error code, it seems like we should handle it properly
in the caller.

-Doug

* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 19:43 [PATCH] regulator: core: Clean up on enable failure Andrew Halaney
                   ` (2 preceding siblings ...)
  2022-08-19 22:48 ` Mark Brown
@ 2022-08-22 16:16 ` Mark Brown
  3 siblings, 0 replies; 7+ messages in thread
From: Mark Brown @ 2022-08-22 16:16 UTC (permalink / raw)
  To: lgirdwood, Andrew Halaney; +Cc: dianders, bmasney, linux-kernel

On Fri, 19 Aug 2022 14:43:36 -0500, Andrew Halaney wrote:
> If regulator_enable() fails, enable_count is still incremented.
> A consumer, assuming no matching regulator_disable() is necessary on
> failure, will then get this error message upon regulator_put()
> since enable_count is non-zero:
> 
>     [    1.277418] WARNING: CPU: 3 PID: 1 at drivers/regulator/core.c:2304 _regulator_put.part.0+0x168/0x170
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git for-next

Thanks!

[1/1] regulator: core: Clean up on enable failure
      commit: c32f1ebfd26bece77141257864ed7b4720da1557

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark

* Re: [PATCH] regulator: core: Clean up on enable failure
  2022-08-19 23:03   ` Doug Anderson
@ 2022-08-22 19:42     ` Andrew Halaney
  0 siblings, 0 replies; 7+ messages in thread
From: Andrew Halaney @ 2022-08-22 19:42 UTC (permalink / raw)
  To: Doug Anderson; +Cc: Mark Brown, Liam Girdwood, LKML, bmasney

On Fri, Aug 19, 2022 at 04:03:24PM -0700, Doug Anderson wrote:
> Hi,
> 
> On Fri, Aug 19, 2022 at 3:48 PM Mark Brown <broonie@kernel.org> wrote:
> >
> > On Fri, Aug 19, 2022 at 02:43:36PM -0500, Andrew Halaney wrote:
> >
> > > -     if (regulator->uA_load && regulator->enable_count == 1)
> > > -             return drms_uA_update(rdev);
> > > +     if (regulator->uA_load && regulator->enable_count == 1) {
> > > +             ret = drms_uA_update(rdev);
> >
> > I am vaguely surprised and a bit concerned that drms_uA_update() can
> > fail...
> 
> In his original email Andrew implied that his system was misconfigured
> when he was seeing this error. I presume this isn't common which is
> why nobody had noticed this before now, but given that the function
> drms_uA_update() does return an error code it seems like we should
> handle it properly in the caller.

Doug was right: I thought my devicetree was bad. But after spending the
morning trying to write a reply about why exactly it was bad, I realized
that it might not be the problem. I agree with him that this patch makes
sense (to me at least) regardless.

Here I am reporting on how a recent change in qcom-rpmh-regulator.c made
this appear (no idea yet whether the change is at fault or whether I
have some other misconfiguration in my devicetree!):

    https://lore.kernel.org/all/20220822193153.zn2oxljmd76awqot@halaneylaptop/

Thanks,
Andrew


