linux-clk.vger.kernel.org archive mirror
From: Stephen Boyd <sboyd@kernel.org>
To: Jerome Brunet <jbrunet@baylibre.com>
Cc: Michael Turquette <mturquette@baylibre.com>,
	linux-kernel@vger.kernel.org, linux-clk@vger.kernel.org,
	Douglas Anderson <dianders@chromium.org>,
	Heiko Stuebner <heiko@sntech.de>,
	Jerome Brunet <jbrunet@baylibre.com>
Subject: Re: [PATCH] clk: Don't cache errors from clk_ops::get_phase()
Date: Sat, 04 Jan 2020 23:50:49 -0800	[thread overview]
Message-ID: <20200105075050.1B93E20866@mail.kernel.org> (raw)
In-Reply-To: <1jd0ffr1jh.fsf@starbuckisacylon.baylibre.com>

(Sorry I'm way behind on emails)

Quoting Jerome Brunet (2019-10-02 01:31:46)
> 
> On Tue 01 Oct 2019 at 19:44, Stephen Boyd <sboyd@kernel.org> wrote:
> 
> > We don't check for errors from clk_ops::get_phase() before storing away
> > the result into the clk_core::phase member. This can lead to some fairly
> > confusing debugfs information if these ops do return an error. Let's
> > skip the store when this op fails to fix this. While we're here, move
> > the locking outside of clk_core_get_phase() to simplify callers from
> > the debugfs side.
> 
> Functions already called under the lock seem to be marked with "_nolock".
> Maybe one should be added for get_phase()?
> 
> Also the debugfs side calls clk_core_get_rate() and
> clk_core_get_accuracy(). Both are taking the prepare_lock.

Yes both are taking the lock again when we're already holding the lock.
It is wasteful. I'll send another patch with the series to make those
calls in debugfs use the nolock variants. That will open up the question
of why we sometimes recalc rates and other times don't, depending on
whether the nolock or locked variant of the get_rate() API is used.

> 
> So I don't get why clk_get_phase() should do things differently from
> the others and not take the lock?

Got it.

> 
> >
> > diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> > index 1c677d7f7f53..16add5626dfa 100644
> > --- a/drivers/clk/clk.c
> > +++ b/drivers/clk/clk.c
> > @@ -3349,10 +3366,7 @@ static int __clk_core_init(struct clk_core *core)
> >        * Since a phase is by definition relative to its parent, just
> >        * query the current clock phase, or just assume it's in phase.
> >        */
> > -     if (core->ops->get_phase)
> > -             core->phase = core->ops->get_phase(core->hw);
> > -     else
> > -             core->phase = 0;
> > +     clk_core_get_phase(core);
> 
> Should the error be checked here as well ?

What error?



Thread overview: 7+ messages
2019-10-01 17:44 [PATCH] clk: Don't cache errors from clk_ops::get_phase() Stephen Boyd
2019-10-01 21:20 ` Doug Anderson
2020-01-05  7:53   ` Stephen Boyd
2019-10-02  8:31 ` Jerome Brunet
2020-01-05  7:50   ` Stephen Boyd [this message]
2020-01-05  7:55     ` Stephen Boyd
2020-01-07  9:44       ` Jerome Brunet
