From: Peter De Schrijver <pdeschrijver@nvidia.com>
To: Dmitry Osipenko <digetx@gmail.com>
Cc: Thierry Reding <thierry.reding@gmail.com>,
Jonathan Hunter <jonathanh@nvidia.com>,
Prashant Gaikwad <pgaikwad@nvidia.com>,
Michael Turquette <mturquette@baylibre.com>,
Stephen Boyd <sboyd@kernel.org>, <linux-clk@vger.kernel.org>,
<linux-tegra@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2 2/2] clk: tegra20: Enable lock-status polling for PLLs
Date: Fri, 31 Aug 2018 12:29:48 +0300 [thread overview]
Message-ID: <20180831092948.GP1636@tbergstrom-lnx.Nvidia.com> (raw)
In-Reply-To: <20180830184210.5369-2-digetx@gmail.com>
On Thu, Aug 30, 2018 at 09:42:10PM +0300, Dmitry Osipenko wrote:
> Currently all PLLs on Tegra20 use a hardcoded delay despite having
> a lock-status bit. The lock-status polling was disabled ~7 years ago
> because PLLE was failing to lock and there was a suspicion that other
> PLLs might be faulty too. The other PLLs are okay, hence enable the
> lock-status polling for them. This reduces the delay of any operation
> that requires a PLL to lock.
>
> Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
> ---
>
> Changelog:
>
> v2: Don't enable polling for PLLE as it is known to not be able to lock.
>
This isn't correct. The lock bit of PLLE can declare lock too early, but the
PLL itself does lock.
> drivers/clk/tegra/clk-tegra20.c | 20 +++++++++++++-------
> 1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
> index cc857d4d4a86..cfde3745a0db 100644
> --- a/drivers/clk/tegra/clk-tegra20.c
> +++ b/drivers/clk/tegra/clk-tegra20.c
> @@ -298,7 +298,8 @@ static struct tegra_clk_pll_params pll_c_params = {
> .lock_enable_bit_idx = PLL_MISC_LOCK_ENABLE,
> .lock_delay = 300,
> .freq_table = pll_c_freq_table,
> - .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static struct tegra_clk_pll_params pll_m_params = {
> @@ -314,7 +315,8 @@ static struct tegra_clk_pll_params pll_m_params = {
> .lock_enable_bit_idx = PLL_MISC_LOCK_ENABLE,
> .lock_delay = 300,
> .freq_table = pll_m_freq_table,
> - .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static struct tegra_clk_pll_params pll_p_params = {
> @@ -331,7 +333,7 @@ static struct tegra_clk_pll_params pll_p_params = {
> .lock_delay = 300,
> .freq_table = pll_p_freq_table,
> .flags = TEGRA_PLL_FIXED | TEGRA_PLL_HAS_CPCON |
> - TEGRA_PLL_HAS_LOCK_ENABLE,
> + TEGRA_PLL_HAS_LOCK_ENABLE | TEGRA_PLL_USE_LOCK,
> .fixed_rate = 216000000,
> };
>
> @@ -348,7 +350,8 @@ static struct tegra_clk_pll_params pll_a_params = {
> .lock_enable_bit_idx = PLL_MISC_LOCK_ENABLE,
> .lock_delay = 300,
> .freq_table = pll_a_freq_table,
> - .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static struct tegra_clk_pll_params pll_d_params = {
> @@ -364,7 +367,8 @@ static struct tegra_clk_pll_params pll_d_params = {
> .lock_enable_bit_idx = PLLDU_MISC_LOCK_ENABLE,
> .lock_delay = 1000,
> .freq_table = pll_d_freq_table,
> - .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static const struct pdiv_map pllu_p[] = {
> @@ -387,7 +391,8 @@ static struct tegra_clk_pll_params pll_u_params = {
> .lock_delay = 1000,
> .pdiv_tohw = pllu_p,
> .freq_table = pll_u_freq_table,
> - .flags = TEGRA_PLLU | TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLLU | TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static struct tegra_clk_pll_params pll_x_params = {
> @@ -403,7 +408,8 @@ static struct tegra_clk_pll_params pll_x_params = {
> .lock_enable_bit_idx = PLL_MISC_LOCK_ENABLE,
> .lock_delay = 300,
> .freq_table = pll_x_freq_table,
> - .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE,
> + .flags = TEGRA_PLL_HAS_CPCON | TEGRA_PLL_HAS_LOCK_ENABLE |
> + TEGRA_PLL_USE_LOCK,
> };
>
> static struct tegra_clk_pll_params pll_e_params = {
> --
> 2.18.0
>
Thread overview: 11+ messages
2018-08-30 18:42 [PATCH v2 1/2] clk: tegra: Don't enable already enabled PLLs Dmitry Osipenko
2018-08-30 18:42 ` [PATCH v2 2/2] clk: tegra20: Enable lock-status polling for PLLs Dmitry Osipenko
2018-08-31 9:29 ` Peter De Schrijver [this message]
2018-08-31 9:45 ` Dmitry Osipenko
2018-09-03 8:01 ` Peter De Schrijver
2018-09-04 9:06 ` Dmitry Osipenko
2018-10-17 11:52 ` Dmitry Osipenko
2018-09-06 12:13 ` Marcel Ziswiler
2018-10-17 10:59 ` Marcel Ziswiler
2018-10-17 11:41 ` Dmitry Osipenko
2018-12-10 0:58 ` Dmitry Osipenko