From: Viresh Kumar <viresh.kumar@linaro.org>
To: Sumit Gupta <sumitg@nvidia.com>
Cc: rafael@kernel.org, linux-pm@vger.kernel.org,
	linux-tegra@vger.kernel.org, linux-kernel@vger.kernel.org,
	treding@nvidia.com, jonathanh@nvidia.com, bbasu@nvidia.com,
	amiettinen@nvidia.com
Subject: Re: [Patch v3 0/2] Improvements to the Tegra CPUFREQ driver
Date: Tue, 10 Oct 2023 11:07:10 +0530
Message-ID: <20231010053710.hrq3ifktt7j4n4ln@vireshk-i7>
In-Reply-To: <72e9f769-9cbb-274e-e99d-10c71f84bbe0@nvidia.com>

On 09-10-23, 17:06, Sumit Gupta wrote:
> 
> 
> On 04/10/23 19:35, Sumit Gupta wrote:
> > This patch set adds the below improvements to the Tegra194 CPUFREQ driver.
> > They are applicable to all the Tegra SoCs supported by the driver.
> > 
> > 1) Patch 1: Avoid making an SMP call on every frequency request to reduce
> >     the time taken by the frequency set and get calls.
> > 
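For reference, the caching approach looks roughly like this. This is an
illustrative sketch only: the struct, field names, and affinity levels are
assumptions, not the driver's actual identifiers.

	/* Illustrative per-CPU cache, filled in once at probe time. */
	struct tegra_cpu_data {
		u32 cpuid;	/* core index within its cluster */
		u32 clusterid;	/* cluster this CPU belongs to */
	};

	static struct tegra_cpu_data *cpu_data;

	/*
	 * Runs once per CPU at init, on that CPU, e.g. via
	 * smp_call_function_single(). Afterwards the frequency set/get
	 * paths just index cpu_data[] instead of making an SMP call to
	 * read MPIDR on every request.
	 */
	static void tegra_cache_cpu_data(void *info)
	{
		struct tegra_cpu_data *data = info;
		u64 mpidr = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;

		/* Which affinity level holds which field is SoC specific. */
		data->cpuid = MPIDR_AFFINITY_LEVEL(mpidr, 1);
		data->clusterid = MPIDR_AFFINITY_LEVEL(mpidr, 2);
	}
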
> > 2) Patch 2: Use a reference clock count based loop instead of udelay()
> >     to improve the accuracy of the re-generated CPU frequency.
> > 
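The measured rate is (elapsed core-clock cycles) * refclk rate / (elapsed
refclk cycles), so its accuracy depends on how precisely the sampling
window is known. A minimal sketch of the refclk-delta idea follows;
read_counters() and the constant are assumptions for illustration (the
driver actually reads both counters from a single feedback register):

	#define REF_CLK_MHZ	408	/* assumed reference clock rate */

	/* Hypothetical helper: snapshot refclk and core-clock counters. */
	static void read_counters(u32 *refcnt, u32 *corecnt);

	static unsigned int tegra_measure_khz(u32 sample_us)
	{
		u32 ref0, core0, ref, core, dref, dcore;

		read_counters(&ref0, &core0);

		/*
		 * Instead of udelay(sample_us), spin until the reference
		 * counter itself says the window has elapsed. The window
		 * is then exact in refclk cycles, so the computed rate no
		 * longer inherits udelay()'s timing error.
		 */
		do {
			read_counters(&ref, &core);
			dref = ref - ref0;
			dcore = core - core0;
		} while (dref < sample_us * REF_CLK_MHZ);

		/* kHz = core cycles * refclk rate in kHz / refclk cycles */
		return (u64)dcore * REF_CLK_MHZ * 1000 / dref;
	}
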
> > The patches are unrelated but have a minor conflict, so they need to be
> > applied in patch number order. If 'Patch 2' is to be applied first, then
> > I will rebase it and send it separately.
> > 
> > ---
> > v2[2] -> v3:
> > - Patch 1: used sizeof(*data->cpu_data) in devm_kcalloc().
> > 
> > v1[1] -> v2:
> > - Patch 1: added new patch.
> > - Patch 2: changed subject and patch order.
> > 
> > Sumit Gupta (2):
> >    cpufreq: tegra194: save CPU data to avoid repeated SMP calls
> >    cpufreq: tegra194: use refclk delta based loop instead of udelay
> > 
> >   drivers/cpufreq/tegra194-cpufreq.c | 151 ++++++++++++++++++++---------
> >   1 file changed, 106 insertions(+), 45 deletions(-)
> > 
> > [2] https://lore.kernel.org/lkml/20230901164113.29139-1-sumitg@nvidia.com/
> > [1] https://lore.kernel.org/lkml/20230901152046.25662-1-sumitg@nvidia.com/
> > 
> 
> Hi Viresh,
> 
> If there are no further comments, can these patches still be applied
> for 6.7?

Applied. Thanks.

FWIW, you should have rebased the other commit (the one that removes the
CPU online mask) over this one. I had to fix that commit up manually.

-- 
viresh


Thread overview: 6+ messages
2023-10-04 14:05 [Patch v3 0/2] Improvements to the Tegra CPUFREQ driver Sumit Gupta
2023-10-04 14:05 ` [Patch v3 1/2] cpufreq: tegra194: save CPU data to avoid repeated SMP calls Sumit Gupta
2023-10-04 14:05 ` [Patch v3 2/2] cpufreq: tegra194: use refclk delta based loop instead of udelay Sumit Gupta
2023-10-09 11:36 ` [Patch v3 0/2] Improvements to the Tegra CPUFREQ driver Sumit Gupta
2023-10-10  5:37   ` Viresh Kumar [this message]
2023-10-10  5:43     ` Sumit Gupta
