From: Viresh Kumar <viresh.kumar@linaro.org>
To: Hector Yuan <hector.yuan@mediatek.com>
Cc: linux-mediatek@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Rob Herring <robh+dt@kernel.org>,
	devicetree@vger.kernel.org, linux-kernel@vger.kernel.org,
	wsd_upstream@mediatek.com
Subject: Re: [PATCH v13 2/2] cpufreq: mediatek-hw: Add support for CPUFREQ HW
Date: Tue, 17 Aug 2021 09:12:06 +0530
Message-ID: <20210817034206.hmpjdz4bqvwxfn3c@vireshk-i7>
In-Reply-To: <1629118594.3246.13.camel@mtkswgap22>

On 16-08-21, 20:56, Hector Yuan wrote:
> On Tue, 2021-08-03 at 12:43 +0530, Viresh Kumar wrote:
> > On 30-07-21, 00:08, Hector Yuan wrote:
> > > +	for (i = REG_FREQ_LUT_TABLE; i < REG_ARRAY_SIZE; i++)
> > > +		c->reg_bases[i] = base + offsets[i];
> > > +
> > > +	ret = of_perf_domain_get_sharing_cpumask(index, "performance-domains",
> > 
> > Instead of parsing "performance-domains" twice, I would rather
> > pass a CPU number here instead of the index.
> > 
> Sorry, could you give me more details? For now, the index is used to parse
> the related CPUs for each policy. Do you mean passing policy->cpu instead? Thanks.

Yes, pass the cpu number from policy->cpu instead.
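
For illustration only, here is a rough sketch of what the call site could then
look like, assuming the of_perf_domain_get_sharing_cpumask() helper used by
this series takes a CPU number, fills the sharing cpumask and returns the
domain index or a negative errno (the "#performance-domain-cells" cell name
and the local "domain" variable are just illustrative):

	int domain;

	/*
	 * Hypothetical call site: resolve the performance domain directly
	 * from policy->cpu, so "performance-domains" only has to be parsed
	 * once, and fill policy->cpus with the CPUs sharing that domain.
	 */
	domain = of_perf_domain_get_sharing_cpumask(policy->cpu,
						    "performance-domains",
						    "#performance-domain-cells",
						    policy->cpus);
	if (domain < 0)
		return domain;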

> > > +	latency = readl_relaxed(c->reg_bases[REG_FREQ_LATENCY]);
> > > +	if (!latency)
> > > +		latency = CPUFREQ_ETERNAL;
> > > +
> > > +	/* us convert to ns */
> > > +	policy->cpuinfo.transition_latency = latency * 1000;
> > 
> > You want to multiple CPUFREQ_ETERNAL too ?

s/multiple/multiply/

Sorry about this.

> Yes, there may be different power domains with different transition latencies.
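
In case it is useful, one possible rearrangement that keeps CPUFREQ_ETERNAL
out of the multiplication, sketched from the quoted lines above (whether the
register really reports microseconds is an assumption carried over from the
patch's own comment):

	latency = readl_relaxed(c->reg_bases[REG_FREQ_LATENCY]);
	if (latency)
		/* The register value is in us; the framework expects ns */
		policy->cpuinfo.transition_latency = latency * 1000;
	else
		policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
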
> > > +
> > > +	policy->fast_switch_possible = true;
> > > +
> > > +	qos_request = kzalloc(sizeof(*qos_request), GFP_KERNEL);
> > 
> > This is a small structure, why not allocate it on stack instead ?
> > 
> For the qos part, we'd like to take more time to reconsider the SW flow and
> move it to another patch set. Is this okay with you?

So you will drop the entire qos stuff? Fine by me.
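
(As a side note on the allocation comment above, purely as a sketch: the
stack variant only works when the object does not have to outlive the
function, and the struct name below is a placeholder, not the actual qos
type from the patch.)

	struct foo_qos_request { int value; };	/* placeholder type, illustration only */

	/*
	 * Small, strictly function-local object on the stack: no allocation
	 * failure path to handle and no kfree() on the way out.
	 */
	struct foo_qos_request qos_request = { };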

-- 
viresh


Thread overview: 9+ messages
2021-07-29 16:08 [PATCH v13] cpufreq: mediatek-hw: Add support for Mediatek cpufreq HW driver Hector Yuan
2021-07-29 16:08 ` [PATCH v13 1/2] dt-bindings: cpufreq: add bindings for MediaTek cpufreq HW Hector Yuan
2021-08-03  5:05   ` Viresh Kumar
2021-08-03 19:17     ` Rob Herring
2021-08-03 19:22   ` Rob Herring
2021-07-29 16:08 ` [PATCH v13 2/2] cpufreq: mediatek-hw: Add support for CPUFREQ HW Hector Yuan
2021-08-03  7:13   ` Viresh Kumar
2021-08-16 12:56     ` Hector Yuan
2021-08-17  3:42       ` Viresh Kumar [this message]
