From: Ionela Voinescu <ionela.voinescu@arm.com>
To: Jeremy Linton <jeremy.linton@arm.com>
Cc: rjw@rjwysocki.net, viresh.kumar@linaro.org, lenb@kernel.org,
	sudeep.holla@arm.com, morten.rasmussen@arm.com,
	linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/8] cppc_cpufreq: replace per-cpu structures with lists
Date: Thu, 5 Nov 2020 17:00:43 +0000
Message-ID: <20201105170043.GA28398@arm.com>
In-Reply-To: <e568847d-b15c-970c-6ad5-b431c81c811c@arm.com>

Hi Jeremy,

On Thursday 05 Nov 2020 at 09:50:30 (-0600), Jeremy Linton wrote:
> Hi,
> 
> On 11/5/20 6:55 AM, Ionela Voinescu wrote:
> > The cppc_cpudata per-cpu storage was inefficient (1), in addition to
> > causing functional issues (2) when CPUs are hotplugged out, due to
> > per-cpu data being improperly initialised.
> > 
> > (1) The amount of information needed for CPPC performance control in its
> >      cpufreq driver depends on the domain (PSD) coordination type:
> > 
> >      ANY:    One set of CPPC control and capability data (e.g. desired
> >              performance, highest/lowest performance, etc.) applies to
> >              all CPUs in the domain.
> > 
> >      ALL:    Same as ANY. Note that this type is not currently
> >              supported. When it is supported, information about which
> >              CPUs belong to a domain will be needed in order for
> >              frequency change requests to be sent to each of them.
> > 
> >      HW:     It's necessary to store CPPC control and capability
> >              information for all the CPUs. HW will then coordinate the
> >              performance state based on their limitations and requests.
> > 
> >      NONE:   Same as HW, but no HW coordination is expected.
> > 
> >      Despite this, the previous initialisation code would indiscriminately
> >      allocate memory for all CPUs (all_cpu_data) and unnecessarily
> >      duplicate performance capabilities and the domain sharing mask and type
> >      for each possible CPU.
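
For reference, the coordination type above comes from the _PSD object's
coordination field. The mapping to the cpufreq shared types is roughly
the following (a sketch only - psd_to_shared_type is a made-up helper,
though the DOMAIN_COORD_TYPE_* and CPUFREQ_SHARED_TYPE_* constants are
the real ones):

#include <linux/cpufreq.h>
#include <acpi/processor.h>

/* Sketch: how a _PSD coordination type ends up as one of the
 * shared types described above. */
static unsigned int psd_to_shared_type(unsigned int coord_type)
{
        switch (coord_type) {
        case DOMAIN_COORD_TYPE_SW_ANY:          /* 0xfd */
                return CPUFREQ_SHARED_TYPE_ANY;
        case DOMAIN_COORD_TYPE_SW_ALL:          /* 0xfc */
                return CPUFREQ_SHARED_TYPE_ALL;
        case DOMAIN_COORD_TYPE_HW_ALL:          /* 0xfe */
                return CPUFREQ_SHARED_TYPE_HW;
        default:
                return CPUFREQ_SHARED_TYPE_NONE;
        }
}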
> 
> I should have mentioned this on the last set.
> 
> If the common case on arm/acpi machines is a single core per _PSD (which
> I believe it is), then you are actually increasing the overhead by doing
> this.
> 

Thanks for taking another look and pointing this out.

Yes, for that case this would be quite inefficient, as I'd be uselessly
holding both CPU and domain information. I could drop the domain
information without actually losing anything (the shared type and shared
CPU map serve no purpose for a single-CPU domain).

Also, I don't actually need a list of CPUs in the domain; an array will
work just as well, since I know the number of CPUs when I allocate the
domain. That will let me remove the list node from cppc_cpudata and save
a few pointers.

Also, I now remember I wanted to get rid of cpu and cur_policy from
cppc_cpudata as well, as they serve no purpose. A rough sketch of the
resulting layout is below - let me know if you see a reason against any
of this.
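
To make that concrete, the layout I have in mind is roughly the
following - a standalone sketch only, not driver code; cppc_freq_domain
and the *_sketch type are made-up stand-ins for the real CPPC
structures:

#include <stdlib.h>

/* Stand-in for the per-CPU CPPC data (perf_caps, perf_ctrls,
 * perf_fb_ctrs in the real cppc_cpudata); note: no list node,
 * no cpu, no cur_policy. */
struct cppc_cpudata_sketch {
        unsigned int highest_perf;
        unsigned int lowest_perf;
        unsigned int desired_perf;
};

/* One allocation per PSD domain: the shared type and CPU count
 * live here once instead of being duplicated for each CPU, and
 * the CPU entries sit in a flexible array instead of a list. */
struct cppc_freq_domain {
        unsigned int shared_type;       /* ANY/ALL/HW/NONE */
        unsigned int nr_cpus;
        struct cppc_cpudata_sketch cpus[];
};

static struct cppc_freq_domain *cppc_alloc_domain(unsigned int nr_cpus)
{
        /* one allocation covers the domain and all its CPU entries */
        return calloc(1, sizeof(struct cppc_freq_domain) +
                         nr_cpus * sizeof(struct cppc_cpudata_sketch));
}

For a single-CPU domain that's a single small allocation with one entry
and no shared CPU map at all.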

All of this should at least bring memory usage on par for the HW and
NONE types, while improving it for the ANY and ALL types.
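
In terms of copies of the perf data per domain, the intent is the
following (another small sketch, with made-up enum values standing in
for the shared types above):

/* How many copies of the CPPC perf data a domain needs under
 * each coordination type, per the reasoning above. */
enum psd_type { PSD_ANY, PSD_ALL, PSD_HW, PSD_NONE };

static unsigned int nr_perf_copies(enum psd_type type,
                                   unsigned int nr_cpus)
{
        switch (type) {
        case PSD_ANY:
        case PSD_ALL:
                return 1;               /* one set for the whole domain */
        case PSD_HW:
        case PSD_NONE:
        default:
                return nr_cpus;         /* one set per CPU, as today */
        }
}

Thanks again for bringing this up.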

Regards,
Ionela.


Thread overview: 27+ messages
2020-11-05 12:55 [PATCH 0/8] cppc_cpufreq: fix, clarify and improve support Ionela Voinescu
2020-11-05 12:55 ` [PATCH 1/8] cppc_cpufreq: fix misspelling, code style and readability issues Ionela Voinescu
2020-11-09  6:58   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 2/8] cppc_cpufreq: clean up cpu, cpu_num and cpunum variable use Ionela Voinescu
2020-11-09  6:59   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 3/8] cppc_cpufreq: simplify use of performance capabilities Ionela Voinescu
2020-11-09  7:01   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 4/8] cppc_cpufreq: replace per-cpu structures with lists Ionela Voinescu
2020-11-05 15:50   ` Jeremy Linton
2020-11-05 17:00     ` Ionela Voinescu [this message]
2020-11-05 12:55 ` [PATCH 5/8] cppc_cpufreq: use policy->cpu as driver of frequency setting Ionela Voinescu
2020-11-09  7:05   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 6/8] cppc_cpufreq: clarify support for coordination types Ionela Voinescu
2020-11-09  7:07   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 7/8] cppc_cpufreq: expose information on frequency domains Ionela Voinescu
2020-11-09  7:09   ` Viresh Kumar
2020-11-05 12:55 ` [PATCH 8/8] acpi: fix NONE coordination for domain mapping failure Ionela Voinescu
2020-11-05 13:05   ` Rafael J. Wysocki
2020-11-05 14:02     ` Ionela Voinescu
2020-11-05 14:47       ` Rafael J. Wysocki
2020-11-09  7:10   ` Viresh Kumar
2020-11-17 14:59 ` [PATCH 0/8] cppc_cpufreq: fix, clarify and improve support Rafael J. Wysocki
2020-11-17 15:32   ` Ionela Voinescu
2020-11-17 16:30     ` Rafael J. Wysocki
2020-11-17 19:04       ` Ionela Voinescu
2020-11-17 18:49 ` [PATCH] cppc_cpufreq: optimise memory allocation for HW and NONE coordination Ionela Voinescu
2020-11-23 17:32   ` Rafael J. Wysocki
