From: Ionela Voinescu <ionela.voinescu@arm.com>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Linux ARM <linux-arm-kernel@lists.infradead.org>,
	Rob Herring <robh@kernel.org>,
	Daniel Lezcano <daniel.lezcano@linaro.org>,
	"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Linux PM <linux-pm@vger.kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Sudeep Holla <sudeep.holla@arm.com>,
	Nicola Mazzucato <nicola.mazzucato@arm.com>,
	Viresh Kumar <vireshk@kernel.org>,
	Chris Redpath <chris.redpath@arm.com>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Lukasz Luba <lukasz.luba@arm.com>
Subject: Re: [PATCH v2 2/2] [RFC] CPUFreq: Add support for cpu-perf-dependencies
Date: Tue, 13 Oct 2020 13:39:01 +0100	[thread overview]
Message-ID: <20201013123901.GA4945@arm.com> (raw)
In-Reply-To: <CAJZ5v0hMtPARYezJEZqeUZBsyaSggQvtvvfEvONhz6Z=Y32bhQ@mail.gmail.com>

Hi Rafael,

On Tuesday 13 Oct 2020 at 13:53:37 (+0200), Rafael J. Wysocki wrote:
> On Tue, Oct 13, 2020 at 12:01 AM Ionela Voinescu
> <ionela.voinescu@arm.com> wrote:
> >
> > Hey Lukasz,
> >
> > I think after all this discussion (in our own way of describing things)
> > we agree on how the current cpufreq-based FIE implementation is affected
> > in systems that use hardware coordination.
> >
> > What we don't agree on is the location where that implementation (that
> > uses the new mask and aggregation) should be.
> >
> > On Monday 12 Oct 2020 at 19:19:29 (+0100), Lukasz Luba wrote:
> > [..]
> > > The previous FIE implementation, where arch_set_freq_scale()
> > > was called from the drivers, was better suited for this issue.
> > > The driver could just use an internal dependency cpumask, or
> > > even do the aggregation to figure out the max frequency for the
> > > cluster if needed, before calling arch_set_freq_scale().
> > >
> > > It is not a perfect solution for software FIE, but it is one of
> > > the possible ones when there are no hardware counters.
> > >
> > [..]
> >
> > > The difference between the new FIE and the old FIE (from v5.8) is
> > > that the new one relies purely on schedutil's max frequency value
> > > (which will now be missing), while the old FIE was called by the
> > > driver, so it was an option to fix only the affected cpufreq
> > > driver [1][2].
> > >
> >
> > My final argument is that now you have 2 drivers that would need this
> > support, next you'll have 3 (the new mediatek driver), and in the future
> > there will be more. So why limit and duplicate this functionality in the
> > drivers? Why not make it generic for all drivers to use if the system
> > is using hardware coordination?
> >
> > Additionally, I don't think drivers should even need to know about
> > these dependency/clock domains. They should act at the level of the
> > policy, which in this case will be at the level of each CPU.
> 
> The policies come from the driver, though.
> 
> The driver decides how many CPUs will be there in a policy and how to
> handle them at the initialization time.

Yes, policies are built based on information populated from the drivers
at .init(): what CPUs will belong to a policy, what methods to use for
setting and getting frequency, etc.

So they do pass this information to the cpufreq core to be stored at
the level of the policy, but afterwards drivers (in the majority of
cases) do not need to keep their own record of which CPUs belong to a
frequency domain; they rely on having passed that information to the
core, and the core mechanisms hold it for the clock domains (currently
through policy->cpus and policy->related_cpus).

> 
> The core has no idea whether or not there is HW coordination in the
> system, the driver is expected to know that and take that into
> account.
> 

Given that multiple drivers could use hardware coordination, and
drivers already have a way to pass information about the type of
coordination to the core through policy->shared_type, could there be a
case for supporting this in the core, rather than the drivers?
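
The values are already there in include/linux/cpufreq.h
(CPUFREQ_SHARED_TYPE_NONE/HW/ALL/ANY). As a sketch only, a driver for
a hardware-coordinated system could register per-CPU policies while
still telling the core about the coordination type, in the spirit of
what acpi-cpufreq does for hardware-coordinated _PSD domains:

#include <linux/cpufreq.h>

static int my_hwcoord_init(struct cpufreq_policy *policy)
{
        /*
         * Leave the policy per-CPU (the core seeds policy->cpus with
         * just this CPU), but declare that the hardware coordinates
         * frequencies across a wider dependency domain.
         */
        policy->shared_type = CPUFREQ_SHARED_TYPE_HW;

        return 0;
}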

To my mind this option is better than having a select set of drivers
instruct the core to build the policies per-CPU, while keeping in the
driver the information about which CPUs actually belong to each clock
domain.

Additionally, the cpufreq core will have to be able to present this
mask to other frameworks (scheduler, thermal) when requested, through
a cpufreq interface function. So in the end we would still end up
passing this information from the driver to the core and then on to
the user.
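
Something like the below is what I have in mind for that interface
function. It is entirely hypothetical: neither
cpufreq_get_dependent_cpus() nor policy->dependent_cpus exists today.

#include <linux/cpufreq.h>
#include <linux/cpumask.h>

/* Hypothetical interface: hand the dependency mask to other
 * frameworks (scheduler FIE, thermal). Mask lifetime handling is
 * elided for brevity. */
const struct cpumask *cpufreq_get_dependent_cpus(unsigned int cpu)
{
        struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
        const struct cpumask *mask;

        if (!policy)
                return NULL;

        /*
         * With SW coordination the policy already spans the domain;
         * with HW coordination a new per-policy mask (dependent_cpus,
         * also hypothetical) would carry what the driver passed in.
         */
        if (policy->shared_type == CPUFREQ_SHARED_TYPE_HW)
                mask = policy->dependent_cpus;
        else
                mask = policy->related_cpus;

        cpufreq_cpu_put(policy);
        return mask;
}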

> Accordingly, it looks like there should be an option for drivers to
> arrange things in the most convenient way (from their perspective) and
> that option has gone away now.

IMO, even if this hardware coordination support is entirely managed by
the driver, one requirement is that other subsystems be able to
acquire information about the dependent CPUs. The scheduler FIE should
just be another one of those users, with the decision on how that
information is handled residing in architecture code
(arch_set_freq_scale()). Architecture code might decide to have a
default way of handling these cases, or not to support them at all.
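
To make that last point concrete: the hook already takes a cpumask of
the CPUs in the policy, so the architecture side could do the
aggregation itself. A rough sketch, reusing the hypothetical
cpufreq_get_dependent_cpus() from above (the per-CPU variables are
mine, for illustration; the real scale factor lives in
drivers/base/arch_topology.c):

#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/percpu.h>
#include <linux/sched/topology.h>

static DEFINE_PER_CPU(unsigned long, my_requested_freq);
static DEFINE_PER_CPU(unsigned long, my_freq_scale) = SCHED_CAPACITY_SCALE;

void arch_set_freq_scale(const struct cpumask *cpus,
                         unsigned long cur_freq, unsigned long max_freq)
{
        const struct cpumask *domain;
        unsigned long freq = cur_freq;
        int cpu;

        /* Record the latest request for the CPUs in this policy. */
        for_each_cpu(cpu, cpus)
                per_cpu(my_requested_freq, cpu) = cur_freq;

        /*
         * With HW coordination the granted frequency is at least the
         * maximum requested across the dependency domain, so
         * aggregate over the mask exposed by the (hypothetical) core
         * interface; fall back to the policy mask if there is none.
         */
        domain = cpufreq_get_dependent_cpus(cpumask_first(cpus));
        if (!domain)
                domain = cpus;

        for_each_cpu(cpu, domain)
                freq = max(freq, per_cpu(my_requested_freq, cpu));

        for_each_cpu(cpu, domain)
                per_cpu(my_freq_scale, cpu) =
                        (freq << SCHED_CAPACITY_SHIFT) / max_freq;
}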

Thank you,
Ionela.


