From: Vincent Guittot <vincent.guittot@linaro.org>
To: Ionela Voinescu <ionela.voinescu@arm.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Rafael Wysocki <rjw@rjwysocki.net>,
	Ben Segall <bsegall@google.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>, Mel Gorman <mgorman@suse.de>,
	Peter Zijlstra <peterz@infradead.org>,
	"Rafael J. Wysocki" <rafael@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	Sudeep Holla <sudeep.holla@arm.com>,
	Will Deacon <will@kernel.org>,
	"open list:THERMAL" <linux-pm@vger.kernel.org>,
	Qian Cai <quic_qiancai@quicinc.com>,
	ACPI Devel Mailing List <linux-acpi@vger.kernel.org>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	"Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Subject: Re: [PATCH V3 0/4] cpufreq: cppc: Add support for frequency invariance
Date: Mon, 28 Jun 2021 14:17:55 +0200	[thread overview]
Message-ID: <CAKfTPtB3w5Zih_gCFgt9Hp=bq-Z7tQaFDbZkfAd+cg2TKRMsMw@mail.gmail.com>
In-Reply-To: <CAKfTPtAtE1WHA19=BrWyekHgFYVn0+LdTLROJzYRdshp-EYOWA@mail.gmail.com>

On Mon, 28 Jun 2021 at 14:14, Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> On Mon, 28 Jun 2021 at 13:54, Ionela Voinescu <ionela.voinescu@arm.com> wrote:
> >
> > Hi guys,
> >
> > On Monday 21 Jun 2021 at 14:49:33 (+0530), Viresh Kumar wrote:
> > > Hello,
> > >
> > > Changes since V2:
> > >
> > > - We don't need the start_cpu() and stop_cpu() callbacks anymore; we can
> > >   make it work using the policy's ->init() and ->exit() alone (see the
> > >   sketch after this list).
> > >
> > > - Two new cleanup patches 1/4 and 2/4.
> > >
> > > - Improved commit log of 3/4.
> > >
> > > - Dropped WARN_ON(local_freq_scale > 1024), since this can occur on counter
> > >   overflow (seen with Vincent's setup).
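
To illustrate the shape of that last change to the callbacks, here is a
minimal sketch, assuming the arch_topology FIE hooks this series builds on
(cppc_scale_freq_tick and all error handling omitted; not the exact patch
code):

    static struct scale_freq_data cppc_sftd = {
            .source = SCALE_FREQ_SOURCE_CPPC,
            .set_freq_scale = cppc_scale_freq_tick,  /* reads the counters */
    };

    static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy)
    {
            /* ... existing policy setup ... */
            topology_set_scale_freq_source(&cppc_sftd, policy->cpus);
            return 0;
    }

    static int cppc_cpufreq_cpu_exit(struct cpufreq_policy *policy)
    {
            topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_CPPC,
                                             policy->cpus);
            /* ... existing policy teardown ... */
            return 0;
    }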
> > >
> >
> > If you happen to have the data around, I would like to know more about
> > your observations on ThunderX2.
> >
> >
> > I tried ThunderX2 as well, with the following observations:
> >
> > Booting with the userspace governor and all CPUs online, the CPPC
> > frequency scale factor was all over the place (even much larger than
> > 1024).
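
For context, the scale factor printed in these logs is derived from the two
CPPC feedback counters roughly as follows (a sketch of the arithmetic, not
the exact driver code; variable names are illustrative):

    /* deltas of the two feedback counters since the previous sample */
    u64 delta_ref = fb_ctrs_now.reference - fb_ctrs_prev.reference;
    u64 delta_del = fb_ctrs_now.delivered - fb_ctrs_prev.delivered;

    /* delivered performance, normalised so that 1024 == highest perf */
    u64 perf  = div64_u64(reference_perf * delta_del, delta_ref);
    u64 scale = div64_u64(perf << SCHED_CAPACITY_SHIFT, highest_perf);

So a value much larger than 1024 means the computed delivered performance
exceeds the platform's highest performance, which points at bogus counter
deltas rather than at the CPU itself.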
> >
> > My initial assumptions:
> >  - Counters do not behave properly in light of SMT
> >  - Firmware does not do a good job of keeping the reference and core
> >    counters monotonic (e.g. save and restore at core off).
> >
> > So I offlined all CPUs except 0, 32, 64 and 96 - the threads of a
> > single core (part of policy0). With this, all works very well:
> >
> > root@target:/sys/devices/system/cpu/cpufreq/policy0# echo 1056000 > scaling_setspeed
> > root@target:/sys/devices/system/cpu/cpufreq/policy0#
> > [ 1863.095370] CPU96: cppc scale: 697.
> > [ 1863.175370] CPU0: cppc scale: 492.
> > [ 1863.215367] CPU64: cppc scale: 492.
> > [ 1863.235366] CPU96: cppc scale: 492.
> > [ 1863.485368] CPU32: cppc scale: 492.
> >
> > root@target:/sys/devices/system/cpu/cpufreq/policy0# echo 1936000 > scaling_setspeed
> > root@target:/sys/devices/system/cpu/cpufreq/policy0#
> > [ 1891.395363] CPU96: cppc scale: 558.
> > [ 1891.415362] CPU0: cppc scale: 595.
> > [ 1891.435362] CPU32: cppc scale: 615.
> > [ 1891.465363] CPU96: cppc scale: 635.
> > [ 1891.495361] CPU0: cppc scale: 673.
> > [ 1891.515360] CPU32: cppc scale: 703.
> > [ 1891.545360] CPU96: cppc scale: 738.
> > [ 1891.575360] CPU0: cppc scale: 779.
> > [ 1891.605360] CPU96: cppc scale: 829.
> > [ 1891.635360] CPU0: cppc scale: 879.
> >
> > root@target:/sys/devices/system/cpu/cpufreq/policy0#
> > root@target:/sys/devices/system/cpu/cpufreq/policy0# echo 2200000 > scaling_setspeed
> > root@target:/sys/devices/system/cpu/cpufreq/policy0#
> > [ 1896.585363] CPU32: cppc scale: 1004.
> > [ 1896.675359] CPU64: cppc scale: 973.
> > [ 1896.715359] CPU0: cppc scale: 1024.
> >
> > I'm doing a rate-limited printk only when the scale factor increases
> > or decreases by more than 64.
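
For concreteness, such a debug hook could look roughly like the sketch
below (assuming a per-CPU copy of the last reported value; names are
illustrative):

    static DEFINE_PER_CPU(unsigned long, last_scale);

    static void report_scale_change(unsigned long scale)
    {
            int cpu = smp_processor_id();
            unsigned long prev = per_cpu(last_scale, cpu);

            /* only report jumps of more than 64 in either direction */
            if (abs((long)(scale - prev)) > 64) {
                    pr_info_ratelimited("CPU%d: cppc scale: %lu.\n",
                                        cpu, scale);
                    per_cpu(last_scale, cpu) = scale;
            }
    }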
> >
> > This showed me that SMT is handled properly.
> >
> > Then, as soon as I start onlining CPUs 1, 33, 65, 97, the scale factor
> > stops being even close to correct, for example:
> >
> > [238394.770328] CPU96: cppc scale: 22328.
> > [238395.628846] CPU96: cppc scale: 245.
> > [238516.087115] CPU96: cppc scale: 930.
> > [238523.385009] CPU96: cppc scale: 245.
> > [238538.767473] CPU96: cppc scale: 936.
> > [238538.867546] CPU96: cppc scale: 245.
> > [238599.367932] CPU97: cppc scale: 2728.
> > [238599.859865] CPU97: cppc scale: 452.
> > [238647.786284] CPU96: cppc scale: 1438.
> > [238669.604684] CPU96: cppc scale: 27306.
> > [238676.805049] CPU96: cppc scale: 245.
> > [238737.642902] CPU97: cppc scale: 2035.
> > [238737.664995] CPU97: cppc scale: 452.
> > [238788.066193] CPU96: cppc scale: 2749.
> > [238788.110192] CPU96: cppc scale: 245.
> > [238817.231659] CPU96: cppc scale: 2698.
> > [238818.083687] CPU96: cppc scale: 245.
> > [238845.466850] CPU97: cppc scale: 2990.
> > [238847.477805] CPU97: cppc scale: 452.
> > [238936.984107] CPU97: cppc scale: 1590.
> > [238937.029079] CPU97: cppc scale: 452.
> > [238979.052464] CPU97: cppc scale: 911.
> > [238980.900668] CPU97: cppc scale: 452.
> > [239149.587889] CPU96: cppc scale: 803.
> > [239151.085516] CPU96: cppc scale: 245.
> > [239303.871373] CPU64: cppc scale: 956.
> > [239303.906837] CPU64: cppc scale: 245.
> > [239308.666786] CPU96: cppc scale: 821.
> > [239319.440634] CPU96: cppc scale: 245.
> > [239389.978395] CPU97: cppc scale: 4229.
> > [239391.969562] CPU97: cppc scale: 452.
> > [239415.894738] CPU96: cppc scale: 630.
> > [239417.875326] CPU96: cppc scale: 245.
> >
>
> With the counters being 32 bits and the freq scaling being updated at
> tick, you can easily get an overflow on an idle system. I can easily
> imagine that when you unplug CPUs there is enough activity on the
> remaining CPUs to update it regularly, whereas with all CPUs online the
> idle time is longer than the counter overflow period.
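
For example: at the 2.2 GHz maximum frequency used above, a 32-bit counter
incrementing at core frequency wraps every 2^32 / 2.2e9 ~= 1.95 s, so two
tick-based samples separated by a couple of seconds of idle can already
span a wrap.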
>
> > The counter values shown by feedback_ctrs do not seem monotonic even
> > when only core 0 threads are online.
> >
> > ref:2812420736 del:166051103
> > ref:3683620736 del:641578595
> > ref:1049653440 del:1548202980
> > ref:2099053440 del:2120997459
> > ref:3185853440 del:2714205997
> > ref:712486144  del:3708490753
> > ref:3658438336 del:3401357212
> > ref:1570998080 del:2279728438

They are 32 bits and the overflow needs to be handled by the cppc_cpufreq driver.
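
A minimal sketch of the kind of handling needed, assuming both feedback
counters are effectively 32 bits wide (helper name illustrative):

    /*
     * Wrap-safe delta for a 32-bit counter sampled into a u64:
     * unsigned 32-bit subtraction yields the correct delta across a
     * single wrap.  Deltas spanning more than one wrap are still
     * ambiguous, so the sampling interval must stay below the
     * counter's wraparound time.
     */
    static inline u64 counter_delta_32(u64 cur, u64 prev)
    {
            return (u32)(cur - prev);
    }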

> >
> > For now I was just wondering if you have seen the same and whether you
> > have an opinion on this.
> >
> > > This was tested on my Hikey platform (without the actual reads/writes
> > > to the performance counters), with this script running for over an hour:
> > >
> > > while true; do
> > >     # take CPUs 1-7 offline
> > >     for i in `seq 1 7`;
> > >     do
> > >         echo 0 > /sys/devices/system/cpu/cpu$i/online;
> > >     done;
> > >
> > >     # bring them back online
> > >     for i in `seq 1 7`;
> > >     do
> > >         echo 1 > /sys/devices/system/cpu/cpu$i/online;
> > >     done;
> > > done
> > >
> > >
> > > The same was done by Vincent on ThunderX2 and no issues were seen.
> >
> > Hotplug worked fine for me as well on both platforms I tested (Juno R2
> > and ThunderX2).
> >
> > Thanks,
> > Ionela.

Thread overview: 52+ messages

2021-06-21  9:19 [PATCH V3 0/4] cpufreq: cppc: Add support for frequency invariance Viresh Kumar
2021-06-21  9:19 ` [PATCH V3 1/4] cpufreq: cppc: Fix potential memleak in cppc_cpufreq_cpu_init Viresh Kumar
2021-06-23 13:44   ` Ionela Voinescu
2021-06-24  2:08     ` Viresh Kumar
2021-06-24  2:10   ` [PATCH V3.1 " Viresh Kumar
2021-06-25 10:33     ` Ionela Voinescu
2021-06-21  9:19 ` [PATCH V3 2/4] cpufreq: cppc: Pass structure instance by reference Viresh Kumar
2021-06-23 13:45   ` Ionela Voinescu
2021-06-24  2:22     ` Viresh Kumar
2021-06-25 10:30       ` Ionela Voinescu
2021-06-21  9:19 ` [PATCH V3 3/4] arch_topology: Avoid use-after-free for scale_freq_data Viresh Kumar
2021-06-23 13:50   ` Ionela Voinescu
2021-06-21  9:19 ` [PATCH V3 4/4] cpufreq: CPPC: Add support for frequency invariance Viresh Kumar
2021-06-24  9:48   ` Ionela Voinescu
2021-06-24 13:04     ` Viresh Kumar
2021-06-25  8:54       ` Ionela Voinescu
2021-06-25 16:54         ` Viresh Kumar
2021-06-28 10:49           ` Ionela Voinescu
2021-06-29  4:32             ` Viresh Kumar
2021-06-29  8:47               ` Ionela Voinescu
2021-06-29  8:53                 ` Viresh Kumar
2021-06-21 20:48 ` [PATCH V3 0/4] cpufreq: cppc: " Qian Cai
2021-06-22  6:52   ` Viresh Kumar
2021-06-23  4:16   ` Viresh Kumar
2021-06-23 12:57     ` Qian Cai
2021-06-24  2:54       ` Viresh Kumar
2021-06-24  9:49         ` Vincent Guittot
2021-06-24 10:48           ` Ionela Voinescu
2021-06-24 11:15             ` Vincent Guittot
2021-06-24 11:23               ` Ionela Voinescu
2021-06-24 11:59                 ` Vincent Guittot
2021-06-24 15:17             ` Qian Cai
2021-06-25 10:21               ` Ionela Voinescu
2021-06-25 13:31                 ` Qian Cai
2021-06-25 14:37                   ` Ionela Voinescu
2021-06-25 16:56                     ` Qian Cai
2021-06-26  2:29                     ` Qian Cai
2021-06-26 13:41                       ` Qian Cai
2021-06-29  4:55                         ` Viresh Kumar
2021-06-29  4:52                       ` Viresh Kumar
2021-06-29  9:06                       ` Ionela Voinescu
2021-06-29 13:38                         ` Qian Cai
2021-06-29  4:45                   ` Viresh Kumar
2021-06-24 20:44             ` Qian Cai
2021-06-28 11:54 ` Ionela Voinescu
2021-06-28 12:14   ` Vincent Guittot
2021-06-28 12:17     ` Vincent Guittot [this message]
2021-06-28 13:08     ` Ionela Voinescu
2021-06-28 21:37       ` Ionela Voinescu
2021-06-29  8:45         ` Vincent Guittot
2021-06-29  5:20   ` Viresh Kumar
2021-06-29  8:46     ` Ionela Voinescu
