From: "Rafael J. Wysocki" <rafael@kernel.org>
To: Anson Huang <anson.huang@nxp.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Jacky Bai <ping.bai@nxp.com>,
	"rafael@kernel.org" <rafael@kernel.org>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>
Subject: Re: About CPU hot-plug stress test failed in cpufreq driver
Date: Thu, 21 Nov 2019 11:53:56 +0100
Message-ID: <CAJZ5v0geykeebX-67+h4twj+t7oTVBf7X7_UsXw0LAc+0Ap75Q@mail.gmail.com>
In-Reply-To: <DB3PR0402MB39165544EDD0317095A1B72DF54E0@DB3PR0402MB3916.eurprd04.prod.outlook.com>

On Thu, Nov 21, 2019 at 11:13 AM Anson Huang <anson.huang@nxp.com> wrote:
>
> Thanks, Viresh, for your quick response.
> The cpufreq info output is below. Some additional details: our internal tree is based on v5.4-rc7,
> and the CPU hotplug path contains no i.MX platform-specific code. So far we have reproduced this on i.MX8QXP, i.MX8QM and i.MX8MN.
> With cpufreq disabled, the issue does not occur.
> I have also reproduced this issue with v5.4-rc7.
> We will continue to debug and will let you know of any new findings.
>
> > Subject: Re: About CPU hot-plug stress test failed in cpufreq driver
> >
> > +Rafael and PM list.
> >
> > Please provide output of following for your platform while I am having a look
> > at your problem.
> >
> > grep . /sys/devices/system/cpu/cpufreq/*/*
>
> root@imx8qxpmek:~# grep . /sys/devices/system/cpu/cpufreq/*/*
> /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load:0
> /sys/devices/system/cpu/cpufreq/ondemand/io_is_busy:0
> /sys/devices/system/cpu/cpufreq/ondemand/powersave_bias:0
> /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor:1
> /sys/devices/system/cpu/cpufreq/ondemand/sampling_rate:10000
> /sys/devices/system/cpu/cpufreq/ondemand/up_threshold:95
> /sys/devices/system/cpu/cpufreq/policy0/affected_cpus:0 1 2 3

All CPUs in one policy, CPU0 is the policy CPU and it never goes offline AFAICS.

> /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_cur_freq:900000
> /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_max_freq:1200000
> /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_min_freq:900000
> /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_transition_latency:150000
> /sys/devices/system/cpu/cpufreq/policy0/related_cpus:0 1 2 3
> /sys/devices/system/cpu/cpufreq/policy0/scaling_available_frequencies:900000 1200000
> /sys/devices/system/cpu/cpufreq/policy0/scaling_available_governors:ondemand userspace performance schedutil
> /sys/devices/system/cpu/cpufreq/policy0/scaling_cur_freq:900000
> /sys/devices/system/cpu/cpufreq/policy0/scaling_driver:cpufreq-dt
> /sys/devices/system/cpu/cpufreq/policy0/scaling_governor:ondemand

Hm.  That shouldn't really make a difference, but can you reproduce
this with the schedutil governor?

> /sys/devices/system/cpu/cpufreq/policy0/scaling_max_freq:1200000
> /sys/devices/system/cpu/cpufreq/policy0/scaling_min_freq:900000
> /sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed:<unsupported>
> grep: /sys/devices/system/cpu/cpufreq/policy0/stats: Is a directory
>
>
> CPUHotplug: 4524 times remaining
> [ 5954.441803] CPU1: shutdown
> [ 5954.444529] psci: CPU1 killed.
> [ 5954.481739] CPU2: shutdown
> [ 5954.484484] psci: CPU2 killed.
> [ 5954.530509] CPU3: shutdown
> [ 5954.533270] psci: CPU3 killed.
> [ 5955.561978] Detected VIPT I-cache on CPU1
> [ 5955.562015] GICv3: CPU1: found redistributor 1 region 0:0x0000000051b20000
> [ 5955.562073] CPU1: Booted secondary processor 0x0000000001 [0x410fd042]
> [ 5955.596921] Detected VIPT I-cache on CPU2
> [ 5955.596959] GICv3: CPU2: found redistributor 2 region 0:0x0000000051b40000
> [ 5955.597018] CPU2: Booted secondary processor 0x0000000002 [0x410fd042]
> [ 5955.645878] Detected VIPT I-cache on CPU3
> [ 5955.645921] GICv3: CPU3: found redistributor 3 region 0:0x0000000051b60000
> [ 5955.645986] CPU3: Booted secondary processor 0x0000000003 [0x410fd042]
> CPUHotplug: 4523 times remaining
> [ 5956.769790] CPU1: shutdown
> [ 5956.772518] psci: CPU1 killed.
> [ 5956.809752] CPU2: shutdown
> [ 5956.812480] psci: CPU2 killed.
> [ 5956.849769] CPU3: shutdown
> [ 5956.852494] psci: CPU3 killed.
> [ 5957.882045] Detected VIPT I-cache on CPU1
> [ 5957.882089] GICv3: CPU1: found redistributor 1 region 0:0x0000000051b20000
> [ 5957.882153] CPU1: Booted secondary processor 0x0000000001 [0x410fd042]
>
>
> It keeps looping here; there is no hang and the debug console still responds. With JTAG attached, I can see that CPU1
> is busy-waiting for the irq_work to become free.

Well, cpufreq_offline() calls cpufreq_stop_governor() too, so there
shouldn't be any pending irq_works coming from cpufreq on the offline
CPUs after that.

Hence, if an irq_work is pending at cpufreq_online() time, it must be
pending on CPU0 (which is always online).
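
For context, here is a heavily simplified sketch of the relevant path in
drivers/cpufreq/cpufreq.c (paraphrased from roughly v5.4; error handling,
CPU renomination and the teardown of a fully inactive policy are omitted, so
do not treat it as the exact code): cpufreq_offline() stops the governor
before dropping the CPU from the policy, and only restarts it if the policy
still has other online CPUs.

    static int cpufreq_offline(unsigned int cpu)
    {
            struct cpufreq_policy *policy = cpufreq_cpu_get_raw(cpu);

            if (!policy)
                    return 0;

            down_write(&policy->rwsem);

            /* Governor activity for this policy (including its irq_work users) stops here. */
            if (has_target())
                    cpufreq_stop_governor(policy);

            cpumask_clear_cpu(cpu, policy->cpus);

            if (!policy_is_inactive(policy)) {
                    /* Other CPUs of this policy are still online: restart the governor for them. */
                    if (has_target())
                            cpufreq_start_governor(policy);
            }
            /* (teardown of a fully inactive policy omitted) */

            up_write(&policy->rwsem);
            return 0;
    }

So after CPU1-CPU3 have been taken down, the governor keeps running only on
behalf of CPU0, which is why any irq_work still pending at that point would
have to be queued there.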


Thread overview: 57+ messages
     [not found] <DB3PR0402MB391626A8ECFDC182C6EDCF8DF54E0@DB3PR0402MB3916.eurprd04.prod.outlook.com>
2019-11-21  9:35 ` About CPU hot-plug stress test failed in cpufreq driver Viresh Kumar
2019-11-21 10:13   ` Anson Huang
2019-11-21 10:53     ` Rafael J. Wysocki [this message]
2019-11-21 10:56       ` Rafael J. Wysocki
2019-11-22  5:15         ` Anson Huang
2019-11-22  9:59           ` Rafael J. Wysocki
2019-11-25  6:05             ` Anson Huang
2019-11-25  9:43               ` Anson Huang
2019-11-26  6:18                 ` Viresh Kumar
2019-11-26  8:22                   ` Anson Huang
2019-11-26  8:25                     ` Viresh Kumar
2019-11-25 12:44               ` Rafael J. Wysocki
2019-11-26  8:57                 ` Rafael J. Wysocki
2019-11-29 11:39                 ` Rafael J. Wysocki
2019-11-29 13:44                   ` Anson Huang
2019-12-05  8:53                     ` Anson Huang
2019-12-05 10:48                       ` Rafael J. Wysocki
2019-12-05 13:18                         ` Anson Huang
2019-12-05 15:52                           ` Rafael J. Wysocki
2019-12-09 10:31                             ` Peng Fan
2019-12-09 10:37                             ` Anson Huang
2019-12-09 10:56                               ` Anson Huang
2019-12-09 11:23                                 ` Rafael J. Wysocki
2019-12-09 12:32                                   ` Anson Huang
2019-12-09 12:44                                     ` Rafael J. Wysocki
2019-12-09 14:18                                       ` Anson Huang
2019-12-10  5:39                                         ` Anson Huang
2019-12-10  5:53                                       ` Peng Fan
2019-12-10  7:05                                         ` Viresh Kumar
2019-12-10  8:22                                           ` Rafael J. Wysocki
2019-12-10  8:29                                             ` Anson Huang
2019-12-10  8:36                                               ` Viresh Kumar
2019-12-10  8:37                                                 ` Peng Fan
2019-12-10  8:37                                               ` Rafael J. Wysocki
2019-12-10  8:43                                                 ` Peng Fan
2019-12-10  8:45                                                 ` Anson Huang
2019-12-10  8:50                                                   ` Rafael J. Wysocki
2019-12-10  8:51                                                     ` Anson Huang
2019-12-10 10:39                                                       ` Rafael J. Wysocki
2019-12-10 10:54                                                         ` Rafael J. Wysocki
2019-12-11  5:08                                                           ` Anson Huang
2019-12-11  8:59                                                           ` Peng Fan
2019-12-11  9:36                                                             ` Rafael J. Wysocki
2019-12-11  9:43                                                               ` Peng Fan
2019-12-11  9:52                                                                 ` Rafael J. Wysocki
2019-12-11 10:11                                                                   ` Peng Fan
2019-12-10 10:54                                                         ` Viresh Kumar
2019-12-10 11:07                                                           ` Rafael J. Wysocki
2019-12-10  8:57                                                     ` Viresh Kumar
2019-12-10 11:03                                                       ` Rafael J. Wysocki
2019-12-10  9:04                                                     ` Rafael J. Wysocki
2019-12-10  8:31                                             ` Viresh Kumar
2019-12-10  8:12                                         ` Rafael J. Wysocki
2019-12-05 11:00                       ` Viresh Kumar
2019-12-05 11:10                         ` Rafael J. Wysocki
2019-12-05 11:17                           ` Viresh Kumar
2019-11-21 10:37   ` Rafael J. Wysocki
