From: Feng Tang <feng.tang@intel.com>
To: Doug Smythies <dsmythies@telus.net>
Cc: 'Thomas Gleixner' <tglx@linutronix.de>,
	"paulmck@kernel.org" <paulmck@kernel.org>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>,
	"x86@kernel.org" <x86@kernel.org>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
	'srinivas pandruvada' <srinivas.pandruvada@linux.intel.com>
Subject: Re: CPU excessively long times between frequency scaling driver calls - bisected
Date: Tue, 8 Feb 2022 10:39:40 +0800
Message-ID: <20220208023940.GA5558@shbuild999.sh.intel.com>
In-Reply-To: <003f01d81c8c$d20ee3e0$762caba0$@telus.net>

Hi Doug,

Thanks for the report.

On Tue, Feb 08, 2022 at 09:40:14AM +0800, Doug Smythies wrote:
> Hi All,
> 
> Note before: I do not know if I have the e-mail address list correct,
> nor am I actually a member of the x86 mailing list. I am on
> the linux-pm email list.
> 
> When using the intel_pstate CPU frequency scaling driver with HWP
> disabled, in active mode with the powersave scaling governor, the times
> between calls to the driver have never exceeded 10 seconds.
> 
> Since kernel 5.16-rc4 and commit b50db7095fe002fa3e16605546cba66bf1b68a3e
> ("x86/tsc: Disable clocksource watchdog for TSC on qualified platorms"),
> there are now occasions where the time between calls to the driver can
> reach hundreds of seconds, which can leave the CPU frequency
> unnecessarily high for extended periods.
> 
> From the number of clock cycles executed between these long
> durations, one can tell that the CPU has been running code, but
> the driver never got called.
> 
> Attached are some graphs of trace data acquired using
> intel_pstate_tracer.py, where one can observe an idle system between
> about 42 and well over 200 seconds of elapsed time, yet CPU10 never
> gets called (which would have reduced its pstate request) until an
> elapsed time of 167.616 seconds, 126 seconds after the previous call.
> The CPU frequency never does go to minimum.
> 
> For reference, a similar CPU frequency graph is also attached, with
> the commit reverted. There, the CPU frequency drops to minimum
> within about 10 to 15 seconds.
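
(For anyone who wants to reproduce this configuration, a minimal
sketch; the boot parameter and sysfs paths are as I remember them
from the intel_pstate admin guide, so please double-check against
your tree:

  # kernel command line: keep intel_pstate in active mode, HWP off
  #     intel_pstate=no_hwp
  cat /sys/devices/system/cpu/intel_pstate/status    # should say "active"
  # select the powersave governor on every CPU
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
          echo powersave > "$g"
  done

The graphs above come from intel_pstate_tracer.py, which lives under
tools/power/x86/intel_pstate_tracer/ in the kernel tree.)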


Commit b50db7095fe0 essentially disables the clocksource watchdog,
which doesn't directly touch the cpufreq code.

One thing I can think of: without the patch, there is a periodic
clocksource watchdog timer running every 500 ms, and it cycles
through all CPUs in turn. Your HW has 12 CPUs (judging from the
graph), so each CPU gets the timer (backed by a HW timer interrupt)
every 6 seconds. Could this affect the cpufreq governor's workflow?
(I just quickly read some cpufreq code, and it seems there is
irq_work/workqueue involved.)
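
A rough sketch of what I mean, paraphrased from memory from
kernel/time/clocksource.c around v5.16 (check the actual tree for the
exact code): the watchdog timer re-arms itself on the next online CPU
every WATCHDOG_INTERVAL, i.e. HZ >> 1 = 500 ms, so the interrupt
rotates across all online CPUs:

  /* kernel/time/clocksource.c (abridged) */
  #define WATCHDOG_INTERVAL (HZ >> 1)       /* 0.5 s worth of jiffies */

  static void clocksource_watchdog(struct timer_list *unused)
  {
          int next_cpu;
          ...
          /*
           * Cycle through CPUs to check if the CPUs stay synchronized
           * to each other.
           */
          next_cpu = cpumask_next(raw_smp_processor_id(), cpu_online_mask);
          if (next_cpu >= nr_cpu_ids)
                  next_cpu = cpumask_first(cpu_online_mask);

          /* on a 12-CPU box, each CPU sees this every 12 * 0.5 s = 6 s */
          if (!timer_pending(&watchdog_timer)) {
                  watchdog_timer.expires += WATCHDOG_INTERVAL;
                  add_timer_on(&watchdog_timer, next_cpu);
          }
          ...
  }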

Could you try one test: keep all the current settings, but change
the IRQ affinity of the disk/network card to 0xfff so that their
interrupts are distributed across all CPUs?
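
Something like this, as root (the IRQ number 125 below is made up;
look up the real ones for your disk and NIC in /proc/interrupts, and
note that irqbalance, if running, may rewrite the mask):

  grep -i -e nvme -e eth /proc/interrupts     # find the device IRQs
  echo fff > /proc/irq/125/smp_affinity       # 0xfff = CPUs 0-11
  cat /proc/irq/125/effective_affinity        # verify, where exposed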

Thanks,
Feng


> Processor: Intel(R) Core(TM) i5-10600K CPU @ 4.10GHz
> 
> Why this particular configuration, i.e. no-hwp, active, powersave?
> Because it is, by far, the one in which it is easiest to observe
> what is going on.
> 
> ... Doug
> 