linux-kernel.vger.kernel.org archive mirror
* [PATCH] tools: add power/x86/turbostat
@ 2010-10-23  5:31 Len Brown
  2010-10-24  4:06 ` [PATCH] tools: add power/x86/turbostat - v2 Len Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Len Brown @ 2010-10-23  5:31 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-pm, linux-kernel, x86

[-- Attachment #1: Type: TEXT/PLAIN, Size: 30478 bytes --]

From: Len Brown <len.brown@intel.com>

turbostat displays actual processor frequency and
idle power saving state residency on Intel Core i3/i5/i7 (Nehalem)
and newer processors.

turbostat depends on hardware with a non-stop TSC and free-running
APERF/MPERF (ACNT/MCNT) MSRs, plus the msr driver.

Rather than refuse to run on older hardware,
turbostat does some simple error checking --
some older hardware does have a functional TSC,
and on those that do not, deep C-states can be
disabled so that the TSC keeps running.

Note also that acpi_cpufreq scribbled on the APERF/MPERF (ACNT/MCNT)
MSRs up through Linux-2.6.29, and so turbostat runs best
on kernels 2.6.30 and newer.

See the comments in the source code for example usage.
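
For example, assuming the msr driver is built as a module (turbostat
itself suggests "modprobe msr" when /dev/cpu/0/msr is missing), a
first run might look like:

	# modprobe msr
	# ./turbostat -v -i 10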

Signed-off-by: Len Brown <len.brown@intel.com>
---
 tools/power/x86/turbostat/Makefile    |    7 +
 tools/power/x86/turbostat/turbostat.c | 1103 +++++++++++++++++++++++++++++++++
 2 files changed, 1110 insertions(+), 0 deletions(-)
 create mode 100644 tools/power/x86/turbostat/Makefile
 create mode 100644 tools/power/x86/turbostat/turbostat.c

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..5a18f55
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,7 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..27e19ba
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1103 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/*
+ * Works properly on Nehalem and newer processors.
+ * Nehalem features an always-running TSC, plus hardware
+ * C-state residency MSRs.
+ *
+ * Works poorly on systems before Nehalem with
+ * a TSC that stops in deep C-states.
+ *
+ * Works properly on Linux-2.6.30 and later.
+ * Works poorly on Linux-2.6.29 and earlier, as acpi-cpufreq
+ * used to clear APERF/MPERF counters on access.
+ *
+ * APERF, MPERF count non-halted cycles.
+ * Although it is not guaranteed by the architecture, we assume
+ * here that they count at TSC rate, which is true today.
+ *
+ * References:
+ * "Intel® Turbo Boost Technology
+ * in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+ * http://download.intel.com/design/processor/applnots/320354.pdf
+ *
+ * "Intel® 64 and IA-32 Architectures Software Developer's Manual
+ * Volume 3B: System Programming Guide"
+ * http://www.intel.com/products/processor/manuals/
+ */
+
+/*
+ * usage:
+ * # turbostat [-v] [-i interval_sec] [-M MSR#] [command [arg]...]
+ *
+ * examples:
+ *
+ * [root@x980]# ./turbostat
+ *core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *           0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+ *   0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+ *   0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+ *   1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+ *   1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+ *   2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+ *   2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+ *   8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+ *   8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+ *   9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *   9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *  10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ *  10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+ *
+ * Without any parameters, turbostat prints out counters every 5 seconds
+ * (override the interval with the "-i sec" option).
+ *
+ * core is the processor core number
+ *
+ * CPU is the cpu number according to Linux, a la /dev/cpu
+ *
+ * The first row of data is a system wide summary, and thus has no core or CPU#.
+ *
+ * %c0 is the percent of the interval that the core retired instructions.
+ *
+ * GHz is the average clock rate while the core was in c0 state.
+ *
+ * TSC is the average GHz that the TSC ran during the entire interval.
+ *
+ * %c1, %c3, %c6 show the residency in hardware CPU core idle states.
+ * Note that these may not equal the software states requested by Linux.
+ *
+ * %pc3, %pc6 show residency in package idle C-states.
+ *
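+ * %c1 is not read from a dedicated hardware counter; it is derived
+ * from the others (roughly c1 = TSC - MPERF - c3 - c6 - c7, see
+ * compute_delta() below), so it also absorbs any measurement slop.
+ *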
+ * The "-v" option adds verbosity to the output: eg.
+ *
+ * CPUID GenuineIntel 11 levels family:model:stepping 0x6:2c:2 (6:44:2)
+ * 12 * 133 = 1600 MHz max efficiency
+ * 25 * 133 = 3333 MHz TSC frequency
+ * 26 * 133 = 3467 MHz max turbo 4 active cores
+ * 26 * 133 = 3467 MHz max turbo 3 active cores
+ * 27 * 133 = 3600 MHz max turbo 2 active cores
+ * 27 * 133 = 3600 MHz max turbo 1 active cores
+ *
+ * This tells you what you should be able to get, given nominal
+ * cooling and electrical capacity.  Note that NHM-EX/WSM-EX do
+ * not tell us these max turbo frequencies like NHM/NHM-EP/WSM/WSM-EP do.
+ *
+ * If turbostat is invoked with a command, it will fork that command
+ * and output the statistics gathered while that command was running.
+ * eg. Here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+ * until ^C while the others are mostly idle:
+ *
+ * [root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+ *
+ *^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *            8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+ *    0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+ *    0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+ *    1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+ *    1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+ *    2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+ *    2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+ *    8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+ *    8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+ *    9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+ *    9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+ *   10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+ *   10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+ * 6.950866 sec
+ *
+ * Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+ * while the other processors are generally in various states of idle.
+ *
+ * Note that cpu3 is an HT sibling sharing the same core (9)
+ * with cpu9, and thus it is unable to get more idle than c1
+ * when cpu9 is busy.
+ *
+ * Note that the arithmetic average of the GHz column above is 3.24,
+ * yet turbostat shows avg 3.61.  This is a weighted average, where
+ * the weight is %c0.  i.e. it is the total number of un-halted cycles
+ * elapsed per unit time, divided by the number of CPUs.
+ *
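+ * In terms of the counters, the GHz column is computed roughly as
+ * (see print_pcc() below):
+ *
+ *	GHz = delta_TSC/interval/10^9 * delta_APERF/delta_MPERF
+ *
+ * i.e. the average TSC rate scaled by the ratio of actual to
+ * reference cycles counted while un-halted.
+ *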
+ * Note that turbostat reads hardware counters, but doesn't write them.
+ * So it will not interfere with the OS or other programs, including
+ * multiple invocations of itself.
+ *
+ * turbostat depends on the Linux msr driver for /dev/cpu/.../msr
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <sys/wait.h>
+#include <dirent.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int do_aperf = 1;	/* TBD set with CPUID */
+unsigned int units = 1000000000;	/* GHz etc */
+unsigned int do_non_stop_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+void insert_cpu_counters(PCC **list, PCC *new);
+void verify_num_cpus(void);
+int for_all_cpus(void (func)(int, int, int));
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%x) = %d\n",
+		cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core:: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* C0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	if (do_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	int i;
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int i;
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				after->cpu, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				after->cpu, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				after->cpu, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				after->cpu, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				after->cpu, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				after->cpu, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", i);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", i);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void
+compute_average(PCC *delta, PCC *avg)
+{
+	int i;
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(-1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (do_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info(void)
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+}
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		int retval;
+
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr(void) {
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainestown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	char brand[16];
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	sprintf(brand, "%.4s%.4s%.4s", &ebx, &edx, &ecx);
+
+	if (strncmp(brand, "GenuineIntel", 12)) {
+		fprintf(stderr, "CPUID: %s != GenuineIntel\n", brand);
+		exit(-1);
+	}
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "CPUID %s %d levels family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			brand, max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(-1);
+	}
+
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level >= 0x80000007) {
+		/*
+		 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+		 */
+		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+		do_non_stop_tsc = edx & (1 << 8);
+	}
+	do_nehalem_platform_info = do_non_stop_tsc;
+	do_nhm_cstates =  do_non_stop_tsc;
+	do_snb_cstates = is_snb(family, model);
+	if (do_snb_cstates)
+		fprintf(stderr, "This version does not support SNB\n");
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+void usage(void) {
+	fprintf(stderr, "%s: [-v] [-i interval_sec] [-M MSR#] [command [arg]...]\n",
+		progname);
+}
+
+int
+get_physical_package_id(int cpu) {
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+int
+get_core_id(int cpu) {
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+/*
+ * In /dev/cpu/, return success for names that are numbers,
+ * i.e. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+		show_cpu = 1;	/* there is more than 1 CPU (HW thread) */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+		show_cpu = 1;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+
+
+/*
+ * run func(pkg, core, cpu) on every cpu listed in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(-1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(-1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			return cpu_count;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus(void)
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_init()
+{
+	int i;
+
+	check_cpuid();
+
+	check_dev_msr();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else  {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(-1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(-1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			show_pkg = 1;
+			show_core = 1;
+			show_cpu = 1;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+			exit(-1);
+		}
+	}
+}
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Aug 25, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}
-- 
1.7.3.1.127.g1bb28


* [PATCH] tools: add power/x86/turbostat - v2
  2010-10-23  5:31 [PATCH] tools: add power/x86/turbostat Len Brown
@ 2010-10-24  4:06 ` Len Brown
  2010-10-27  3:14   ` [PATCH RESEND] " Len Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Len Brown @ 2010-10-24  4:06 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-pm, linux-kernel, x86

[-- Attachment #1: Type: TEXT/PLAIN, Size: 30263 bytes --]

From: Len Brown <len.brown@intel.com>

turbostat displays actual processor frequency on modern
Intel processors.  Note that this capability depends on free-running
APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
driver up through Linux-2.6.29.

On Intel Core i3/i5/i7 (Nehalem) and newer processors,
turbostat displays residency in idle power saving states.

See the comments in the source code for example usage.
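
For example, assuming the msr driver is built as a module (turbostat
itself suggests "modprobe msr" when /dev/cpu/0/msr is missing), a
first run might look like:

	# modprobe msr
	# ./turbostat -v -i 10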

Signed-off-by: Len Brown <len.brown@intel.com>
---
v2 refreshed in response to style review from Otavio Salvador

 tools/power/x86/turbostat/Makefile    |    7 +
 tools/power/x86/turbostat/turbostat.c | 1103 +++++++++++++++++++++++++++++++++
 2 files changed, 1110 insertions(+), 0 deletions(-)
 create mode 100644 tools/power/x86/turbostat/Makefile
 create mode 100644 tools/power/x86/turbostat/turbostat.c

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..5a18f55
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,7 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..27e19ba
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1103 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/*
+ * Works properly on Nehalem and newer processors.
+ * Nehalem features an always-running TSC, plus hardware
+ * C-state residency MSRs.
+ *
+ * Works poorly on systems before Nehalem with
+ * a TSC that stops in deep C-states.
+ *
+ * Works properly on Linux-2.6.30 and later.
+ * Works poorly on Linux-2.6.29 and earlier, as acpi-cpufreq
+ * used to clear APERF/MPERF counters on access.
+ *
+ * APERF, MPERF count non-halted cycles.
+ * Although it is not guaranteed by the architecture, we assume
+ * here that they count at TSC rate, which is true today.
+ *
+ * References:
+ * "Intel® Turbo Boost Technology
+ * in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+ * http://download.intel.com/design/processor/applnots/320354.pdf
+ *
+ * "Intel® 64 and IA-32 Architectures Software Developer's Manual
+ * Volume 3B: System Programming Guide"
+ * http://www.intel.com/products/processor/manuals/
+ */
+
+/*
+ * usage:
+ * # turbostat [-v] [-i interval_sec] [-M MSR#] [command [arg]...]
+ *
+ * examples:
+ *
+ * [root@x980]# ./turbostat
+ *core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *           0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+ *   0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+ *   0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+ *   1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+ *   1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+ *   2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+ *   2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+ *   8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+ *   8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+ *   9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *   9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *  10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ *  10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+ *
+ * Without any parameters, turbostat prints out counters every 5 seconds
+ * (override the interval with the "-i sec" option).
+ *
+ * core is the processor core number
+ *
+ * CPU is the cpu number according to Linux, a la /dev/cpu
+ *
+ * The first row of data is a system wide summary, and thus has no core or CPU#.
+ *
+ * %c0 is the percent of the interval that the core retired instructions.
+ *
+ * GHz is the average clock rate while the core was in c0 state.
+ *
+ * TSC is the average GHz that the TSC ran during the entire interval.
+ *
+ * %c1, %c3, %c6 show the residency in hardware CPU core idle states.
+ * Note that these may not equal the software states requested by Linux.
+ *
+ * %pc3, %pc6 show residency in package idle C-states.
+ *
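+ * %c1 is not read from a dedicated hardware counter; it is derived
+ * from the others (roughly c1 = TSC - MPERF - c3 - c6 - c7, see
+ * compute_delta() below), so it also absorbs any measurement slop.
+ *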
+ * The "-v" option adds verbosity to the output: eg.
+ *
+ * CPUID GenuineIntel 11 levels family:model:stepping 0x6:2c:2 (6:44:2)
+ * 12 * 133 = 1600 MHz max efficiency
+ * 25 * 133 = 3333 MHz TSC frequency
+ * 26 * 133 = 3467 MHz max turbo 4 active cores
+ * 26 * 133 = 3467 MHz max turbo 3 active cores
+ * 27 * 133 = 3600 MHz max turbo 2 active cores
+ * 27 * 133 = 3600 MHz max turbo 1 active cores
+ *
+ * This tells you what you should be able to get, given nominal
+ * cooling and electrical capacity.  Note that NHM-EX/WSM-EX do
+ * not tell us these max turbo frequencies like NHM/NHM-EP/WSM/WSM-EP do.
+ *
+ * If turbostat is invoked with a command, it will fork that command
+ * and output the statistics gathered while that command was running.
+ * eg. Here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+ * until ^C while the others are mostly idle:
+ *
+ * [root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+ *
+ *^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *            8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+ *    0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+ *    0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+ *    1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+ *    1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+ *    2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+ *    2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+ *    8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+ *    8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+ *    9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+ *    9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+ *   10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+ *   10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+ * 6.950866 sec
+ *
+ * Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+ * while the other processors are generally in various states of idle.
+ *
+ * Note that cpu3 is an HT sibling sharing the same core (9)
+ * with cpu9, and thus it is unable to get more idle than c1
+ * when cpu9 is busy.
+ *
+ * Note that the arithmetic average of the GHz column above is 3.24,
+ * yet turbostat shows avg 3.61.  This is a weighted average, where
+ * the weight is %c0.  i.e. it is the total number of un-halted cycles
+ * elapsed per unit time, divided by the number of CPUs.
+ *
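+ * In terms of the counters, the GHz column is computed roughly as
+ * (see print_pcc() below):
+ *
+ *	GHz = delta_TSC/interval/10^9 * delta_APERF/delta_MPERF
+ *
+ * i.e. the average TSC rate scaled by the ratio of actual to
+ * reference cycles counted while un-halted.
+ *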
+ * Note that turbostat reads hardware counters, but doesn't write them.
+ * So it will not interfere with the OS or other programs, including
+ * multiple invocations of itself.
+ *
+ * turbostat depends on the Linux msr driver for /dev/cpu/.../msr
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <string.h>
+#include <ctype.h>
+#include <sys/wait.h>
+#include <dirent.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int do_aperf = 1;	/* TBD set with CPUID */
+unsigned int units = 1000000000;	/* GHz etc */
+unsigned int do_non_stop_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+void insert_cpu_counters(PCC **list, PCC *new);
+void verify_num_cpus(void);
+int for_all_cpus(void (func)(int, int, int));
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%x) = %d\n",
+		cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core:: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* C0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	if (do_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	int i;
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int i;
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				after->cpu, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				after->cpu, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				after->cpu, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				after->cpu, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				after->cpu, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				after->cpu, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				after->cpu, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", i);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", i);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void
+compute_average(PCC *delta, PCC *avg)
+{
+	int i;
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(-1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (do_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info(void)
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+}
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		int retval;
+
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr(void) {
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainestown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	char brand[16];
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	sprintf(brand, "%.4s%.4s%.4s", &ebx, &edx, &ecx);
+
+	if (strncmp(brand, "GenuineIntel", 12)) {
+		fprintf(stderr, "CPUID: %s != GenuineIntel\n", brand);
+		exit(-1);
+	}
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "CPUID %s %d levels family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			brand, max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(-1);
+	}
+
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level >= 0x80000007) {
+		/*
+		 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+		 */
+		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+		do_non_stop_tsc = edx & (1 << 8);
+	}
+	do_nehalem_platform_info = do_non_stop_tsc;
+	do_nhm_cstates =  do_non_stop_tsc;
+	do_snb_cstates = is_snb(family, model);
+	if (do_snb_cstates)
+		fprintf(stderr, "This version does not support SNB\n");
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+void usage(void) {
+	fprintf(stderr, "%s: [-v] [-i interval_sec] [-M MSR#] [command [arg]...]\n",
+		progname);
+}
+
+int
+get_physical_package_id(int cpu) {
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+int
+get_core_id(int cpu) {
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+/*
+ * in /dev/cpu/ return success for names that are numbers
+ * ie. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev, *next;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+		show_cpu = 1;	/* there is more than 1 CPU (HW thread) */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+		show_cpu = 1;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+
+
+/*
+ * run func(index, cpu) on every cpu in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(-1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(-1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			return cpu_count;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus()
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_init()
+{
+	int i;
+
+	check_cpuid();
+
+	check_dev_msr();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else  {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(-1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(-1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			show_pkg = 1;
+			show_core = 1;
+			show_cpu = 1;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+			exit(-1);
+		}
+	}
+}
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Aug 25, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}
-- 
1.7.3.1.127.g1bb28


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH RESEND] tools: add power/x86/turbostat - v2
  2010-10-24  4:06 ` [PATCH] tools: add power/x86/turbostat - v2 Len Brown
@ 2010-10-27  3:14   ` Len Brown
  2010-11-15 16:01     ` Len Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Len Brown @ 2010-10-27  3:14 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-pm, x86, linux-kernel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 30345 bytes --]

From: Len Brown <len.brown@intel.com>

turbostat displays actual processor frequency on modern
Intel processors.  Note that this capability depends on free-running
APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
driver up through Linux-2.6.29.

On Intel Core i3/i5/i7 (Nehalem) and newer processors,
turbostat displays residency in idle power saving states.

See the comments in the source code for example usage.

Signed-off-by: Len Brown <len.brown@intel.com>
---
v2 refreshed in response to style review from Otavio Salvador
v2-resent, as 1st v2 sent was the same as v1:-)
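
As background for reviewers unfamiliar with the APERF/MPERF method, here
is a minimal sketch (illustrative only, not part of the patch) of the
frequency math the tool performs: the average busy-clock rate over an
interval is the TSC rate scaled by the ratio of the APERF and MPERF
deltas.  It is the same expression print_pcc() below evaluates.

	static double busy_ghz(unsigned long long d_tsc,
			       unsigned long long d_aperf,
			       unsigned long long d_mperf,
			       double interval_sec)
	{
		if (d_mperf == 0)
			return 0.0;	/* fully idle: avoid divide by zero */

		/* TSC cycles/sec scaled to GHz, times the actual/reference ratio */
		return (double)d_tsc / 1e9 * d_aperf / d_mperf / interval_sec;
	}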

 tools/power/x86/turbostat/Makefile    |    7 +
 tools/power/x86/turbostat/turbostat.c | 1107 +++++++++++++++++++++++++++++++++
 2 files changed, 1114 insertions(+), 0 deletions(-)
 create mode 100644 tools/power/x86/turbostat/Makefile
 create mode 100644 tools/power/x86/turbostat/turbostat.c

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..5a18f55
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,7 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..e4526a6
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1107 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/*
+ * Works properly on Nehalem and newer processors.
+ * Nehalem features an always-running TSC, plus hardware
+ * C-state residency MSRs.
+ *
+ * Works poorly on systems before Nehalem with
+ * a TSC that stops in deep C-states.
+ *
+ * Works properly on Linux-2.6.30 and later.
+ * Works poorly on Linux-2.6.29 and earlier, as acpi-cpufreq
+ * used to clear APERF/MPERF counters on access.
+ *
+ * APERF, MPERF count non-halted cycles.
+ * Although it is not guaranteed by the architecture, we assume
+ * here that they count at TSC rate, which is true today.
+ *
+ * References:
+ * "Intel® Turbo Boost Technology
+ * in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+ * http://download.intel.com/design/processor/applnots/320354.pdf
+ *
+ * "Intel® 64 and IA-32 Architectures Software Developer's Manual
+ * Volume 3B: System Programming Guide"
+ * http://www.intel.com/products/processor/manuals/
+ */
+
+/*
+ * usage:
+ * # turbostat [-v] [-i interval_sec] [-M MSR#] [command [arg]...]
+ *
+ * examples:
+ *
+ * [root@x980]# ./turbostat
+ *core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *           0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+ *   0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+ *   0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+ *   1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+ *   1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+ *   2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+ *   2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+ *   8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+ *   8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+ *   9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *   9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *  10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ *  10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+ *
+ * Without any parameters, turbostat prints out counters every 5 seconds
+ * (override the interval with the "-i sec" option).
+ *
+ * core is the processor core number
+ *
+ * CPU is the cpu number according to Linux, ala /dev/cpu
+ *
+ * The first row of data is a system wide summary, and thus has no core or CPU#.
+ *
+ * %c0 is the percent of the interval that the core retired instructions.
+ *
+ * GHz is the average clock rate while the core was in c0 state.
+ *
+ * TSC is the average GHz at which the TSC ran during the entire interval.
+ *
+ * %c1, %c3, %c6 show the residency in hardware CPU core idle states.
+ * Note that these may not equal the software states requested by Linux.
+ *
+ * %pc3, %pc6 show residency in the package idle C-states.
+ *
+ * The "-v" option adds verbosity to the output: eg.
+ *
+ * CPUID GenuineIntel 11 levels family:model:stepping 0x6:2c:2 (6:44:2)
+ * 12 * 133 = 1600 MHz max efficiency
+ * 25 * 133 = 3333 MHz TSC frequency
+ * 26 * 133 = 3467 MHz max turbo 4 active cores
+ * 26 * 133 = 3467 MHz max turbo 3 active cores
+ * 27 * 133 = 3600 MHz max turbo 2 active cores
+ * 27 * 133 = 3600 MHz max turbo 1 active cores
+ *
+ * This tells you what you should be able to get, given nominal
+ * cooling and electrical capacity.  Note that NHM-EX/WSM-EX do
+ * not tell us these max turbo frequencies like NHM/NHM-EP/WSM/WSM-EP do.
+ *
+ * If turbostat is invoked with a command, it will fork that command
+ * and output the statistics gathered while that command was running.
+ * eg. Here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+ * until ^C while the others are mostly idle:
+ *
+ * [root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+ *
+ *^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *            8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+ *    0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+ *    0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+ *    1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+ *    1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+ *    2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+ *    2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+ *    8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+ *    8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+ *    9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+ *    9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+ *   10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+ *   10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+ * 6.950866 sec
+ *
+ * Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+ * while the other processors are generally in various states of idle.
+ *
+ * Note that cpu3 is an HT sibling sharing the same core (9)
+ * with cpu9, and thus it is unable to get more idle than c1
+ * when cpu9 is busy.
+ *
+ * Note that the arithmetic average of the GHz column above is 3.24,
+ * yet turbostat shows avg 3.61.  This is a weighted average --
+ * where the weight is %c0.  ie. it is the total number of
+ * unhalted cycles elapsed per time divided by the number of CPUs.
+ *
+ * Note that turbostat reads hardware counters, but doesn't write them.
+ * So it will not interfere with the OS or other programs, including
+ * multiple invocations of itself.
+ *
+ * turbostat depends on the Linux msr driver for /dev/cpu/.../msr
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <dirent.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int do_aperf = 1;	/* TBD set with CPUID */
+unsigned int units = 1000000000;	/* Ghz etc */
+unsigned int do_non_stop_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%x) = %d\n",
+		cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core:: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* C0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	if (do_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	int i;
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int i;
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				i, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				i, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				i, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				i, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				i, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				i, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				i, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				i, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				i, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				i, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", i);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", i);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void compute_average(PCC *delta, PCC *avg)
+{
+	int i;
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(-1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (do_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info()
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+}
+
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev, *next;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+		show_cpu = 1;	/* there is more than 1 CPU (HW thread) */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+		show_cpu = 1;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus()
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		int retval;
+
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr()
+{
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainstown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	char brand[16];
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	sprintf(brand, "%.4s%.4s%.4s", &ebx, &edx, &ecx);
+
+	if (strncmp(brand, "GenuineIntel", 12)) {
+		fprintf(stderr, "CPUID: %s != GenuineIntel\n", brand);
+		exit(-1);
+	}
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "CPUID %s %d levels family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			brand, max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(-1);
+	}
+
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level >= 0x80000007) {
+		/*
+		 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+		 */
+		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+		do_non_stop_tsc = edx & (1 << 8);
+	}
+	do_nehalem_platform_info = do_non_stop_tsc;
+	do_nhm_cstates =  do_non_stop_tsc;
+	do_snb_cstates = is_snb(family, model);
+	if (do_snb_cstates)
+		fprintf(stderr, "This version does not support SNB\n");
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+void usage() {
+	fprintf(stderr, "%s: [-v] [-i interval_sec] [-M MSR#] [command [arg]...]\n",
+		progname);
+}
+
+int get_physical_package_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+
+int get_core_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+
+/*
+ * in /dev/cpu/ return success for names that are numbers
+ * ie. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+/*
+ * run func(index, cpu) on every cpu in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(-1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(-1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			return cpu_count;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+}
+
+void turbostat_init()
+{
+	int i;
+
+	check_cpuid();
+
+	check_dev_msr();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else  {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(-1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(-1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			show_pkg = 1;
+			show_core = 1;
+			show_cpu = 1;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+			exit(-1);
+		}
+	}
+}
+
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Aug 25, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}
-- 
1.7.3.2.90.gd4c43

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH RESEND] tools: add power/x86/turbostat - v2
  2010-10-27  3:14   ` [PATCH RESEND] " Len Brown
@ 2010-11-15 16:01     ` Len Brown
  2010-11-15 17:16       ` Otavio Salvador
                         ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Len Brown @ 2010-11-15 16:01 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: linux-pm, x86, linux-kernel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 30300 bytes --]

From: Len Brown <len.brown@intel.com>

turbostat displays actual processor frequency on modern
Intel processors.  Note that this capability depends on free-running
APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
driver up through Linux-2.6.29.

On Intel Core i3/i5/i7 (Nehalem) and newer processors,
turbostat displays residency in idle power saving states.

See the comments in the source code for example usage.

Signed-off-by: Len Brown <len.brown@intel.com>
Reviewed-by: Otavio Salvador
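---

Background note for reviewers (illustrative only, not part of the patch):
turbostat does not read a hardware C1 residency counter; c1 is derived.
A minimal sketch of the derivation that compute_delta() performs over
each measurement interval:

	static unsigned long long derive_c1(unsigned long long d_tsc,
					    unsigned long long d_mperf,
					    unsigned long long d_c3,
					    unsigned long long d_c6,
					    unsigned long long d_c7)
	{
		/* MPERF and TSC are not read atomically; if MPERF appears
		 * to exceed TSC over the interval, report 0% C1. */
		if (d_mperf > d_tsc)
			return 0;

		/* C1 is whatever interval time is not accounted to C0
		 * (MPERF) or to the counted deep C-states. */
		return d_tsc - d_mperf - d_c3 - d_c6 - d_c7;
	}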

 tools/power/x86/turbostat/Makefile    |    7 +
 tools/power/x86/turbostat/turbostat.c | 1107 +++++++++++++++++++++++++++++++++
 2 files changed, 1114 insertions(+), 0 deletions(-)
 create mode 100644 tools/power/x86/turbostat/Makefile
 create mode 100644 tools/power/x86/turbostat/turbostat.c

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..5a18f55
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,7 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..e4526a6
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1107 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+/*
+ * Works properly on Nehalem and newer processors.
+ * Nehalem features an always-running TSC, plus hardware
+ * C-state residency MSRs.
+ *
+ * Works poorly on systems before Nehalem with
+ * a TSC that stops in deep C-states.
+ *
+ * Works properly on Linux-2.6.30 and later.
+ * Works poorly on Linux-2.6.29 and earlier, as acpi-cpufreq
+ * used to clear APERF/MPERF counters on access.
+ *
+ * APERF, MPERF count non-halted cycles.
+ * Although it is not guaranteed by the architecture, we assume
+ * here that they count at TSC rate, which is true today.
+ *
+ * References:
+ * "Intel® Turbo Boost Technology
+ * in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+ * http://download.intel.com/design/processor/applnots/320354.pdf
+ *
+ * "Intel® 64 and IA-32 Architectures Software Developer's Manual
+ * Volume 3B: System Programming Guide"
+ * http://www.intel.com/products/processor/manuals/
+ */
+
+/*
+ * usage:
+ * # turbostat [-v] [-i interval_sec] [-M MSR#] [command [arg]...]
+ *
+ * examples:
+ *
+ * [root@x980]# ./turbostat
+ *core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *           0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+ *   0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+ *   0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+ *   1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+ *   1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+ *   2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+ *   2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+ *   8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+ *   8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+ *   9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *   9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ *  10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ *  10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+ *
+ * Without any parameters, turbostat prints out counters every 5 seconds
+ * (override the interval with the "-i sec" option).
+ *
+ * core is the processor core number
+ *
+ * CPU is the cpu number according to Linux, ala /dev/cpu
+ *
+ * The first row of data is a system wide summary, and thus has no core or CPU#.
+ *
+ * %c0 is the percent of the interval that the core retired instructions.
+ *
+ * GHz is the average clock rate while the core was in c0 state.
+ *
+ * TSC is the average GHz at which the TSC ran during the entire interval.
+ *
+ * %c1, %c3, %c6 show the residency in hardware CPU core idle states.
+ * Note that these may not equal the software states requested by Linux.
+ *
+ * %pc3, %pc6 show residency in the package idle C-states.
+ *
+ * The "-v" option adds verbosity to the output: eg.
+ *
+ * CPUID GenuineIntel 11 levels family:model:stepping 0x6:2c:2 (6:44:2)
+ * 12 * 133 = 1600 MHz max efficiency
+ * 25 * 133 = 3333 MHz TSC frequency
+ * 26 * 133 = 3467 MHz max turbo 4 active cores
+ * 26 * 133 = 3467 MHz max turbo 3 active cores
+ * 27 * 133 = 3600 MHz max turbo 2 active cores
+ * 27 * 133 = 3600 MHz max turbo 1 active cores
+ *
+ * This tells you what you should be able to get, given nominal
+ * cooling and electrical capacity.  Note that NHM-EX/WSM-EX do
+ * not tell us these max turbo frequencies like NHM/NHM-EP/WSM/WSM-EP do.
+ *
+ * If turbostat is invoked with a command, it will fork that command
+ * and output the statistics gathered while that command was running.
+ * eg. Here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+ * until ^C while the others are mostly idle:
+ *
+ * [root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+ *
+ *^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+ *            8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+ *    0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+ *    0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+ *    1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+ *    1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+ *    2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+ *    2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+ *    8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+ *    8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+ *    9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+ *    9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+ *   10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+ *   10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+ * 6.950866 sec
+ *
+ * Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+ * while the other processors are generally in various states of idle.
+ *
+ * Note that cpu3 is an HT sibling sharing the same core (9)
+ * with cpu9, and thus it is unable to get more idle than c1
+ * when cpu9 is busy.
+ *
+ * Note that the arithmetic average of the GHz column above is 3.24,
+ * yet turbostat shows avg 3.61.  This is a weighted average --
+ * where the weight is %c0.  ie. it is the total number of
+ * unhalted cycles elapsed per time divided by the number of CPUs.
+ *
+ * Note that turbostat reads hardware counters, but doesn't write them.
+ * So it will not interfere with the OS or other programs, including
+ * multiple invocations of itself.
+ *
+ * turbostat depends on the Linux msr driver for /dev/cpu/.../msr
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <dirent.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int do_aperf = 1;	/* TBD set with CPUID */
+unsigned int units = 1000000000;	/* Ghz etc */
+unsigned int do_non_stop_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%x) = %d\n",
+		cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core:: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* C0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	if (do_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	int i;
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int i;
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				i, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				i, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				i, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				i, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				i, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				i, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				i, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				i, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				i, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				i, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", i);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", i);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void compute_average(PCC *delta, PCC *avg)
+{
+	int i;
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(-1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (do_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info()
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+		ratio, bclk, ratio * bclk);
+
+}
+
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev, *next;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+		show_cpu = 1;	/* there is more than 1 CPU (HW thread) */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+		show_cpu = 1;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(-1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus()
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		int retval;
+
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr()
+{
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainestown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	char brand[16];
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	sprintf(brand, "%.4s%.4s%.4s", &ebx, &edx, &ecx);
+
+	if (strncmp(brand, "GenuineIntel", 12)) {
+		fprintf(stderr, "CPUID: %s != GenuineIntel\n", brand);
+		exit(-1);
+	}
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "CPUID %s %d levels family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			brand, max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(-1);
+	}
+
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level >= 0x80000007) {
+		/*
+		 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+		 */
+		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+		do_non_stop_tsc = edx & (1 << 8);
+	}
+	do_nehalem_platform_info = do_non_stop_tsc;
+	do_nhm_cstates =  do_non_stop_tsc;
+	do_snb_cstates = is_snb(family, model);
+	if (do_snb_cstates)
+		fprintf(stderr, "This version does not support SNB\n");
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+usage() {
+	fprintf(stderr, "%s: [-v] [-i interval_sec] [-M MSR#] [command [arg]...]\n",
+		progname);
+}
+
+int get_physical_package_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+
+int get_core_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(-1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+
+/*
+ * in /dev/cpu/ return success for names that are numbers
+ * ie. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+/*
+ * run func(index, cpu) on every cpu in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(-1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(-1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			return cpu_count;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+}
+
+void turbostat_init()
+{
+	int i;
+
+	check_cpuid();
+
+	check_dev_msr();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else  {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(-1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(-1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			show_pkg = 1;
+			show_core = 1;
+			show_cpu = 1;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+			exit(-1);
+		}
+	}
+}
+
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Aug 25, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}
-- 
1.7.3.2.90.gd4c43


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH RESEND] tools: add power/x86/turbostat - v2
  2010-11-15 16:01     ` Len Brown
@ 2010-11-15 17:16       ` Otavio Salvador
  2010-11-16 10:29       ` Andreas Herrmann
  2010-11-24  4:58       ` [PATCH v3] tools: create power/x86/turbostat Len Brown
  2 siblings, 0 replies; 11+ messages in thread
From: Otavio Salvador @ 2010-11-15 17:16 UTC (permalink / raw)
  To: Len Brown; +Cc: Greg Kroah-Hartman, linux-pm, x86, linux-kernel

On Mon, Nov 15, 2010 at 2:01 PM, Len Brown <lenb@kernel.org> wrote:
> From: Len Brown <len.brown@intel.com>
>
> turbostat displays actual processor frequency on modern
> Intel processors.  Note that this capability depends on free-running
> APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
> driver up through Linux-2.6.29.
>
> On Intel Core i3/i5/i7 (Nehalem) and newer processors,
> turbostat displays residency in idle power saving states.
>
> See the comments in the source code for example usage.
>
> Signed-off-by: Len Brown <len.brown@intel.com>
> Reviewed-by: Otavio Salvador

Reviewed-by: Otavio Salvador <otavio@ossystems.com.br>

-- 
Otavio Salvador                  O.S. Systems
E-mail: otavio@ossystems.com.br  http://www.ossystems.com.br
Mobile: +55 53 9981-7854         http://projetos.ossystems.com.br

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH RESEND] tools: add power/x86/turbostat - v2
  2010-11-15 16:01     ` Len Brown
  2010-11-15 17:16       ` Otavio Salvador
@ 2010-11-16 10:29       ` Andreas Herrmann
  2010-11-17  6:44         ` Len Brown
  2010-11-24  4:58       ` [PATCH v3] tools: create power/x86/turbostat Len Brown
  2 siblings, 1 reply; 11+ messages in thread
From: Andreas Herrmann @ 2010-11-16 10:29 UTC (permalink / raw)
  To: Len Brown; +Cc: Greg Kroah-Hartman, linux-pm, x86, linux-kernel

On Mon, Nov 15, 2010 at 11:01:05AM -0500, Len Brown wrote:
> From: Len Brown <len.brown@intel.com>
> 
> turbostat displays actual processor frequency on modern
> Intel processors.  Note that this capability depends on free-running
> APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
> driver up through Linux-2.6.29.

Why isn't cpufreq-aperf sufficient for showing the actual frequency?
The tool can handle both AMD and Intel CPUs.

There is a CPUID flag which can be checked for APERF/MPERF existence,
no need to enforce a general vendor check like this:

  $ ./turbostat
  CPUID: AuthenticAMD != GenuineIntel
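
For reference, a minimal sketch of such a vendor-neutral probe, in the same
inline-asm style the patch already uses -- CPUID.06H:ECX bit 0 advertises
the APERF/MPERF MSRs on both Intel and AMD (illustrative only, not part of
the patch):

	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx;

		/* CPUID leaf 6: thermal and power management feature flags */
		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (6));

		printf("APERF/MPERF %ssupported\n", (ecx & (1 << 0)) ? "" : "not ");
		return 0;
	}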

> On Intel Core i3/i5/i7 (Nehalem) and newer processors,
> turbostat displays residency in idle power saving states.

Can't this be added to cpuidle-info (see branch cpupowerutils in
git://git.kernel.org/pub/scm/utils/kernel/cpufreq/cpufrequtils.git)?

BTW, gcc v4.4 throws several warnings when compiling your
code:

linux-2.6/tools/power/x86/turbostat $ make
cc -Wall    turbostat.c   -o turbostat
turbostat.c: In function ‘get_msr’:
turbostat.c:234: warning: implicit declaration of function ‘pread’
turbostat.c:237: warning: format ‘%x’ expects type ‘unsigned int’, but argument 4 has type ‘off_t’
turbostat.c:237: warning: format ‘%d’ expects type ‘int’, but argument 5 has type ‘ssize_t’
turbostat.c: In function ‘print_counters’:
turbostat.c:368: warning: unused variable ‘i’
turbostat.c: In function ‘compute_average’:
turbostat.c:508: warning: unused variable ‘i’
turbostat.c: In function ‘insert_cpu_counters’:
turbostat.c:642: warning: unused variable ‘next’
turbostat.c: In function ‘re_initialize’:
turbostat.c:747: warning: implicit declaration of function ‘for_all_cpus’
turbostat.c: In function ‘turbostat_loop’:
turbostat.c:779: warning: unused variable ‘retval’
turbostat.c: In function ‘check_cpuid’:
turbostat.c:868: warning: format ‘%.4s’ expects type ‘char *’, but argument 3 has type ‘unsigned int *’
turbostat.c:868: warning: format ‘%.4s’ expects type ‘char *’, but argument 4 has type ‘unsigned int *’
turbostat.c:868: warning: format ‘%.4s’ expects type ‘char *’, but argument 5 has type ‘unsigned int *’
turbostat.c:870: warning: implicit declaration of function ‘strncmp’
turbostat.c: At top level:
turbostat.c:912: warning: return type defaults to ‘int’
turbostat.c: In function ‘dir_filter’:
turbostat.c:957: warning: implicit declaration of function ‘isdigit’
turbostat.c: In function ‘turbostat_init’:
turbostat.c:1004: warning: unused variable ‘i’
turbostat.c: In function ‘fork_it’:
turbostat.c:1038: warning: implicit declaration of function ‘waitpid’
turbostat.c: In function ‘usage’:
turbostat.c:915: warning: control reaches end of non-void function
turbostat.c: In function ‘compute_delta’:
turbostat.c:401: warning: ‘i’ may be used uninitialized in this function
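
(For what it's worth, most of the implicit-declaration warnings above boil
down to missing headers; roughly the following set, though the exact list
may vary with the libc:

	#include <string.h>	/* strncmp */
	#include <ctype.h>	/* isdigit */
	#include <sys/wait.h>	/* waitpid */
	#include <unistd.h>	/* pread; may also need _XOPEN_SOURCE >= 500 */

The format-string and unused-variable warnings need fixes in the code itself.)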


Regards,
Andreas

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH RESEND] tools: add power/x86/turbostat - v2
  2010-11-16 10:29       ` Andreas Herrmann
@ 2010-11-17  6:44         ` Len Brown
  0 siblings, 0 replies; 11+ messages in thread
From: Len Brown @ 2010-11-17  6:44 UTC (permalink / raw)
  To: Andreas Herrmann; +Cc: Greg Kroah-Hartman, linux-pm, x86, linux-kernel

On Tue, 16 Nov 2010, Andreas Herrmann wrote:

> On Mon, Nov 15, 2010 at 11:01:05AM -0500, Len Brown wrote:
> > From: Len Brown <len.brown@intel.com>
> > 
> > turbostat displays actual processor frequency on modern
> > Intel processors.  Note that this capability depends on free-running
> > APERF and MPERF MSRs, which were cleared by the acpi_cpufreq
> > driver up through Linux-2.6.29.
> 
> Why isn't cpufreq-aperf sufficient for showing the actual frequency?
> The tool can handle both AMD and Intel CPUs.

Looking over its source code, I don't see any reason that cpufreq-aperf
wouldn't be sufficient for showing actual frequency, were that your goal.
I hadn't heard of this tool until your note.  I compiled it, but I guess
I'm too lazy to figure out why it didn't run.
It seems somewhat elaborate as compared to the task at hand...

> There is a CPUID flag which can be checked for APERF/MPERF existence,
> no need to enforce a general vendor check like this:
> 
>   $ ./turbostat
>   CPUID: AuthenticAMD != GenuineIntel

turbostat isn't very interesting on systems that don't
have a reliable TSC or C-state residency MSRs.  AFAIK,
AMD has neither.  If you have information to the contrary
and can test it on AMD hardware, I'd be happy to enable AMD,
as I have neither.
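
The C-state residency MSRs are the real question; the invariant-TSC part at
least can be probed in a vendor-neutral way.  A minimal sketch (illustrative
only, same inline-asm style as the patch) checks CPUID.80000007H:EDX bit 8:

	#include <stdio.h>

	int main(void)
	{
		unsigned int eax, ebx, ecx, edx, max_extended;

		/* highest supported extended CPUID leaf */
		asm("cpuid" : "=a" (max_extended), "=b" (ebx), "=c" (ecx), "=d" (edx)
			: "a" (0x80000000));
		if (max_extended < 0x80000007) {
			printf("no CPUID leaf 0x80000007\n");
			return 1;
		}

		/* invariant (non-stop) TSC is advertised by EDX bit 8 */
		asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
			: "a" (0x80000007));
		printf("invariant TSC: %s\n", (edx & (1 << 8)) ? "yes" : "no");
		return 0;
	}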

> > On Intel Core i3/i5/i7 (Nehalem) and newer processors,
> > turbostat displays residency in idle power saving states.
> 
> Can't this be added to cpuidle-info (see branch cpupowerutils in
> git://git.kernel.org/pub/scm/utils/kernel/cpufreq/cpufrequtils.git)?

I suppose it could if somebody were in need of a hobby.
However, if you want a big tool in this space,
you'd probably be better off running powertop2,
which will incorporate what turbostat does, and more.

> BTW, gcc v4.4 throws several warnings when compiling your
> code:

thanks, I'll fix these.

cheers,
Len Brown, Intel Open Source Technology Center

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v3] tools: create power/x86/turbostat
  2010-11-15 16:01     ` Len Brown
  2010-11-15 17:16       ` Otavio Salvador
  2010-11-16 10:29       ` Andreas Herrmann
@ 2010-11-24  4:58       ` Len Brown
  2010-12-07  6:00         ` [PATCH v4] " Len Brown
  2 siblings, 1 reply; 11+ messages in thread
From: Len Brown @ 2010-11-24  4:58 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: linux-pm, x86, linux-kernel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 33302 bytes --]

turbostat is a tool to help verify proper operation of CPU idle power saving
states, processor performance states, and turbo-mode on modern x86 processors.

Signed-off-by: Len Brown <len.brown@intel.com>
---
v3:
Move source comments to a man page
Allow operation on AMD
Add support for new counters in Sandy Bridge

 tools/power/x86/turbostat/Makefile    |    8 +
 tools/power/x86/turbostat/turbostat.8 |  172 ++++++
 tools/power/x86/turbostat/turbostat.c | 1047 +++++++++++++++++++++++++++++++++
 3 files changed, 1227 insertions(+), 0 deletions(-)

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..fd8e1f1
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,8 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
+	install turbostat.8 /usr/share/man/man8
diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
new file mode 100644
index 0000000..ff75125
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.8
@@ -0,0 +1,172 @@
+.TH TURBOSTAT 8
+.SH NAME
+turbostat \- Report processor frequency and idle statistics
+.SH SYNOPSIS
+.ft B
+.B turbostat
+.RB [ "\-v" ]
+.RB [ "\-M MSR#" ]
+.RB command
+.br
+.B turbostat
+.RB [ "\-v" ]
+.RB [ "\-M MSR#" ]
+.RB [ "\-i interval_sec" ]
+.SH DESCRIPTION
+\fBturbostat \fP reports processor topology, frequency
+and idle power state statistics on modern X86 processors.
+Either \fBcommand\fP is forked and statistics are printed
+upon its completion, or statistics are printed periodically.
+
+\fBturbostat \fP
+requires that the processor
+supports an "invariant" TSC, plus the APERF and MPERF MSRs.
+\fBturbostat \fP will report idle cpu power state residency
+on processors that additionally support C-state residency counters.
+
+.SS Options
+The \fB-v\fP option increases verbosity.
+.PP
+The \fB-M MSR#\fP option dumps the specified MSR,
+in addition to the usual frequency and idle statistics.
+.PP
+The \fB-i interval_sec\fP option prints statistics every \fIinterval_sec\fP seconds.
+The default is 5 seconds.
+.PP
+The \fBcommand\fP parameter forks \fBcommand\fP and upon its exit,
+displays the statistics gathered since it was forked.
+.PP
+.SH FIELD DESCRIPTIONS
+.nf
+\fBpkg\fP processor package number.
+\fBcore\fP processor core number.
+\fBCPU\fP Linux CPU (logical processor) number.
+\fB%c0\fP percent of the interval that the CPU retired instructions.
+\fBGHz\fP average clock rate while the CPU was in c0 state.
+\fBTSC\fP average GHz that the TSC ran during the entire interval.
+\fB%c1, %c3, %c6\fP show the percentage residency in hardware core idle states.
+\fB%pc3, %pc6\fP percentage residency in hardware package idle states.
+.fi
+.PP
+.SH EXAMPLE
+Without any parameters, turbostat prints out counters every 5 seconds.
+(override the interval with the "-i sec" option, or specify a command
+for turbostat to fork).
+
+The first row of statistics reflects the average for the entire system.
+Subsequent rows show per-CPU statistics.
+
+.nf
+[root@x980]# ./turbostat
+core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+          0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+  0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+  0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+  1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+  1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+  2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+  2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+  8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+  8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+  9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+  9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ 10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ 10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+.fi
+.SH VERBOSE EXAMPLE
+The "-v" option adds verbosity to the output:
+
+.nf
+GenuineIntel 11 CPUID levels; family:model:stepping 0x6:2c:2 (6:44:2)
+12 * 133 = 1600 MHz max efficiency
+25 * 133 = 3333 MHz TSC frequency
+26 * 133 = 3467 MHz max turbo 4 active cores
+26 * 133 = 3467 MHz max turbo 3 active cores
+27 * 133 = 3600 MHz max turbo 2 active cores
+27 * 133 = 3600 MHz max turbo 1 active cores
+
+.fi
+The \fBmax efficiency\fP frequency, a.k.a. Low Frequency Mode, is the frequency
+available at the minimum package voltage.  The \fBTSC frequency\fP is the nominal
+maximum frequency of the processor if turbo-mode were not available.  This frequency
+should be sustainable on all CPUs indefinitely, given nominal power and cooling.
+The remaining rows show what maximum turbo frequency is possible
+depending on the number of idle cores.  Note that this information is
+not available on all processors.
+.SH FORK EXAMPLE
+If turbostat is invoked with a command, it will fork that command
+and output the statistics gathered when the command exits.
+e.g. here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+until ^C while the other CPUs are mostly idle:
+
+.nf
+[root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+
+^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+           8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+   0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+   0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+   1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+   1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+   2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+   2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+   8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+   8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+   9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+   9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+  10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+  10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+6.950866 sec
+
+.fi
+Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+while the other processors are generally in various states of idle.
+
+Note that cpu3 is an HT sibling sharing core9
+with cpu9, and thus it is unable to get to an idle state
+deeper than c1 while cpu9 is busy.
+
+Note that turbostat reports average GHz of 3.61, while
+the arithmetic average of the GHz column above is 3.24.
+This is a weighted average, where the weight is %c0; i.e. it is the total number of
+un-halted cycles elapsed per unit time, divided by the number of CPUs.
+.SH NOTES
+
+.B "turbostat "
+must be run as root.
+
+.B "turbostat "
+reads hardware counters, but doesn't write them.
+So it will not interfere with the OS or other programs, including
+multiple invocations of itself.
+
+\fBturbostat \fP
+may work poorly on Linux-2.6.20 through 2.6.29,
+as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF MSRs
+in those kernels.
+
+The APERF, MPERF MSRs are defined to count non-halted cycles.
+Although it is not guaranteed by the architecture, turbostat assumes
+that they count at TSC rate, which is true on all processors tested to date.
+
+.SH REFERENCES
+"Intel® Turbo Boost Technology
+in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+http://download.intel.com/design/processor/applnots/320354.pdf
+
+"Intel® 64 and IA-32 Architectures Software Developer's Manual
+Volume 3B: System Programming Guide"
+http://www.intel.com/products/processor/manuals/
+
+.SH FILES
+.ta
+.nf
+/dev/cpu/*/msr
+.fi
+
+.SH "SEE ALSO"
+msr(4), vmstat(8)
+.PP
+.SH AUTHORS
+.nf
+Written by Len Brown <len.brown@intel.com>
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..b8001b1
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1047 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <ctype.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C2_RESIDENCY	0x60D	/* SNB only */
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_PKG_C7_RESIDENCY	0x3FA	/* SNB only */
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+#define MSR_CORE_C7_RESIDENCY	0x3FE	/* SNB only */
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int has_aperf;
+unsigned int units = 1000000000;	/* Ghz etc */
+unsigned int genuine_intel;
+unsigned int has_invariant_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%zx) = %jd\n",
+			cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	if (has_aperf)
+		fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "   %%c7 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "  %%pc2 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "  %%pc7 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	/* topology columns, print blanks on 1st (average) line */
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* %c0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	/* GHz */
+	if (has_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	/* TSC */
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c7/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc2/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc7/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				before->cpu, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				before->cpu, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				before->cpu, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				before->cpu, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				before->cpu, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				before->cpu, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", before->cpu);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", before->cpu);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void compute_average(PCC *delta, PCC *avg)
+{
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_snb_cstates)
+			pcc->c7 = get_msr(pcc->cpu, MSR_CORE_C7_RESIDENCY);
+		if (has_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (has_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_snb_cstates)
+			pcc->pc2 = get_msr(pcc->cpu, MSR_PKG_C2_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (do_snb_cstates)
+			pcc->pc7 = get_msr(pcc->cpu, MSR_PKG_C7_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info()
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+}
+
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	show_cpu = 1;	/* there is more than one CPU */
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+int get_physical_package_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+
+int get_core_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+
+/*
+ * run func(index, cpu) on every cpu in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			return cpu_count;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus()
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr()
+{
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+void check_super_user()
+{
+	if (getuid() != 0) {
+		fprintf(stderr, "must be root\n");
+		exit(-6);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return 0;
+
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainestown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return 0;
+
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	if (ebx == 0x756e6547 && edx == 0x49656e69 && ecx == 0x6c65746e)
+		genuine_intel = 1;
+
+	if (verbose)
+		fprintf(stderr, "%.4s%.4s%.4s ",
+			(char *)&ebx, (char *)&edx, (char *)&ecx);
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "%d CPUID levels; family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(1);
+	}
+
+	/*
+	 * check max extended function levels of CPUID.
+	 * This is needed to check for invariant TSC.
+	 * This check is valid for both Intel and AMD.
+	 */
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level < 0x80000007) {
+		fprintf(stderr, "CPUID: no invariant TSC (max_level 0x%x)\n", max_level);
+		exit(1);
+	}
+
+	/*
+	 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+	 * this check is valid for both Intel and AMD
+	 */
+	asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+	has_invariant_tsc = edx & (1 << 8);
+
+	if (!has_invariant_tsc) {
+		fprintf(stderr, "No invariant TSC\n");
+		exit(1);
+	}
+
+	/*
+	 * APERF/MPERF is advertised by CPUID.EAX=0x6: ECX.bit0
+	 * this check is valid for both Intel and AMD
+	 */
+
+	asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x6));
+	has_aperf = ecx & (1 << 0);
+	if (!has_aperf) {
+		fprintf(stderr, "No APERF MSR\n");
+		exit(1);
+	}
+
+	do_nehalem_platform_info = genuine_intel && has_invariant_tsc;
+	do_nhm_cstates = genuine_intel;	/* all Intel w/ non-stop TSC have NHM counters */
+	do_snb_cstates = is_snb(family, model);
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+void usage()
+{
+	fprintf(stderr, "%s: [-v] [-M MSR#] [-i interval_sec | command ...]\n",
+		progname);
+	exit(1);
+}
+
+
+/*
+ * in /dev/cpu/ return success for names that are numbers
+ * ie. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+void turbostat_init()
+{
+	check_cpuid();
+
+	check_dev_msr();
+	check_super_user();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+		}
+	}
+}
+
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Nov 16, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH v4] tools: create power/x86/turbostat
  2010-11-24  4:58       ` [PATCH v3] tools: create power/x86/turbostat Len Brown
@ 2010-12-07  6:00         ` Len Brown
  2010-12-07 14:29           ` Thiago Farina
  0 siblings, 1 reply; 11+ messages in thread
From: Len Brown @ 2010-12-07  6:00 UTC (permalink / raw)
  To: Greg Kroah-Hartman; +Cc: linux-pm, x86, linux-kernel

[-- Attachment #1: Type: TEXT/PLAIN, Size: 33251 bytes --]

From: Len Brown <len.brown@intel.com>

turbostat is a tool to help verify proper operation
of processor power management states on modern x86.

Signed-off-by: Len Brown <len.brown@intel.com>
---
v4: fix open file descriptor leak, reported by David C. Wong

 tools/power/x86/turbostat/Makefile    |    8 +
 tools/power/x86/turbostat/turbostat.8 |  172 ++++++
 tools/power/x86/turbostat/turbostat.c | 1048 +++++++++++++++++++++++++++++++++
 3 files changed, 1228 insertions(+), 0 deletions(-)

diff --git a/tools/power/x86/turbostat/Makefile b/tools/power/x86/turbostat/Makefile
new file mode 100644
index 0000000..fd8e1f1
--- /dev/null
+++ b/tools/power/x86/turbostat/Makefile
@@ -0,0 +1,8 @@
+turbostat : turbostat.c
+
+clean :
+	rm -f turbostat
+
+install :
+	install turbostat /usr/bin/turbostat
+	install turbostat.8 /usr/share/man/man8
diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
new file mode 100644
index 0000000..ff75125
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.8
@@ -0,0 +1,172 @@
+.TH TURBOSTAT 8
+.SH NAME
+turbostat \- Report processor frequency and idle statistics
+.SH SYNOPSIS
+.ft B
+.B turbostat
+.RB [ "\-v" ]
+.RB [ "\-M MSR#" ]
+.RB command
+.br
+.B turbostat
+.RB [ "\-v" ]
+.RB [ "\-M MSR#" ]
+.RB [ "\-i interval_sec" ]
+.SH DESCRIPTION
+\fBturbostat \fP reports processor topology, frequency
+and idle power state statistics on modern X86 processors.
+Either \fBcommand\fP is forked and statistics are printed
+upon its completion, or statistics are printed periodically.
+
+\fBturbostat \fP
+requires that the processor
+supports an "invariant" TSC, plus the APERF and MPERF MSRs.
+\fBturbostat \fP will report idle cpu power state residency
+on processors that additionally support C-state residency counters.
+
+.SS Options
+The \fB-v\fP option increases verbosity.
+.PP
+The \fB-M MSR#\fP option dumps the specified MSR,
+in addition to the usual frequency and idle statistics.
+.PP
+The \fB-i interval_sec\fP option prints statistics every \fIinterval_sec\fP seconds.
+The default is 5 seconds.
+.PP
+The \fBcommand\fP parameter forks \fBcommand\fP and upon its exit,
+displays the statistics gathered since it was forked.
+.PP
+.SH FIELD DESCRIPTIONS
+.nf
+\fBpkg\fP processor package number.
+\fBcore\fP processor core number.
+\fBCPU\fP Linux CPU (logical processor) number.
+\fB%c0\fP percent of the interval that the CPU retired instructions.
+\fBGHz\fP average clock rate while the CPU was in c0 state.
+\fBTSC\fP average GHz that the TSC ran during the entire interval.
+\fB%c1, %c3, %c6\fP show the percentage residency in hardware core idle states.
+\fB%pc3, %pc6\fP percentage residency in hardware package idle states.
+.fi
+.PP
+.SH EXAMPLE
+Without any parameters, turbostat prints out counters every 5 seconds.
+(override the interval with the "-i sec" option, or specify a command
+for turbostat to fork).
+
+The first row of statistics reflects the average for the entire system.
+Subsequent rows show per-CPU statistics.
+
+.nf
+[root@x980]# ./turbostat
+core CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+          0.04 1.62 3.38   0.11   0.00  99.85   0.00  95.07
+  0   0   0.04 1.62 3.38   0.06   0.00  99.90   0.00  95.07
+  0   6   0.02 1.62 3.38   0.08   0.00  99.90   0.00  95.07
+  1   2   0.10 1.62 3.38   0.29   0.00  99.61   0.00  95.07
+  1   8   0.11 1.62 3.38   0.28   0.00  99.61   0.00  95.07
+  2   4   0.01 1.62 3.38   0.01   0.00  99.98   0.00  95.07
+  2  10   0.01 1.61 3.38   0.02   0.00  99.98   0.00  95.07
+  8   1   0.07 1.62 3.38   0.15   0.00  99.78   0.00  95.07
+  8   7   0.03 1.62 3.38   0.19   0.00  99.78   0.00  95.07
+  9   3   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+  9   9   0.01 1.62 3.38   0.02   0.00  99.98   0.00  95.07
+ 10   5   0.01 1.62 3.38   0.13   0.00  99.86   0.00  95.07
+ 10  11   0.08 1.62 3.38   0.05   0.00  99.86   0.00  95.07
+.fi
+.SH VERBOSE EXAMPLE
+The "-v" option adds verbosity to the output:
+
+.nf
+GenuineIntel 11 CPUID levels; family:model:stepping 0x6:2c:2 (6:44:2)
+12 * 133 = 1600 MHz max efficiency
+25 * 133 = 3333 MHz TSC frequency
+26 * 133 = 3467 MHz max turbo 4 active cores
+26 * 133 = 3467 MHz max turbo 3 active cores
+27 * 133 = 3600 MHz max turbo 2 active cores
+27 * 133 = 3600 MHz max turbo 1 active cores
+
+.fi
+The \fBmax efficiency\fP frequency, a.k.a. Low Frequency Mode, is the frequency
+available at the minimum package voltage.  The \fBTSC frequency\fP is the nominal
+maximum frequency of the processor if turbo-mode were not available.  This frequency
+should be sustainable on all CPUs indefinitely, given nominal power and cooling.
+The remaining rows show what maximum turbo frequency is possible
+depending on the number of idle cores.  Note that this information is
+not available on all processors.
+.SH FORK EXAMPLE
+If turbostat is invoked with a command, it will fork that command
+and output the statistics gathered when the command exits.
+e.g. here a cycle soaker is run on 1 CPU (see %c0) for a few seconds
+until ^C while the other CPUs are mostly idle:
+
+.nf
+[root@x980 lenb]# ./turbostat cat /dev/zero > /dev/null
+
+^Ccore CPU   %c0   GHz  TSC   %c1    %c3    %c6   %pc3   %pc6
+           8.49 3.63 3.38  16.23   0.66  74.63   0.00   0.00
+   0   0   1.22 3.62 3.38  32.18   0.00  66.60   0.00   0.00
+   0   6   0.40 3.61 3.38  33.00   0.00  66.60   0.00   0.00
+   1   2   0.11 3.14 3.38   0.19   3.95  95.75   0.00   0.00
+   1   8   0.05 2.88 3.38   0.25   3.95  95.75   0.00   0.00
+   2   4   0.00 3.13 3.38   0.02   0.00  99.98   0.00   0.00
+   2  10   0.00 3.09 3.38   0.02   0.00  99.98   0.00   0.00
+   8   1   0.04 3.50 3.38  14.43   0.00  85.54   0.00   0.00
+   8   7   0.03 2.98 3.38  14.43   0.00  85.54   0.00   0.00
+   9   3   0.00 3.16 3.38 100.00   0.00   0.00   0.00   0.00
+   9   9  99.93 3.63 3.38   0.06   0.00   0.00   0.00   0.00
+  10   5   0.01 2.82 3.38   0.08   0.00  99.91   0.00   0.00
+  10  11   0.02 3.36 3.38   0.06   0.00  99.91   0.00   0.00
+6.950866 sec
+
+.fi
+Above, the cycle soaker drives cpu9 up to its 3.6 GHz turbo limit
+while the other processors are generally in various states of idle.
+
+Note that cpu3 is an HT sibling sharing core9
+with cpu9, and thus it is unable to get to an idle state
+deeper than c1 while cpu9 is busy.
+
+Note that turbostat reports an average GHz of 3.63, while
+the arithmetic average of the GHz column above is 3.24.
+This is a weighted average, where the weight is %c0; i.e. it reflects
+the total number of un-halted cycles elapsed per unit time, divided by
+the number of CPUs.
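+
+As a rough illustration using the example output above (this is an
+approximation, not turbostat's exact internal arithmetic):
+.nf
+weighted GHz = sum(%c0 * GHz) / sum(%c0)
+            ~= (99.93*3.63 + 1.22*3.62 + 0.40*3.61 + ...) / 101.8
+            ~= 3.63
+.fi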
+.SH NOTES
+
+.B "turbostat "
+must be run as root.
+
+.B "turbostat "
+reads hardware counters, but doesn't write them.
+So it will not interfere with the OS or other programs, including
+multiple invocations of itself.
+
+\fBturbostat \fP
+may work poorly on Linux-2.6.20 through 2.6.29,
+as \fBacpi-cpufreq \fPperiodically cleared the APERF and MPERF MSRs
+in those kernels.
+
+The APERF, MPERF MSRs are defined to count non-halted cycles.
+Although it is not guaranteed by the architecture, turbostat assumes
+that they count at TSC rate, which is true on all processors tested to date.
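+
+Roughly, turbostat derives the GHz column from these counters over
+each measurement interval as
+.nf
+GHz = (delta-TSC / interval / 10^9) * (delta-APERF / delta-MPERF)
+.fi
+i.e. the average frequency while the CPU was not halted.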
+
+.SH REFERENCES
+"Intel® Turbo Boost Technology
+in Intel® Core™ Microarchitecture (Nehalem) Based Processors"
+http://download.intel.com/design/processor/applnots/320354.pdf
+
+"Intel® 64 and IA-32 Architectures Software Developer's Manual
+Volume 3B: System Programming Guide"
+http://www.intel.com/products/processor/manuals/
+
+.SH FILES
+.ta
+.nf
+/dev/cpu/*/msr
+.fi
+
+.SH "SEE ALSO"
+msr(4), vmstat(8)
+.PP
+.SH AUTHORS
+.nf
+Written by Len Brown <len.brown@intel.com>
diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
new file mode 100644
index 0000000..4c6983d
--- /dev/null
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -0,0 +1,1048 @@
+/*
+ * turbostat -- show CPU frequency and C-state residency
+ * on modern Intel turbo-capable processors.
+ *
+ * Copyright (c) 2010, Intel Corporation.
+ * Len Brown <len.brown@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <sys/stat.h>
+#include <sys/resource.h>
+#include <fcntl.h>
+#include <signal.h>
+#include <sys/time.h>
+#include <stdlib.h>
+#include <dirent.h>
+#include <string.h>
+#include <ctype.h>
+
+#define MSR_TSC	0x10
+#define MSR_NEHALEM_PLATFORM_INFO	0xCE
+#define MSR_NEHALEM_TURBO_RATIO_LIMIT	0x1AD
+#define MSR_APERF	0xE8
+#define MSR_MPERF	0xE7
+#define MSR_PKG_C2_RESIDENCY	0x60D	/* SNB only */
+#define MSR_PKG_C3_RESIDENCY	0x3F8
+#define MSR_PKG_C6_RESIDENCY	0x3F9
+#define MSR_PKG_C7_RESIDENCY	0x3FA	/* SNB only */
+#define MSR_CORE_C3_RESIDENCY	0x3FC
+#define MSR_CORE_C6_RESIDENCY	0x3FD
+#define MSR_CORE_C7_RESIDENCY	0x3FE	/* SNB only */
+
+char *proc_stat = "/proc/stat";
+unsigned int interval_sec = 5;	/* set with -i interval_sec */
+unsigned int verbose;		/* set with -v */
+unsigned int skip_c0;
+unsigned int skip_c1;
+unsigned int do_nhm_cstates;
+unsigned int do_snb_cstates;
+unsigned int has_aperf;
+unsigned int units = 1000000000;	/* Ghz etc */
+unsigned int genuine_intel;
+unsigned int has_invariant_tsc;
+unsigned int do_nehalem_platform_info;
+unsigned int do_nehalem_turbo_ratio_limit;
+unsigned int extra_msr_offset;
+double bclk;
+unsigned int show_pkg;
+unsigned int show_core;
+unsigned int show_cpu;
+
+int aperf_mperf_unstable;
+int backwards_count;
+char *progname;
+int need_reinitialize;
+
+int num_cpus;
+
+typedef struct per_cpu_counters {
+	unsigned long long tsc;		/* per thread */
+	unsigned long long aperf;	/* per thread */
+	unsigned long long mperf;	/* per thread */
+	unsigned long long c1;	/* per thread (calculated) */
+	unsigned long long c3;	/* per core */
+	unsigned long long c6;	/* per core */
+	unsigned long long c7;	/* per core */
+	unsigned long long pc2;	/* per package */
+	unsigned long long pc3;	/* per package */
+	unsigned long long pc6;	/* per package */
+	unsigned long long pc7;	/* per package */
+	unsigned long long extra_msr;	/* per thread */
+	int pkg;
+	int core;
+	int cpu;
+	struct per_cpu_counters *next;
+} PCC;
+
+PCC *pcc_even;
+PCC *pcc_odd;
+PCC *pcc_delta;
+PCC *pcc_average;
+struct timeval tv_even;
+struct timeval tv_odd;
+struct timeval tv_delta;
+
+unsigned long long get_msr(int cpu, off_t offset)
+{
+	ssize_t retval;
+	unsigned long long msr;
+	char pathname[32];
+	int fd;
+
+	sprintf(pathname, "/dev/cpu/%d/msr", cpu);
+	fd = open(pathname, O_RDONLY);
+	if (fd < 0) {
+		perror(pathname);
+		need_reinitialize = 1;
+		return 0;
+	}
+
+	retval = pread(fd, &msr, sizeof msr, offset);
+	if (retval != sizeof msr) {
+		fprintf(stderr, "cpu%d pread(..., 0x%zx) = %jd\n",
+			cpu, offset, retval);
+		exit(-2);
+	}
+
+	close(fd);
+	return msr;
+}
+
+void print_header()
+{
+	if (show_pkg)
+		fprintf(stderr, "pkg ");
+	if (show_core)
+		fprintf(stderr, "core");
+	if (show_cpu)
+		fprintf(stderr, " CPU");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c0 ");
+	if (has_aperf)
+		fprintf(stderr, "  GHz");
+	fprintf(stderr, "  TSC");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c1 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "   %%c6 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "   %%c7 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "  %%pc2 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc3 ");
+	if (do_nhm_cstates)
+		fprintf(stderr, "  %%pc6 ");
+	if (do_snb_cstates)
+		fprintf(stderr, "  %%pc7 ");
+	if (extra_msr_offset)
+		fprintf(stderr, "       MSR 0x%x ", extra_msr_offset);
+
+	putc('\n', stderr);
+}
+
+void dump_pcc(PCC *pcc)
+{
+	fprintf(stderr, "package: %d ", pcc->pkg);
+	fprintf(stderr, "core:: %d ", pcc->core);
+	fprintf(stderr, "CPU: %d ", pcc->cpu);
+	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
+	fprintf(stderr, "c3: %016llX\n", pcc->c3);
+	fprintf(stderr, "c6: %016llX\n", pcc->c6);
+	fprintf(stderr, "c7: %016llX\n", pcc->c7);
+	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
+	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
+	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
+	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
+	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+}
+
+void dump_list(PCC *pcc)
+{
+	printf("dump_list 0x%p\n", pcc);
+
+	for (; pcc; pcc = pcc->next)
+		dump_pcc(pcc);
+}
+
+void print_pcc(PCC *p)
+{
+	double interval_float;
+
+	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
+
+	/* topology columns, print blanks on 1st (average) line */
+	if (p == pcc_average) {
+		if (show_pkg)
+			fprintf(stderr, "    ");
+		if (show_core)
+			fprintf(stderr, "    ");
+		if (show_cpu)
+			fprintf(stderr, "    ");
+	} else {
+		if (show_pkg)
+			fprintf(stderr, "%4d", p->pkg);
+		if (show_core)
+			fprintf(stderr, "%4d", p->core);
+		if (show_cpu)
+			fprintf(stderr, "%4d", p->cpu);
+	}
+
+	/* %c0 */
+	if (do_nhm_cstates) {
+		if (!skip_c0)
+			fprintf(stderr, "%7.2f", 100.0 * p->mperf/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+
+	/* GHz */
+	if (has_aperf) {
+		if (!aperf_mperf_unstable) {
+			fprintf(stderr, "%5.2f",
+				1.0 * p->tsc / units * p->aperf /
+				p->mperf / interval_float);
+		} else {
+			if (p->aperf > p->tsc || p->mperf > p->tsc) {
+				fprintf(stderr, " ****");
+			} else {
+				fprintf(stderr, "%4.1f*",
+					1.0 * p->tsc /
+					units * p->aperf /
+					p->mperf / interval_float);
+			}
+		}
+	}
+
+	/* TSC */
+	fprintf(stderr, "%5.2f", 1.0 * p->tsc/units/interval_float);
+
+	if (do_nhm_cstates) {
+		if (!skip_c1)
+			fprintf(stderr, "%7.2f", 100.0 * p->c1/p->tsc);
+		else
+			fprintf(stderr, "   ****");
+	}
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c6/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->c7/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc2/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc3/p->tsc);
+	if (do_nhm_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc6/p->tsc);
+	if (do_snb_cstates)
+		fprintf(stderr, "%7.2f", 100.0 * p->pc7/p->tsc);
+	if (extra_msr_offset)
+		fprintf(stderr, "  0x%016llx", p->extra_msr);
+	putc('\n', stderr);
+}
+
+void print_counters(PCC *cnt)
+{
+	PCC *pcc;
+
+	print_header();
+
+	if (num_cpus > 1)
+		print_pcc(pcc_average);
+
+	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
+		print_pcc(pcc);
+
+}
+
+#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
+
+
+int compute_delta(PCC *after, PCC *before, PCC *delta)
+{
+	int errors = 0;
+	int perf_err = 0;
+
+	skip_c0 = skip_c1 = 0;
+
+	for ( ; after && before && delta;
+		after = after->next, before = before->next, delta = delta->next) {
+		if (before->cpu != after->cpu) {
+			printf("cpu configuration changed: %d != %d\n",
+				before->cpu, after->cpu);
+			return -1;
+		}
+
+		if (SUBTRACT_COUNTER(after->tsc, before->tsc, delta->tsc)) {
+			fprintf(stderr, "cpu%d TSC went backwards %llX to %llX\n",
+				before->cpu, before->tsc, after->tsc);
+			errors++;
+		}
+		/* check for TSC < 1 Mcycles over interval */
+		if (delta->tsc < (1000 * 1000)) {
+			fprintf(stderr, "Insanely slow TSC rate,"
+				" TSC stops in idle?\n");
+			fprintf(stderr, "You can disable all c-states"
+				" by booting with \"idle=poll\"\n");
+			fprintf(stderr, "or just the deep ones with"
+				" \"processor.max_cstate=1\"\n");
+			exit(-3);
+		}
+		if (SUBTRACT_COUNTER(after->c3, before->c3, delta->c3)) {
+			fprintf(stderr, "cpu%d c3 counter went backwards %llX to %llX\n",
+				before->cpu, before->c3, after->c3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c6, before->c6, delta->c6)) {
+			fprintf(stderr, "cpu%d c6 counter went backwards %llX to %llX\n",
+				before->cpu, before->c6, after->c6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->c7, before->c7, delta->c7)) {
+			fprintf(stderr, "cpu%d c7 counter went backwards %llX to %llX\n",
+				before->cpu, before->c7, after->c7);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc2, before->pc2, delta->pc2)) {
+			fprintf(stderr, "cpu%d pc2 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc2, after->pc2);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc3, before->pc3, delta->pc3)) {
+			fprintf(stderr, "cpu%d pc3 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc3, after->pc3);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc6, before->pc6, delta->pc6)) {
+			fprintf(stderr, "cpu%d pc6 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc6, after->pc6);
+			errors++;
+		}
+		if (SUBTRACT_COUNTER(after->pc7, before->pc7, delta->pc7)) {
+			fprintf(stderr, "cpu%d pc7 counter went backwards %llX to %llX\n",
+				before->cpu, before->pc7, after->pc7);
+			errors++;
+		}
+
+		perf_err = SUBTRACT_COUNTER(after->aperf, before->aperf, delta->aperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d aperf counter went backwards %llX to %llX\n",
+				before->cpu, before->aperf, after->aperf);
+		}
+		perf_err |= SUBTRACT_COUNTER(after->mperf, before->mperf, delta->mperf);
+		if (perf_err) {
+			fprintf(stderr, "cpu%d mperf counter went backwards %llX to %llX\n",
+				before->cpu, before->mperf, after->mperf);
+		}
+		if (perf_err) {
+			if (!aperf_mperf_unstable) {
+				fprintf(stderr, "%s: APERF or MPERF went backwards *\n", progname);
+				fprintf(stderr, "* Frequency results do not cover entire interval *\n");
+				fprintf(stderr, "* fix this by running Linux-2.6.30 or later *\n");
+
+				aperf_mperf_unstable = 1;
+			}
+			/*
+			 * mperf delta is likely a huge "positive" number
+			 * can not use it for calculating c0 time
+			 */
+			skip_c0 = 1;
+			skip_c1 = 1;
+		}
+
+		/*
+		 * As mperf and tsc collection are not atomic,
+		 * it is possible for mperf's non-halted cycles
+		 * to exceed TSC's all cycles: show c1 = 0% in that case.
+		 */
+		if (delta->mperf > delta->tsc)
+			delta->c1 = 0;
+		else /* normal case, derive c1 */
+			delta->c1 = delta->tsc - delta->mperf
+				- delta->c3 - delta->c6 - delta->c7;
+
+		if (delta->mperf == 0)
+			delta->mperf = 1;	/* divide by 0 protection */
+
+		/*
+		 * for "extra msr", just copy the latest w/o subtracting
+		 */
+		delta->extra_msr = after->extra_msr;
+		if (errors) {
+			fprintf(stderr, "ERROR cpu%d before:\n", before->cpu);
+			dump_pcc(before);
+			fprintf(stderr, "ERROR cpu%d after:\n", before->cpu);
+			dump_pcc(after);
+			errors = 0;
+		}
+	}
+	return 0;
+}
+
+void compute_average(PCC *delta, PCC *avg)
+{
+	PCC *sum;
+
+	sum = calloc(1, sizeof(PCC));
+	if (sum == NULL) {
+		perror("calloc sum");
+		exit(1);
+	}
+
+	for (; delta; delta = delta->next) {
+		sum->tsc += delta->tsc;
+		sum->c1 += delta->c1;
+		sum->c3 += delta->c3;
+		sum->c6 += delta->c6;
+		sum->c7 += delta->c7;
+		sum->aperf += delta->aperf;
+		sum->mperf += delta->mperf;
+		sum->pc2 += delta->pc2;
+		sum->pc3 += delta->pc3;
+		sum->pc6 += delta->pc6;
+		sum->pc7 += delta->pc7;
+	}
+	avg->tsc = sum->tsc/num_cpus;
+	avg->c1 = sum->c1/num_cpus;
+	avg->c3 = sum->c3/num_cpus;
+	avg->c6 = sum->c6/num_cpus;
+	avg->c7 = sum->c7/num_cpus;
+	avg->aperf = sum->aperf/num_cpus;
+	avg->mperf = sum->mperf/num_cpus;
+	avg->pc2 = sum->pc2/num_cpus;
+	avg->pc3 = sum->pc3/num_cpus;
+	avg->pc6 = sum->pc6/num_cpus;
+	avg->pc7 = sum->pc7/num_cpus;
+
+	free(sum);
+}
+
+void get_counters(PCC *pcc)
+{
+	for ( ; pcc; pcc = pcc->next) {
+		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+		if (do_nhm_cstates)
+			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+		if (do_snb_cstates)
+			pcc->c7 = get_msr(pcc->cpu, MSR_CORE_C7_RESIDENCY);
+		if (has_aperf)
+			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+		if (has_aperf)
+			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+		if (do_snb_cstates)
+			pcc->pc2 = get_msr(pcc->cpu, MSR_PKG_C2_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+		if (do_nhm_cstates)
+			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+		if (do_snb_cstates)
+			pcc->pc7 = get_msr(pcc->cpu, MSR_PKG_C7_RESIDENCY);
+		if (extra_msr_offset)
+			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+	}
+}
+
+
+void print_nehalem_info()
+{
+	unsigned long long msr;
+	unsigned int ratio;
+
+	if (!do_nehalem_platform_info)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
+
+	ratio = (msr >> 40) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
+		ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
+		ratio, bclk, ratio * bclk);
+
+	if (verbose > 1)
+		fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
+
+	if (!do_nehalem_turbo_ratio_limit)
+		return;
+
+	msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
+
+	ratio = (msr >> 24) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 16) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 8) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+	ratio = (msr >> 0) & 0xFF;
+	if (ratio)
+		fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
+			ratio, bclk, ratio * bclk);
+
+}
+
+void free_counter_list(PCC *list)
+{
+	PCC *p;
+
+	for (p = list; p; ) {
+		PCC *free_me;
+
+		free_me = p;
+		p = p->next;
+		free(free_me);
+	}
+	return;
+}
+
+void free_all_counters(void)
+{
+	free_counter_list(pcc_even);
+	pcc_even = NULL;
+
+	free_counter_list(pcc_odd);
+	pcc_odd = NULL;
+
+	free_counter_list(pcc_delta);
+	pcc_delta = NULL;
+
+	free_counter_list(pcc_average);
+	pcc_average = NULL;
+}
+
+void insert_cpu_counters(PCC **list, PCC *new)
+{
+	PCC *prev;
+
+	/*
+	 * list was empty
+	 */
+	if (*list == NULL) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	show_cpu = 1;	/* there is more than one CPU */
+
+	/*
+	 * insert on front of list.
+	 * It is sorted by ascending package#, core#, cpu#
+	 */
+	if (((*list)->pkg > new->pkg) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
+	    (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
+		new->next = *list;
+		*list = new;
+		return;
+	}
+
+	prev = *list;
+
+	while (prev->next && (prev->next->pkg < new->pkg)) {
+		prev = prev->next;
+		show_pkg = 1;	/* there is more than 1 package */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core < new->core)) {
+		prev = prev->next;
+		show_core = 1;	/* there is more than 1 core */
+	}
+
+	while (prev->next && (prev->next->pkg == new->pkg)
+		&& (prev->next->core == new->core)
+		&& (prev->next->cpu < new->cpu)) {
+		prev = prev->next;
+	}
+
+	/*
+	 * insert after "prev"
+	 */
+	new->next = prev->next;
+	prev->next = new;
+
+	return;
+}
+
+void alloc_new_cpu_counters(int pkg, int core, int cpu)
+{
+	PCC *new;
+
+	if (verbose > 1)
+		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_odd, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_even, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	insert_cpu_counters(&pcc_delta, new);
+
+	new = (PCC *)calloc(1, sizeof(PCC));
+	if (new == NULL) {
+		perror("calloc");
+		exit(1);
+	}
+	new->pkg = pkg;
+	new->core = core;
+	new->cpu = cpu;
+	pcc_average = new;
+}
+
+int get_physical_package_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int pkg;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(1);
+	}
+	fscanf(filep, "%d", &pkg);
+	fclose(filep);
+	return pkg;
+}
+
+int get_core_id(int cpu)
+{
+	char path[64];
+	FILE *filep;
+	int core;
+
+	sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
+	filep = fopen(path, "r");
+	if (filep == NULL) {
+		perror(path);
+		exit(1);
+	}
+	fscanf(filep, "%d", &core);
+	fclose(filep);
+	return core;
+}
+
+/*
+ * run func(index, cpu) on every cpu in /proc/stat
+ */
+
+int for_all_cpus(void (func)(int, int, int))
+{
+	FILE *fp;
+	int cpu_count;
+	int retval;
+
+	fp = fopen(proc_stat, "r");
+	if (fp == NULL) {
+		perror(proc_stat);
+		exit(1);
+	}
+
+	retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
+	if (retval != 0) {
+		perror("/proc/stat format");
+		exit(1);
+	}
+
+	for (cpu_count = 0; ; cpu_count++) {
+		int cpu;
+
+		retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
+		if (retval != 1)
+			break;
+
+		func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
+	}
+	fclose(fp);
+	return cpu_count;
+}
+
+void re_initialize(void)
+{
+	printf("turbostat: topology changed, re-initializing.\n");
+	free_all_counters();
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	need_reinitialize = 0;
+	printf("num_cpus is now %d\n", num_cpus);
+}
+
+void dummy(int pkg, int core, int cpu) { return; }
+/*
+ * check to see if a cpu came on-line
+ */
+void verify_num_cpus()
+{
+	int new_num_cpus;
+
+	new_num_cpus = for_all_cpus(dummy);
+
+	if (new_num_cpus != num_cpus) {
+		if (verbose)
+			printf("num_cpus was %d, is now  %d\n",
+				num_cpus, new_num_cpus);
+		need_reinitialize = 1;
+	}
+
+	return;
+}
+
+void turbostat_loop()
+{
+restart:
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	while (1) {
+		verify_num_cpus();
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_odd);
+		gettimeofday(&tv_odd, (struct timezone *)NULL);
+
+		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		timersub(&tv_odd, &tv_even, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+		if (need_reinitialize) {
+			re_initialize();
+			goto restart;
+		}
+		sleep(interval_sec);
+		get_counters(pcc_even);
+		gettimeofday(&tv_even, (struct timezone *)NULL);
+		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		timersub(&tv_even, &tv_odd, &tv_delta);
+		compute_average(pcc_delta, pcc_average);
+		print_counters(pcc_delta);
+	}
+}
+
+void check_dev_msr()
+{
+	struct stat sb;
+
+	if (stat("/dev/cpu/0/msr", &sb)) {
+		fprintf(stderr, "no /dev/cpu/0/msr\n");
+		fprintf(stderr, "Try \"# modprobe msr\"\n");
+		exit(-5);
+	}
+}
+
+void check_super_user()
+{
+	if (getuid() != 0) {
+		fprintf(stderr, "must be root\n");
+		exit(-6);
+	}
+}
+
+int has_nehalem_turbo_ratio_limit(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return 0;
+
+	if (family != 6)
+		return 0;
+
+	switch (model) {
+	case 0x1A:	/* Core i7, Xeon 5500 series - Bloomfield, Gainstown NHM-EP */
+	case 0x1E:	/* Core i7 and i5 Processor - Clarksfield, Lynnfield, Jasper Forest */
+	case 0x1F:	/* Core i7 and i5 Processor - Nehalem */
+	case 0x25:	/* Westmere Client - Clarkdale, Arrandale */
+	case 0x2C:	/* Westmere EP - Gulftown */
+	case 0x2A:	/* SNB */
+	case 0x2D:	/* SNB Xeon */
+		return 1;
+	case 0x2E:	/* Nehalem-EX Xeon - Beckton */
+	case 0x2F:	/* Westmere-EX Xeon - Eagleton */
+	default:
+		return 0;
+	}
+}
+
+int is_snb(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return 0;
+
+	switch (model) {
+	case 0x2A:
+	case 0x2D:
+		return 1;
+	}
+	return 0;
+}
+
+double discover_bclk(unsigned int family, unsigned int model)
+{
+	if (is_snb(family, model))
+		return 100.00;
+	else
+		return 133.33;
+}
+
+void check_cpuid()
+{
+	unsigned int eax, ebx, ecx, edx, max_level;
+	unsigned int fms, family, model, stepping;
+
+	eax = ebx = ecx = edx = 0;
+
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0));
+
+	if (ebx == 0x756e6547 && edx == 0x49656e69 && ecx == 0x6c65746e)
+		genuine_intel = 1;
+
+	if (verbose)
+		fprintf(stderr, "%.4s%.4s%.4s ",
+			(char *)&ebx, (char *)&edx, (char *)&ecx);
+
+	asm("cpuid" : "=a" (fms), "=c" (ecx), "=d" (edx) : "a" (1) : "ebx");
+	family = (fms >> 8) & 0xf;
+	model = (fms >> 4) & 0xf;
+	stepping = fms & 0xf;
+	if (family == 6 || family == 0xf)
+		model += ((fms >> 16) & 0xf) << 4;
+
+	if (verbose)
+		fprintf(stderr, "%d CPUID levels; family:model:stepping 0x%x:%x:%x (%d:%d:%d)\n",
+			max_level, family, model, stepping, family, model, stepping);
+
+	if (!(edx & (1 << 5))) {
+		fprintf(stderr, "CPUID: no MSR\n");
+		exit(1);
+	}
+
+	/*
+	 * check max extended function levels of CPUID.
+	 * This is needed to check for invariant TSC.
+	 * This check is valid for both Intel and AMD.
+	 */
+	ebx = ecx = edx = 0;
+	asm("cpuid" : "=a" (max_level), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000000));
+
+	if (max_level < 0x80000007) {
+		fprintf(stderr, "CPUID: no invariant TSC (max_level 0x%x)\n", max_level);
+		exit(1);
+	}
+
+	/*
+	 * Non-Stop TSC is advertised by CPUID.EAX=0x80000007: EDX.bit8
+	 * this check is valid for both Intel and AMD
+	 */
+	asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x80000007));
+	has_invariant_tsc = edx & (1 << 8);
+
+	if (!has_invariant_tsc) {
+		fprintf(stderr, "No invariant TSC\n");
+		exit(1);
+	}
+
+	/*
+	 * APERF/MPERF is advertised by CPUID.EAX=0x6: ECX.bit0
+	 * this check is valid for both Intel and AMD
+	 */
+
+	asm("cpuid" : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) : "a" (0x6));
+	has_aperf = ecx & (1 << 0);
+	if (!has_aperf) {
+		fprintf(stderr, "No APERF MSR\n");
+		exit(1);
+	}
+
+	do_nehalem_platform_info = genuine_intel && has_invariant_tsc;
+	do_nhm_cstates = genuine_intel;	/* all Intel w/ non-stop TSC have NHM counters */
+	do_snb_cstates = is_snb(family, model);
+	bclk = discover_bclk(family, model);
+
+	do_nehalem_turbo_ratio_limit = has_nehalem_turbo_ratio_limit(family, model);
+}
+
+
+void usage()
+{
+	fprintf(stderr, "%s: [-v] [-M MSR#] [-i interval_sec | command ...]\n",
+		progname);
+	exit(1);
+}
+
+
+/*
+ * in /dev/cpu/ return success for names that are numbers
+ * ie. filter out ".", "..", "microcode".
+ */
+int dir_filter(const struct dirent *dirp)
+{
+	if (isdigit(dirp->d_name[0]))
+		return 1;
+	else
+		return 0;
+}
+
+int open_dev_cpu_msr(int dummy1)
+{
+	return 0;
+}
+
+void turbostat_init()
+{
+	check_cpuid();
+
+	check_dev_msr();
+	check_super_user();
+
+	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+
+	if (verbose)
+		print_nehalem_info();
+}
+
+int fork_it(char **argv)
+{
+	int retval;
+	pid_t child_pid;
+	get_counters(pcc_even);
+	gettimeofday(&tv_even, (struct timezone *)NULL);
+
+	child_pid = fork();
+	if (!child_pid) {
+		/* child */
+		execvp(argv[0], argv);
+	} else {
+		int status;
+
+		/* parent */
+		if (child_pid == -1) {
+			perror("fork");
+			exit(1);
+		}
+
+		signal(SIGINT, SIG_IGN);
+		signal(SIGQUIT, SIG_IGN);
+		if (waitpid(child_pid, &status, 0) == -1) {
+			perror("wait");
+			exit(1);
+		}
+	}
+	get_counters(pcc_odd);
+	gettimeofday(&tv_odd, (struct timezone *)NULL);
+	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+
+	timersub(&tv_odd, &tv_even, &tv_delta);
+	compute_average(pcc_delta, pcc_average);
+	if (!retval)
+		print_counters(pcc_delta);
+
+	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
+
+	return 0;
+}
+
+void cmdline(int argc, char **argv)
+{
+	int opt;
+
+	progname = argv[0];
+
+	while ((opt = getopt(argc, argv, "+vi:M:")) != -1) {
+		switch (opt) {
+		case 'v':
+			verbose++;
+			break;
+		case 'i':
+			interval_sec = atoi(optarg);
+			break;
+		case 'M':
+			sscanf(optarg, "%x", &extra_msr_offset);
+			if (verbose > 1)
+				fprintf(stderr, "MSR 0x%X\n", extra_msr_offset);
+			break;
+		default:
+			usage();
+		}
+	}
+}
+
+int main(int argc, char **argv)
+{
+	cmdline(argc, argv);
+
+	if (verbose > 1)
+		fprintf(stderr, "turbostat Dec 6, 2010"
+			" - Len Brown <lenb@kernel.org>\n");
+	if (verbose > 1)
+		fprintf(stderr, "http://userweb.kernel.org/~lenb/acpi/utils/pmtools/turbostat/\n");
+
+	turbostat_init();
+
+	/*
+	 * if any params left, it must be a command to fork
+	 */
+	if (argc - optind)
+		return fork_it(argv + optind);
+	else
+		turbostat_loop();
+
+	return 0;
+}


* Re: [PATCH v4] tools: create power/x86/turbostat
  2010-12-07  6:00         ` [PATCH v4] " Len Brown
@ 2010-12-07 14:29           ` Thiago Farina
  2011-02-11  4:43             ` Len Brown
  0 siblings, 1 reply; 11+ messages in thread
From: Thiago Farina @ 2010-12-07 14:29 UTC (permalink / raw)
  To: Len Brown; +Cc: Greg Kroah-Hartman, linux-pm, x86, linux-kernel

On Tue, Dec 7, 2010 at 4:00 AM, Len Brown <lenb@kernel.org> wrote:
> From: Len Brown <len.brown@intel.com>
>
> turbostat is a tool to help verify proper operation
> of processor power management states on modern x86.
>
> Signed-off-by: Len Brown <len.brown@intel.com>
> ---
> v4: fix open file descriptor leak, reported by David C. Wong
>
>  tools/power/x86/turbostat/Makefile    |    8 +
>  tools/power/x86/turbostat/turbostat.8 |  172 ++++++
>  tools/power/x86/turbostat/turbostat.c | 1048 +++++++++++++++++++++++++++++++++
>  3 files changed, 1228 insertions(+), 0 deletions(-)
>
> diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
> new file mode 100644
> index 0000000..4c6983d
> --- /dev/null
> +++ b/tools/power/x86/turbostat/turbostat.c
> @@ -0,0 +1,1048 @@
> +/*
> + * turbostat -- show CPU frequency and C-state residency
> + * on modern Intel turbo-capable processors.
> + *
> + * Copyright (c) 2010, Intel Corporation.
> + * Len Brown <len.brown@intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
> + */
> +
> +#include <stdio.h>
> +#include <unistd.h>
> +#include <sys/types.h>
> +#include <sys/wait.h>
> +#include <sys/stat.h>
> +#include <sys/resource.h>
> +#include <fcntl.h>
> +#include <signal.h>
> +#include <sys/time.h>
> +#include <stdlib.h>
> +#include <dirent.h>
> +#include <string.h>
> +#include <ctype.h>
> +
> +#define MSR_TSC        0x10
> +#define MSR_NEHALEM_PLATFORM_INFO      0xCE
> +#define MSR_NEHALEM_TURBO_RATIO_LIMIT  0x1AD
> +#define MSR_APERF      0xE8
> +#define MSR_MPERF      0xE7
> +#define MSR_PKG_C2_RESIDENCY   0x60D   /* SNB only */
> +#define MSR_PKG_C3_RESIDENCY   0x3F8
> +#define MSR_PKG_C6_RESIDENCY   0x3F9
> +#define MSR_PKG_C7_RESIDENCY   0x3FA   /* SNB only */
> +#define MSR_CORE_C3_RESIDENCY  0x3FC
> +#define MSR_CORE_C6_RESIDENCY  0x3FD
> +#define MSR_CORE_C7_RESIDENCY  0x3FE   /* SNB only */
> +
> +char *proc_stat = "/proc/stat";
> +unsigned int interval_sec = 5; /* set with -i interval_sec */
> +unsigned int verbose;          /* set with -v */
> +unsigned int skip_c0;
> +unsigned int skip_c1;
> +unsigned int do_nhm_cstates;
> +unsigned int do_snb_cstates;
> +unsigned int has_aperf;
> +unsigned int units = 1000000000;       /* Ghz etc */
> +unsigned int genuine_intel;
> +unsigned int has_invariant_tsc;
> +unsigned int do_nehalem_platform_info;
> +unsigned int do_nehalem_turbo_ratio_limit;
> +unsigned int extra_msr_offset;
> +double bclk;
> +unsigned int show_pkg;
> +unsigned int show_core;
> +unsigned int show_cpu;
> +
> +int aperf_mperf_unstable;
> +int backwards_count;
> +char *progname;
> +int need_reinitialize;
> +
> +int num_cpus;
> +
> +typedef struct per_cpu_counters {
> +       unsigned long long tsc;         /* per thread */
> +       unsigned long long aperf;       /* per thread */
> +       unsigned long long mperf;       /* per thread */
> +       unsigned long long c1;  /* per thread (calculated) */
> +       unsigned long long c3;  /* per core */
> +       unsigned long long c6;  /* per core */
> +       unsigned long long c7;  /* per core */
> +       unsigned long long pc2; /* per package */
> +       unsigned long long pc3; /* per package */
> +       unsigned long long pc6; /* per package */
> +       unsigned long long pc7; /* per package */
> +       unsigned long long extra_msr;   /* per thread */
> +       int pkg;
> +       int core;
> +       int cpu;
> +       struct per_cpu_counters *next;
> +} PCC;
> +

Why the typedef? Isn't it preferable to use struct per_cpu_counters
throughout? (Yes, I know it requires more typing, but it is less
obfuscated.)
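
For example, something like this (an untested sketch, just to
illustrate the suggestion):

struct counters {
	unsigned long long tsc;		/* per thread */
	/* ... remaining counters ... */
	struct counters *next;
};

struct counters *cnt_even;
struct counters *cnt_odd;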

> +PCC *pcc_even;
> +PCC *pcc_odd;
> +PCC *pcc_delta;
> +PCC *pcc_average;
> +struct timeval tv_even;
> +struct timeval tv_odd;
> +struct timeval tv_delta;
> +
> +unsigned long long get_msr(int cpu, off_t offset)
> +{
> +       ssize_t retval;
> +       unsigned long long msr;
> +       char pathname[32];
> +       int fd;
> +
> +       sprintf(pathname, "/dev/cpu/%d/msr", cpu);
> +       fd = open(pathname, O_RDONLY);
> +       if (fd < 0) {
> +               perror(pathname);
> +               need_reinitialize = 1;
> +               return 0;
> +       }
> +
> +       retval = pread(fd, &msr, sizeof msr, offset);
> +       if (retval != sizeof msr) {
> +               fprintf(stderr, "cpu%d pread(..., 0x%zx) = %jd\n",
> +                       cpu, offset, retval);
> +               exit(-2);
> +       }
> +
> +       close(fd);
> +       return msr;
> +}
> +
> +void print_header()
Use void in the parameter list? Also make this static?
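
e.g. a declaration along the lines of (sketch):

	static void print_header(void);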

> +
> +#define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
> +
> +
Remove this extra blank line?

> +
> +void get_counters(PCC *pcc)
> +{
> +       for ( ; pcc; pcc = pcc->next) {
> +               pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
> +               if (do_nhm_cstates)
> +                       pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
> +               if (do_nhm_cstates)
> +                       pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
> +               if (do_snb_cstates)
> +                       pcc->c7 = get_msr(pcc->cpu, MSR_CORE_C7_RESIDENCY);
> +               if (has_aperf)
> +                       pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
> +               if (has_aperf)
> +                       pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
> +               if (do_snb_cstates)
> +                       pcc->pc2 = get_msr(pcc->cpu, MSR_PKG_C2_RESIDENCY);
> +               if (do_nhm_cstates)
> +                       pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
> +               if (do_nhm_cstates)
> +                       pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
> +               if (do_snb_cstates)
> +                       pcc->pc7 = get_msr(pcc->cpu, MSR_PKG_C7_RESIDENCY);
> +               if (extra_msr_offset)
> +                       pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
> +       }
> +}
> +
> +
Remove this blank line too?

> +void print_nehalem_info()
> +{
Add void to the parameter list?

> +       unsigned long long msr;
> +       unsigned int ratio;
> +
> +       if (!do_nehalem_platform_info)
> +               return;
> +
> +       msr = get_msr(0, MSR_NEHALEM_PLATFORM_INFO);
> +
> +       ratio = (msr >> 40) & 0xFF;
> +       fprintf(stderr, "%d * %.0f = %.0f MHz max efficiency\n",
> +               ratio, bclk, ratio * bclk);
> +
> +       ratio = (msr >> 8) & 0xFF;
> +       fprintf(stderr, "%d * %.0f = %.0f MHz TSC frequency\n",
> +               ratio, bclk, ratio * bclk);
> +
> +       if (verbose > 1)
> +               fprintf(stderr, "MSR_NEHALEM_PLATFORM_INFO: 0x%llx\n", msr);
> +
> +       if (!do_nehalem_turbo_ratio_limit)
> +               return;
> +
> +       msr = get_msr(0, MSR_NEHALEM_TURBO_RATIO_LIMIT);
> +
> +       ratio = (msr >> 24) & 0xFF;
> +       if (ratio)
> +               fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 4 active cores\n",
> +                       ratio, bclk, ratio * bclk);
> +
> +       ratio = (msr >> 16) & 0xFF;
> +       if (ratio)
> +               fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 3 active cores\n",
> +                       ratio, bclk, ratio * bclk);
> +
> +       ratio = (msr >> 8) & 0xFF;
> +       if (ratio)
> +               fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 2 active cores\n",
> +                       ratio, bclk, ratio * bclk);
> +
> +       ratio = (msr >> 0) & 0xFF;
> +       if (ratio)
> +               fprintf(stderr, "%d * %.0f = %.0f MHz max turbo 1 active cores\n",
> +                       ratio, bclk, ratio * bclk);
> +
> +}
> +
> +void free_counter_list(PCC *list)
> +{
> +       PCC *p;
> +
> +       for (p = list; p; ) {
> +               PCC *free_me;
> +
> +               free_me = p;
> +               p = p->next;
> +               free(free_me);
> +       }
> +       return;
Why the return here? Isn't it redundant?

> +
> +void insert_cpu_counters(PCC **list, PCC *new)
> +{
> +       PCC *prev;
> +
> +       /*
> +        * list was empty
> +        */
> +       if (*list == NULL) {
> +               new->next = *list;
> +               *list = new;
> +               return;
> +       }
> +
> +       show_cpu = 1;   /* there is more than one CPU */
> +
> +       /*
> +        * insert on front of list.
> +        * It is sorted by ascending package#, core#, cpu#
> +        */
> +       if (((*list)->pkg > new->pkg) ||
> +           (((*list)->pkg == new->pkg) && ((*list)->core > new->core)) ||
> +           (((*list)->pkg == new->pkg) && ((*list)->core == new->core) && ((*list)->cpu > new->cpu))) {
> +               new->next = *list;
> +               *list = new;
> +               return;
> +       }
> +
> +       prev = *list;
> +
> +       while (prev->next && (prev->next->pkg < new->pkg)) {
> +               prev = prev->next;
> +               show_pkg = 1;   /* there is more than 1 package */
> +       }
> +
> +       while (prev->next && (prev->next->pkg == new->pkg)
> +               && (prev->next->core < new->core)) {
> +               prev = prev->next;
> +               show_core = 1;  /* there is more than 1 core */
> +       }
> +
> +       while (prev->next && (prev->next->pkg == new->pkg)
> +               && (prev->next->core == new->core)
> +               && (prev->next->cpu < new->cpu)) {
> +               prev = prev->next;
> +       }
> +
> +       /*
> +        * insert after "prev"
> +        */
> +       new->next = prev->next;
> +       prev->next = new;
> +
> +       return;
Same thing here about the return.

> +
> +int get_physical_package_id(int cpu)
> +{
> +       char path[64];
> +       FILE *filep;
> +       int pkg;
> +
> +       sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
> +       filep = fopen(path, "r");
> +       if (filep == NULL) {
> +               perror(path);
> +               exit(1);
> +       }
> +       fscanf(filep, "%d", &pkg);
> +       fclose(filep);
> +       return pkg;
> +}
> +
> +int get_core_id(int cpu)
> +{
> +       char path[64];
> +       FILE *filep;
> +       int core;
> +
> +       sprintf(path, "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
> +       filep = fopen(path, "r");
> +       if (filep == NULL) {
> +               perror(path);
> +               exit(1);
> +       }
> +       fscanf(filep, "%d", &core);
> +       fclose(filep);
> +       return core;
> +}
> +
> +/*
> + * run func(index, cpu) on every cpu in /proc/stat
> + */
> +
> +int for_all_cpus(void (func)(int, int, int))
> +{
> +       FILE *fp;
> +       int cpu_count;
> +       int retval;
> +
> +       fp = fopen(proc_stat, "r");
> +       if (fp == NULL) {
> +               perror(proc_stat);
> +               exit(1);
> +       }
> +
> +       retval = fscanf(fp, "cpu %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n");
> +       if (retval != 0) {
> +               perror("/proc/stat format");
> +               exit(1);
> +       }
> +
> +       for (cpu_count = 0; ; cpu_count++) {
> +               int cpu;
> +
> +               retval = fscanf(fp, "cpu%u %*d %*d %*d %*d %*d %*d %*d %*d %*d %*d\n", &cpu);
> +               if (retval != 1)
> +                       break;
> +
> +               func(get_physical_package_id(cpu), get_core_id(cpu), cpu);
> +       }
> +       fclose(fp);
> +       return cpu_count;
> +}
> +
> +void re_initialize(void)
> +{
> +       printf("turbostat: topology changed, re-initializing.\n");
> +       free_all_counters();
> +       num_cpus = for_all_cpus(alloc_new_cpu_counters);
> +       need_reinitialize = 0;
> +       printf("num_cpus is now %d\n", num_cpus);
> +}
> +
> +void dummy(int pkg, int core, int cpu) { return; }
> +/*
> + * check to see if a cpu came on-line
> + */
> +void verify_num_cpus()
Use void in the parameter list?

> +{
> +       int new_num_cpus;
> +
> +       new_num_cpus = for_all_cpus(dummy);
> +
> +       if (new_num_cpus != num_cpus) {
> +               if (verbose)
> +                       printf("num_cpus was %d, is now  %d\n",
> +                               num_cpus, new_num_cpus);
> +               need_reinitialize = 1;
> +       }
> +
> +       return;
> +}
No need for this return, right?


* Re: [PATCH v4] tools: create power/x86/turbostat
  2010-12-07 14:29           ` Thiago Farina
@ 2011-02-11  4:43             ` Len Brown
  0 siblings, 0 replies; 11+ messages in thread
From: Len Brown @ 2011-02-11  4:43 UTC (permalink / raw)
  To: Thiago Farina; +Cc: linux-pm, x86, linux-kernel

From: Len Brown <len.brown@intel.com>

Follow kernel style more closely.
Nuke the typedef, rename "per cpu counters" simply to "counters", etc.

This patch changes no functionality.

Suggested-by: Thiago Farina <tfransosi@gmail.com>
Signed-off-by: Len Brown <len.brown@intel.com>
---
Thanks for the feedback, Thiago.

-Len

 tools/power/x86/turbostat/turbostat.c |  196 ++++++++++++++++-----------------
 1 files changed, 96 insertions(+), 100 deletions(-)

diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
index 4c6983d..87d309e 100644
--- a/tools/power/x86/turbostat/turbostat.c
+++ b/tools/power/x86/turbostat/turbostat.c
@@ -72,7 +72,7 @@ int need_reinitialize;
 
 int num_cpus;
 
-typedef struct per_cpu_counters {
+struct counters {
 	unsigned long long tsc;		/* per thread */
 	unsigned long long aperf;	/* per thread */
 	unsigned long long mperf;	/* per thread */
@@ -88,13 +88,13 @@ typedef struct per_cpu_counters {
 	int pkg;
 	int core;
 	int cpu;
-	struct per_cpu_counters *next;
-} PCC;
+	struct counters *next;
+};
 
-PCC *pcc_even;
-PCC *pcc_odd;
-PCC *pcc_delta;
-PCC *pcc_average;
+struct counters *cnt_even;
+struct counters *cnt_odd;
+struct counters *cnt_delta;
+struct counters *cnt_average;
 struct timeval tv_even;
 struct timeval tv_odd;
 struct timeval tv_delta;
@@ -125,7 +125,7 @@ unsigned long long get_msr(int cpu, off_t offset)
 	return msr;
 }
 
-void print_header()
+void print_header(void)
 {
 	if (show_pkg)
 		fprintf(stderr, "pkg ");
@@ -160,39 +160,39 @@ void print_header()
 	putc('\n', stderr);
 }
 
-void dump_pcc(PCC *pcc)
+void dump_cnt(struct counters *cnt)
 {
-	fprintf(stderr, "package: %d ", pcc->pkg);
-	fprintf(stderr, "core:: %d ", pcc->core);
-	fprintf(stderr, "CPU: %d ", pcc->cpu);
-	fprintf(stderr, "TSC: %016llX\n", pcc->tsc);
-	fprintf(stderr, "c3: %016llX\n", pcc->c3);
-	fprintf(stderr, "c6: %016llX\n", pcc->c6);
-	fprintf(stderr, "c7: %016llX\n", pcc->c7);
-	fprintf(stderr, "aperf: %016llX\n", pcc->aperf);
-	fprintf(stderr, "pc2: %016llX\n", pcc->pc2);
-	fprintf(stderr, "pc3: %016llX\n", pcc->pc3);
-	fprintf(stderr, "pc6: %016llX\n", pcc->pc6);
-	fprintf(stderr, "pc7: %016llX\n", pcc->pc7);
-	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, pcc->extra_msr);
+	fprintf(stderr, "package: %d ", cnt->pkg);
+	fprintf(stderr, "core:: %d ", cnt->core);
+	fprintf(stderr, "CPU: %d ", cnt->cpu);
+	fprintf(stderr, "TSC: %016llX\n", cnt->tsc);
+	fprintf(stderr, "c3: %016llX\n", cnt->c3);
+	fprintf(stderr, "c6: %016llX\n", cnt->c6);
+	fprintf(stderr, "c7: %016llX\n", cnt->c7);
+	fprintf(stderr, "aperf: %016llX\n", cnt->aperf);
+	fprintf(stderr, "pc2: %016llX\n", cnt->pc2);
+	fprintf(stderr, "pc3: %016llX\n", cnt->pc3);
+	fprintf(stderr, "pc6: %016llX\n", cnt->pc6);
+	fprintf(stderr, "pc7: %016llX\n", cnt->pc7);
+	fprintf(stderr, "msr0x%x: %016llX\n", extra_msr_offset, cnt->extra_msr);
 }
 
-void dump_list(PCC *pcc)
+void dump_list(struct counters *cnt)
 {
-	printf("dump_list 0x%p\n", pcc);
+	printf("dump_list 0x%p\n", cnt);
 
-	for (; pcc; pcc = pcc->next)
-		dump_pcc(pcc);
+	for (; cnt; cnt = cnt->next)
+		dump_cnt(cnt);
 }
 
-void print_pcc(PCC *p)
+void print_cnt(struct counters *p)
 {
 	double interval_float;
 
 	interval_float = tv_delta.tv_sec + tv_delta.tv_usec/1000000.0;
 
 	/* topology columns, print blanks on 1st (average) line */
-	if (p == pcc_average) {
+	if (p == cnt_average) {
 		if (show_pkg)
 			fprintf(stderr, "    ");
 		if (show_core)
@@ -262,24 +262,24 @@ void print_pcc(PCC *p)
 	putc('\n', stderr);
 }
 
-void print_counters(PCC *cnt)
+void print_counters(struct counters *counters)
 {
-	PCC *pcc;
+	struct counters *cnt;
 
 	print_header();
 
 	if (num_cpus > 1)
-		print_pcc(pcc_average);
+		print_cnt(cnt_average);
 
-	for (pcc = cnt; pcc != NULL; pcc = pcc->next)
-		print_pcc(pcc);
+	for (cnt = counters; cnt != NULL; cnt = cnt->next)
+		print_cnt(cnt);
 
 }
 
 #define SUBTRACT_COUNTER(after, before, delta) (delta = (after - before), (before > after))
 
-
-int compute_delta(PCC *after, PCC *before, PCC *delta)
+int compute_delta(struct counters *after,
+	struct counters *before, struct counters *delta)
 {
 	int errors = 0;
 	int perf_err = 0;
@@ -391,20 +391,20 @@ int compute_delta(PCC *after, PCC *before, PCC *delta)
 		delta->extra_msr = after->extra_msr;
 		if (errors) {
 			fprintf(stderr, "ERROR cpu%d before:\n", before->cpu);
-			dump_pcc(before);
+			dump_cnt(before);
 			fprintf(stderr, "ERROR cpu%d after:\n", before->cpu);
-			dump_pcc(after);
+			dump_cnt(after);
 			errors = 0;
 		}
 	}
 	return 0;
 }
 
-void compute_average(PCC *delta, PCC *avg)
+void compute_average(struct counters *delta, struct counters *avg)
 {
-	PCC *sum;
+	struct counters *sum;
 
-	sum = calloc(1, sizeof(PCC));
+	sum = calloc(1, sizeof(struct counters));
 	if (sum == NULL) {
 		perror("calloc sum");
 		exit(1);
@@ -438,35 +438,34 @@ void compute_average(PCC *delta, PCC *avg)
 	free(sum);
 }
 
-void get_counters(PCC *pcc)
+void get_counters(struct counters *cnt)
 {
-	for ( ; pcc; pcc = pcc->next) {
-		pcc->tsc = get_msr(pcc->cpu, MSR_TSC);
+	for ( ; cnt; cnt = cnt->next) {
+		cnt->tsc = get_msr(cnt->cpu, MSR_TSC);
 		if (do_nhm_cstates)
-			pcc->c3 = get_msr(pcc->cpu, MSR_CORE_C3_RESIDENCY);
+			cnt->c3 = get_msr(cnt->cpu, MSR_CORE_C3_RESIDENCY);
 		if (do_nhm_cstates)
-			pcc->c6 = get_msr(pcc->cpu, MSR_CORE_C6_RESIDENCY);
+			cnt->c6 = get_msr(cnt->cpu, MSR_CORE_C6_RESIDENCY);
 		if (do_snb_cstates)
-			pcc->c7 = get_msr(pcc->cpu, MSR_CORE_C7_RESIDENCY);
+			cnt->c7 = get_msr(cnt->cpu, MSR_CORE_C7_RESIDENCY);
 		if (has_aperf)
-			pcc->aperf = get_msr(pcc->cpu, MSR_APERF);
+			cnt->aperf = get_msr(cnt->cpu, MSR_APERF);
 		if (has_aperf)
-			pcc->mperf = get_msr(pcc->cpu, MSR_MPERF);
+			cnt->mperf = get_msr(cnt->cpu, MSR_MPERF);
 		if (do_snb_cstates)
-			pcc->pc2 = get_msr(pcc->cpu, MSR_PKG_C2_RESIDENCY);
+			cnt->pc2 = get_msr(cnt->cpu, MSR_PKG_C2_RESIDENCY);
 		if (do_nhm_cstates)
-			pcc->pc3 = get_msr(pcc->cpu, MSR_PKG_C3_RESIDENCY);
+			cnt->pc3 = get_msr(cnt->cpu, MSR_PKG_C3_RESIDENCY);
 		if (do_nhm_cstates)
-			pcc->pc6 = get_msr(pcc->cpu, MSR_PKG_C6_RESIDENCY);
+			cnt->pc6 = get_msr(cnt->cpu, MSR_PKG_C6_RESIDENCY);
 		if (do_snb_cstates)
-			pcc->pc7 = get_msr(pcc->cpu, MSR_PKG_C7_RESIDENCY);
+			cnt->pc7 = get_msr(cnt->cpu, MSR_PKG_C7_RESIDENCY);
 		if (extra_msr_offset)
-			pcc->extra_msr = get_msr(pcc->cpu, extra_msr_offset);
+			cnt->extra_msr = get_msr(cnt->cpu, extra_msr_offset);
 	}
 }
 
-
-void print_nehalem_info()
+void print_nehalem_info(void)
 {
 	unsigned long long msr;
 	unsigned int ratio;
@@ -514,38 +513,38 @@ void print_nehalem_info()
 
 }
 
-void free_counter_list(PCC *list)
+void free_counter_list(struct counters *list)
 {
-	PCC *p;
+	struct counters *p;
 
 	for (p = list; p; ) {
-		PCC *free_me;
+		struct counters *free_me;
 
 		free_me = p;
 		p = p->next;
 		free(free_me);
 	}
-	return;
 }
 
 void free_all_counters(void)
 {
-	free_counter_list(pcc_even);
-	pcc_even = NULL;
+	free_counter_list(cnt_even);
+	cnt_even = NULL;
 
-	free_counter_list(pcc_odd);
-	pcc_odd = NULL;
+	free_counter_list(cnt_odd);
+	cnt_odd = NULL;
 
-	free_counter_list(pcc_delta);
-	pcc_delta = NULL;
+	free_counter_list(cnt_delta);
+	cnt_delta = NULL;
 
-	free_counter_list(pcc_average);
-	pcc_average = NULL;
+	free_counter_list(cnt_average);
+	cnt_average = NULL;
 }
 
-void insert_cpu_counters(PCC **list, PCC *new)
+void insert_counters(struct counters **list,
+	struct counters *new)
 {
-	PCC *prev;
+	struct counters *prev;
 
 	/*
 	 * list was empty
@@ -594,18 +593,16 @@ void insert_cpu_counters(PCC **list, PCC *new)
 	 */
 	new->next = prev->next;
 	prev->next = new;
-
-	return;
 }
 
-void alloc_new_cpu_counters(int pkg, int core, int cpu)
+void alloc_new_counters(int pkg, int core, int cpu)
 {
-	PCC *new;
+	struct counters *new;
 
 	if (verbose > 1)
 		printf("pkg%d core%d, cpu%d\n", pkg, core, cpu);
 
-	new = (PCC *)calloc(1, sizeof(PCC));
+	new = (struct counters *)calloc(1, sizeof(struct counters));
 	if (new == NULL) {
 		perror("calloc");
 		exit(1);
@@ -613,9 +610,10 @@ void alloc_new_cpu_counters(int pkg, int core, int cpu)
 	new->pkg = pkg;
 	new->core = core;
 	new->cpu = cpu;
-	insert_cpu_counters(&pcc_odd, new);
+	insert_counters(&cnt_odd, new);
 
-	new = (PCC *)calloc(1, sizeof(PCC));
+	new = (struct counters *)calloc(1,
+		sizeof(struct counters));
 	if (new == NULL) {
 		perror("calloc");
 		exit(1);
@@ -623,9 +621,9 @@ void alloc_new_cpu_counters(int pkg, int core, int cpu)
 	new->pkg = pkg;
 	new->core = core;
 	new->cpu = cpu;
-	insert_cpu_counters(&pcc_even, new);
+	insert_counters(&cnt_even, new);
 
-	new = (PCC *)calloc(1, sizeof(PCC));
+	new = (struct counters *)calloc(1, sizeof(struct counters));
 	if (new == NULL) {
 		perror("calloc");
 		exit(1);
@@ -633,9 +631,9 @@ void alloc_new_cpu_counters(int pkg, int core, int cpu)
 	new->pkg = pkg;
 	new->core = core;
 	new->cpu = cpu;
-	insert_cpu_counters(&pcc_delta, new);
+	insert_counters(&cnt_delta, new);
 
-	new = (PCC *)calloc(1, sizeof(PCC));
+	new = (struct counters *)calloc(1, sizeof(struct counters));
 	if (new == NULL) {
 		perror("calloc");
 		exit(1);
@@ -643,7 +641,7 @@ void alloc_new_cpu_counters(int pkg, int core, int cpu)
 	new->pkg = pkg;
 	new->core = core;
 	new->cpu = cpu;
-	pcc_average = new;
+	cnt_average = new;
 }
 
 int get_physical_package_id(int cpu)
@@ -719,7 +717,7 @@ void re_initialize(void)
 {
 	printf("turbostat: topology changed, re-initializing.\n");
 	free_all_counters();
-	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	num_cpus = for_all_cpus(alloc_new_counters);
 	need_reinitialize = 0;
 	printf("num_cpus is now %d\n", num_cpus);
 }
@@ -728,7 +726,7 @@ void dummy(int pkg, int core, int cpu) { return; }
 /*
  * check to see if a cpu came on-line
  */
-void verify_num_cpus()
+void verify_num_cpus(void)
 {
 	int new_num_cpus;
 
@@ -740,14 +738,12 @@ void verify_num_cpus()
 				num_cpus, new_num_cpus);
 		need_reinitialize = 1;
 	}
-
-	return;
 }
 
 void turbostat_loop()
 {
 restart:
-	get_counters(pcc_even);
+	get_counters(cnt_even);
 	gettimeofday(&tv_even, (struct timezone *)NULL);
 
 	while (1) {
@@ -757,24 +753,24 @@ restart:
 			goto restart;
 		}
 		sleep(interval_sec);
-		get_counters(pcc_odd);
+		get_counters(cnt_odd);
 		gettimeofday(&tv_odd, (struct timezone *)NULL);
 
-		compute_delta(pcc_odd, pcc_even, pcc_delta);
+		compute_delta(cnt_odd, cnt_even, cnt_delta);
 		timersub(&tv_odd, &tv_even, &tv_delta);
-		compute_average(pcc_delta, pcc_average);
-		print_counters(pcc_delta);
+		compute_average(cnt_delta, cnt_average);
+		print_counters(cnt_delta);
 		if (need_reinitialize) {
 			re_initialize();
 			goto restart;
 		}
 		sleep(interval_sec);
-		get_counters(pcc_even);
+		get_counters(cnt_even);
 		gettimeofday(&tv_even, (struct timezone *)NULL);
-		compute_delta(pcc_even, pcc_odd, pcc_delta);
+		compute_delta(cnt_even, cnt_odd, cnt_delta);
 		timersub(&tv_even, &tv_odd, &tv_delta);
-		compute_average(pcc_delta, pcc_average);
-		print_counters(pcc_delta);
+		compute_average(cnt_delta, cnt_average);
+		print_counters(cnt_delta);
 	}
 }
 
@@ -952,7 +948,7 @@ void turbostat_init()
 	check_dev_msr();
 	check_super_user();
 
-	num_cpus = for_all_cpus(alloc_new_cpu_counters);
+	num_cpus = for_all_cpus(alloc_new_counters);
 
 	if (verbose)
 		print_nehalem_info();
@@ -962,7 +958,7 @@ int fork_it(char **argv)
 {
 	int retval;
 	pid_t child_pid;
-	get_counters(pcc_even);
+	get_counters(cnt_even);
 	gettimeofday(&tv_even, (struct timezone *)NULL);
 
 	child_pid = fork();
@@ -985,14 +981,14 @@ int fork_it(char **argv)
 			exit(1);
 		}
 	}
-	get_counters(pcc_odd);
+	get_counters(cnt_odd);
 	gettimeofday(&tv_odd, (struct timezone *)NULL);
-	retval = compute_delta(pcc_odd, pcc_even, pcc_delta);
+	retval = compute_delta(cnt_odd, cnt_even, cnt_delta);
 
 	timersub(&tv_odd, &tv_even, &tv_delta);
-	compute_average(pcc_delta, pcc_average);
+	compute_average(cnt_delta, cnt_average);
 	if (!retval)
-		print_counters(pcc_delta);
+		print_counters(cnt_delta);
 
 	fprintf(stderr, "%.6f sec\n", tv_delta.tv_sec + tv_delta.tv_usec/1000000.0);;
 
-- 
1.7.4.45.g1a9f



end of thread, other threads:[~2011-02-11  4:43 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-10-23  5:31 [PATCH] tools: add power/x86/turbostat Len Brown
2010-10-24  4:06 ` [PATCH] tools: add power/x86/turbostat - v2 Len Brown
2010-10-27  3:14   ` [PATCH RESEND] " Len Brown
2010-11-15 16:01     ` Len Brown
2010-11-15 17:16       ` Otavio Salvador
2010-11-16 10:29       ` Andreas Herrmann
2010-11-17  6:44         ` Len Brown
2010-11-24  4:58       ` [PATCH v3] tools: create power/x86/turbostat Len Brown
2010-12-07  6:00         ` [PATCH v4] " Len Brown
2010-12-07 14:29           ` Thiago Farina
2011-02-11  4:43             ` Len Brown
