linux-kernel.vger.kernel.org archive mirror
* Add top down metrics to perf stat
@ 2015-08-08  1:06 Andi Kleen
  2015-08-08  1:06 ` [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error Andi Kleen
                   ` (8 more replies)
  0 siblings, 9 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo

This patchkit adds support for TopDown to perf stat.
It applies on top of my earlier metrics patchkit, posted
separately.

TopDown is intended to replace the frontend cycles idle/
backend cycles idle metrics in standard perf stat output.
These metrics are not reliable in many workloads, 
due to out of order effects.

This implements a new --topdown mode in perf stat
(similar to --transaction) that measures the pipeline
bottlenecks using standardized formulas. The measurement
can all be done with 5 counters (one of them a fixed counter).

The result is four metrics:
FrontendBound, BackendBound, BadSpeculation, Retiring

that describe the CPU pipeline behavior on a high level.

FrontendBound and BackendBound point at the pipeline stage that
limits the workload; BadSpeculation is a higher level indicator of
slots wasted on misspeculated work that never retires.

The full top down methodology has many hierarchical metrics.
This implementation only supports level 1, which can be
collected without multiplexing. A full implementation
of top down on top of perf is available in pmu-tools toplev.
(http://github.com/andikleen/pmu-tools)

The current version works on Intel Core CPUs starting
with Sandy Bridge, and Atom CPUs starting with Silvermont.
In principle the generic metrics should also be implementable
on other out of order CPUs.

TopDown level 1 uses a set of abstracted metrics which
are generic to out of order CPU cores (although some
CPUs may not implement all of them):
    
topdown-total-slots       Available slots in the pipeline
topdown-slots-issued      Slots issued into the pipeline
topdown-slots-retired     Slots successfully retired
topdown-fetch-bubbles     Pipeline gaps in the frontend
topdown-recovery-bubbles  Pipeline gaps during recovery
                          from misspeculation
    
These events then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.
    
The formulas to compute the metrics are generic; they
only change based on the availability of the abstracted
input values.
    
The kernel declares the events supported by the current
CPU and perf stat then computes the formulas based on the
available metrics.
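
For reference, a sketch of the level 1 formulas as implemented later
in this series (names shortened; each input stands for the
corresponding topdown-* event count):

    FrontendBound  = fetch_bubbles / total_slots
    BadSpeculation = (slots_issued - slots_retired + recovery_bubbles) / total_slots
    Retiring       = slots_retired / total_slots
    BackendBound   = 1.0 - FrontendBound - BadSpeculation - Retiring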


Example output:

$ ./perf stat --topdown -a ./BC1s 

 Performance counter stats for 'system wide':

S0-C0           2           19650790      topdown-total-slots                                           (100.00%)
S0-C0           2         4445680.00      topdown-fetch-bubbles     #    22.62% frontend bound          (100.00%)
S0-C0           2         1743552.00      topdown-slots-retired                                         (100.00%)
S0-C0           2             622954      topdown-recovery-bubbles                                      (100.00%)
S0-C0           2         2025498.00      topdown-slots-issued      #    63.90% backend bound         
S0-C1           2        16685216540      topdown-total-slots                                           (100.00%)
S0-C1           2       962557931.00      topdown-fetch-bubbles                                         (100.00%)
S0-C1           2      4175583320.00      topdown-slots-retired                                         (100.00%)
S0-C1           2         1743329246      topdown-recovery-bubbles  #    22.22% bad speculation         (100.00%)
S0-C1           2      6138901193.50      topdown-slots-issued      #    46.99% backend bound         

       1.535832673 seconds time elapsed
 
On Hyper Threaded CPUs Top Down computes metrics per core instead of
per logical CPU. In this case perf stat automatically enables --per-core
mode; this also requires global mode (-a) and no other filters
(no cgroup mode).

One side effect is that this may require root rights or a
kernel.perf_event_paranoid=-1 setting.  

On systems without Hyper Threading it can be used per process.
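
For example (reusing the BC1s workload from above, on a non-HT system):

$ ./perf stat --topdown ./BC1s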

Full tree available in 
git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/top-down-2



* [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-11 13:07   ` Jiri Olsa
  2015-08-08  1:06 ` [PATCH 2/9] perf, tools, stat: Support up-scaling of events Andi Kleen
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

When an error happens during alias parsing, currently the complete
parsing of all attributes of the PMU is stopped. This breaks
old perf on a newer kernel that may have not-yet-known
alias attributes (such as .scale or .per-pkg).

Continue when some attribute is unparseable.

This is IMHO a stable candidate and should be backported
to older versions to avoid problems with newer kernels.
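
For illustration (this is the failure mode discussed later in this
thread): with a kernel exporting a not-yet-known attribute such as
.agg-per-core, an unpatched perf fails for any cpu// event:

% perf stat -e cpu/event=0x3c/ true
invalid or unsupported event: 'cpu/event=0x3c/'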

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/pmu.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index d4b0e64..ce56354 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -283,13 +283,12 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
 {
 	struct dirent *evt_ent;
 	DIR *event_dir;
-	int ret = 0;
 
 	event_dir = opendir(dir);
 	if (!event_dir)
 		return -EINVAL;
 
-	while (!ret && (evt_ent = readdir(event_dir))) {
+	while ((evt_ent = readdir(event_dir))) {
 		char path[PATH_MAX];
 		char *name = evt_ent->d_name;
 		FILE *file;
@@ -305,17 +304,16 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
 
 		snprintf(path, PATH_MAX, "%s/%s", dir, name);
 
-		ret = -EINVAL;
 		file = fopen(path, "r");
 		if (!file)
-			break;
+			continue;
 
-		ret = perf_pmu__new_alias(head, dir, name, file);
+		perf_pmu__new_alias(head, dir, name, file);
 		fclose(file);
 	}
 
 	closedir(event_dir);
-	return ret;
+	return 0;
 }
 
 /*
-- 
2.4.3



* [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
  2015-08-08  1:06 ` [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-11 13:25   ` Jiri Olsa
  2015-08-08  1:06 ` [PATCH 3/9] perf, tools, stat: Basic support for TopDown in perf stat Andi Kleen
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

TopDown needs to multiply events by constants (for example
the CPU Pipeline Width) to get the correct results.
The kernel needs to export this factor.

Today *.scale is only used to scale down metrics (divide), for example
to scale bytes to MB.

Repurpose negative scale to mean scaling up, that is multiplying.
Implement the code for this in perf stat.
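
A worked illustration of the semantics implemented by scale_val()
below (counts are made up):

	.scale = -4:    uval = 1000 * 4 = 4000		/* scale up */
	.scale = 1000:  uval = 1000000 / 1000 = 1000	/* scale down */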

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-stat.c | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index ea5298a..2590c75 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -179,6 +179,17 @@ static inline int nsec_counter(struct perf_evsel *evsel)
 	return 0;
 }
 
+static double scale_val(struct perf_evsel *counter, u64 val)
+{
+	double uval = val;
+
+	if (counter->scale < 0)
+		uval = val * (-counter->scale);
+	else if (counter->scale)
+		uval = val / counter->scale;
+	return uval;
+}
+
 /*
  * Read out the results of a single counter:
  * do not aggregate counts across CPUs in system-wide mode
@@ -630,12 +641,12 @@ static void abs_printout(int id, int nr, struct perf_evsel *evsel, double avg)
 	const char *fmt;
 
 	if (csv_output) {
-		fmt = sc != 1.0 ?  "%.2f%s" : "%.0f%s";
+		fmt = (sc != 1.0 && sc > 0) ?  "%.2f%s" : "%.0f%s";
 	} else {
 		if (big_num)
-			fmt = sc != 1.0 ? "%'18.2f%s" : "%'18.0f%s";
+			fmt = (sc != 1.0 && sc > 0) ? "%'18.2f%s" : "%'18.0f%s";
 		else
-			fmt = sc != 1.0 ? "%18.2f%s" : "%18.0f%s";
+			fmt = (sc != 1.0 && sc > 0) ? "%18.2f%s" : "%18.0f%s";
 	}
 
 	aggr_printout(evsel, id, nr);
@@ -750,7 +761,7 @@ static void aggr_update_shadow(void)
 					continue;
 				val += perf_counts(counter->counts, cpu, 0)->val;
 			}
-			val = val * counter->scale;
+			val = scale_val(counter, val);
 			perf_stat__update_shadow_stats(counter, &val,
 						       first_shadow_cpu(counter, id));
 		}
@@ -788,7 +799,7 @@ static void print_aggr(char *prefix)
 			if (prefix)
 				fprintf(output, "%s", prefix);
 
-			uval = val * counter->scale;
+			uval = scale_val(counter, val);
 			printout(id, nr, counter, uval, prefix, run, ena, 1.0);
 			fputc('\n', output);
 		}
@@ -815,7 +826,7 @@ static void print_aggr_thread(struct perf_evsel *counter, char *prefix)
 		if (prefix)
 			fprintf(output, "%s", prefix);
 
-		uval = val * counter->scale;
+		uval = scale_val(counter, val);
 		printout(thread, 0, counter, uval, prefix, run, ena, 1.0);
 		fputc('\n', output);
 	}
@@ -860,7 +871,7 @@ static void print_counter_aggr(struct perf_evsel *counter, char *prefix)
 		return;
 	}
 
-	uval = avg * counter->scale;
+	uval = scale_val(counter, avg);
 	printout(-1, 0, counter, uval, prefix, avg_running, avg_enabled, avg);
 	fprintf(output, "\n");
 }
@@ -884,7 +895,7 @@ static void print_counter(struct perf_evsel *counter, char *prefix)
 		if (prefix)
 			fprintf(output, "%s", prefix);
 
-		uval = val * counter->scale;
+		uval = scale_val(counter, val);
 		printout(cpu, 0, counter, uval, prefix, run, ena, 1.0);
 
 		fputc('\n', output);
-- 
2.4.3



* [PATCH 3/9] perf, tools, stat: Basic support for TopDown in perf stat
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
  2015-08-08  1:06 ` [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error Andi Kleen
  2015-08-08  1:06 ` [PATCH 2/9] perf, tools, stat: Support up-scaling of events Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 4/9] perf, tools, stat: Add computation of TopDown formulas Andi Kleen
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add basic plumbing for TopDown in perf stat.

Add a new --topdown option to enable the events.
When --topdown is specified, set up events for all topdown
events supported by the kernel.
Add topdown-* as a special case to the event parser, as is
needed for all events containing -.

The actual code to compute the metrics is in follow-on patches.
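
For illustration, on a CPU exporting all five events and with grouping
enabled, the string handed to the event parser would be:

	{topdown-total-slots,topdown-fetch-bubbles,topdown-slots-retired,topdown-recovery-bubbles,topdown-slots-issued}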

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/Documentation/perf-stat.txt |   8 +++
 tools/perf/builtin-stat.c              | 124 ++++++++++++++++++++++++++++++++-
 tools/perf/util/parse-events.l         |   1 +
 3 files changed, 131 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 47469ab..86c03e9 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -158,6 +158,14 @@ filter out the startup phase of the program, which is often very different.
 
 Print statistics of transactional execution if supported.
 
+--topdown::
+
+Print top down level 1 metrics if supported by the CPU. This allows
+determining bottlenecks in the CPU pipeline for CPU bound workloads,
+by breaking execution down into frontend bound, backend bound, bad
+speculation and retiring.  Specifying the option multiple times shows
+metrics even if they don't cross a threshold.
+
 EXAMPLES
 --------
 
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 2590c75..a83f26f 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -93,6 +93,15 @@ static const char * transaction_limited_attrs = {
 	"}"
 };
 
+static const char * topdown_attrs[] = {
+	"topdown-total-slots",
+	"topdown-fetch-bubbles",
+	"topdown-slots-retired",
+	"topdown-recovery-bubbles",
+	"topdown-slots-issued",
+	NULL,
+};
+
 static struct perf_evlist	*evsel_list;
 
 static struct target target = {
@@ -105,6 +114,7 @@ static volatile pid_t		child_pid			= -1;
 static bool			null_run			=  false;
 static int			detailed_run			=  0;
 static bool			transaction_run;
+static int			topdown_run			= 0;
 static bool			big_num				=  true;
 static int			big_num_opt			=  -1;
 static const char		*csv_sep			= NULL;
@@ -735,7 +745,8 @@ static void printout(int id, int nr, struct perf_evsel *counter, double uval,
 				first_shadow_cpu(counter, id),
 				pm,
 				nl,
-				&os);
+				&os,
+				topdown_run);
 
 	if (!csv_output) {
 		print_noise(counter, noise);
@@ -1093,12 +1104,90 @@ static int perf_stat_init_aggr_mode(void)
 	return 0;
 }
 
+static void filter_events(const char **attr, char **str, bool use_group)
+{
+	int off = 0;
+	int i;
+	int len = 0;
+	char *s;
+
+	for (i = 0; attr[i]; i++) {
+		if (pmu_have_event("cpu", attr[i])) {
+			len += strlen(attr[i]) + 1;
+			attr[i - off] = attr[i];
+		} else
+			off++;
+	}
+	attr[i - off] = NULL;
+
+	*str = malloc(len + 1 + 2);
+	if (!*str)
+		return;
+	s = *str;
+	if (i - off == 0) {
+		*s = 0;
+		return;
+	}
+	if (use_group)
+		*s++ = '{';
+	for (i = 0; attr[i]; i++) {
+		strcpy(s, attr[i]);
+		s += strlen(s);
+		*s++ = ',';
+	}
+	if (use_group) {
+		s[-1] = '}';
+		*s = 0;
+	} else
+		s[-1] = 0;
+}
+
+/* Caller must free result */
+static char *sysctl_read(const char *fn)
+{
+	int n;
+	char *line = NULL;
+	size_t linelen = 0;
+	FILE *f = fopen(fn, "r");
+	if (!f)
+		return NULL;
+	n = getline(&line, &linelen, f);
+	fclose(f);
+	if (n > 0)
+		return line;
+	free(line);
+	return NULL;
+}
+
+/*
+ * Check whether we can use a group for top down.
+ * Without a group we may get bad results.
+ */
+static bool check_group(bool *warn)
+{
+	char *v = sysctl_read("/proc/sys/kernel/nmi_watchdog");
+	int n;
+
+	*warn = false;
+	if (v) {
+		bool res = sscanf(v, "%d", &n) == 1 && n != 0;
+		free(v);
+		if (res) {
+			*warn = true;
+			return false;
+		}
+		return true;
+	}
+	return false; /* Don't know, so don't use group */
+}
+
 /*
  * Add default attributes, if there were no attributes specified or
  * if -d/--detailed, -d -d or -d -d -d is used:
  */
 static int add_default_attributes(void)
 {
+	int err;
 	struct perf_event_attr default_attrs[] = {
 
   { .type = PERF_TYPE_SOFTWARE, .config = PERF_COUNT_SW_TASK_CLOCK		},
@@ -1211,7 +1300,6 @@ static int add_default_attributes(void)
 		return 0;
 
 	if (transaction_run) {
-		int err;
 		if (pmu_have_event("cpu", "cycles-ct") &&
 		    pmu_have_event("cpu", "el-start"))
 			err = parse_events(evsel_list, transaction_attrs, NULL);
@@ -1224,6 +1312,36 @@ static int add_default_attributes(void)
 		return 0;
 	}
 
+	if (topdown_run) {
+		char *str = NULL;
+		bool warn;
+
+		filter_events(topdown_attrs, &str, check_group(&warn));
+		if (topdown_attrs[0] && str) {
+			if (warn)
+				fprintf(stderr,
+		"nmi_watchdog enabled with topdown. May give wrong results.\n"
+		"Disable with echo 0 > /proc/sys/kernel/nmi_watchdog\n");
+			err = parse_events(evsel_list, str, NULL);
+			if (err) {
+				fprintf(stderr,
+					"Cannot set up top down events %s: %d\n",
+					str, err);
+				free(str);
+				return -1;
+			}
+		} else {
+			fprintf(stderr, "System does not support topdown\n");
+			return -1;
+		}
+		free(str);
+		/*
+		 * Right now combining with the other attributes breaks group
+		 * semantics.
+		 */
+		return 0;
+	}
+
 	if (!evsel_list->nr_entries) {
 		if (perf_evlist__add_default_attrs(evsel_list, default_attrs) < 0)
 			return -1;
@@ -1260,6 +1378,8 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 	const struct option options[] = {
 	OPT_BOOLEAN('T', "transaction", &transaction_run,
 		    "hardware transaction statistics"),
+	OPT_INCR(0, "topdown", &topdown_run,
+		    "measure topdown level 1 statistics"),
 	OPT_CALLBACK('e', "event", &evsel_list, "event",
 		     "event selector. use 'perf list' to list available events",
 		     parse_events_option),
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index f542750..a3b9903 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -239,6 +239,7 @@ cycles-ct					{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 cycles-t					{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 mem-loads					{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 mem-stores					{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
+topdown-[a-z-]+					{ return str(yyscanner, PE_KERNEL_PMU_EVENT); }
 
 L1-dcache|l1-d|l1d|L1-data		|
 L1-icache|l1-i|l1i|L1-instruction	|
-- 
2.4.3



* [PATCH 4/9] perf, tools, stat: Add computation of TopDown formulas
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (2 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 3/9] perf, tools, stat: Basic support for TopDown in perf stat Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 5/9] x86, perf: Support sysfs files depending on SMT status Andi Kleen
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Implement the TopDown formulas in perf stat. The topdown basic metrics
reported by the kernel are collected, and the formulas are computed
and output as normal metrics.

See the kernel commit exporting the events for details on the used
metrics.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/stat-shadow.c | 119 +++++++++++++++++++++++++++++++++++++++++-
 tools/perf/util/stat.c        |   5 ++
 tools/perf/util/stat.h        |   8 ++-
 3 files changed, 130 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
index 073e66f7..2158a0e 100644
--- a/tools/perf/util/stat-shadow.c
+++ b/tools/perf/util/stat-shadow.c
@@ -28,6 +28,11 @@ static struct stats runtime_dtlb_cache_stats[NUM_CTX][MAX_NR_CPUS];
 static struct stats runtime_cycles_in_tx_stats[NUM_CTX][MAX_NR_CPUS];
 static struct stats runtime_transaction_stats[NUM_CTX][MAX_NR_CPUS];
 static struct stats runtime_elision_stats[NUM_CTX][MAX_NR_CPUS];
+static struct stats runtime_topdown_total_slots[NUM_CTX][MAX_NR_CPUS];
+static struct stats runtime_topdown_slots_issued[NUM_CTX][MAX_NR_CPUS];
+static struct stats runtime_topdown_slots_retired[NUM_CTX][MAX_NR_CPUS];
+static struct stats runtime_topdown_fetch_bubbles[NUM_CTX][MAX_NR_CPUS];
+static struct stats runtime_topdown_recovery_bubbles[NUM_CTX][MAX_NR_CPUS];
 
 struct stats walltime_nsecs_stats;
 
@@ -68,6 +73,11 @@ void perf_stat__reset_shadow_stats(void)
 		sizeof(runtime_transaction_stats));
 	memset(runtime_elision_stats, 0, sizeof(runtime_elision_stats));
 	memset(&walltime_nsecs_stats, 0, sizeof(walltime_nsecs_stats));
+	memset(runtime_topdown_total_slots, 0, sizeof(runtime_topdown_total_slots));
+	memset(runtime_topdown_slots_retired, 0, sizeof(runtime_topdown_slots_retired));
+	memset(runtime_topdown_slots_issued, 0, sizeof(runtime_topdown_slots_issued));
+	memset(runtime_topdown_fetch_bubbles, 0, sizeof(runtime_topdown_fetch_bubbles));
+	memset(runtime_topdown_recovery_bubbles, 0, sizeof(runtime_topdown_recovery_bubbles));
 }
 
 /*
@@ -90,6 +100,16 @@ void perf_stat__update_shadow_stats(struct perf_evsel *counter, u64 *count,
 		update_stats(&runtime_transaction_stats[ctx][cpu], count[0]);
 	else if (perf_stat_evsel__is(counter, ELISION_START))
 		update_stats(&runtime_elision_stats[ctx][cpu], count[0]);
+	else if (perf_stat_evsel__is(counter, TOPDOWN_TOTAL_SLOTS))
+		update_stats(&runtime_topdown_total_slots[ctx][cpu], count[0]);
+	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_ISSUED))
+		update_stats(&runtime_topdown_slots_issued[ctx][cpu], count[0]);
+	else if (perf_stat_evsel__is(counter, TOPDOWN_SLOTS_RETIRED))
+		update_stats(&runtime_topdown_slots_retired[ctx][cpu], count[0]);
+	else if (perf_stat_evsel__is(counter, TOPDOWN_FETCH_BUBBLES))
+		update_stats(&runtime_topdown_fetch_bubbles[ctx][cpu], count[0]);
+	else if (perf_stat_evsel__is(counter, TOPDOWN_RECOVERY_BUBBLES))
+		update_stats(&runtime_topdown_recovery_bubbles[ctx][cpu], count[0]);
 	else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_FRONTEND))
 		update_stats(&runtime_stalled_cycles_front_stats[ctx][cpu], count[0]);
 	else if (perf_evsel__match(counter, HARDWARE, HW_STALLED_CYCLES_BACKEND))
@@ -293,11 +313,70 @@ static void print_ll_cache_misses(int cpu,
 	print_metric(ctxp, color, "%7.2f%%", "of all LL-cache hits", ratio);
 }
 
+/*
+ * For an explanation of the formulas see:
+ * Yasin, A Top Down Method for Performance analysis and Counter architecture
+ * ISPASS14
+ */
+
+static double td_total_slots(int ctx, int cpu)
+{
+	return avg_stats(&runtime_topdown_total_slots[ctx][cpu]);
+}
+
+static double td_bad_spec(int ctx, int cpu)
+{
+	double bad_spec = 0;
+	double total_slots;
+	double total;
+
+	total = avg_stats(&runtime_topdown_slots_issued[ctx][cpu]) -
+		avg_stats(&runtime_topdown_slots_retired[ctx][cpu]) +
+		avg_stats(&runtime_topdown_recovery_bubbles[ctx][cpu]);
+	total_slots = td_total_slots(ctx, cpu);
+	if (total_slots)
+		bad_spec = total / total_slots;
+	return bad_spec;
+}
+
+static double td_retiring(int ctx, int cpu)
+{
+	double retiring = 0;
+	double total_slots = td_total_slots(ctx, cpu);
+	double ret_slots = avg_stats(&runtime_topdown_slots_retired[ctx][cpu]);
+
+	if (total_slots)
+		retiring = ret_slots / total_slots;
+	return retiring;
+}
+
+static double td_fe_bound(int ctx, int cpu)
+{
+	double fe_bound = 0;
+	double total_slots = td_total_slots(ctx, cpu);
+	double fetch_bub = avg_stats(&runtime_topdown_fetch_bubbles[ctx][cpu]);
+
+	if (total_slots)
+		fe_bound = fetch_bub / total_slots;
+	return fe_bound;
+}
+
+static double td_be_bound(int ctx, int cpu)
+{
+	double sum = (td_fe_bound(ctx, cpu) +
+		      td_bad_spec(ctx, cpu) +
+		      td_retiring(ctx, cpu));
+	if (sum == 0)
+		return 0;
+	return 1.0 - sum;
+}
+
 void perf_stat__print_shadow_stats(struct perf_evsel *evsel,
 				   double avg, int cpu,
 				   print_metric_t print_metric,
 				   void (*new_line)(void *ctx),
-				   void *ctxp)
+				   void *ctxp,
+				   int topdown_run)
 {
 	double total, ratio = 0.0, total2;
 	int ctx = evsel_context(evsel);
@@ -422,6 +501,44 @@ void perf_stat__print_shadow_stats(struct perf_evsel *evsel,
 	} else if (perf_evsel__match(evsel, SOFTWARE, SW_TASK_CLOCK) &&
 		   (ratio = avg_stats(&walltime_nsecs_stats)) != 0) {
 		print_metric(ctxp, NULL, "%8.3f", "CPUs utilized", avg / ratio);
+	} else if (perf_stat_evsel__is(evsel, TOPDOWN_FETCH_BUBBLES)) {
+		double fe_bound = td_fe_bound(ctx, cpu);
+
+		if (fe_bound > 0.2 || topdown_run > 1)
+			print_metric(ctxp, NULL, "%8.2f%%", "frontend bound",
+					fe_bound * 100.);
+		else
+			print_metric(ctxp, NULL, NULL, NULL, 0);
+	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_RETIRED)) {
+		double retiring = td_retiring(ctx, cpu);
+
+		if (retiring > 0.7 || topdown_run > 1)
+			print_metric(ctxp, NULL, "%8.2f%%", "retiring",
+					retiring * 100.);
+		else
+			print_metric(ctxp, NULL, NULL, NULL, 0);
+	} else if (perf_stat_evsel__is(evsel, TOPDOWN_RECOVERY_BUBBLES)) {
+		double bad_spec = td_bad_spec(ctx, cpu);
+
+		if (bad_spec > 0.1 || topdown_run > 1)
+			print_metric(ctxp, NULL, "%8.2f%%", "bad speculation",
+					bad_spec * 100.);
+		else
+			print_metric(ctxp, NULL, NULL, NULL, 0);
+	} else if (perf_stat_evsel__is(evsel, TOPDOWN_SLOTS_ISSUED)) {
+		double be_bound = td_be_bound(ctx, cpu);
+		const char *name = "backend bound";
+
+		/* In case the CPU does not support topdown-recovery-bubbles */
+		if (avg_stats(&runtime_topdown_recovery_bubbles[ctx][cpu]) == 0)
+			name = "backend bound/bad spec";
+
+		if (td_total_slots(ctx, cpu) > 0 &&
+			(be_bound > 0.2 || topdown_run > 1))
+			print_metric(ctxp, NULL, "%8.2f%%", name,
+					be_bound * 100.);
+		else
+			print_metric(ctxp, NULL, NULL, NULL, 0);
 	} else if (runtime_nsecs_stats[cpu].n != 0) {
 		char unit = 'M';
 		char unit_buf[10];
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index c5c709c..f700b81 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -79,6 +79,11 @@ static const char *id_str[PERF_STAT_EVSEL_ID__MAX] = {
 	ID(TRANSACTION_START,	cpu/tx-start/),
 	ID(ELISION_START,	cpu/el-start/),
 	ID(CYCLES_IN_TX_CP,	cpu/cycles-ct/),
+	ID(TOPDOWN_TOTAL_SLOTS, topdown-total-slots),
+	ID(TOPDOWN_SLOTS_ISSUED, topdown-slots-issued),
+	ID(TOPDOWN_SLOTS_RETIRED, topdown-slots-retired),
+	ID(TOPDOWN_FETCH_BUBBLES, topdown-fetch-bubbles),
+	ID(TOPDOWN_RECOVERY_BUBBLES, topdown-recovery-bubbles),
 };
 #undef ID
 
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index a492e64..b907a31 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -17,6 +17,11 @@ enum perf_stat_evsel_id {
 	PERF_STAT_EVSEL_ID__TRANSACTION_START,
 	PERF_STAT_EVSEL_ID__ELISION_START,
 	PERF_STAT_EVSEL_ID__CYCLES_IN_TX_CP,
+	PERF_STAT_EVSEL_ID__TOPDOWN_TOTAL_SLOTS,
+	PERF_STAT_EVSEL_ID__TOPDOWN_SLOTS_ISSUED,
+	PERF_STAT_EVSEL_ID__TOPDOWN_SLOTS_RETIRED,
+	PERF_STAT_EVSEL_ID__TOPDOWN_FETCH_BUBBLES,
+	PERF_STAT_EVSEL_ID__TOPDOWN_RECOVERY_BUBBLES,
 	PERF_STAT_EVSEL_ID__MAX,
 };
 
@@ -100,7 +105,8 @@ void perf_stat__print_shadow_stats(struct perf_evsel *evsel,
 				   double avg, int cpu,
 				   print_metric_t print_metric,
 				   void (*new_line)(void *ctx),
-				   void *ctx);
+				   void *ctx,
+				   int topdown_run);
 
 struct perf_counts *perf_counts__new(int ncpus, int nthreads);
 void perf_counts__delete(struct perf_counts *counts);
-- 
2.4.3



* [PATCH 5/9] x86, perf: Support sysfs files depending on SMT status
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (3 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 4/9] perf, tools, stat: Add computation of TopDown formulas Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 6/9] x86, perf: Add Top Down events to Intel Core Andi Kleen
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add a way to show different sysfs event attributes depending on
whether HyperThreading is on or off. This is difficult to determine
early at boot, so we just do it dynamically when the sysfs
attribute is read.
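
For example, reading such an attribute could look like this (the event
string shown is the one the later Core patch defines for the
HyperThreading-on case):

	$ cat /sys/devices/cpu/events/topdown-total-slots
	event=0x3c,umask=0x0,any=1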

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event.c | 34 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/cpu/perf_event.h | 10 ++++++++++
 include/linux/perf_event.h       |  7 +++++++
 3 files changed, 51 insertions(+)

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 8bac4bb..a1313ed 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1590,6 +1590,40 @@ ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 	return x86_pmu.events_sysfs_show(page, config);
 }
 
+ssize_t events_ht_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page)
+{
+	struct perf_pmu_events_ht_attr *pmu_attr =
+		container_of(attr, struct perf_pmu_events_ht_attr, attr);
+	bool ht_on = false;
+	int cpu;
+
+	/*
+	 * Report conditional events depending on Hyper-Threading.
+	 *
+	 * Check all online CPUs if any have a thread sibling,
+	 * as perf may measure any of them.
+	 *
+	 * This is overly conservative as usually the HT special
+	 * handling is not needed if the other CPU thread is idle.
+	 *
+	 * Note this does not (cannot) handle the case when thread
+	 * siblings are invisible, for example with virtualization
+	 * if they are owned by some other guest.  The user tool
+	 * has to re-read when a thread sibling gets onlined later.
+	 */
+	for_each_online_cpu(cpu) {
+		ht_on = cpumask_weight(topology_sibling_cpumask(cpu)) > 1;
+		if (ht_on)
+			break;
+	}
+
+	return sprintf(page, "%s",
+			ht_on ?
+			pmu_attr->event_str_ht :
+			pmu_attr->event_str_noht);
+}
+
 EVENT_ATTR(cpu-cycles,			CPU_CYCLES		);
 EVENT_ATTR(instructions,		INSTRUCTIONS		);
 EVENT_ATTR(cache-references,		CACHE_REFERENCES	);
diff --git a/arch/x86/kernel/cpu/perf_event.h b/arch/x86/kernel/cpu/perf_event.h
index 5edf6d8..3df86d9 100644
--- a/arch/x86/kernel/cpu/perf_event.h
+++ b/arch/x86/kernel/cpu/perf_event.h
@@ -664,6 +664,14 @@ static struct perf_pmu_events_attr event_attr_##v = {			\
 	.event_str	= str,						\
 };
 
+#define EVENT_ATTR_STR_HT(_name, v, noht, ht)				\
+static struct perf_pmu_events_ht_attr event_attr_##v = {		\
+	.attr		= __ATTR(_name, 0444, events_ht_sysfs_show, NULL),\
+	.id		= 0,						\
+	.event_str_noht	= noht,						\
+	.event_str_ht	= ht,						\
+};
+
 extern struct x86_pmu x86_pmu __read_mostly;
 
 static inline bool x86_pmu_has_lbr_callstack(void)
@@ -923,6 +931,8 @@ int knc_pmu_init(void);
 
 ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
 			  char *page);
+ssize_t events_ht_sysfs_show(struct device *dev, struct device_attribute *attr,
+			  char *page);
 
 static inline int is_ht_workaround_enabled(void)
 {
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 2027809..5e9ee24 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1068,6 +1068,13 @@ struct perf_pmu_events_attr {
 	const char *event_str;
 };
 
+struct perf_pmu_events_ht_attr {
+	struct device_attribute attr;
+	u64 id;
+	const char *event_str_ht;
+	const char *event_str_noht;
+};
+
 ssize_t perf_event_sysfs_show(struct device *dev, struct device_attribute *attr,
 			      char *page);
 
-- 
2.4.3



* [PATCH 6/9] x86, perf: Add Top Down events to Intel Core
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (4 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 5/9] x86, perf: Support sysfs files depending on SMT status Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 7/9] x86, perf: Add Top Down events to Intel Atom Andi Kleen
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add declarations for the events needed for TopDown to the
Intel big core CPUs starting with Sandy Bridge. We need
to report different values if HyperThreading is on or off.

The only thing this patch does is to export some events
in sysfs.

TopDown level 1 uses a set of abstracted metrics which
are generic to out of order CPU cores (although some
CPUs may not implement all of them):

topdown-total-slots	  Available slots in the pipeline
topdown-slots-issued	  Slots issued into the pipeline
topdown-slots-retired	  Slots successfully retired
topdown-fetch-bubbles	  Pipeline gaps in the frontend
topdown-recovery-bubbles  Pipeline gaps during recovery
			  from misspeculation

These events then allow computing four useful metrics:
FrontendBound, BackendBound, Retiring, BadSpeculation.

The formulas to compute the metrics are generic; they
only change based on the availability of the abstracted
input values.

The kernel declares the events supported by the current
CPU and perf stat then computes the formulas based on the
available metrics.

Some events need a multiplier (the pipeline width). To handle this
I redefined ".scale" slightly to let a negative value mean multiply by.

For HyperThreading the any bit is needed to get accurate
values when both threads are executing. This implies that
the events can only be collected as root or with
perf_event_paranoid=-1 for now.

Hyper Threading also requires averaging events from both
threads together (the CPU cannot measure them independently).
In perf stat this is done by using per core mode, and then
forcing a divisor of two to get the average. The
new .agg-per-core attribute is added to the events, which
then forces perf stat to enable --per-core.
When hyperthreading is disabled the attribute has the value 0.
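
A worked example with made-up numbers: topdown-total-slots counts with
any=1 when HyperThreading is on, so both threads of a core report the
same count C. Per core aggregation sums this to 2C, and the -2 scale
(multiply by 2) turns that into 4C, i.e. the pipeline width times the
core's unhalted cycles.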

The basic scheme is based on the following paper:
Yasin,
A Top Down Method for Performance analysis and Counter architecture
ISPASS14
(pdf available via google)

with some extensions to handle HyperThreading.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c | 82 ++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index a478e3c..65b58cb 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -217,9 +217,70 @@ struct attribute *nhm_events_attrs[] = {
 	NULL,
 };
 
+/*
+ * TopDown events for Core.
+ *
+ * With Hyper Threading on, TopDown metrics are averaged between the
+ * threads of a core: (count_core0 + count_core1) / 2. The 2 is expressed
+ * as a scale parameter. We also tell perf to aggregate per core
+ * by setting the .agg-per-core attribute for the alias to 1.
+ *
+ * Some events need to be multiplied by the pipeline width (4), which
+ * is expressed as a negative scale. In HT we cancel the factor 4
+ * with the 2 dividend for the core average, so we use -2.
+ */
+
+EVENT_ATTR_STR_HT(topdown-total-slots, td_total_slots,
+	"event=0x3c,umask=0x0",			/* cpu_clk_unhalted.thread */
+	"event=0x3c,umask=0x0,any=1");		/* cpu_clk_unhalted.thread_any */
+EVENT_ATTR_STR_HT(topdown-total-slots.scale, td_total_slots_scale,
+	"-4", "-2");
+EVENT_ATTR_STR_HT(topdown-total-slots.agg-per-core, td_total_slots_pc,
+	"0", "1");
+EVENT_ATTR_STR(topdown-slots-issued, td_slots_issued,
+	"event=0xe,umask=0x1");			/* uops_issued.any */
+EVENT_ATTR_STR_HT(topdown-slots-issued.agg-per-core, td_slots_issued_pc,
+	"0", "1");
+EVENT_ATTR_STR_HT(topdown-slots-issued.scale, td_slots_issued_scale,
+	"0", "2");
+EVENT_ATTR_STR(topdown-slots-retired, td_slots_retired,
+	"event=0xc2,umask=0x2");		/* uops_retired.retire_slots */
+EVENT_ATTR_STR_HT(topdown-slots-retired.agg-per-core, td_slots_retired_pc,
+	"0", "1");
+EVENT_ATTR_STR_HT(topdown-slots-retired.scale, td_slots_retired_scale,
+	"0", "2");
+EVENT_ATTR_STR(topdown-fetch-bubbles, td_fetch_bubbles,
+	"event=0x9c,umask=0x1");		/* idq_uops_not_delivered_core */
+EVENT_ATTR_STR_HT(topdown-fetch-bubbles.agg-per-core, td_fetch_bubbles_pc,
+	"0", "1");
+EVENT_ATTR_STR_HT(topdown-fetch-bubbles.scale, td_fetch_bubbles_scale,
+	"0", "2");
+EVENT_ATTR_STR_HT(topdown-recovery-bubbles, td_recovery_bubbles,
+	"event=0xd,umask=0x3,cmask=1",		/* int_misc.recovery_cycles */
+	"event=0xd,umask=0x3,cmask=1,any=1");	/* int_misc.recovery_cycles_any */
+EVENT_ATTR_STR_HT(topdown-recovery-bubbles.scale, td_recovery_bubbles_scale,
+	"-4", "-2");
+EVENT_ATTR_STR_HT(topdown-recovery-bubbles.agg-per-core, td_recovery_bubbles_pc,
+	"0", "1");
+
 struct attribute *snb_events_attrs[] = {
 	EVENT_PTR(mem_ld_snb),
 	EVENT_PTR(mem_st_snb),
+	EVENT_PTR(td_slots_issued),
+	EVENT_PTR(td_slots_issued_scale),
+	EVENT_PTR(td_slots_issued_pc),
+	EVENT_PTR(td_slots_retired),
+	EVENT_PTR(td_slots_retired_scale),
+	EVENT_PTR(td_slots_retired_pc),
+	EVENT_PTR(td_fetch_bubbles),
+	EVENT_PTR(td_fetch_bubbles_scale),
+	EVENT_PTR(td_fetch_bubbles_pc),
+	EVENT_PTR(td_total_slots),
+	EVENT_PTR(td_total_slots_scale),
+	EVENT_PTR(td_total_slots_pc),
+	EVENT_PTR(td_recovery_bubbles),
+	EVENT_PTR(td_recovery_bubbles_scale),
+	EVENT_PTR(td_recovery_bubbles_pc),
 	NULL,
 };
 
@@ -3177,6 +3238,21 @@ static struct attribute *hsw_events_attrs[] = {
 	EVENT_PTR(cycles_ct),
 	EVENT_PTR(mem_ld_hsw),
 	EVENT_PTR(mem_st_hsw),
+	EVENT_PTR(td_slots_issued),
+	EVENT_PTR(td_slots_issued_scale),
+	EVENT_PTR(td_slots_issued_pc),
+	EVENT_PTR(td_slots_retired),
+	EVENT_PTR(td_slots_retired_scale),
+	EVENT_PTR(td_slots_retired_pc),
+	EVENT_PTR(td_fetch_bubbles),
+	EVENT_PTR(td_fetch_bubbles_scale),
+	EVENT_PTR(td_fetch_bubbles_pc),
+	EVENT_PTR(td_total_slots),
+	EVENT_PTR(td_total_slots_scale),
+	EVENT_PTR(td_total_slots_pc),
+	EVENT_PTR(td_recovery_bubbles),
+	EVENT_PTR(td_recovery_bubbles_scale),
+	EVENT_PTR(td_recovery_bubbles_pc),
 	NULL
 };
 
@@ -3494,6 +3570,12 @@ __init int intel_pmu_init(void)
 		memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
 		intel_pmu_lbr_init_skl();
 
+		/* INT_MISC.RECOVERY_CYCLES has umask 1 in Skylake */
+		event_attr_td_recovery_bubbles.event_str_noht =
+			"event=0xd,umask=0x1,cmask=1";
+		event_attr_td_recovery_bubbles.event_str_ht =
+			"event=0xd,umask=0x1,cmask=1,any=1";
+
 		x86_pmu.event_constraints = intel_skl_event_constraints;
 		x86_pmu.pebs_constraints = intel_skl_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_skl_extra_regs;
-- 
2.4.3



* [PATCH 7/9] x86, perf: Add Top Down events to Intel Atom
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (5 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 6/9] x86, perf: Add Top Down events to Intel Core Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 8/9] perf, tools, stat: Add extra output of counter values with -v Andi Kleen
  2015-08-08  1:06 ` [PATCH 9/9] perf, tools, stat: Force --per-core mode for .agg-per-core aliases Andi Kleen
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add topdown event declarations to Silvermont / Airmont.
These cores do not support the full Top Down metrics, but a useful
subset (FrontendBound, Retiring, Backend Bound/Bad Speculation).

The perf stat tool automatically handles the missing events
and combines the available metrics.
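
For example, because these cores lack topdown-recovery-bubbles, perf
stat cannot separate backend bound from bad speculation and (per the
stat-shadow.c code earlier in this series) prints the combined value
under the "backend bound/bad spec" label.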

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 arch/x86/kernel/cpu/perf_event_intel.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 65b58cb..1f08603 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -1380,6 +1380,29 @@ static __initconst const u64 atom_hw_cache_event_ids
  },
 };
 
+EVENT_ATTR_STR(topdown-total-slots, td_total_slots_slm, "event=0x3c");
+EVENT_ATTR_STR(topdown-total-slots.scale, td_total_slots_scale_slm, "-2");
+/* no_alloc_cycles.not_delivered */
+EVENT_ATTR_STR(topdown-fetch-bubbles, td_fetch_bubbles_slm,
+	       "event=0xca,umask=0x50");
+EVENT_ATTR_STR(topdown-fetch-bubbles.scale, td_fetch_bubbles_scale_slm, "-2");
+/* uops_retired.all */
+EVENT_ATTR_STR(topdown-slots-issued, td_slots_issued_slm,
+	       "event=0xc2,umask=0x10");
+/* uops_retired.all */
+EVENT_ATTR_STR(topdown-slots-retired, td_slots_retired_slm,
+	       "event=0xc2,umask=0x10");
+
+struct attribute *slm_events_attrs[] = {
+	EVENT_PTR(td_total_slots_slm),
+	EVENT_PTR(td_total_slots_scale_slm),
+	EVENT_PTR(td_fetch_bubbles_slm),
+	EVENT_PTR(td_fetch_bubbles_scale_slm),
+	EVENT_PTR(td_slots_issued_slm),
+	EVENT_PTR(td_slots_retired_slm),
+	NULL
+};
+
 static struct extra_reg intel_slm_extra_regs[] __read_mostly =
 {
 	/* must define OFFCORE_RSP_X first, see intel_fixup_er() */
@@ -3401,6 +3424,7 @@ __init int intel_pmu_init(void)
 		x86_pmu.pebs_constraints = intel_slm_pebs_event_constraints;
 		x86_pmu.extra_regs = intel_slm_extra_regs;
 		x86_pmu.flags |= PMU_FL_HAS_RSP_1;
+		x86_pmu.cpu_events = slm_events_attrs;
 		pr_cont("Silvermont events, ");
 		break;
 
-- 
2.4.3



* [PATCH 8/9] perf, tools, stat: Add extra output of counter values with -v
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (6 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 7/9] x86, perf: Add Top Down events to Intel Atom Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  2015-08-08  1:06 ` [PATCH 9/9] perf, tools, stat: Force --per-core mode for .agg-per-core aliases Andi Kleen
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add debug output of raw counter values per CPU when
perf stat -v is specified, together with their cpu numbers.
This is very useful for debugging problems with per core counters,
where we normally only see aggregated values.
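
The extra -v lines print event name, cpu, value, enabled time and
running time, e.g. (values invented for illustration):

	topdown-total-slots: 0: 19650790 1535832673 1535832673
	topdown-total-slots: 1: 19651234 1535832673 1535832673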

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-stat.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index a83f26f..eec6c16 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -223,6 +223,13 @@ static int read_counter(struct perf_evsel *counter)
 			count = perf_counts(counter->counts, cpu, thread);
 			if (perf_evsel__read(counter, cpu, thread, count))
 				return -1;
+			if (verbose) {
+				fprintf(stat_config.output,
+					"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+						perf_evsel__name(counter),
+						cpu,
+						count->val, count->ena, count->run);
+			}
 		}
 	}
 
-- 
2.4.3



* [PATCH 9/9] perf, tools, stat: Force --per-core mode for .agg-per-core aliases
  2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
                   ` (7 preceding siblings ...)
  2015-08-08  1:06 ` [PATCH 8/9] perf, tools, stat: Add extra output of counter values with -v Andi Kleen
@ 2015-08-08  1:06 ` Andi Kleen
  8 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-08  1:06 UTC (permalink / raw)
  To: acme; +Cc: jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

When an event alias is used that the kernel marked as .agg-per-core, force
--per-core mode (and also require -a and forbid cgroups or per thread mode).
This in turn means --topdown forces --per-core mode.

This is needed for TopDown in SMT mode, because it needs to measure
all threads in a core together and merge the values to compute the correct
percentages of how the pipeline is limited.

We do this if any alias is agg-per-core.

Add the code to parse the .agg-per-core attributes and propagate
the information to the evsel. Then the main stat code does
the necessary checks and forces per core mode.
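
For illustration, this is the sysfs file being parsed (contents as
defined by the earlier kernel patches: "1" with HyperThreading on,
"0" without):

	$ cat /sys/devices/cpu/events/topdown-total-slots.agg-per-core
	1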

Open issue: in combination with -C ... we get wrong values. I think that's
an existing bug that needs to be debugged/fixed separately.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-stat.c      | 18 ++++++++++++++++++
 tools/perf/util/evsel.h        |  1 +
 tools/perf/util/parse-events.c |  1 +
 tools/perf/util/pmu.c          | 23 +++++++++++++++++++++++
 tools/perf/util/pmu.h          |  2 ++
 5 files changed, 45 insertions(+)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index eec6c16..0df0aff 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -1382,6 +1382,7 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 	bool append_file = false;
 	int output_fd = 0;
 	const char *output_name	= NULL;
+	struct perf_evsel *counter;
 	const struct option options[] = {
 	OPT_BOOLEAN('T', "transaction", &transaction_run,
 		    "hardware transaction statistics"),
@@ -1563,6 +1564,23 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
 	if (add_default_attributes())
 		goto out;
 
+	evlist__for_each (evsel_list, counter) {
+		/* Enable per core mode if only a single event requires it. */
+		if (counter->agg_per_core) {
+			if (stat_config.aggr_mode != AGGR_GLOBAL &&
+			    stat_config.aggr_mode != AGGR_CORE) {
+				pr_err("per core event configuration requires per core mode\n");
+				goto out;
+			}
+			stat_config.aggr_mode = AGGR_CORE;
+			if (nr_cgroups || !target__has_cpu(&target)) {
+				pr_err("per core event configuration requires system-wide mode (-a)\n");
+				goto out;
+			}
+			break;
+		}
+	}
+
 	target__validate(&target);
 
 	if (perf_evlist__create_maps(evsel_list, &target) < 0) {
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 6a12908..85f02b8 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -100,6 +100,7 @@ struct perf_evsel {
 	bool			system_wide;
 	bool			tracking;
 	bool			per_pkg;
+	bool			agg_per_core;
 	/* parse modifier helper */
 	int			exclude_GH;
 	int			nr_members;
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 828936d..d2a5938 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -759,6 +759,7 @@ int parse_events_add_pmu(struct parse_events_evlist *data,
 		evsel->unit = info.unit;
 		evsel->scale = info.scale;
 		evsel->per_pkg = info.per_pkg;
+		evsel->agg_per_core = info.agg_per_core;
 		evsel->snapshot = info.snapshot;
 	}
 
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index ce56354..abedb6a 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -189,6 +189,23 @@ perf_pmu__parse_per_pkg(struct perf_pmu_alias *alias, char *dir, char *name)
 	return 0;
 }
 
+static void
+perf_pmu__parse_agg_per_core(struct perf_pmu_alias *alias, char *dir, char *name)
+{
+	char path[PATH_MAX];
+	FILE *f;
+	int flag;
+
+	snprintf(path, PATH_MAX, "%s/%s.agg-per-core", dir, name);
+
+	f = fopen(path, "r");
+	if (f) {
+		if (fscanf(f, "%d", &flag) == 1)
+			alias->agg_per_core = flag != 0;
+		fclose(f);
+	}
+}
+
 static int perf_pmu__parse_snapshot(struct perf_pmu_alias *alias,
 				    char *dir, char *name)
 {
@@ -237,6 +254,7 @@ static int __perf_pmu__new_alias(struct list_head *list, char *dir, char *name,
 		perf_pmu__parse_scale(alias, dir, name);
 		perf_pmu__parse_per_pkg(alias, dir, name);
 		perf_pmu__parse_snapshot(alias, dir, name);
+		perf_pmu__parse_agg_per_core(alias, dir, name);
 	}
 
 	list_add_tail(&alias->list, list);
@@ -271,6 +289,8 @@ static inline bool pmu_alias_info_file(char *name)
 		return true;
 	if (len > 9 && !strcmp(name + len - 9, ".snapshot"))
 		return true;
+	if (len > 13 && !strcmp(name + len - 13, ".agg-per-core"))
+		return true;
 
 	return false;
 }
@@ -858,6 +878,7 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms,
 	int ret;
 
 	info->per_pkg = false;
+	info->agg_per_core = false;
 
 	/*
 	 * Mark unit and scale as not set
@@ -881,6 +902,8 @@ int perf_pmu__check_alias(struct perf_pmu *pmu, struct list_head *head_terms,
 
 		if (alias->per_pkg)
 			info->per_pkg = true;
+		if (alias->agg_per_core)
+			info->agg_per_core = true;
 
 		list_del(&term->list);
 		free(term);
diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
index 5d7e844..5a43719 100644
--- a/tools/perf/util/pmu.h
+++ b/tools/perf/util/pmu.h
@@ -32,6 +32,7 @@ struct perf_pmu_info {
 	double scale;
 	bool per_pkg;
 	bool snapshot;
+	bool agg_per_core;
 };
 
 #define UNIT_MAX_LEN	31 /* max length for event unit name */
@@ -44,6 +45,7 @@ struct perf_pmu_alias {
 	double scale;
 	bool per_pkg;
 	bool snapshot;
+	bool agg_per_core;
 };
 
 struct perf_pmu *perf_pmu__find(const char *name);
-- 
2.4.3



* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-08  1:06 ` [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error Andi Kleen
@ 2015-08-11 13:07   ` Jiri Olsa
  2015-08-11 13:14     ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 13:07 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

On Fri, Aug 07, 2015 at 06:06:17PM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> When an error happens during alias parsing, currently the complete
> parsing of all attributes of the PMU is stopped. This breaks
> old perf on a newer kernel that may have not-yet-known
> alias attributes (such as .scale or .per-pkg).

hum, both .scale and .per-pkg are skipped from term parsing via:

                /*
                 * skip info files parsed in perf_pmu__new_alias()
                 */
                if (pmu_alias_info_file(name))
                        continue;

and loaded without any error report:

	static int __perf_pmu__new_alias(struct list_head *list, char *dir, char *name,
					 char *desc __maybe_unused, char *val)
	SNIP
		if (dir) {
			/*
			 * load unit name and scale if available
			 */
			perf_pmu__parse_unit(alias, dir, name);
			perf_pmu__parse_scale(alias, dir, name);
			perf_pmu__parse_per_pkg(alias, dir, name);
			perf_pmu__parse_snapshot(alias, dir, name);
		}

		list_add_tail(&alias->list, list);

		return 0;
	}

Which attribute parsing is failing for you?

thanks,
jirka


* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-11 13:07   ` Jiri Olsa
@ 2015-08-11 13:14     ` Andi Kleen
  2015-08-11 13:24       ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 13:14 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

> Which attribute parsing is failing for you?

The new .agg-per-core attribute I added later in the series.
I think it will happen to any not-yet-known attribute.

-Andi


* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-11 13:14     ` Andi Kleen
@ 2015-08-11 13:24       ` Jiri Olsa
  2015-08-11 13:40         ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 13:24 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 06:14:57AM -0700, Andi Kleen wrote:
> > Which attribute parsing is failing for you?
> 
> The new .agg-per-core attribute I added later in the series.
> I think it will happen to any not-yet-known attribute.

alias can contain only terms defined in formats directory,
and the *.XXX attributes parsing does not return error code

can't see the failure, please get some example

jirka


* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-08  1:06 ` [PATCH 2/9] perf, tools, stat: Support up-scaling of events Andi Kleen
@ 2015-08-11 13:25   ` Jiri Olsa
  2015-08-11 13:38     ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 13:25 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo, Andi Kleen

On Fri, Aug 07, 2015 at 06:06:18PM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> TopDown needs to multiply events by constants (for example
> the CPU Pipeline Width) to get the correct results.
> The kernel needs to export this factor.
> 
> Today *.scale is only used to scale down metrics (divide), for example
> to scale bytes to MB.
> 
> Repurpose negative scale to mean scaling up, that is multiplying.
> Implement the code for this in perf stat.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
>  tools/perf/builtin-stat.c | 27 +++++++++++++++++++--------
>  1 file changed, 19 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index ea5298a..2590c75 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -179,6 +179,17 @@ static inline int nsec_counter(struct perf_evsel *evsel)
>  	return 0;
>  }
>  
> +static double scale_val(struct perf_evsel *counter, u64 val)
> +{
> +	double uval = val;
> +
> +	if (counter->scale < 0)
> +		uval = val * (-counter->scale);
> +	else if (counter->scale)
> +		uval = val / counter->scale;

hum, do you change the scale logic? the current scale > 0 works like:

	uval = val * counter->scale;

jirka


* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-11 13:25   ` Jiri Olsa
@ 2015-08-11 13:38     ` Andi Kleen
  2015-08-11 13:54       ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 13:38 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 03:25:32PM +0200, Jiri Olsa wrote:
> On Fri, Aug 07, 2015 at 06:06:18PM -0700, Andi Kleen wrote:
> > From: Andi Kleen <ak@linux.intel.com>
> > 
> > TopDown needs to multiply events by constants (for example
> > the CPU Pipeline Width) to get the correct results.
> > The kernel needs to export this factor.
> > 
> > Today *.scale is only used to scale down metrics (divide), for example
> > to scale bytes to MB.
> > 
> > Repurpose negative scale to mean scaling up, that is multiplying.
> > Implement the code for this in perf stat.
> > 
> > Signed-off-by: Andi Kleen <ak@linux.intel.com>
> > ---
> >  tools/perf/builtin-stat.c | 27 +++++++++++++++++++--------
> >  1 file changed, 19 insertions(+), 8 deletions(-)
> > 
> > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > index ea5298a..2590c75 100644
> > --- a/tools/perf/builtin-stat.c
> > +++ b/tools/perf/builtin-stat.c
> > @@ -179,6 +179,17 @@ static inline int nsec_counter(struct perf_evsel *evsel)
> >  	return 0;
> >  }
> >  
> > +static double scale_val(struct perf_evsel *counter, u64 val)
> > +{
> > +	double uval = val;
> > +
> > +	if (counter->scale < 0)
> > +		uval = val * (-counter->scale);
> > +	else if (counter->scale)
> > +		uval = val / counter->scale;
> 
> hum, do you change the scale logic? the current scale > 0 works like:
> 
> 	uval = val * counter->scale;

Yes, I define negative scales to mean "multiply by". See the description
of the kernel patch for more details.

-Andi


-- 
ak@linux.intel.com -- Speaking for myself only


* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-11 13:24       ` Jiri Olsa
@ 2015-08-11 13:40         ` Andi Kleen
  2015-08-11 14:39           ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 13:40 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 03:24:27PM +0200, Jiri Olsa wrote:
> On Tue, Aug 11, 2015 at 06:14:57AM -0700, Andi Kleen wrote:
> > > Which attribute parsing is failing for you?
> > 
> > The new .agg-per-core attribute I added later in the series.
> > I think it will happen to any not-yet-known attribute.
> 
> alias can contain only terms defined in formats directory,
> and the *.XXX attributes parsing does not return error code
> 
> can't see the failure, please get some example

Apply the kernel patch that adds several .agg-per-core attributes
Then try to use any cpu/.../ event

% perf stat -e cpu/event=0x3c/ true
invalid or unsupported event: 'cpu/event=0x3c/'

because the PMU parsing bailed out.

With patched perf (either this patch or the patch that adds
the .agg-per-core parsing) it works.

-Andi


* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-11 13:38     ` Andi Kleen
@ 2015-08-11 13:54       ` Jiri Olsa
  2015-08-11 17:00         ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 13:54 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 06:38:05AM -0700, Andi Kleen wrote:
> On Tue, Aug 11, 2015 at 03:25:32PM +0200, Jiri Olsa wrote:
> > On Fri, Aug 07, 2015 at 06:06:18PM -0700, Andi Kleen wrote:
> > > From: Andi Kleen <ak@linux.intel.com>
> > > 
> > > TopDown needs to multiply events by constants (for example
> > > the CPU Pipeline Width) to get the correct results.
> > > The kernel needs to export this factor.
> > > 
> > > Today *.scale is only used to scale down metrics (divide), for example
> > > to scale bytes to MB.
> > > 
> > > Repurpose negative scale to mean scaling up, that is multiplying.
> > > Implement the code for this in perf stat.
> > > 
> > > Signed-off-by: Andi Kleen <ak@linux.intel.com>
> > > ---
> > >  tools/perf/builtin-stat.c | 27 +++++++++++++++++++--------
> > >  1 file changed, 19 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> > > index ea5298a..2590c75 100644
> > > --- a/tools/perf/builtin-stat.c
> > > +++ b/tools/perf/builtin-stat.c
> > > @@ -179,6 +179,17 @@ static inline int nsec_counter(struct perf_evsel *evsel)
> > >  	return 0;
> > >  }
> > >  
> > > +static double scale_val(struct perf_evsel *counter, u64 val)
> > > +{
> > > +	double uval = val;
> > > +
> > > +	if (counter->scale < 0)
> > > +		uval = val * (-counter->scale);
> > > +	else if (counter->scale)
> > > +		uval = val / counter->scale;
> > 
> > hum, do you change the scale logic? the current scale > 0 works like:
> > 
> > 	uval = val * counter->scale;
> 
> Yes, I define negative scales to mean "multiply by". See the description of the kernel
> patch for more details.

how about the existing scale attributes, like in the RAPL code?

jirka

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-11 13:40         ` Andi Kleen
@ 2015-08-11 14:39           ` Jiri Olsa
  2015-08-11 16:59             ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 14:39 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 06:40:27AM -0700, Andi Kleen wrote:
> On Tue, Aug 11, 2015 at 03:24:27PM +0200, Jiri Olsa wrote:
> > On Tue, Aug 11, 2015 at 06:14:57AM -0700, Andi Kleen wrote:
> > > > Which attribute parsing is failing for you?
> > > 
> > > The new .agg-per-core attribute I added later in the series.
> > > I think it will happen to any not-yet-known attribute.
> > 
> > an alias can contain only terms defined in the formats directory,
> > and the *.XXX attribute parsing does not return an error code
> > 
> > I can't see the failure, please give me an example
> 
> Apply the kernel patch that adds several .agg-per-core attributes,
> then try to use any cpu/.../ event:
> 
> % perf stat -e cpu/event=0x3c/ true
> invalid or unsupported event: 'cpu/event=0x3c/'
> 
> because the PMU parsing bailed out.

ugh, right, the new attribute won't be recognized..

how about recognizing an attribute by checking whether the part before
the '.' is an existing file, rather than matching a list of known
suffixes, like in the attached patch

jirka


---
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index d4b0e6454bc6..937ecc35a60e 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -258,21 +258,23 @@ static int perf_pmu__new_alias(struct list_head *list, char *dir, char *name, FI
 	return __perf_pmu__new_alias(list, dir, name, NULL, buf);
 }
 
-static inline bool pmu_alias_info_file(char *name)
+static inline bool pmu_alias_attr_file(char *dir, char *name)
 {
-	size_t len;
-
-	len = strlen(name);
-	if (len > 5 && !strcmp(name + len - 5, ".unit"))
-		return true;
-	if (len > 6 && !strcmp(name + len - 6, ".scale"))
-		return true;
-	if (len > 8 && !strcmp(name + len - 8, ".per-pkg"))
-		return true;
-	if (len > 9 && !strcmp(name + len - 9, ".snapshot"))
-		return true;
+	bool ret = false;
+	struct stat st;
+	char *path, *s;
 
-	return false;
+	if (asprintf(&path, "%s/%s", dir, name) == -1)
+		return false;
+
+	s = strrchr(path, '.');
+	if (s) {
+		*s = 0;
+		ret = !stat(path, &st);
+	}
+
+	free(path);
+	return ret;
 }
 
 /*
@@ -300,7 +302,7 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
 		/*
 		 * skip info files parsed in perf_pmu__new_alias()
 		 */
-		if (pmu_alias_info_file(name))
+		if (pmu_alias_attr_file(dir, name))
 			continue;
 
 		snprintf(path, PATH_MAX, "%s/%s", dir, name);
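
For example, with a hypothetical events directory containing

  cpu-cycles
  cpu-cycles.scale
  foo.agg-per-core

cpu-cycles.scale is treated as an attribute because the prefix file
cpu-cycles exists, and foo.agg-per-core is handled the same way as long
as foo exists, without the suffix having to be known in advance.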

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error
  2015-08-11 14:39           ` Jiri Olsa
@ 2015-08-11 16:59             ` Andi Kleen
  0 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 16:59 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, Andi Kleen, acme, jolsa, linux-kernel, eranian,
	namhyung, peterz, mingo

> how about recognizing an attribute by checking whether the part before
> the '.' is an existing file, rather than matching a list of known
> suffixes, like in the attached patch

Fine too, though my patch is simpler, works well enough, and also
handles other cases.

-Andi

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-11 13:54       ` Jiri Olsa
@ 2015-08-11 17:00         ` Andi Kleen
  2015-08-11 17:13           ` Jiri Olsa
  0 siblings, 1 reply; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 17:00 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, Andi Kleen, acme, jolsa, linux-kernel, eranian,
	namhyung, peterz, mingo

> how about the existing scale attributes, like in the RAPL code?

I'm using the existing scale attribute, but I need a multiplication,
not a division. That is why negative scale was redefined to mean
multiplication.
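
On the kernel side that only means exporting the factor with a minus
sign, e.g. (the value and event name are illustrative here, following
the existing EVENT_ATTR_STR pattern in the x86 PMU code):

/* Hypothetical sketch of the kernel side: export the pipeline width
 * as a negative .scale value so the tool knows to multiply. */
EVENT_ATTR_STR(topdown-total-slots.scale, td_total_slots_scale, "-4");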

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-11 17:00         ` Andi Kleen
@ 2015-08-11 17:13           ` Jiri Olsa
  2015-08-11 17:17             ` Andi Kleen
  0 siblings, 1 reply; 22+ messages in thread
From: Jiri Olsa @ 2015-08-11 17:13 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, acme, jolsa, linux-kernel, eranian, namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 07:00:37PM +0200, Andi Kleen wrote:
> > how about the existing scale attributes, like in the RAPL code?
> 
> I'm using the existing scale attribute, but I need a multiplication,
> not a division. That is why negative scale was redefined to mean
> multiplication.

your new perf tool code (perf/top-down-2 branch) run over the RAPL counter:

[root@krava perf]# ./perf stat -e 'power/energy-cores/' -I 1000 -a
#           time             counts   unit events
     1.000096151 21606019212309954560.00 Joules power/energy-cores/                                         
     2.000284710 3411476717733150720.00 Joules power/energy-cores/                                         
     3.000455216 12621337955705815040.00 Joules power/energy-cores/                                         
     4.000543075 6444651066767179776.00 Joules power/energy-cores/                                         
^C     4.144246923 1705738358866575360.00 Joules power/energy-cores/   
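
presumably because the positive RAPL scale is now divided by instead of
multiplied with; a quick standalone check (2^-32 Joules per count is the
scale the RAPL PMU exports, the raw count below is made up):

#include <stdio.h>

/* The RAPL .scale file holds a small positive fraction that the old
 * code multiplied by; dividing instead inflates the result by 2^64. */
int main(void)
{
	double scale = 2.3283064365386962890625e-10;	/* 2^-32 J/count */
	unsigned long long val = 21606019212ULL;	/* example raw count */

	printf("multiply: %.2f Joules\n", (double)val * scale);  /* ~5 J */
	printf("divide:   %.2e Joules\n", (double)val / scale);  /* ~9e19 */
	return 0;
}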


jirka

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH 2/9] perf, tools, stat: Support up-scaling of events
  2015-08-11 17:13           ` Jiri Olsa
@ 2015-08-11 17:17             ` Andi Kleen
  0 siblings, 0 replies; 22+ messages in thread
From: Andi Kleen @ 2015-08-11 17:17 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, Andi Kleen, acme, jolsa, linux-kernel, eranian,
	namhyung, peterz, mingo

On Tue, Aug 11, 2015 at 07:13:41PM +0200, Jiri Olsa wrote:
> On Tue, Aug 11, 2015 at 07:00:37PM +0200, Andi Kleen wrote:
> > > how about the existing scale attributes, like in the RAPL code?
> > 
> > I'm using the existing scale attribute, but I need a multiplication,
> > not a division. That is why negative scale was redefined to mean
> > multiplication.
> 
> your new perf tool code (perf/top-down-2 branch) run over the RAPL counter:

Thanks, I'll look at it. Perhaps I can also use a fractional scale
instead of the negative scale.
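
i.e. one option (just a sketch, not a committed design, keeping a
single multiply semantics for every event):

/* Fraction-scale alternative: one semantics for all events,
 * uval = val * scale.  RAPL keeps its fractional scale, and TopDown
 * could export the pipeline width as a plain positive value
 * (hypothetical .scale contents, e.g. "4" instead of "-4"). */
static double apply_scale(double scale, unsigned long long val)
{
	return scale != 0 ? (double)val * scale : (double)val;
}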

-Andi

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2015-08-11 17:17 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-08-08  1:06 Add top down metrics to perf stat Andi Kleen
2015-08-08  1:06 ` [PATCH 1/9] perf, tools: Dont stop PMU parsing on alias parse error Andi Kleen
2015-08-11 13:07   ` Jiri Olsa
2015-08-11 13:14     ` Andi Kleen
2015-08-11 13:24       ` Jiri Olsa
2015-08-11 13:40         ` Andi Kleen
2015-08-11 14:39           ` Jiri Olsa
2015-08-11 16:59             ` Andi Kleen
2015-08-08  1:06 ` [PATCH 2/9] perf, tools, stat: Support up-scaling of events Andi Kleen
2015-08-11 13:25   ` Jiri Olsa
2015-08-11 13:38     ` Andi Kleen
2015-08-11 13:54       ` Jiri Olsa
2015-08-11 17:00         ` Andi Kleen
2015-08-11 17:13           ` Jiri Olsa
2015-08-11 17:17             ` Andi Kleen
2015-08-08  1:06 ` [PATCH 3/9] perf, tools, stat: Basic support for TopDown in perf stat Andi Kleen
2015-08-08  1:06 ` [PATCH 4/9] perf, tools, stat: Add computation of TopDown formulas Andi Kleen
2015-08-08  1:06 ` [PATCH 5/9] x86, perf: Support sysfs files depending on SMT status Andi Kleen
2015-08-08  1:06 ` [PATCH 6/9] x86, perf: Add Top Down events to Intel Core Andi Kleen
2015-08-08  1:06 ` [PATCH 7/9] x86, perf: Add Top Down events to Intel Atom Andi Kleen
2015-08-08  1:06 ` [PATCH 8/9] perf, tools, stat: Add extra output of counter values with -v Andi Kleen
2015-08-08  1:06 ` [PATCH 9/9] perf, tools, stat: Force --per-core mode for .agg-per-core aliases Andi Kleen
