* [PATCH 0/9] perf: consolidate all the open counters loops
@ 2012-10-11 4:25 David Ahern
2012-10-11 4:25 ` [PATCH 1/9] perf python: add ui stubs file David Ahern
` (8 more replies)
0 siblings, 9 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
ACME was a little slow today (ACME Component Mgmt Env, that is), so I managed
to add perf-stat to the list and do a decent amount of testing. This series
consolidates all of the open counters loops into a single common one.
David Ahern (9):
perf python: add ui stubs file
perf top: make use of perf_record_opts
perf evlist: introduce open counters method
perf top: use the new perf_evlist__open_counters method
perf record: use the new perf_evlist__open_counters method
perf stat: move user options to perf_record_opts
perf evlist: add stat unique code to open_counters method
perf stat: move to perf_evlist__open_counters
perf evsel: remove perf_evsel__open_per_cpu
tools/perf/builtin-record.c | 109 +---------------
tools/perf/builtin-stat.c | 240 ++++++++++++++++--------------------
tools/perf/builtin-top.c | 142 ++++++---------------
tools/perf/util/evlist.c | 139 ++++++++++++++++++++-
tools/perf/util/evlist.h | 4 +
tools/perf/util/evsel.c | 6 -
tools/perf/util/evsel.h | 2 -
tools/perf/util/python-ext-sources | 1 +
tools/perf/util/top.c | 20 +--
tools/perf/util/top.h | 9 +-
10 files changed, 303 insertions(+), 369 deletions(-)
--
1.7.10.1
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/9] perf python: add ui stubs file
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 2/9] perf top: make use of perf_record_opts David Ahern
` (7 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Add stderr-based implementations of the ui_xxxx functions for the python
library. Needed for patch 3, which consolidates the open counters method.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/util/python-ext-sources | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/perf/util/python-ext-sources b/tools/perf/util/python-ext-sources
index 2133628..8a45370 100644
--- a/tools/perf/util/python-ext-sources
+++ b/tools/perf/util/python-ext-sources
@@ -19,3 +19,4 @@ util/debugfs.c
util/rblist.c
util/strlist.c
../../lib/rbtree.c
+util/ui_stubs.c
--
1.7.10.1
* [PATCH 2/9] perf top: make use of perf_record_opts
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
2012-10-11 4:25 ` [PATCH 1/9] perf python: add ui stubs file David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 3/9] perf evlist: introduce open counters method David Ahern
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Change the top code to use the perf_record_opts struct. This is a stepping
stone toward consolidating the open counters code.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-top.c | 84 ++++++++++++++++++++++++----------------------
tools/perf/util/top.c | 20 +++++------
tools/perf/util/top.h | 9 +----
3 files changed, 54 insertions(+), 59 deletions(-)
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index fb9da71..33c3825 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -591,7 +591,7 @@ static void *display_thread_tui(void *arg)
* via --uid.
*/
list_for_each_entry(pos, &top->evlist->entries, node)
- pos->hists.uid_filter_str = top->target.uid_str;
+ pos->hists.uid_filter_str = top->opts.target.uid_str;
perf_evlist__tui_browse_hists(top->evlist, help,
perf_top__sort_new_samples,
@@ -891,7 +891,7 @@ static void perf_top__start_counters(struct perf_top *top)
struct perf_evsel *counter;
struct perf_evlist *evlist = top->evlist;
- if (top->group)
+ if (top->opts.group)
perf_evlist__set_leader(evlist);
list_for_each_entry(counter, &evlist->entries, node) {
@@ -899,10 +899,10 @@ static void perf_top__start_counters(struct perf_top *top)
attr->sample_type = PERF_SAMPLE_IP | PERF_SAMPLE_TID;
- if (top->freq) {
+ if (top->opts.freq) {
attr->sample_type |= PERF_SAMPLE_PERIOD;
attr->freq = 1;
- attr->sample_freq = top->freq;
+ attr->sample_freq = top->opts.freq;
}
if (evlist->nr_entries > 1) {
@@ -910,7 +910,7 @@ static void perf_top__start_counters(struct perf_top *top)
attr->read_format |= PERF_FORMAT_ID;
}
- if (perf_target__has_cpu(&top->target))
+ if (perf_target__has_cpu(&top->opts.target))
attr->sample_type |= PERF_SAMPLE_CPU;
if (symbol_conf.use_callchain)
@@ -918,12 +918,12 @@ static void perf_top__start_counters(struct perf_top *top)
attr->mmap = 1;
attr->comm = 1;
- attr->inherit = top->inherit;
+ attr->inherit = !top->opts.no_inherit;
fallback_missing_features:
- if (top->exclude_guest_missing)
+ if (top->opts.exclude_guest_missing)
attr->exclude_guest = attr->exclude_host = 0;
retry_sample_id:
- attr->sample_id_all = top->sample_id_all_missing ? 0 : 1;
+ attr->sample_id_all = top->opts.sample_id_all_missing ? 0 : 1;
try_again:
if (perf_evsel__open(counter, top->evlist->cpus,
top->evlist->threads) < 0) {
@@ -933,17 +933,17 @@ try_again:
ui__error_paranoid();
goto out_err;
} else if (err == EINVAL) {
- if (!top->exclude_guest_missing &&
+ if (!top->opts.exclude_guest_missing &&
(attr->exclude_guest || attr->exclude_host)) {
pr_debug("Old kernel, cannot exclude "
"guest or host samples.\n");
- top->exclude_guest_missing = true;
+ top->opts.exclude_guest_missing = true;
goto fallback_missing_features;
- } else if (!top->sample_id_all_missing) {
+ } else if (!top->opts.sample_id_all_missing) {
/*
* Old kernel, no attr->sample_id_type_all field
*/
- top->sample_id_all_missing = true;
+ top->opts.sample_id_all_missing = true;
goto retry_sample_id;
}
}
@@ -988,7 +988,7 @@ try_again:
}
}
- if (perf_evlist__mmap(evlist, top->mmap_pages, false) < 0) {
+ if (perf_evlist__mmap(evlist, top->opts.mmap_pages, false) < 0) {
ui__error("Failed to mmap with %d (%s)\n",
errno, strerror(errno));
goto out_err;
@@ -1034,7 +1034,7 @@ static int __cmd_top(struct perf_top *top)
if (ret)
goto out_delete;
- if (perf_target__has_task(&top->target))
+ if (perf_target__has_task(&top->opts.target))
perf_event__synthesize_thread_map(&top->tool, top->evlist->threads,
perf_event__process,
&top->session->host_machine);
@@ -1168,11 +1168,13 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
struct perf_top top = {
.count_filter = 5,
.delay_secs = 2,
- .freq = 4000, /* 4 KHz */
- .mmap_pages = 128,
.sym_pcnt_filter = 5,
- .target = {
- .uses_mmap = true,
+ .opts = {
+ .freq = 4000, /* 4 KHz */
+ .mmap_pages = 128,
+ .target = {
+ .uses_mmap = true,
+ },
},
};
char callchain_default_opt[] = "fractal,0.5,callee";
@@ -1180,21 +1182,21 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
OPT_CALLBACK('e', "event", &top.evlist, "event",
"event selector. use 'perf list' to list available events",
parse_events_option),
- OPT_INTEGER('c', "count", &top.default_interval,
- "event period to sample"),
- OPT_STRING('p', "pid", &top.target.pid, "pid",
+ OPT_U64('c', "count", &top.opts.default_interval,
+ "event period to sample"),
+ OPT_STRING('p', "pid", &top.opts.target.pid, "pid",
"profile events on existing process id"),
- OPT_STRING('t', "tid", &top.target.tid, "tid",
+ OPT_STRING('t', "tid", &top.opts.target.tid, "tid",
"profile events on existing thread id"),
- OPT_BOOLEAN('a', "all-cpus", &top.target.system_wide,
+ OPT_BOOLEAN('a', "all-cpus", &top.opts.target.system_wide,
"system-wide collection from all CPUs"),
- OPT_STRING('C', "cpu", &top.target.cpu_list, "cpu",
+ OPT_STRING('C', "cpu", &top.opts.target.cpu_list, "cpu",
"list of cpus to monitor"),
OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
"file", "vmlinux pathname"),
OPT_BOOLEAN('K', "hide_kernel_symbols", &top.hide_kernel_symbols,
"hide kernel symbols"),
- OPT_UINTEGER('m', "mmap-pages", &top.mmap_pages, "number of mmap data pages"),
+ OPT_UINTEGER('m', "mmap-pages", &top.opts.mmap_pages, "number of mmap data pages"),
OPT_INTEGER('r', "realtime", &top.realtime_prio,
"collect data with this RT SCHED_FIFO priority"),
OPT_INTEGER('d', "delay", &top.delay_secs,
@@ -1203,15 +1205,15 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
"dump the symbol table used for profiling"),
OPT_INTEGER('f', "count-filter", &top.count_filter,
"only display functions with more events than this"),
- OPT_BOOLEAN('g', "group", &top.group,
+ OPT_BOOLEAN('g', "group", &top.opts.group,
"put the counters into a counter group"),
- OPT_BOOLEAN('i', "inherit", &top.inherit,
+ OPT_BOOLEAN('i', "inherit", &top.opts.no_inherit,
"child tasks inherit counters"),
OPT_STRING(0, "sym-annotate", &top.sym_filter, "symbol name",
"symbol to annotate"),
OPT_BOOLEAN('z', "zero", &top.zero,
"zero history across updates"),
- OPT_INTEGER('F', "freq", &top.freq,
+ OPT_UINTEGER('F', "freq", &top.opts.freq,
"profile at this frequency"),
OPT_INTEGER('E', "entries", &top.print_entries,
"display this many functions"),
@@ -1243,7 +1245,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
"Display raw encoding of assembly instructions (default)"),
OPT_STRING('M', "disassembler-style", &disassembler_style, "disassembler style",
"Specify disassembler style (e.g. -M intel for intel syntax)"),
- OPT_STRING('u', "uid", &top.target.uid_str, "user", "user to profile"),
+ OPT_STRING('u', "uid", &top.opts.target.uid_str, "user", "user to profile"),
OPT_END()
};
const char * const top_usage[] = {
@@ -1273,27 +1275,27 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
setup_browser(false);
- status = perf_target__validate(&top.target);
+ status = perf_target__validate(&top.opts.target);
if (status) {
- perf_target__strerror(&top.target, status, errbuf, BUFSIZ);
+ perf_target__strerror(&top.opts.target, status, errbuf, BUFSIZ);
ui__warning("%s", errbuf);
}
- status = perf_target__parse_uid(&top.target);
+ status = perf_target__parse_uid(&top.opts.target);
if (status) {
int saved_errno = errno;
- perf_target__strerror(&top.target, status, errbuf, BUFSIZ);
+ perf_target__strerror(&top.opts.target, status, errbuf, BUFSIZ);
ui__error("%s", errbuf);
status = -saved_errno;
goto out_delete_evlist;
}
- if (perf_target__none(&top.target))
- top.target.system_wide = true;
+ if (perf_target__none(&top.opts.target))
+ top.opts.target.system_wide = true;
- if (perf_evlist__create_maps(top.evlist, &top.target) < 0)
+ if (perf_evlist__create_maps(top.evlist, &top.opts.target) < 0)
usage_with_options(top_usage, options);
if (!top.evlist->nr_entries &&
@@ -1310,10 +1312,10 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
/*
* User specified count overrides default frequency.
*/
- if (top.default_interval)
- top.freq = 0;
- else if (top.freq) {
- top.default_interval = top.freq;
+ if (top.opts.default_interval)
+ top.opts.freq = 0;
+ else if (top.opts.freq) {
+ top.opts.default_interval = top.opts.freq;
} else {
ui__error("frequency and count are zero, aborting\n");
exit(EXIT_FAILURE);
@@ -1324,7 +1326,7 @@ int cmd_top(int argc, const char **argv, const char *prefix __maybe_unused)
* Fill in the ones not specifically initialized via -c:
*/
if (!pos->attr.sample_period)
- pos->attr.sample_period = top.default_interval;
+ pos->attr.sample_period = top.opts.default_interval;
}
top.sym_evsel = perf_evlist__first(top.evlist);
diff --git a/tools/perf/util/top.c b/tools/perf/util/top.c
index 884dde9..deea8fd 100644
--- a/tools/perf/util/top.c
+++ b/tools/perf/util/top.c
@@ -61,31 +61,31 @@ size_t perf_top__header_snprintf(struct perf_top *top, char *bf, size_t size)
struct perf_evsel *first = perf_evlist__first(top->evlist);
ret += SNPRINTF(bf + ret, size - ret, "%" PRIu64 "%s ",
(uint64_t)first->attr.sample_period,
- top->freq ? "Hz" : "");
+ top->opts.freq ? "Hz" : "");
}
ret += SNPRINTF(bf + ret, size - ret, "%s", perf_evsel__name(top->sym_evsel));
ret += SNPRINTF(bf + ret, size - ret, "], ");
- if (top->target.pid)
+ if (top->opts.target.pid)
ret += SNPRINTF(bf + ret, size - ret, " (target_pid: %s",
- top->target.pid);
- else if (top->target.tid)
+ top->opts.target.pid);
+ else if (top->opts.target.tid)
ret += SNPRINTF(bf + ret, size - ret, " (target_tid: %s",
- top->target.tid);
- else if (top->target.uid_str != NULL)
+ top->opts.target.tid);
+ else if (top->opts.target.uid_str != NULL)
ret += SNPRINTF(bf + ret, size - ret, " (uid: %s",
- top->target.uid_str);
+ top->opts.target.uid_str);
else
ret += SNPRINTF(bf + ret, size - ret, " (all");
- if (top->target.cpu_list)
+ if (top->opts.target.cpu_list)
ret += SNPRINTF(bf + ret, size - ret, ", CPU%s: %s)",
top->evlist->cpus->nr > 1 ? "s" : "",
- top->target.cpu_list);
+ top->opts.target.cpu_list);
else {
- if (top->target.tid)
+ if (top->opts.target.tid)
ret += SNPRINTF(bf + ret, size - ret, ")");
else
ret += SNPRINTF(bf + ret, size - ret, ", %d CPU%s)",
diff --git a/tools/perf/util/top.h b/tools/perf/util/top.h
index 86ff1b1..9728740 100644
--- a/tools/perf/util/top.h
+++ b/tools/perf/util/top.h
@@ -14,7 +14,7 @@ struct perf_session;
struct perf_top {
struct perf_tool tool;
struct perf_evlist *evlist;
- struct perf_target target;
+ struct perf_record_opts opts;
/*
* Symbols will be added here in perf_event__process_sample and will
* get out after decayed.
@@ -24,24 +24,17 @@ struct perf_top {
u64 exact_samples;
u64 guest_us_samples, guest_kernel_samples;
int print_entries, count_filter, delay_secs;
- int freq;
bool hide_kernel_symbols, hide_user_symbols, zero;
bool use_tui, use_stdio;
bool sort_has_symbols;
bool dont_use_callchains;
bool kptr_restrict_warned;
bool vmlinux_warned;
- bool inherit;
- bool group;
- bool sample_id_all_missing;
- bool exclude_guest_missing;
bool dump_symtab;
struct hist_entry *sym_filter_entry;
struct perf_evsel *sym_evsel;
struct perf_session *session;
struct winsize winsize;
- unsigned int mmap_pages;
- int default_interval;
int realtime_prio;
int sym_pcnt_filter;
const char *sym_filter;
--
1.7.10.1
* [PATCH 3/9] perf evlist: introduce open counters method
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
2012-10-11 4:25 ` [PATCH 1/9] perf python: add ui stubs file David Ahern
2012-10-11 4:25 ` [PATCH 2/9] perf top: make use of perf_record_opts David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 4/9] perf top: use the new perf_evlist__open_counters method David Ahern
` (5 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
This is a superset of the open counters code in perf-top and perf-record,
combining their retry handling and error handling. It should be functionally
equivalent.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/util/evlist.c | 127 +++++++++++++++++++++++++++++++++++++++++++++-
tools/perf/util/evlist.h | 3 ++
2 files changed, 129 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index a41dc4a..bce2f58 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -15,7 +15,7 @@
#include "evlist.h"
#include "evsel.h"
#include <unistd.h>
-
+#include "debug.h"
#include "parse-events.h"
#include <sys/mman.h>
@@ -838,3 +838,128 @@ size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp)
return printed + fprintf(fp, "\n");;
}
+
+int perf_evlist__open_counters(struct perf_evlist *evlist,
+ struct perf_record_opts *opts)
+{
+ struct perf_evsel *pos;
+ int rc = 0;
+
+ list_for_each_entry(pos, &evlist->entries, node) {
+ struct perf_event_attr *attr = &pos->attr;
+
+ /*
+ * Carried over from perf-record:
+ * Check if parse_single_tracepoint_event has already asked for
+ * PERF_SAMPLE_TIME.
+ *
+ * XXX this is kludgy but short term fix for problems introduced by
+ * eac23d1c that broke 'perf script' by having different sample_types
+ * when using multiple tracepoint events when we use a perf binary
+ * that tries to use sample_id_all on an older kernel.
+ *
+ * We need to move counter creation to perf_session, support
+ * different sample_types, etc.
+ */
+ bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;
+
+fallback_missing_features:
+ if (opts->exclude_guest_missing)
+ attr->exclude_guest = attr->exclude_host = 0;
+retry_sample_id:
+ attr->sample_id_all = opts->sample_id_all_missing ? 0 : 1;
+try_again:
+ if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) {
+ int err = errno;
+
+ if (err == EPERM || err == EACCES) {
+ ui__error_paranoid();
+ rc = -err;
+ goto out;
+ } else if (err == ENODEV && opts->target.cpu_list) {
+ pr_err("No such device - did you specify"
+ " an out-of-range profile CPU?\n");
+ rc = -err;
+ goto out;
+ } else if (err == EINVAL) {
+ if (!opts->exclude_guest_missing &&
+ (attr->exclude_guest || attr->exclude_host)) {
+ pr_debug("Old kernel, cannot exclude "
+ "guest or host samples.\n");
+ opts->exclude_guest_missing = true;
+ goto fallback_missing_features;
+ } else if (!opts->sample_id_all_missing) {
+ /*
+ * Old kernel, no attr->sample_id_type_all field
+ */
+ opts->sample_id_all_missing = true;
+ if (!opts->sample_time &&
+ !opts->raw_samples &&
+ !time_needed)
+ attr->sample_type &= ~PERF_SAMPLE_TIME;
+ goto retry_sample_id;
+ }
+ }
+
+ /*
+ * If it's cycles then fall back to hrtimer
+ * based cpu-clock-tick sw counter, which
+ * is always available even if no PMU support:
+ *
+ * PPC returns ENXIO until 2.6.37 (behavior changed
+ * with commit b0a873e).
+ */
+ if ((err == ENOENT || err == ENXIO) &&
+ (attr->type == PERF_TYPE_HARDWARE) &&
+ (attr->config == PERF_COUNT_HW_CPU_CYCLES)) {
+
+ if (verbose)
+ ui__warning("Cycles event not supported,\n"
+ "trying to fall back to cpu-clock-ticks\n");
+
+ attr->type = PERF_TYPE_SOFTWARE;
+ attr->config = PERF_COUNT_SW_CPU_CLOCK;
+ if (pos->name) {
+ free(pos->name);
+ pos->name = NULL;
+ }
+ goto try_again;
+ }
+
+ if (err == ENOENT) {
+ ui__error("The %s event is not supported.\n",
+ perf_evsel__name(pos));
+ rc = -err;
+ goto out;
+ } else if (err == EMFILE) {
+ ui__error("Too many events are opened.\n"
+ "Try again after reducing the number of events\n");
+ rc = -err;
+ goto out;
+ }
+
+ ui__error("sys_perf_event_open() syscall returned with "
+ "%d (%s) for event %s. /bin/dmesg may provide"
+ "additional information.\n",
+ err, strerror(err), perf_evsel__name(pos));
+
+#if defined(__i386__) || defined(__x86_64__)
+ if ((attr->type == PERF_TYPE_HARDWARE) &&
+ (err == EOPNOTSUPP)) {
+ pr_err("No hardware sampling interrupt available."
+ " No APIC? If so then you can boot the kernel"
+ " with the \"lapic\" boot parameter to"
+ " force-enable it.\n");
+ rc = -err;
+ goto out;
+ }
+#endif
+
+ pr_err("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
+ rc = -err;
+ goto out;
+ }
+ }
+out:
+ return rc;
+}
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 56003f7..270e546 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -135,4 +135,7 @@ static inline struct perf_evsel *perf_evlist__last(struct perf_evlist *evlist)
}
size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp);
+
+int perf_evlist__open_counters(struct perf_evlist *evlist,
+ struct perf_record_opts *opts);
#endif /* __PERF_EVLIST_H */
--
1.7.10.1
* [PATCH 4/9] perf top: use the new perf_evlist__open_counters method
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (2 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 3/9] perf evlist: introduce open counters method David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 5/9] perf record: " David Ahern
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Remove the open counters code, with all of its retry and error handling, in
favor of the new perf_evlist__open_counters method, which is based on the
top code.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-top.c | 70 ++--------------------------------------------
1 file changed, 3 insertions(+), 67 deletions(-)
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 33c3825..2ffc32e 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -919,75 +919,11 @@ static void perf_top__start_counters(struct perf_top *top)
attr->mmap = 1;
attr->comm = 1;
attr->inherit = !top->opts.no_inherit;
-fallback_missing_features:
- if (top->opts.exclude_guest_missing)
- attr->exclude_guest = attr->exclude_host = 0;
-retry_sample_id:
- attr->sample_id_all = top->opts.sample_id_all_missing ? 0 : 1;
-try_again:
- if (perf_evsel__open(counter, top->evlist->cpus,
- top->evlist->threads) < 0) {
- int err = errno;
-
- if (err == EPERM || err == EACCES) {
- ui__error_paranoid();
- goto out_err;
- } else if (err == EINVAL) {
- if (!top->opts.exclude_guest_missing &&
- (attr->exclude_guest || attr->exclude_host)) {
- pr_debug("Old kernel, cannot exclude "
- "guest or host samples.\n");
- top->opts.exclude_guest_missing = true;
- goto fallback_missing_features;
- } else if (!top->opts.sample_id_all_missing) {
- /*
- * Old kernel, no attr->sample_id_type_all field
- */
- top->opts.sample_id_all_missing = true;
- goto retry_sample_id;
- }
- }
- /*
- * If it's cycles then fall back to hrtimer
- * based cpu-clock-tick sw counter, which
- * is always available even if no PMU support:
- */
- if ((err == ENOENT || err == ENXIO) &&
- (attr->type == PERF_TYPE_HARDWARE) &&
- (attr->config == PERF_COUNT_HW_CPU_CYCLES)) {
-
- if (verbose)
- ui__warning("Cycles event not supported,\n"
- "trying to fall back to cpu-clock-ticks\n");
-
- attr->type = PERF_TYPE_SOFTWARE;
- attr->config = PERF_COUNT_SW_CPU_CLOCK;
- if (counter->name) {
- free(counter->name);
- counter->name = NULL;
- }
- goto try_again;
- }
-
- if (err == ENOENT) {
- ui__error("The %s event is not supported.\n",
- perf_evsel__name(counter));
- goto out_err;
- } else if (err == EMFILE) {
- ui__error("Too many events are opened.\n"
- "Try again after reducing the number of events\n");
- goto out_err;
- }
-
- ui__error("The sys_perf_event_open() syscall "
- "returned with %d (%s). /bin/dmesg "
- "may provide additional information.\n"
- "No CONFIG_PERF_EVENTS=y kernel support "
- "configured?\n", err, strerror(err));
- goto out_err;
- }
}
+ if (perf_evlist__open_counters(evlist, &top->opts) != 0)
+ goto out_err;
+
if (perf_evlist__mmap(evlist, top->opts.mmap_pages, false) < 0) {
ui__error("Failed to mmap with %d (%s)\n",
errno, strerror(errno));
--
1.7.10.1
* [PATCH 5/9] perf record: use the new perf_evlist__open_counters method
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (3 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 4/9] perf top: use the new perf_evlist__open_counters method David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 6/9] perf stat: move user options to perf_record_opts David Ahern
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Remove the open counters code, with all of its retry and error handling, in
favor of the new perf_evlist__open_counters method, which is based on the
existing record code.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-record.c | 109 +------------------------------------------
1 file changed, 2 insertions(+), 107 deletions(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 73b5d7f..b9dcc01 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -224,7 +224,6 @@ static bool perf_evlist__equal(struct perf_evlist *evlist,
static int perf_record__open(struct perf_record *rec)
{
- struct perf_evsel *pos;
struct perf_evlist *evlist = rec->evlist;
struct perf_session *session = rec->session;
struct perf_record_opts *opts = &rec->opts;
@@ -235,113 +234,9 @@ static int perf_record__open(struct perf_record *rec)
if (opts->group)
perf_evlist__set_leader(evlist);
- list_for_each_entry(pos, &evlist->entries, node) {
- struct perf_event_attr *attr = &pos->attr;
- /*
- * Check if parse_single_tracepoint_event has already asked for
- * PERF_SAMPLE_TIME.
- *
- * XXX this is kludgy but short term fix for problems introduced by
- * eac23d1c that broke 'perf script' by having different sample_types
- * when using multiple tracepoint events when we use a perf binary
- * that tries to use sample_id_all on an older kernel.
- *
- * We need to move counter creation to perf_session, support
- * different sample_types, etc.
- */
- bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;
-
-fallback_missing_features:
- if (opts->exclude_guest_missing)
- attr->exclude_guest = attr->exclude_host = 0;
-retry_sample_id:
- attr->sample_id_all = opts->sample_id_all_missing ? 0 : 1;
-try_again:
- if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) {
- int err = errno;
-
- if (err == EPERM || err == EACCES) {
- ui__error_paranoid();
- rc = -err;
- goto out;
- } else if (err == ENODEV && opts->target.cpu_list) {
- pr_err("No such device - did you specify"
- " an out-of-range profile CPU?\n");
- rc = -err;
- goto out;
- } else if (err == EINVAL) {
- if (!opts->exclude_guest_missing &&
- (attr->exclude_guest || attr->exclude_host)) {
- pr_debug("Old kernel, cannot exclude "
- "guest or host samples.\n");
- opts->exclude_guest_missing = true;
- goto fallback_missing_features;
- } else if (!opts->sample_id_all_missing) {
- /*
- * Old kernel, no attr->sample_id_type_all field
- */
- opts->sample_id_all_missing = true;
- if (!opts->sample_time && !opts->raw_samples && !time_needed)
- attr->sample_type &= ~PERF_SAMPLE_TIME;
-
- goto retry_sample_id;
- }
- }
-
- /*
- * If it's cycles then fall back to hrtimer
- * based cpu-clock-tick sw counter, which
- * is always available even if no PMU support.
- *
- * PPC returns ENXIO until 2.6.37 (behavior changed
- * with commit b0a873e).
- */
- if ((err == ENOENT || err == ENXIO)
- && attr->type == PERF_TYPE_HARDWARE
- && attr->config == PERF_COUNT_HW_CPU_CYCLES) {
-
- if (verbose)
- ui__warning("The cycles event is not supported, "
- "trying to fall back to cpu-clock-ticks\n");
- attr->type = PERF_TYPE_SOFTWARE;
- attr->config = PERF_COUNT_SW_CPU_CLOCK;
- if (pos->name) {
- free(pos->name);
- pos->name = NULL;
- }
- goto try_again;
- }
-
- if (err == ENOENT) {
- ui__error("The %s event is not supported.\n",
- perf_evsel__name(pos));
- rc = -err;
- goto out;
- }
-
- printf("\n");
- error("sys_perf_event_open() syscall returned with %d "
- "(%s) for event %s. /bin/dmesg may provide "
- "additional information.\n",
- err, strerror(err), perf_evsel__name(pos));
-
-#if defined(__i386__) || defined(__x86_64__)
- if (attr->type == PERF_TYPE_HARDWARE &&
- err == EOPNOTSUPP) {
- pr_err("No hardware sampling interrupt available."
- " No APIC? If so then you can boot the kernel"
- " with the \"lapic\" boot parameter to"
- " force-enable it.\n");
- rc = -err;
- goto out;
- }
-#endif
-
- pr_err("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
- rc = -err;
+ rc = perf_evlist__open_counters(evlist, opts);
+ if (rc != 0)
goto out;
- }
- }
if (perf_evlist__apply_filters(evlist)) {
error("failed to set filter with %d (%s)\n", errno,
--
1.7.10.1
* [PATCH 6/9] perf stat: move user options to perf_record_opts
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (4 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 5/9] perf record: " David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 7/9] perf evlist: add stat unique code to open_counters method David Ahern
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
This is required for perf-stat to use perf_evlist__open_counters. It also
moves opts to a stack variable.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-stat.c | 161 +++++++++++++++++++++++++++------------------
1 file changed, 97 insertions(+), 64 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 93b9011..9727d217 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -66,12 +66,8 @@
static struct perf_evlist *evsel_list;
-static struct perf_target target = {
- .uid = UINT_MAX,
-};
static int run_count = 1;
-static bool no_inherit = false;
static bool scale = true;
static bool no_aggr = false;
static pid_t child_pid = -1;
@@ -81,7 +77,6 @@ static bool big_num = true;
static int big_num_opt = -1;
static const char *csv_sep = NULL;
static bool csv_output = false;
-static bool group = false;
static FILE *output = NULL;
static volatile int done = 0;
@@ -102,14 +97,16 @@ static void perf_evsel__free_stat_priv(struct perf_evsel *evsel)
evsel->priv = NULL;
}
-static inline struct cpu_map *perf_evsel__cpus(struct perf_evsel *evsel)
+static inline struct cpu_map *perf_evsel__cpus(struct perf_evsel *evsel,
+ struct perf_target *target)
{
- return (evsel->cpus && !target.cpu_list) ? evsel->cpus : evsel_list->cpus;
+ return (evsel->cpus && !target->cpu_list) ? evsel->cpus : evsel_list->cpus;
}
-static inline int perf_evsel__nr_cpus(struct perf_evsel *evsel)
+static inline int perf_evsel__nr_cpus(struct perf_evsel *evsel,
+ struct perf_target *target)
{
- return perf_evsel__cpus(evsel)->nr;
+ return perf_evsel__cpus(evsel, target)->nr;
}
static struct stats runtime_nsecs_stats[MAX_NR_CPUS];
@@ -126,8 +123,10 @@ static struct stats runtime_dtlb_cache_stats[MAX_NR_CPUS];
static struct stats walltime_nsecs_stats;
static int create_perf_stat_counter(struct perf_evsel *evsel,
- struct perf_evsel *first)
+ struct perf_evsel *first,
+ struct perf_record_opts *opts)
{
+ struct perf_target *target = &opts->target;
struct perf_event_attr *attr = &evsel->attr;
bool exclude_guest_missing = false;
int ret;
@@ -136,20 +135,22 @@ static int create_perf_stat_counter(struct perf_evsel *evsel,
attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
PERF_FORMAT_TOTAL_TIME_RUNNING;
- attr->inherit = !no_inherit;
+ attr->inherit = !opts->no_inherit;
retry:
if (exclude_guest_missing)
evsel->attr.exclude_guest = evsel->attr.exclude_host = 0;
- if (perf_target__has_cpu(&target)) {
- ret = perf_evsel__open_per_cpu(evsel, perf_evsel__cpus(evsel));
+ if (perf_target__has_cpu(target)) {
+ ret = perf_evsel__open_per_cpu(evsel,
+ perf_evsel__cpus(evsel, target));
if (ret)
goto check_ret;
return 0;
}
- if (!perf_target__has_task(&target) && (!group || evsel == first)) {
+ if (!perf_target__has_task(target) &&
+ (!opts->group || evsel == first)) {
attr->disabled = 1;
attr->enable_on_exec = 1;
}
@@ -218,13 +219,15 @@ static void update_shadow_stats(struct perf_evsel *counter, u64 *count)
* Read out the results of a single counter:
* aggregate counts across CPUs in system-wide mode
*/
-static int read_counter_aggr(struct perf_evsel *counter)
+static int read_counter_aggr(struct perf_evsel *counter,
+ struct perf_record_opts *opts)
{
+ struct perf_target *target = &opts->target;
struct perf_stat *ps = counter->priv;
u64 *count = counter->counts->aggr.values;
int i;
- if (__perf_evsel__read(counter, perf_evsel__nr_cpus(counter),
+ if (__perf_evsel__read(counter, perf_evsel__nr_cpus(counter, target),
evsel_list->threads->nr, scale) < 0)
return -1;
@@ -248,12 +251,14 @@ static int read_counter_aggr(struct perf_evsel *counter)
* Read out the results of a single counter:
* do not aggregate counts across CPUs in system-wide mode
*/
-static int read_counter(struct perf_evsel *counter)
+static int read_counter(struct perf_evsel *counter,
+ struct perf_record_opts *opts)
{
+ struct cpu_map *cmap = perf_evsel__cpus(counter, &opts->target);
u64 *count;
int cpu;
- for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
+ for (cpu = 0; cpu < cmap->nr; cpu++) {
if (__perf_evsel__read_on_cpu(counter, cpu, 0, scale) < 0)
return -1;
@@ -265,10 +270,13 @@ static int read_counter(struct perf_evsel *counter)
return 0;
}
-static int run_perf_stat(int argc __maybe_unused, const char **argv)
+static int run_perf_stat(int argc __maybe_unused,
+ const char **argv,
+ struct perf_record_opts *opts)
{
unsigned long long t0, t1;
struct perf_evsel *counter, *first;
+ struct cpu_map *cmap;
int status = 0;
int child_ready_pipe[2], go_pipe[2];
const bool forks = (argc > 0);
@@ -312,7 +320,7 @@ static int run_perf_stat(int argc __maybe_unused, const char **argv)
exit(-1);
}
- if (perf_target__none(&target))
+ if (perf_target__none(&opts->target))
evsel_list->threads->map[0] = child_pid;
/*
@@ -325,13 +333,13 @@ static int run_perf_stat(int argc __maybe_unused, const char **argv)
close(child_ready_pipe[0]);
}
- if (group)
+ if (opts->group)
perf_evlist__set_leader(evsel_list);
first = perf_evlist__first(evsel_list);
list_for_each_entry(counter, &evsel_list->entries, node) {
- if (create_perf_stat_counter(counter, first) < 0) {
+ if (create_perf_stat_counter(counter, first, opts) < 0) {
/*
* PPC returns ENXIO for HW counters until 2.6.37
* (behavior changed with commit b0a873e).
@@ -350,7 +358,7 @@ static int run_perf_stat(int argc __maybe_unused, const char **argv)
error("You may not have permission to collect %sstats.\n"
"\t Consider tweaking"
" /proc/sys/kernel/perf_event_paranoid or running as root.",
- target.system_wide ? "system-wide " : "");
+ opts->target.system_wide ? "system-wide " : "");
} else {
error("open_counter returned with %d (%s). "
"/bin/dmesg may provide additional information.\n",
@@ -391,13 +399,15 @@ static int run_perf_stat(int argc __maybe_unused, const char **argv)
if (no_aggr) {
list_for_each_entry(counter, &evsel_list->entries, node) {
- read_counter(counter);
- perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter), 1);
+ cmap = perf_evsel__cpus(counter, &opts->target);
+ read_counter(counter, opts);
+ perf_evsel__close_fd(counter, cmap->nr, 1);
}
} else {
list_for_each_entry(counter, &evsel_list->entries, node) {
- read_counter_aggr(counter);
- perf_evsel__close_fd(counter, perf_evsel__nr_cpus(counter),
+ cmap = perf_evsel__cpus(counter, &opts->target);
+ read_counter_aggr(counter, opts);
+ perf_evsel__close_fd(counter, cmap->nr,
evsel_list->threads->nr);
}
}
@@ -426,16 +436,20 @@ static void print_noise(struct perf_evsel *evsel, double avg)
print_noise_pct(stddev_stats(&ps->res_stats[0]), avg);
}
-static void nsec_printout(int cpu, struct perf_evsel *evsel, double avg)
+static void nsec_printout(int cpu, struct perf_evsel *evsel,
+ double avg, struct perf_record_opts *opts)
{
double msecs = avg / 1e6;
char cpustr[16] = { '\0', };
const char *fmt = csv_output ? "%s%.6f%s%s" : "%s%18.6f%s%-25s";
+ struct cpu_map *cmap;
- if (no_aggr)
+ if (no_aggr) {
+ cmap = perf_evsel__cpus(evsel, &opts->target);
sprintf(cpustr, "CPU%*d%s",
csv_output ? 0 : -4,
- perf_evsel__cpus(evsel)->map[cpu], csv_sep);
+ cmap->map[cpu], csv_sep);
+ }
fprintf(output, fmt, cpustr, msecs, csv_sep, perf_evsel__name(evsel));
@@ -631,11 +645,13 @@ static void print_ll_cache_misses(int cpu,
fprintf(output, " of all LL-cache hits ");
}
-static void abs_printout(int cpu, struct perf_evsel *evsel, double avg)
+static void abs_printout(int cpu, struct perf_evsel *evsel,
+ double avg, struct perf_record_opts *opts)
{
double total, ratio = 0.0;
char cpustr[16] = { '\0', };
const char *fmt;
+ struct cpu_map *cmap;
if (csv_output)
fmt = "%s%.0f%s%s";
@@ -644,11 +660,12 @@ static void abs_printout(int cpu, struct perf_evsel *evsel, double avg)
else
fmt = "%s%18.0f%s%-25s";
- if (no_aggr)
+ if (no_aggr) {
+ cmap = perf_evsel__cpus(evsel, &opts->target);
sprintf(cpustr, "CPU%*d%s",
csv_output ? 0 : -4,
- perf_evsel__cpus(evsel)->map[cpu], csv_sep);
- else
+ cmap->map[cpu], csv_sep);
+ } else
cpu = 0;
fprintf(output, fmt, cpustr, avg, csv_sep, perf_evsel__name(evsel));
@@ -755,7 +772,8 @@ static void abs_printout(int cpu, struct perf_evsel *evsel, double avg)
* Print out the results of a single counter:
* aggregated counts in system-wide mode
*/
-static void print_counter_aggr(struct perf_evsel *counter)
+static void print_counter_aggr(struct perf_evsel *counter,
+ struct perf_record_opts *opts)
{
struct perf_stat *ps = counter->priv;
double avg = avg_stats(&ps->res_stats[0]);
@@ -777,9 +795,9 @@ static void print_counter_aggr(struct perf_evsel *counter)
}
if (nsec_counter(counter))
- nsec_printout(-1, counter, avg);
+ nsec_printout(-1, counter, avg, opts);
else
- abs_printout(-1, counter, avg);
+ abs_printout(-1, counter, avg, opts);
print_noise(counter, avg);
@@ -803,19 +821,21 @@ static void print_counter_aggr(struct perf_evsel *counter)
* Print out the results of a single counter:
* does not use aggregated count in system-wide
*/
-static void print_counter(struct perf_evsel *counter)
+static void print_counter(struct perf_evsel *counter,
+ struct perf_record_opts *opts)
{
+ struct cpu_map *cmap = perf_evsel__cpus(counter, &opts->target);
u64 ena, run, val;
int cpu;
- for (cpu = 0; cpu < perf_evsel__nr_cpus(counter); cpu++) {
+ for (cpu = 0; cpu < cmap->nr; cpu++) {
val = counter->counts->cpu[cpu].val;
ena = counter->counts->cpu[cpu].ena;
run = counter->counts->cpu[cpu].run;
if (run == 0 || ena == 0) {
fprintf(output, "CPU%*d%s%*s%s%*s",
csv_output ? 0 : -4,
- perf_evsel__cpus(counter)->map[cpu], csv_sep,
+ cmap->map[cpu], csv_sep,
csv_output ? 0 : 18,
counter->supported ? CNTR_NOT_COUNTED : CNTR_NOT_SUPPORTED,
csv_sep,
@@ -831,9 +851,9 @@ static void print_counter(struct perf_evsel *counter)
}
if (nsec_counter(counter))
- nsec_printout(cpu, counter, val);
+ nsec_printout(cpu, counter, val, opts);
else
- abs_printout(cpu, counter, val);
+ abs_printout(cpu, counter, val, opts);
if (!csv_output) {
print_noise(counter, 1.0);
@@ -846,9 +866,11 @@ static void print_counter(struct perf_evsel *counter)
}
}
-static void print_stat(int argc, const char **argv)
+static void print_stat(int argc, const char **argv,
+ struct perf_record_opts *opts)
{
struct perf_evsel *counter;
+ struct perf_target *target = &opts->target;
int i;
fflush(stdout);
@@ -856,14 +878,14 @@ static void print_stat(int argc, const char **argv)
if (!csv_output) {
fprintf(output, "\n");
fprintf(output, " Performance counter stats for ");
- if (!perf_target__has_task(&target)) {
+ if (!perf_target__has_task(target)) {
fprintf(output, "\'%s", argv[0]);
for (i = 1; i < argc; i++)
fprintf(output, " %s", argv[i]);
- } else if (target.pid)
- fprintf(output, "process id \'%s", target.pid);
+ } else if (target->pid)
+ fprintf(output, "process id \'%s", target->pid);
else
- fprintf(output, "thread id \'%s", target.tid);
+ fprintf(output, "thread id \'%s", target->tid);
fprintf(output, "\'");
if (run_count > 1)
@@ -873,10 +895,10 @@ static void print_stat(int argc, const char **argv)
if (no_aggr) {
list_for_each_entry(counter, &evsel_list->entries, node)
- print_counter(counter);
+ print_counter(counter, opts);
} else {
list_for_each_entry(counter, &evsel_list->entries, node)
- print_counter_aggr(counter);
+ print_counter_aggr(counter, opts);
}
if (!csv_output) {
@@ -1073,21 +1095,31 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
sync_run = false;
int output_fd = 0;
const char *output_name = NULL;
+
+ struct perf_record_opts opts = {
+ .target = {
+ .uid = UINT_MAX,
+ },
+ .no_inherit = false,
+ .group = false,
+ };
+ struct perf_target *target = &opts.target;
+
const struct option options[] = {
OPT_CALLBACK('e', "event", &evsel_list, "event",
"event selector. use 'perf list' to list available events",
parse_events_option),
OPT_CALLBACK(0, "filter", &evsel_list, "filter",
"event filter", parse_filter),
- OPT_BOOLEAN('i', "no-inherit", &no_inherit,
+ OPT_BOOLEAN('i', "no-inherit", &opts.no_inherit,
"child tasks do not inherit counters"),
- OPT_STRING('p', "pid", &target.pid, "pid",
+ OPT_STRING('p', "pid", &target->pid, "pid",
"stat events on existing process id"),
- OPT_STRING('t', "tid", &target.tid, "tid",
+ OPT_STRING('t', "tid", &target->tid, "tid",
"stat events on existing thread id"),
- OPT_BOOLEAN('a', "all-cpus", &target.system_wide,
+ OPT_BOOLEAN('a', "all-cpus", &target->system_wide,
"system-wide collection from all CPUs"),
- OPT_BOOLEAN('g', "group", &group,
+ OPT_BOOLEAN('g', "group", &opts.group,
"put the counters into a counter group"),
OPT_BOOLEAN('c', "scale", &scale, "scale/normalize counters"),
OPT_INCR('v', "verbose", &verbose,
@@ -1103,7 +1135,7 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
OPT_CALLBACK_NOOPT('B', "big-num", NULL, NULL,
"print large numbers with thousands\' separators",
stat__set_big_num),
- OPT_STRING('C', "cpu", &target.cpu_list, "cpu",
+ OPT_STRING('C', "cpu", &target->cpu_list, "cpu",
"list of cpus to monitor in system-wide"),
OPT_BOOLEAN('A', "no-aggr", &no_aggr, "disable CPU count aggregation"),
OPT_STRING('x', "field-separator", &csv_sep, "separator",
@@ -1187,13 +1219,13 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
} else if (big_num_opt == 0) /* User passed --no-big-num */
big_num = false;
- if (!argc && !perf_target__has_task(&target))
+ if (!argc && !perf_target__has_task(target))
usage_with_options(stat_usage, options);
if (run_count <= 0)
usage_with_options(stat_usage, options);
/* no_aggr, cgroup are for system-wide only */
- if ((no_aggr || nr_cgroups) && !perf_target__has_cpu(&target)) {
+ if ((no_aggr || nr_cgroups) && !perf_target__has_cpu(target)) {
fprintf(stderr, "both cgroup and no-aggregation "
"modes only available in system-wide mode\n");
@@ -1203,12 +1235,12 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
if (add_default_attributes())
goto out;
- perf_target__validate(&target);
+ perf_target__validate(target);
- if (perf_evlist__create_maps(evsel_list, &target) < 0) {
- if (perf_target__has_task(&target))
+ if (perf_evlist__create_maps(evsel_list, target) < 0) {
+ if (perf_target__has_task(target))
pr_err("Problems finding threads of monitor\n");
- if (perf_target__has_cpu(&target))
+ if (perf_target__has_cpu(target))
perror("failed to parse CPUs map");
usage_with_options(stat_usage, options);
@@ -1216,8 +1248,9 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
}
list_for_each_entry(pos, &evsel_list->entries, node) {
+ struct cpu_map *cmap = perf_evsel__cpus(pos, target);
if (perf_evsel__alloc_stat_priv(pos) < 0 ||
- perf_evsel__alloc_counts(pos, perf_evsel__nr_cpus(pos)) < 0)
+ perf_evsel__alloc_counts(pos, cmap->nr) < 0)
goto out_free_fd;
}
@@ -1241,11 +1274,11 @@ int cmd_stat(int argc, const char **argv, const char *prefix __maybe_unused)
if (sync_run)
sync();
- status = run_perf_stat(argc, argv);
+ status = run_perf_stat(argc, argv, &opts);
}
if (status != -1)
- print_stat(argc, argv);
+ print_stat(argc, argv, &opts);
out_free_fd:
list_for_each_entry(pos, &evsel_list->entries, node)
perf_evsel__free_stat_priv(pos);
--
1.7.10.1
* [PATCH 7/9] perf evlist: add stat unique code to open_counters method
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (5 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 6/9] perf stat: move user options to perf_record_opts David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 8/9] perf stat: move to perf_evlist__open_counters David Ahern
2012-10-11 4:25 ` [PATCH 9/9] perf evsel: remove perf_evsel__open_per_cpu David Ahern
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
The main addition is an argument that lets the open loop keep going
past expected open failures, such as events the kernel does not support.
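The skip logic being added can be sketched in isolation. The following is a self-contained illustration of the pattern, not the real perf code: `struct fake_evsel`, its `open_errno` field, and `open_counters()` are invented stand-ins for perf's evsel list and `perf_evlist__open_counters()`.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for perf's evsel -- not the real structure. */
struct fake_evsel {
	const char *name;
	int open_errno;		/* 0 means the open would succeed */
	bool supported;
};

/* Errors that mean "this event type just isn't available here". */
static bool is_soft_failure(int err)
{
	return err == EINVAL || err == ENOSYS || err == ENXIO ||
	       err == ENOENT || err == EOPNOTSUPP;
}

/* Returns 0 if every open succeeded or was tolerated, -errno on a hard error. */
static int open_counters(struct fake_evsel *evsels, int n, bool continue_on_fail)
{
	for (int i = 0; i < n; i++) {
		int err = evsels[i].open_errno;

		if (err == 0) {
			evsels[i].supported = true;
			continue;
		}
		if (continue_on_fail && is_soft_failure(err)) {
			fprintf(stderr, "%s event is not supported by the kernel.\n",
				evsels[i].name);
			evsels[i].supported = false;
			continue;	/* keep going; stat prints "<not counted>" later */
		}
		return -err;		/* hard failure (e.g. EPERM): abort the loop */
	}
	return 0;
}
```

With `continue_on_fail` set (the perf-stat case) an unsupported event is merely flagged; with it clear (record/top) the same failure aborts the whole open.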
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-record.c | 2 +-
tools/perf/builtin-top.c | 2 +-
tools/perf/util/evlist.c | 16 ++++++++++++++--
tools/perf/util/evlist.h | 3 ++-
4 files changed, 18 insertions(+), 5 deletions(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index b9dcc01..663ccc8 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -234,7 +234,7 @@ static int perf_record__open(struct perf_record *rec)
if (opts->group)
perf_evlist__set_leader(evlist);
- rc = perf_evlist__open_counters(evlist, opts);
+ rc = perf_evlist__open_counters(evlist, opts, false);
if (rc != 0)
goto out;
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 2ffc32e..2c3b3c7 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -921,7 +921,7 @@ static void perf_top__start_counters(struct perf_top *top)
attr->inherit = !top->opts.no_inherit;
}
- if (perf_evlist__open_counters(evlist, &top->opts) != 0)
+ if (perf_evlist__open_counters(evlist, &top->opts, false) != 0)
goto out_err;
if (perf_evlist__mmap(evlist, top->opts.mmap_pages, false) < 0) {
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index bce2f58..fa0daac 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -840,7 +840,8 @@ size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp)
}
int perf_evlist__open_counters(struct perf_evlist *evlist,
- struct perf_record_opts *opts)
+ struct perf_record_opts *opts,
+ bool continue_on_fail)
{
struct perf_evsel *pos;
int rc = 0;
@@ -872,6 +873,16 @@ try_again:
if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) {
int err = errno;
+ if (continue_on_fail &&
+ (err == EINVAL || err == ENOSYS || err == ENXIO ||
+ err == ENOENT || err == EOPNOTSUPP)) {
+ if (verbose)
+ ui__warning("%s event is not supported by the kernel.\n",
+ perf_evsel__name(pos));
+ pos->supported = false;
+ continue;
+ }
+
if (err == EPERM || err == EACCES) {
ui__error_paranoid();
rc = -err;
@@ -958,7 +969,8 @@ try_again:
pr_err("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
rc = -err;
goto out;
- }
+ } else
+ pos->supported = true;
}
out:
return rc;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 270e546..0747b6f 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -137,5 +137,6 @@ static inline struct perf_evsel *perf_evlist__last(struct perf_evlist *evlist)
size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp);
int perf_evlist__open_counters(struct perf_evlist *evlist,
- struct perf_record_opts *opts);
+ struct perf_record_opts *opts,
+ bool continue_on_fail);
#endif /* __PERF_EVLIST_H */
--
1.7.10.1
* [PATCH 8/9] perf stat: move to perf_evlist__open_counters
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (6 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 7/9] perf evlist: add stat unique code to open_counters method David Ahern
@ 2012-10-11 4:25 ` David Ahern
2012-10-11 4:25 ` [PATCH 9/9] perf evsel: remove perf_evsel__open_per_cpu David Ahern
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Removes a lot of duplicated code by moving to the common
open method.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/builtin-stat.c | 103 ++++++++++-----------------------------------
1 file changed, 22 insertions(+), 81 deletions(-)
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 9727d217..affbada 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -122,55 +122,6 @@ static struct stats runtime_itlb_cache_stats[MAX_NR_CPUS];
static struct stats runtime_dtlb_cache_stats[MAX_NR_CPUS];
static struct stats walltime_nsecs_stats;
-static int create_perf_stat_counter(struct perf_evsel *evsel,
- struct perf_evsel *first,
- struct perf_record_opts *opts)
-{
- struct perf_target *target = &opts->target;
- struct perf_event_attr *attr = &evsel->attr;
- bool exclude_guest_missing = false;
- int ret;
-
- if (scale)
- attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
- PERF_FORMAT_TOTAL_TIME_RUNNING;
-
- attr->inherit = !opts->no_inherit;
-
-retry:
- if (exclude_guest_missing)
- evsel->attr.exclude_guest = evsel->attr.exclude_host = 0;
-
- if (perf_target__has_cpu(target)) {
- ret = perf_evsel__open_per_cpu(evsel,
- perf_evsel__cpus(evsel, target));
- if (ret)
- goto check_ret;
- return 0;
- }
-
- if (!perf_target__has_task(target) &&
- (!opts->group || evsel == first)) {
- attr->disabled = 1;
- attr->enable_on_exec = 1;
- }
-
- ret = perf_evsel__open_per_thread(evsel, evsel_list->threads);
- if (!ret)
- return 0;
- /* fall through */
-check_ret:
- if (ret && errno == EINVAL) {
- if (!exclude_guest_missing &&
- (evsel->attr.exclude_guest || evsel->attr.exclude_host)) {
- pr_debug("Old kernel, cannot exclude "
- "guest or host samples.\n");
- exclude_guest_missing = true;
- goto retry;
- }
- }
- return ret;
-}
/*
* Does the counter have nsecs as a unit?
@@ -277,6 +228,7 @@ static int run_perf_stat(int argc __maybe_unused,
unsigned long long t0, t1;
struct perf_evsel *counter, *first;
struct cpu_map *cmap;
+ struct perf_target *target = &opts->target;
int status = 0;
int child_ready_pipe[2], go_pipe[2];
const bool forks = (argc > 0);
@@ -320,7 +272,7 @@ static int run_perf_stat(int argc __maybe_unused,
exit(-1);
}
- if (perf_target__none(&opts->target))
+ if (perf_target__none(target))
evsel_list->threads->map[0] = child_pid;
/*
@@ -339,38 +291,27 @@ static int run_perf_stat(int argc __maybe_unused,
first = perf_evlist__first(evsel_list);
list_for_each_entry(counter, &evsel_list->entries, node) {
- if (create_perf_stat_counter(counter, first, opts) < 0) {
- /*
- * PPC returns ENXIO for HW counters until 2.6.37
- * (behavior changed with commit b0a873e).
- */
- if (errno == EINVAL || errno == ENOSYS ||
- errno == ENOENT || errno == EOPNOTSUPP ||
- errno == ENXIO) {
- if (verbose)
- ui__warning("%s event is not supported by the kernel.\n",
- perf_evsel__name(counter));
- counter->supported = false;
- continue;
- }
-
- if (errno == EPERM || errno == EACCES) {
- error("You may not have permission to collect %sstats.\n"
- "\t Consider tweaking"
- " /proc/sys/kernel/perf_event_paranoid or running as root.",
- opts->target.system_wide ? "system-wide " : "");
- } else {
- error("open_counter returned with %d (%s). "
- "/bin/dmesg may provide additional information.\n",
- errno, strerror(errno));
- }
- if (child_pid != -1)
- kill(child_pid, SIGTERM);
-
- pr_err("Not all events could be opened.\n");
- return -1;
+ struct perf_event_attr *attr = &counter->attr;
+
+ if (scale)
+ attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
+ PERF_FORMAT_TOTAL_TIME_RUNNING;
+
+ attr->inherit = !opts->no_inherit;
+
+ if (perf_target__none(target) &&
+ (!opts->group || counter == first)) {
+ attr->disabled = 1;
+ attr->enable_on_exec = 1;
}
- counter->supported = true;
+ }
+
+ if (perf_evlist__open_counters(evsel_list, opts, true) != 0) {
+ if (child_pid != -1)
+ kill(child_pid, SIGTERM);
+
+ pr_err("Not all events could be opened.\n");
+ return -1;
}
if (perf_evlist__apply_filters(evsel_list)) {
--
1.7.10.1
* [PATCH 9/9] perf evsel: remove perf_evsel__open_per_cpu
2012-10-11 4:25 [PATCH 0/9] perf: consolidate all the open counters loops David Ahern
` (7 preceding siblings ...)
2012-10-11 4:25 ` [PATCH 8/9] perf stat: move to perf_evlist__open_counters David Ahern
@ 2012-10-11 4:25 ` David Ahern
8 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-11 4:25 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
No longer needed with perf-stat converted to perf_evlist__open_counters.
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/util/evsel.c | 6 ------
tools/perf/util/evsel.h | 2 --
2 files changed, 8 deletions(-)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index ffdd94e..ab3d1c8 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -774,12 +774,6 @@ int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
return __perf_evsel__open(evsel, cpus, threads);
}
-int perf_evsel__open_per_cpu(struct perf_evsel *evsel,
- struct cpu_map *cpus)
-{
- return __perf_evsel__open(evsel, cpus, &empty_thread_map.map);
-}
-
int perf_evsel__open_per_thread(struct perf_evsel *evsel,
struct thread_map *threads)
{
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 3ead0d5..bf32de4 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -121,8 +121,6 @@ void perf_evsel__close_fd(struct perf_evsel *evsel, int ncpus, int nthreads);
int perf_evsel__set_filter(struct perf_evsel *evsel, int ncpus, int nthreads,
const char *filter);
-int perf_evsel__open_per_cpu(struct perf_evsel *evsel,
- struct cpu_map *cpus);
int perf_evsel__open_per_thread(struct perf_evsel *evsel,
struct thread_map *threads);
int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
--
1.7.10.1
* [PATCH 3/9] perf evlist: introduce open counters method
2012-10-29 16:31 [PATCH 0/9 v2] perf: consolidate all the open counters loops David Ahern
@ 2012-10-29 16:31 ` David Ahern
0 siblings, 0 replies; 11+ messages in thread
From: David Ahern @ 2012-10-29 16:31 UTC (permalink / raw)
To: acme, linux-kernel; +Cc: mingo, peterz, fweisbec, David Ahern
Superset of the open counters code in perf-top and perf-record -
combining retry handling and error handling. Should be functionally
equivalent.
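The combined retry flow boils down to a small state machine: attempt the open, and on specific errnos either drop a feature bit and retry, or swap the event for a software fallback. A self-contained sketch of that pattern follows; `try_open()`, `struct fake_attr`, and the two-value event enum are invented for illustration (the real code calls `perf_evsel__open()` with a full `perf_event_attr`).

```c
#include <errno.h>
#include <stdbool.h>

enum ev_type { EV_HW_CYCLES, EV_SW_CPU_CLOCK };

struct fake_attr {
	enum ev_type type;
	bool exclude_guest;	/* attr bit older kernels reject with EINVAL */
	bool sample_id_all;	/* ditto */
};

/* Stub for sys_perf_event_open(): pretends to run on an old kernel with
 * no hardware PMU. Returns 0 on success, -1 with errno set otherwise. */
static int try_open(const struct fake_attr *attr)
{
	if (attr->exclude_guest || attr->sample_id_all) {
		errno = EINVAL;		/* old kernel: unknown attr bits */
		return -1;
	}
	if (attr->type == EV_HW_CYCLES) {
		errno = ENOENT;		/* no PMU: hardware events unavailable */
		return -1;
	}
	return 0;
}

/* Mirrors the fallback order in the patch: feature bits first, then the
 * hrtimer-based cpu-clock fallback for cycles, then a hard failure. */
static int open_with_fallbacks(struct fake_attr *attr)
{
retry:
	if (try_open(attr) == 0)
		return 0;

	if (errno == EINVAL && attr->exclude_guest) {
		attr->exclude_guest = false;	/* fallback_missing_features */
		goto retry;
	}
	if (errno == EINVAL && attr->sample_id_all) {
		attr->sample_id_all = false;	/* retry_sample_id */
		goto retry;
	}
	if ((errno == ENOENT || errno == ENXIO) && attr->type == EV_HW_CYCLES) {
		attr->type = EV_SW_CPU_CLOCK;	/* hrtimer-based fallback */
		goto retry;
	}
	return -errno;				/* hard failure */
}
```

Starting from a hardware cycles event with both feature bits set, the sketch sheds `exclude_guest`, then `sample_id_all`, then falls back to the software clock, exactly the ordering the consolidated loop preserves from perf-record and perf-top.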
Signed-off-by: David Ahern <dsahern@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
tools/perf/util/evlist.c | 131 +++++++++++++++++++++++++++++++++++++++++++++-
tools/perf/util/evlist.h | 3 ++
2 files changed, 133 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index a41dc4a..b24ebc1 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -15,7 +15,7 @@
#include "evlist.h"
#include "evsel.h"
#include <unistd.h>
-
+#include "debug.h"
#include "parse-events.h"
#include <sys/mman.h>
@@ -838,3 +838,132 @@ size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp)
return printed + fprintf(fp, "\n");;
}
+
+int perf_evlist__open_counters(struct perf_evlist *evlist,
+ struct perf_record_opts *opts)
+{
+ struct perf_evsel *pos;
+ int rc = 0;
+
+ list_for_each_entry(pos, &evlist->entries, node) {
+ struct perf_event_attr *attr = &pos->attr;
+
+ /*
+ * Carried over from perf-record:
+ * Check if parse_single_tracepoint_event has already asked for
+ * PERF_SAMPLE_TIME.
+ *
+ * XXX this is kludgy but short term fix for problems introduced by
+ * eac23d1c that broke 'perf script' by having different sample_types
+ * when using multiple tracepoint events when we use a perf binary
+ * that tries to use sample_id_all on an older kernel.
+ *
+ * We need to move counter creation to perf_session, support
+ * different sample_types, etc.
+ */
+ bool time_needed = attr->sample_type & PERF_SAMPLE_TIME;
+
+fallback_missing_features:
+ if (opts->exclude_guest_missing)
+ attr->exclude_guest = attr->exclude_host = 0;
+retry_sample_id:
+ attr->sample_id_all = opts->sample_id_all_missing ? 0 : 1;
+try_again:
+ if (perf_evsel__open(pos, evlist->cpus, evlist->threads) < 0) {
+ int err = errno;
+
+ if (err == EPERM || err == EACCES) {
+ ui__error_paranoid();
+ rc = -err;
+ goto out;
+ } else if (err == ENODEV && opts->target.cpu_list) {
+ pr_err("No such device - did you specify"
+ " an out-of-range profile CPU?\n");
+ rc = -err;
+ goto out;
+ } else if (err == EINVAL) {
+ if (!opts->exclude_guest_missing &&
+ (attr->exclude_guest || attr->exclude_host)) {
+ pr_debug("Old kernel, cannot exclude "
+ "guest or host samples.\n");
+ opts->exclude_guest_missing = true;
+ goto fallback_missing_features;
+ } else if (!opts->sample_id_all_missing) {
+ /*
+ * Old kernel, no attr->sample_id_type_all field
+ */
+ opts->sample_id_all_missing = true;
+ if (!opts->sample_time &&
+ !opts->raw_samples &&
+ !time_needed)
+ attr->sample_type &= ~PERF_SAMPLE_TIME;
+ goto retry_sample_id;
+ }
+ }
+
+ /*
+ * If it's cycles then fall back to hrtimer
+ * based cpu-clock-tick sw counter, which
+ * is always available even if no PMU support:
+ *
+ * PPC returns ENXIO until 2.6.37 (behavior changed
+ * with commit b0a873e).
+ */
+ if ((err == ENOENT || err == ENXIO) &&
+ (attr->type == PERF_TYPE_HARDWARE) &&
+ (attr->config == PERF_COUNT_HW_CPU_CYCLES)) {
+
+ if (verbose)
+ ui__warning("Cycles event not supported,\n"
+ "trying to fall back to cpu-clock-ticks\n");
+
+ attr->type = PERF_TYPE_SOFTWARE;
+ attr->config = PERF_COUNT_SW_CPU_CLOCK;
+ if (pos->name) {
+ free(pos->name);
+ pos->name = NULL;
+ }
+ goto try_again;
+ }
+
+ if (err == ENOENT) {
+ ui__error("The %s event is not supported.\n",
+ perf_evsel__name(pos));
+ rc = -err;
+ goto out;
+ } else if (err == EMFILE) {
+ ui__error("Too many events are opened.\n"
+ "Try again after reducing the number of events\n");
+ rc = -err;
+ goto out;
+ } else if ((err == EOPNOTSUPP) && (attr->precise_ip)) {
+ ui__error("\'precise\' request may not be supported. "
+ "Try removing 'p' modifier\n");
+ goto out;
+ }
+
+ ui__error("sys_perf_event_open() syscall returned with "
+ "%d (%s) for event %s. /bin/dmesg may provide"
+ "additional information.\n",
+ err, strerror(err), perf_evsel__name(pos));
+
+#if defined(__i386__) || defined(__x86_64__)
+ if ((attr->type == PERF_TYPE_HARDWARE) &&
+ (err == EOPNOTSUPP)) {
+ pr_err("No hardware sampling interrupt available."
+ " No APIC? If so then you can boot the kernel"
+ " with the \"lapic\" boot parameter to"
+ " force-enable it.\n");
+ rc = -err;
+ goto out;
+ }
+#endif
+
+ pr_err("No CONFIG_PERF_EVENTS=y kernel support configured?\n");
+ rc = -err;
+ goto out;
+ }
+ }
+out:
+ return rc;
+}
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 56003f7..270e546 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -135,4 +135,7 @@ static inline struct perf_evsel *perf_evlist__last(struct perf_evlist *evlist)
}
size_t perf_evlist__fprintf(struct perf_evlist *evlist, FILE *fp);
+
+int perf_evlist__open_counters(struct perf_evlist *evlist,
+ struct perf_record_opts *opts);
#endif /* __PERF_EVLIST_H */
--
1.7.10.1