linux-kernel.vger.kernel.org archive mirror
* Optimize perf stat for large number of events/cpus
@ 2019-11-07 18:16 Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 01/13] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
                   ` (12 more replies)
  0 siblings, 13 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel

[v5: Address review feedback. Split patches.]

This patch kit optimizes perf stat for a large number of events 
on systems with many CPUs and PMUs.

Some profiling shows that most of the overhead comes from the IPIs
the kernel sends to all the target CPUs. We can optimize this by
using sched_setaffinity() to move perf to a target CPU once and then
doing the perf operations for all events on that CPU. This requires
some restructuring, but cuts the setup time quite a bit.
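
To illustrate the idea, here is a minimal standalone sketch (not the
tool's actual code; open_events_batched() and its layout are made up
for illustration):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/perf_event.h>

	/*
	 * Pin to each target CPU once, then open all events for it,
	 * so the kernel can install each event locally instead of
	 * through a cross-CPU IPI.
	 */
	static void open_events_batched(struct perf_event_attr *attrs,
					int nevents, const int *cpus,
					int ncpus, int *fds)
	{
		cpu_set_t orig, set;
		int i, ev;

		sched_getaffinity(0, sizeof(orig), &orig);
		for (i = 0; i < ncpus; i++) {
			CPU_ZERO(&set);
			CPU_SET(cpus[i], &set);
			/* best effort; opening works either way */
			sched_setaffinity(0, sizeof(set), &set);
			for (ev = 0; ev < nevents; ev++)
				fds[ev * ncpus + i] =
					syscall(SYS_perf_event_open,
						&attrs[ev], -1, cpus[i],
						-1, 0);
		}
		sched_setaffinity(0, sizeof(orig), &orig);
	}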

In theory we could go further and parallelize these setups too, but
that would be much more complicated; for now batching per CPU seems
to be sufficient. At some point, with many more cores,
parallelization or a better bulk perf setup API might be needed,
though.

In addition, perf does a lot of redundant /sys accesses when there
are many PMUs, which can also be expensive. This is optimized too.

On a large test case (>700 events with many weak groups) on a 94-CPU
system, the time goes from

real	0m8.607s
user	0m0.550s
sys	0m8.041s

to 

real	0m3.269s
user	0m0.760s
sys	0m1.694s

so shaving ~6 seconds off the system time, at slightly higher cost
in perf stat itself. On a 4-socket system the savings are
even more dramatic:

real	0m15.641s
user	0m0.873s
sys	0m14.729s

to 

real	0m4.493s
user	0m1.578s
sys	0m2.444s

so an ~11s difference in the user-visible setup time.

Also available in 

git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/stat-scale-8

v1: Initial post.
v2: Rebase. Fix some minor issues.
v3: Rebase. Address review feedback. Fix one minor issue.
v4: Modified based on review feedback. Now it maintains
all_cpus per evlist. There is still a need for cpu_index iteration
to get the correct index for indexing the file descriptors.
Fix a bug with unsorted cpu maps; they are now always sorted.
Some cleanups and refactoring.
v5: Split patches. Redo loop iteration again. Fix cpu map
merging for uncore. Remove duplicates from cpumaps. Add unit
tests.

-Andi



* [PATCH v5 01/13] perf pmu: Use file system cache to optimize sysfs access
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 02/13] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

pmu.c does a lot of redundant /sys accesses while parsing aliases
and probing for PMUs. On large systems with a lot of PMUs this
can get expensive (>2s):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 27.25    1.227847           8    160888     16976 openat
 26.42    1.190481           7    164224    164077 stat

Add a cache that remembers whether specific file names exist or not,
which eliminates most of this overhead.

Also optimize some stat() calls into slightly cheaper access() calls.

Resulting in:

  0.18    0.004166           2      1851       305 open
  0.08    0.001970           2       829       622 access

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use single lookup function as API (Jiri)
---
 tools/perf/util/Build     |  1 +
 tools/perf/util/fncache.c | 63 +++++++++++++++++++++++++++++++++++++++
 tools/perf/util/fncache.h |  7 +++++
 tools/perf/util/pmu.c     | 34 +++++++--------------
 tools/perf/util/srccode.c |  9 +-----
 5 files changed, 83 insertions(+), 31 deletions(-)
 create mode 100644 tools/perf/util/fncache.c
 create mode 100644 tools/perf/util/fncache.h

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 39814b1806a6..2c1504fe924c 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -48,6 +48,7 @@ perf-y += header.o
 perf-y += callchain.o
 perf-y += values.o
 perf-y += debug.o
+perf-y += fncache.o
 perf-y += machine.o
 perf-y += map.o
 perf-y += pstack.o
diff --git a/tools/perf/util/fncache.c b/tools/perf/util/fncache.c
new file mode 100644
index 000000000000..5afcd7edbe7a
--- /dev/null
+++ b/tools/perf/util/fncache.c
@@ -0,0 +1,63 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Manage a cache of file names' existence */
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <linux/list.h>
+#include "fncache.h"
+
+struct fncache {
+	struct hlist_node nd;
+	bool res;
+	char name[];
+};
+
+#define FNHSIZE 61
+
+static struct hlist_head fncache_hash[FNHSIZE];
+
+unsigned shash(const unsigned char *s)
+{
+	unsigned h = 0;
+	while (*s)
+		h = 65599 * h + *s++;
+	return h ^ (h >> 16);
+}
+
+static bool lookup_fncache(const char *name, bool *res)
+{
+	int h = shash((const unsigned char *)name) % FNHSIZE;
+	struct fncache *n;
+
+	hlist_for_each_entry (n, &fncache_hash[h], nd) {
+		if (!strcmp(n->name, name)) {
+			*res = n->res;
+			return true;
+		}
+	}
+	return false;
+}
+
+static void update_fncache(const char *name, bool res)
+{
+	struct fncache *n = malloc(sizeof(struct fncache) + strlen(name) + 1);
+	int h = shash((const unsigned char *)name) % FNHSIZE;
+
+	if (!n)
+		return;
+	strcpy(n->name, name);
+	n->res = res;
+	hlist_add_head(&n->nd, &fncache_hash[h]);
+}
+
+/* No LRU, only use when bounded in some other way. */
+bool file_available(const char *name)
+{
+	bool res;
+
+	if (lookup_fncache(name, &res))
+		return res;
+	res = access(name, R_OK) == 0;
+	update_fncache(name, res);
+	return res;
+}
diff --git a/tools/perf/util/fncache.h b/tools/perf/util/fncache.h
new file mode 100644
index 000000000000..fe020beaefb1
--- /dev/null
+++ b/tools/perf/util/fncache.h
@@ -0,0 +1,7 @@
+#ifndef _FCACHE_H
+#define _FCACHE_H 1
+
+unsigned shash(const unsigned char *s);
+bool file_available(const char *name);
+
+#endif
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index adbe97e941dd..81357cc3d59a 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -24,6 +24,7 @@
 #include "pmu-events/pmu-events.h"
 #include "string2.h"
 #include "strbuf.h"
+#include "fncache.h"
 
 struct perf_pmu_format {
 	char *name;
@@ -82,7 +83,6 @@ int perf_pmu__format_parse(char *dir, struct list_head *head)
  */
 static int pmu_format(const char *name, struct list_head *format)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
 
@@ -92,8 +92,8 @@ static int pmu_format(const char *name, struct list_head *format)
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/format", sysfs, name);
 
-	if (stat(path, &st) < 0)
-		return 0;	/* no error if format does not exist */
+	if (!file_available(path))
+		return 0;
 
 	if (perf_pmu__format_parse(path, format))
 		return -1;
@@ -475,7 +475,6 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
  */
 static int pmu_aliases(const char *name, struct list_head *head)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
 
@@ -485,8 +484,8 @@ static int pmu_aliases(const char *name, struct list_head *head)
 	snprintf(path, PATH_MAX,
 		 "%s/bus/event_source/devices/%s/events", sysfs, name);
 
-	if (stat(path, &st) < 0)
-		return 0;	 /* no error if 'events' does not exist */
+	if (!file_available(path))
+		return 0;
 
 	if (pmu_aliases_parse(path, head))
 		return -1;
@@ -525,7 +524,6 @@ static int pmu_alias_terms(struct perf_pmu_alias *alias,
  */
 static int pmu_type(const char *name, __u32 *type)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	FILE *file;
 	int ret = 0;
@@ -537,7 +535,7 @@ static int pmu_type(const char *name, __u32 *type)
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/type", sysfs, name);
 
-	if (stat(path, &st) < 0)
+	if (access(path, R_OK) < 0)
 		return -1;
 
 	file = fopen(path, "r");
@@ -628,14 +626,11 @@ static struct perf_cpu_map *pmu_cpumask(const char *name)
 static bool pmu_is_uncore(const char *name)
 {
 	char path[PATH_MAX];
-	struct perf_cpu_map *cpus;
-	const char *sysfs = sysfs__mountpoint();
+	const char *sysfs;
 
+	sysfs = sysfs__mountpoint();
 	snprintf(path, PATH_MAX, CPUS_TEMPLATE_UNCORE, sysfs, name);
-	cpus = __pmu_cpumask(path);
-	perf_cpu_map__put(cpus);
-
-	return !!cpus;
+	return file_available(path);
 }
 
 /*
@@ -645,7 +640,6 @@ static bool pmu_is_uncore(const char *name)
  */
 static int is_arm_pmu_core(const char *name)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
 
@@ -655,10 +649,7 @@ static int is_arm_pmu_core(const char *name)
 	/* Look for cpu sysfs (specific to arm) */
 	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus",
 				sysfs, name);
-	if (stat(path, &st) == 0)
-		return 1;
-
-	return 0;
+	return file_available(path);
 }
 
 static char *perf_pmu__getcpuid(struct perf_pmu *pmu)
@@ -1528,7 +1519,6 @@ bool pmu_have_event(const char *pname, const char *name)
 
 static FILE *perf_pmu__open_file(struct perf_pmu *pmu, const char *name)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs;
 
@@ -1538,10 +1528,8 @@ static FILE *perf_pmu__open_file(struct perf_pmu *pmu, const char *name)
 
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/%s", sysfs, pmu->name, name);
-
-	if (stat(path, &st) < 0)
+	if (!file_available(path))
 		return NULL;
-
 	return fopen(path, "r");
 }
 
diff --git a/tools/perf/util/srccode.c b/tools/perf/util/srccode.c
index d84ed8b6caaa..c29edaaca863 100644
--- a/tools/perf/util/srccode.c
+++ b/tools/perf/util/srccode.c
@@ -16,6 +16,7 @@
 #include "srccode.h"
 #include "debug.h"
 #include <internal/lib.h> // page_size
+#include "fncache.h"
 
 #define MAXSRCCACHE (32*1024*1024)
 #define MAXSRCFILES     64
@@ -36,14 +37,6 @@ static LIST_HEAD(srcfile_list);
 static long map_total_sz;
 static int num_srcfiles;
 
-static unsigned shash(unsigned char *s)
-{
-	unsigned h = 0;
-	while (*s)
-		h = 65599 * h + *s++;
-	return h ^ (h >> 16);
-}
-
 static int countlines(char *map, int maplen)
 {
 	int numl;
-- 
2.23.0



* [PATCH v5 02/13] perf affinity: Add infrastructure to save/restore affinity
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 01/13] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 03/13] perf cpumap: Maintain cpumaps ordered and without dups Andi Kleen
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

The kernel perf subsystem has to IPI to the target CPU for many
operations. On systems with many CPUs and when managing many events the
overhead can be dominated by lots of IPIs.

An alternative is to set up CPU affinity in the perf tool, then set up
all the events for that CPU, and then move on to the next CPU.

Add some affinity management infrastructure to enable such a model.
It is used in follow-on patches.
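
A minimal usage sketch (the loop body is a placeholder and 'cpus' is
a hypothetical perf_cpu_map; later patches in this series use exactly
this shape):

	struct affinity affinity;
	int i, cpu;

	if (affinity__setup(&affinity) < 0)
		return -1;
	perf_cpu_map__for_each_cpu(cpu, i, cpus) {
		/* migrate perf to 'cpu'; best effort */
		affinity__set(&affinity, cpu);
		/* ... open/close/read all events for 'cpu' here ... */
	}
	/* restore the original affinity */
	affinity__cleanup(&affinity);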

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use linux/bitmap.h functions.
---
 tools/perf/util/Build      |  1 +
 tools/perf/util/affinity.c | 72 ++++++++++++++++++++++++++++++++++++++
 tools/perf/util/affinity.h | 15 ++++++++
 3 files changed, 88 insertions(+)
 create mode 100644 tools/perf/util/affinity.c
 create mode 100644 tools/perf/util/affinity.h

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 2c1504fe924c..c7d4eab017e5 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -76,6 +76,7 @@ perf-y += sort.o
 perf-y += hist.o
 perf-y += util.o
 perf-y += cpumap.o
+perf-y += affinity.o
 perf-y += cputopo.o
 perf-y += cgroup.o
 perf-y += target.o
diff --git a/tools/perf/util/affinity.c b/tools/perf/util/affinity.c
new file mode 100644
index 000000000000..e197b0416f56
--- /dev/null
+++ b/tools/perf/util/affinity.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Manage affinity to optimize IPIs inside the kernel perf API. */
+#define _GNU_SOURCE 1
+#include <sched.h>
+#include <stdlib.h>
+#include <linux/bitmap.h>
+#include "perf.h"
+#include "cpumap.h"
+#include "affinity.h"
+
+static int get_cpu_set_size(void)
+{
+	int sz = cpu__max_cpu() + 8 - 1;
+	/*
+	 * sched_getaffinity doesn't like masks smaller than the kernel.
+	 * Hopefully that's big enough.
+	 */
+	if (sz < 4096)
+		sz = 4096;
+	return sz/8;
+}
+
+int affinity__setup(struct affinity *a)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	a->orig_cpus = bitmap_alloc(cpu_set_size*8);
+	if (!a->orig_cpus)
+		return -1;
+	sched_getaffinity(0, cpu_set_size, (cpu_set_t *)a->orig_cpus);
+	a->sched_cpus = bitmap_alloc(cpu_set_size*8);
+	if (!a->sched_cpus) {
+		free(a->orig_cpus);
+		return -1;
+	}
+	bitmap_zero((unsigned long *)a->sched_cpus, cpu_set_size);
+	a->changed = false;
+	return 0;
+}
+
+/*
+ * perf_event_open does an IPI internally to the target CPU.
+ * It is more efficient to change perf's affinity to the target
+ * CPU and then set up all events on that CPU, so we amortize
+ * CPU communication.
+ */
+void affinity__set(struct affinity *a, int cpu)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	if (cpu == -1)
+		return;
+	a->changed = true;
+	set_bit(cpu, a->sched_cpus);
+	/*
+	 * We ignore errors because affinity is just an optimization.
+	 * This could happen for example with isolated CPUs or cpusets.
+	 * In this case the IPIs inside the kernel's perf API still work.
+	 */
+	sched_setaffinity(0, cpu_set_size, (cpu_set_t *)a->sched_cpus);
+	clear_bit(cpu, a->sched_cpus);
+}
+
+void affinity__cleanup(struct affinity *a)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	if (a->changed)
+		sched_setaffinity(0, cpu_set_size, (cpu_set_t *)a->orig_cpus);
+	free(a->sched_cpus);
+	free(a->orig_cpus);
+}
diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
new file mode 100644
index 000000000000..008e2c3995b9
--- /dev/null
+++ b/tools/perf/util/affinity.h
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef AFFINITY_H
+#define AFFINITY_H 1
+
+struct affinity {
+	unsigned long *orig_cpus;
+	unsigned long *sched_cpus;
+	bool changed;
+};
+
+void affinity__cleanup(struct affinity *a);
+void affinity__set(struct affinity *a, int cpu);
+int affinity__setup(struct affinity *a);
+
+#endif
-- 
2.23.0



* [PATCH v5 03/13] perf cpumap: Maintain cpumaps ordered and without dups
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 01/13] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 02/13] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus Andi Kleen
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Enforce this invariant in _trim().

Needed for a follow-on change.
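
For illustration (cpu_map__trim_new() is internal to cpumap.c; the
input values are hypothetical):

	int tmp[] = { 4, 2, 4, 1 };
	struct perf_cpu_map *m = cpu_map__trim_new(4, tmp);
	/* m->map is now { 1, 2, 4 } and m->nr == 3 */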

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/cpumap.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/perf/lib/cpumap.c b/tools/perf/lib/cpumap.c
index 2ca1fafa620d..d81656b4635e 100644
--- a/tools/perf/lib/cpumap.c
+++ b/tools/perf/lib/cpumap.c
@@ -68,14 +68,28 @@ static struct perf_cpu_map *cpu_map__default_new(void)
 	return cpus;
 }
 
+static int cmp_int(const void *a, const void *b)
+{
+	return *(const int *)a - *(const int*)b;
+}
+
 static struct perf_cpu_map *cpu_map__trim_new(int nr_cpus, int *tmp_cpus)
 {
 	size_t payload_size = nr_cpus * sizeof(int);
 	struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + payload_size);
+	int i, j;
 
 	if (cpus != NULL) {
-		cpus->nr = nr_cpus;
 		memcpy(cpus->map, tmp_cpus, payload_size);
+		qsort(cpus->map, nr_cpus, sizeof(int), cmp_int);
+		/* Remove dups */
+		j = 0;
+		for (i = 0; i < nr_cpus; i++) {
+			if (i == 0 || cpus->map[i] != cpus->map[i - 1])
+				cpus->map[j++] = cpus->map[i];
+		}
+		cpus->nr = j;
+		assert(j <= nr_cpus);
 		refcount_set(&cpus->refcnt, 1);
 	}
 
-- 
2.23.0



* [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (2 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 03/13] perf cpumap: Maintain cpumaps ordered and without dups Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:31   ` Jiri Olsa
  2019-11-07 18:16 ` [PATCH v5 05/13] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Maintain a cpumap in the evlist that is the union of all the cpus
of the events.

This needs a cpumap merge operation, which is added together
with tests.
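
The resulting semantics, as exercised by the test added below:

	struct perf_cpu_map *a = perf_cpu_map__new("4,2,1");
	struct perf_cpu_map *b = perf_cpu_map__new("4,5,7");
	struct perf_cpu_map *c = perf_cpu_map__merge(a, b);
	/* c now contains the sorted, de-duplicated union "1-2,4-5,7" */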

Signed-off-by: Andi Kleen <ak@linux.intel.com>

v2:
Add tests for cpu map merge
Fix handling of duplicates
Rename _update to _merge
Factor out sorting.
Fix handling of NULL maps in merge
---
 tools/perf/lib/cpumap.c                  | 43 ++++++++++++++++++++++++
 tools/perf/lib/evlist.c                  |  1 +
 tools/perf/lib/include/internal/evlist.h |  1 +
 tools/perf/lib/include/perf/cpumap.h     |  2 ++
 tools/perf/tests/builtin-test.c          |  5 +++
 tools/perf/tests/cpumap.c                | 16 +++++++++
 tools/perf/tests/tests.h                 |  1 +
 7 files changed, 69 insertions(+)

diff --git a/tools/perf/lib/cpumap.c b/tools/perf/lib/cpumap.c
index d81656b4635e..b9f573438b93 100644
--- a/tools/perf/lib/cpumap.c
+++ b/tools/perf/lib/cpumap.c
@@ -286,3 +286,46 @@ int perf_cpu_map__max(struct perf_cpu_map *map)
 
 	return max;
 }
+
+struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
+					 struct perf_cpu_map *other)
+{
+	int *tmp_cpus;
+	int tmp_len;
+	int i, j, k;
+	struct perf_cpu_map *merged;
+
+	if (!orig && !other)
+		return NULL;
+	if (!orig) {
+		perf_cpu_map__get(other);
+		return other;
+	}
+	if (!other)
+		return orig;
+	if (orig->nr == other->nr &&
+	    !memcmp(orig->map, other->map, orig->nr * sizeof(int)))
+		return orig;
+	tmp_len = orig->nr + other->nr;
+	tmp_cpus = malloc(tmp_len * sizeof(int));
+	if (!tmp_cpus)
+		return NULL;
+	i = j = k = 0;
+	while (i < orig->nr && j < other->nr) {
+		if (orig->map[i] <= other->map[j]) {
+			if (orig->map[i] == other->map[j])
+				j++;
+			tmp_cpus[k++] = orig->map[i++];
+		} else
+			tmp_cpus[k++] = other->map[j++];
+	}
+	while (i < orig->nr)
+		tmp_cpus[k++] = orig->map[i++];
+	while (j < other->nr)
+		tmp_cpus[k++] = other->map[j++];
+	assert(k <= tmp_len);
+	merged = cpu_map__trim_new(k, tmp_cpus);
+	free(tmp_cpus);
+	perf_cpu_map__put(orig);
+	return merged;
+}
diff --git a/tools/perf/lib/evlist.c b/tools/perf/lib/evlist.c
index 205ddbb80bc1..ae9e65aa2491 100644
--- a/tools/perf/lib/evlist.c
+++ b/tools/perf/lib/evlist.c
@@ -54,6 +54,7 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
 
 	perf_thread_map__put(evsel->threads);
 	evsel->threads = perf_thread_map__get(evlist->threads);
+	evlist->all_cpus = perf_cpu_map__merge(evlist->all_cpus, evsel->cpus);
 }
 
 static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
diff --git a/tools/perf/lib/include/internal/evlist.h b/tools/perf/lib/include/internal/evlist.h
index a2fbccf1922f..74dc8c3f0b66 100644
--- a/tools/perf/lib/include/internal/evlist.h
+++ b/tools/perf/lib/include/internal/evlist.h
@@ -18,6 +18,7 @@ struct perf_evlist {
 	int			 nr_entries;
 	bool			 has_user_cpus;
 	struct perf_cpu_map	*cpus;
+	struct perf_cpu_map	*all_cpus;
 	struct perf_thread_map	*threads;
 	int			 nr_mmaps;
 	size_t			 mmap_len;
diff --git a/tools/perf/lib/include/perf/cpumap.h b/tools/perf/lib/include/perf/cpumap.h
index ac9aa497f84a..6a17ad730cbc 100644
--- a/tools/perf/lib/include/perf/cpumap.h
+++ b/tools/perf/lib/include/perf/cpumap.h
@@ -12,6 +12,8 @@ LIBPERF_API struct perf_cpu_map *perf_cpu_map__dummy_new(void);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__new(const char *cpu_list);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__read(FILE *file);
 LIBPERF_API struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map);
+LIBPERF_API struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
+						     struct perf_cpu_map *other);
 LIBPERF_API void perf_cpu_map__put(struct perf_cpu_map *map);
 LIBPERF_API int perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx);
 LIBPERF_API int perf_cpu_map__nr(const struct perf_cpu_map *cpus);
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index 8b286e9b7549..5fa37cf7f283 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -259,6 +259,11 @@ static struct test generic_tests[] = {
 		.desc = "Print cpu map",
 		.func = test__cpu_map_print,
 	},
+	{
+		.desc = "Merge cpu map",
+		.func = test__cpu_map_merge,
+	},
+
 	{
 		.desc = "Probe SDT events",
 		.func = test__sdt_event,
diff --git a/tools/perf/tests/cpumap.c b/tools/perf/tests/cpumap.c
index 8a0d236202b0..4ac56741ac5f 100644
--- a/tools/perf/tests/cpumap.c
+++ b/tools/perf/tests/cpumap.c
@@ -120,3 +120,19 @@ int test__cpu_map_print(struct test *test __maybe_unused, int subtest __maybe_un
 	TEST_ASSERT_VAL("failed to convert map", cpu_map_print("1-10,12-20,22-30,32-40"));
 	return 0;
 }
+
+int test__cpu_map_merge(struct test *test __maybe_unused, int subtest __maybe_unused)
+{
+	struct perf_cpu_map *a = perf_cpu_map__new("4,2,1");
+	struct perf_cpu_map *b = perf_cpu_map__new("4,5,7");
+	struct perf_cpu_map *c = perf_cpu_map__merge(a, b);
+	char buf[100];
+
+	TEST_ASSERT_VAL("failed to merge map: bad nr", c->nr == 5);
+	cpu_map__snprint(c, buf, sizeof(buf));
+	TEST_ASSERT_VAL("failed to merge map: bad result", !strcmp(buf, "1-2,4-5,7"));
+	perf_cpu_map__put(a);
+	perf_cpu_map__put(b);
+	perf_cpu_map__put(c);
+	return 0;
+}
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index 9837b6e93023..44b184fd869f 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -98,6 +98,7 @@ int test__event_update(struct test *test, int subtest);
 int test__event_times(struct test *test, int subtest);
 int test__backward_ring_buffer(struct test *test, int subtest);
 int test__cpu_map_print(struct test *test, int subtest);
+int test__cpu_map_merge(struct test *test, int subtest);
 int test__sdt_event(struct test *test, int subtest);
 int test__is_printable_array(struct test *test, int subtest);
 int test__bitmap_print(struct test *test, int subtest);
-- 
2.23.0



* [PATCH v5 05/13] perf evsel: Add iterator to iterate over events ordered by CPU
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (3 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 06/13] perf evsel: Add functions to close evsel on a CPU Andi Kleen
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add some common code that is needed to iterate over all events
in CPU order. It is used in follow-on patches.
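
A sketch of the resulting iteration pattern, assuming a struct
affinity set up as in the earlier patch (the per-event action is a
placeholder; patch 07 in this series uses this shape for closing
file descriptors):

	evlist__for_each_cpu (evlist, i, cpu) {
		affinity__set(&affinity, cpu);
		evlist__for_each_entry(evlist, evsel) {
			if (evsel__cpu_iter_skip(evsel, cpu))
				continue;
			/*
			 * Act on evsel for this cpu; the FD index
			 * is evsel->cpu_iter - 1.
			 */
		}
	}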

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Add cpumap__for_each_cpu macro to factor out some common code
v3: Drop cpumap__for_each_cpu macro again, replace with evlist__for_each_cpu
Add new evlist__for_each_cpu
Don't compute cpus nr in cpu_index iterator init, just use all_cpus
v4:
Remove __next, move into skip
Add _no_inc
Move initialization into iterator macro
Rename cpu_index to cpu_iter
---
 tools/perf/util/cpumap.h |  1 +
 tools/perf/util/evlist.c | 32 ++++++++++++++++++++++++++++++++
 tools/perf/util/evlist.h |  8 ++++++++
 tools/perf/util/evsel.h  |  1 +
 4 files changed, 42 insertions(+)

diff --git a/tools/perf/util/cpumap.h b/tools/perf/util/cpumap.h
index 2553bef1279d..dbc1d7e949ed 100644
--- a/tools/perf/util/cpumap.h
+++ b/tools/perf/util/cpumap.h
@@ -60,4 +60,5 @@ int cpu_map__build_map(struct perf_cpu_map *cpus, struct perf_cpu_map **res,
 
 int cpu_map__cpu(struct perf_cpu_map *cpus, int idx);
 bool cpu_map__has(struct perf_cpu_map *cpus, int cpu);
+
 #endif /* __PERF_CPUMAP_H */
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index fdce590d2278..dae6e846b2f8 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -342,6 +342,38 @@ static int perf_evlist__nr_threads(struct evlist *evlist,
 		return perf_thread_map__nr(evlist->core.threads);
 }
 
+void evlist__cpu_iter_start(struct evlist *evlist)
+{
+	struct evsel *pos;
+
+	/*
+	 * Reset the per evsel cpu_iter. This is needed because
+	 * each evsel's cpumap may have a different index space,
+	 * and some operations need the index to modify
+	 * the FD xyarray (e.g. open, close)
+	 */
+	evlist__for_each_entry(evlist, pos)
+		pos->cpu_iter = 0;
+}
+
+bool evsel__cpu_iter_skip_no_inc(struct evsel *ev, int cpu)
+{
+	if (ev->cpu_iter >= ev->core.cpus->nr)
+		return true;
+	if (cpu >= 0 && ev->core.cpus->map[ev->cpu_iter] != cpu)
+		return true;
+	return false;
+}
+
+bool evsel__cpu_iter_skip(struct evsel *ev, int cpu)
+{
+	if (!evsel__cpu_iter_skip_no_inc(ev, cpu)) {
+		ev->cpu_iter++;
+		return false;
+	}
+	return true;
+}
+
 void evlist__disable(struct evlist *evlist)
 {
 	struct evsel *pos;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 13051409fd22..12606efc1f7c 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -333,9 +333,17 @@ void perf_evlist__to_front(struct evlist *evlist,
 #define evlist__for_each_entry_safe(evlist, tmp, evsel) \
 	__evlist__for_each_entry_safe(&(evlist)->core.entries, tmp, evsel)
 
+#define evlist__for_each_cpu(evlist, index, cpu)	\
+	evlist__cpu_iter_start(evlist);			\
+	perf_cpu_map__for_each_cpu (cpu, index, (evlist)->core.all_cpus)
+
 void perf_evlist__set_tracking_event(struct evlist *evlist,
 				     struct evsel *tracking_evsel);
 
+void evlist__cpu_iter_start(struct evlist *evlist);
+bool evsel__cpu_iter_skip(struct evsel *ev, int cpu);
+bool evsel__cpu_iter_skip_no_inc(struct evsel *ev, int cpu);
+
 struct evsel *
 perf_evlist__find_evsel_by_str(struct evlist *evlist, const char *str);
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index ddc5ee6f6592..b10d5ba21966 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -95,6 +95,7 @@ struct evsel {
 	bool			collect_stat;
 	bool			weak_group;
 	bool			percore;
+	int			cpu_iter;
 	const char		*pmu_name;
 	struct {
 		perf_evsel__sb_cb_t	*cb;
-- 
2.23.0



* [PATCH v5 06/13] perf evsel: Add functions to close evsel on a CPU
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (4 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 05/13] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors Andi Kleen
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Refactor the existing all-CPU close function to use the new per-CPU
close internally.

Export an API to close the file descriptors for a single CPU.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/evsel.c              | 27 +++++++++++++++++++++------
 tools/perf/lib/include/perf/evsel.h |  1 +
 2 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index 5a89857b0381..ea775dacbd2d 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -114,16 +114,23 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 	return err;
 }
 
+static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
+{
+	int thread;
+
+	for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
+		if (FD(evsel, cpu, thread) >= 0)
+			close(FD(evsel, cpu, thread));
+		FD(evsel, cpu, thread) = -1;
+	}
+}
+
 void perf_evsel__close_fd(struct perf_evsel *evsel)
 {
-	int cpu, thread;
+	int cpu;
 
 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
-		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
-			if (FD(evsel, cpu, thread) >= 0)
-				close(FD(evsel, cpu, thread));
-			FD(evsel, cpu, thread) = -1;
-		}
+		perf_evsel__close_fd_cpu(evsel, cpu);
 }
 
 void perf_evsel__free_fd(struct perf_evsel *evsel)
@@ -141,6 +148,14 @@ void perf_evsel__close(struct perf_evsel *evsel)
 	perf_evsel__free_fd(evsel);
 }
 
+void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu)
+{
+	if (evsel->fd == NULL)
+		return;
+
+	perf_evsel__close_fd_cpu(evsel, cpu);
+}
+
 int perf_evsel__read_size(struct perf_evsel *evsel)
 {
 	u64 read_format = evsel->attr.read_format;
diff --git a/tools/perf/lib/include/perf/evsel.h b/tools/perf/lib/include/perf/evsel.h
index 557f5815a9c9..e7add554f861 100644
--- a/tools/perf/lib/include/perf/evsel.h
+++ b/tools/perf/lib/include/perf/evsel.h
@@ -26,6 +26,7 @@ LIBPERF_API void perf_evsel__delete(struct perf_evsel *evsel);
 LIBPERF_API int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 				 struct perf_thread_map *threads);
 LIBPERF_API void perf_evsel__close(struct perf_evsel *evsel);
+LIBPERF_API void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 				 struct perf_counts_values *count);
 LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
-- 
2.23.0



* [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (5 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 06/13] perf evsel: Add functions to close evsel on a CPU Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
  2019-11-07 18:16 ` [PATCH v5 08/13] perf stat: Factor out open error handling Andi Kleen
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Closing a perf fd can also trigger an IPI to the target CPU.
Use the same affinity technique as we use for reading/enabling
events when closing, to minimize the CPU transitions.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 32.56    3.085463          50     61483           close

After:

 10.54    0.735704          11     61485           close

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use new iterator macros
v3: Use new iterator macros
Add missing affinity__cleanup
v4:
Update iterators again
---
 tools/perf/util/evlist.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index dae6e846b2f8..0dcea66329e2 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -18,6 +18,7 @@
 #include "debug.h"
 #include "units.h"
 #include <internal/lib.h> // page_size
+#include "affinity.h"
 #include "../perf.h"
 #include "asm/bug.h"
 #include "bpf-event.h"
@@ -1169,9 +1170,31 @@ void perf_evlist__set_selected(struct evlist *evlist,
 void evlist__close(struct evlist *evlist)
 {
 	struct evsel *evsel;
+	struct affinity affinity;
+	int cpu, i;
 
-	evlist__for_each_entry_reverse(evlist, evsel)
-		evsel__close(evsel);
+	if (!evlist->core.cpus) {
+		evlist__for_each_entry_reverse(evlist, evsel)
+			evsel__close(evsel);
+		return;
+	}
+
+	if (affinity__setup(&affinity) < 0)
+		return;
+	evlist__for_each_cpu (evlist, i, cpu) {
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry_reverse(evlist, evsel) {
+			if (evsel__cpu_iter_skip(evsel, cpu))
+			    continue;
+			perf_evsel__close_cpu(&evsel->core, evsel->cpu_iter - 1);
+		}
+	}
+	affinity__cleanup(&affinity);
+	evlist__for_each_entry_reverse(evlist, evsel) {
+		perf_evsel__free_fd(&evsel->core);
+		perf_evsel__free_id(&evsel->core);
+	}
 }
 
 static int perf_evlist__create_syswide_maps(struct evlist *evlist)
-- 
2.23.0



* [PATCH v5 08/13] perf stat: Factor out open error handling
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (6 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 09/13] perf evsel: Support opening on a specific CPU Andi Kleen
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Factor out the open error handling into a separate function.
This is useful for follow-on patches that need to duplicate the logic.

No behavior change intended.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-stat.c | 100 +++++++++++++++++++++++---------------
 1 file changed, 60 insertions(+), 40 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index c88d4e118409..1a586009e5a7 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -420,6 +420,57 @@ static bool is_target_alive(struct target *_target,
 	return false;
 }
 
+enum counter_recovery {
+	COUNTER_SKIP,
+	COUNTER_RETRY,
+	COUNTER_FATAL,
+};
+
+static enum counter_recovery stat_handle_error(struct evsel *counter)
+{
+	char msg[BUFSIZ];
+	/*
+	 * PPC returns ENXIO for HW counters until 2.6.37
+	 * (behavior changed with commit b0a873e).
+	 */
+	if (errno == EINVAL || errno == ENOSYS ||
+	    errno == ENOENT || errno == EOPNOTSUPP ||
+	    errno == ENXIO) {
+		if (verbose > 0)
+			ui__warning("%s event is not supported by the kernel.\n",
+				    perf_evsel__name(counter));
+		counter->supported = false;
+
+		if ((counter->leader != counter) ||
+		    !(counter->leader->core.nr_members > 1))
+			return COUNTER_SKIP;
+	} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
+		if (verbose > 0)
+			ui__warning("%s\n", msg);
+		return COUNTER_RETRY;
+	} else if (target__has_per_thread(&target) &&
+		   evsel_list->core.threads &&
+		   evsel_list->core.threads->err_thread != -1) {
+		/*
+		 * For global --per-thread case, skip current
+		 * error thread.
+		 */
+		if (!thread_map__remove(evsel_list->core.threads,
+					evsel_list->core.threads->err_thread)) {
+			evsel_list->core.threads->err_thread = -1;
+			return COUNTER_RETRY;
+		}
+	}
+
+	perf_evsel__open_strerror(counter, &target,
+				  errno, msg, sizeof(msg));
+	ui__error("%s\n", msg);
+
+	if (child_pid != -1)
+		kill(child_pid, SIGTERM);
+	return COUNTER_FATAL;
+}
+
 static int __run_perf_stat(int argc, const char **argv, int run_idx)
 {
 	int interval = stat_config.interval;
@@ -469,47 +520,16 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 				goto try_again;
 			}
 
-			/*
-			 * PPC returns ENXIO for HW counters until 2.6.37
-			 * (behavior changed with commit b0a873e).
-			 */
-			if (errno == EINVAL || errno == ENOSYS ||
-			    errno == ENOENT || errno == EOPNOTSUPP ||
-			    errno == ENXIO) {
-				if (verbose > 0)
-					ui__warning("%s event is not supported by the kernel.\n",
-						    perf_evsel__name(counter));
-				counter->supported = false;
-
-				if ((counter->leader != counter) ||
-				    !(counter->leader->core.nr_members > 1))
-					continue;
-			} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
-                                if (verbose > 0)
-                                        ui__warning("%s\n", msg);
-                                goto try_again;
-			} else if (target__has_per_thread(&target) &&
-				   evsel_list->core.threads &&
-				   evsel_list->core.threads->err_thread != -1) {
-				/*
-				 * For global --per-thread case, skip current
-				 * error thread.
-				 */
-				if (!thread_map__remove(evsel_list->core.threads,
-							evsel_list->core.threads->err_thread)) {
-					evsel_list->core.threads->err_thread = -1;
-					goto try_again;
-				}
+			switch (stat_handle_error(counter)) {
+			case COUNTER_FATAL:
+				return -1;
+			case COUNTER_RETRY:
+				goto try_again;
+			case COUNTER_SKIP:
+				continue;
+			default:
+				break;
 			}
-
-			perf_evsel__open_strerror(counter, &target,
-						  errno, msg, sizeof(msg));
-			ui__error("%s\n", msg);
-
-			if (child_pid != -1)
-				kill(child_pid, SIGTERM);
-
-			return -1;
 		}
 		counter->supported = true;
 
-- 
2.23.0



* [PATCH v5 09/13] perf evsel: Support opening on a specific CPU
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (7 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 08/13] perf stat: Factor out open error handling Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
  2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Support opening an evsel on a specific CPU index. Fix up
all existing callers to still open on all CPUs.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/tests/event-times.c |  4 ++--
 tools/perf/util/evsel.c        | 18 +++++++++++++-----
 tools/perf/util/evsel.h        |  3 ++-
 tools/perf/util/stat.c         |  2 +-
 4 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
index 1ee8704e2284..1e8a9f5c356d 100644
--- a/tools/perf/tests/event-times.c
+++ b/tools/perf/tests/event-times.c
@@ -125,7 +125,7 @@ static int attach__cpu_disabled(struct evlist *evlist)
 
 	evsel->core.attr.disabled = 1;
 
-	err = perf_evsel__open_per_cpu(evsel, cpus);
+	err = perf_evsel__open_per_cpu(evsel, cpus, -1);
 	if (err) {
 		if (err == -EACCES)
 			return TEST_SKIP;
@@ -152,7 +152,7 @@ static int attach__cpu_enabled(struct evlist *evlist)
 		return -1;
 	}
 
-	err = perf_evsel__open_per_cpu(evsel, cpus);
+	err = perf_evsel__open_per_cpu(evsel, cpus, -1);
 	if (err == -EACCES)
 		return TEST_SKIP;
 
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index d4451846af93..7106f9a067df 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1569,8 +1569,9 @@ static int perf_event_open(struct evsel *evsel,
 	return fd;
 }
 
-int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
-		struct perf_thread_map *threads)
+static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
+		struct perf_thread_map *threads,
+		int start_cpu, int end_cpu)
 {
 	int cpu, thread, nthreads;
 	unsigned long flags = PERF_FLAG_FD_CLOEXEC;
@@ -1647,7 +1648,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 
 	display_attr(&evsel->core.attr);
 
-	for (cpu = 0; cpu < cpus->nr; cpu++) {
+	for (cpu = start_cpu; cpu < end_cpu; cpu++) {
 
 		for (thread = 0; thread < nthreads; thread++) {
 			int fd, group_fd;
@@ -1825,6 +1826,12 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 	return err;
 }
 
+int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
+		struct perf_thread_map *threads)
+{
+	return evsel__open_cpu(evsel, cpus, threads, 0, cpus ? cpus->nr : 1);
+}
+
 void evsel__close(struct evsel *evsel)
 {
 	perf_evsel__close(&evsel->core);
@@ -1832,9 +1839,10 @@ void evsel__close(struct evsel *evsel)
 }
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
-			     struct perf_cpu_map *cpus)
+			     struct perf_cpu_map *cpus,
+			     int cpu)
 {
-	return evsel__open(evsel, cpus, NULL);
+	return evsel__open_cpu(evsel, cpus, NULL, cpu, cpu + 1);
 }
 
 int perf_evsel__open_per_thread(struct evsel *evsel,
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index b10d5ba21966..54513d70c109 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -223,7 +223,8 @@ int evsel__enable(struct evsel *evsel);
 int evsel__disable(struct evsel *evsel);
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
-			     struct perf_cpu_map *cpus);
+			     struct perf_cpu_map *cpus,
+			     int cpu);
 int perf_evsel__open_per_thread(struct evsel *evsel,
 				struct perf_thread_map *threads);
 int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 6822e4ffe224..36dc95032e4c 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -517,7 +517,7 @@ int create_perf_stat_counter(struct evsel *evsel,
 	}
 
 	if (target__has_cpu(target) && !target__has_per_thread(target))
-		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel));
+		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), -1);
 
 	return perf_evsel__open_per_thread(evsel, evsel->core.threads);
 }
-- 
2.23.0



* [PATCH v5 10/13] perf stat: Use affinity for opening events
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (8 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 09/13] perf evsel: Support opening on a specific CPU Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
                     ` (2 more replies)
  2019-11-07 18:16 ` [PATCH v5 11/13] perf stat: Use affinity for reading Andi Kleen
                   ` (2 subsequent siblings)
  12 siblings, 3 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure the event opening in perf stat to cycle through
the events by CPU after setting affinity to that CPU.
This eliminates IPI overhead in the perf API.

We have to loop through the CPUs in the outer builtin-stat code
instead of leaving that to the low level functions.

This requires changing the weak group fallback strategy slightly:
since we cannot easily undo the opens for other CPUs, move the
weak group retry to a separate loop.

Before with a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 42.75    4.050910          67     60046       110 perf_event_open

After:

 26.86    0.944396          16     58069       110 perf_event_open

(the number changes slightly because the weak group retries
work differently and the test case relies on weak groups)

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use new iterator macros.
Fix bug that caused unnecessary retry for errored events.
Add extra assert to check assumption that cpumaps are always subsets
v3:
Use new iterator macros
Factored out code movement for error handling.
v4:
Update iterator macros again
Fix minor bug with errored events
---
 tools/perf/builtin-record.c |   2 +-
 tools/perf/builtin-stat.c   | 118 ++++++++++++++++++++++++++++++------
 tools/perf/util/evlist.c    |   6 +-
 tools/perf/util/evlist.h    |   3 +-
 tools/perf/util/evsel.h     |   2 +
 tools/perf/util/stat.c      |   5 +-
 tools/perf/util/stat.h      |   3 +-
 7 files changed, 113 insertions(+), 26 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 2fb83aabbef5..9f8a9393ce4a 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -776,7 +776,7 @@ static int record__open(struct record *rec)
 			if ((errno == EINVAL || errno == EBADF) &&
 			    pos->leader != pos &&
 			    pos->weak_group) {
-			        pos = perf_evlist__reset_weak_group(evlist, pos);
+			        pos = perf_evlist__reset_weak_group(evlist, pos, true);
 				goto try_again;
 			}
 			rc = -errno;
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 1a586009e5a7..7f9ec41d8f62 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -65,6 +65,7 @@
 #include "util/target.h"
 #include "util/time-utils.h"
 #include "util/top.h"
+#include "util/affinity.h"
 #include "asm/bug.h"
 
 #include <linux/time64.h>
@@ -440,6 +441,7 @@ static enum counter_recovery stat_handle_error(struct evsel *counter)
 			ui__warning("%s event is not supported by the kernel.\n",
 				    perf_evsel__name(counter));
 		counter->supported = false;
+		counter->errored = true;
 
 		if ((counter->leader != counter) ||
 		    !(counter->leader->core.nr_members > 1))
@@ -484,6 +486,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	int status = 0;
 	const bool forks = (argc > 0);
 	bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
+	struct affinity affinity;
+	int i, cpu;
+	bool second_pass = false;
 
 	if (interval) {
 		ts.tv_sec  = interval / USEC_PER_MSEC;
@@ -508,30 +513,105 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	if (group)
 		perf_evlist__set_leader(evsel_list);
 
-	evlist__for_each_entry(evsel_list, counter) {
+	if (affinity__setup(&affinity) < 0)
+		return -1;
+
+	evlist__for_each_cpu (evsel_list, i, cpu) {
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry(evsel_list, counter) {
+			if (evsel__cpu_iter_skip(counter, cpu))
+				continue;
+			if (counter->reset_group || counter->errored)
+				continue;
 try_again:
-		if (create_perf_stat_counter(counter, &stat_config, &target) < 0) {
-
-			/* Weak group failed. Reset the group. */
-			if ((errno == EINVAL || errno == EBADF) &&
-			    counter->leader != counter &&
-			    counter->weak_group) {
-				counter = perf_evlist__reset_weak_group(evsel_list, counter);
-				goto try_again;
+			if (create_perf_stat_counter(counter, &stat_config, &target,
+						     counter->cpu_iter - 1) < 0) {
+
+				/*
+				 * Weak group failed. We cannot just undo this here
+				 * because earlier CPUs might be in group mode, and the kernel
+				 * doesn't support mixing group and non group reads. Defer
+				 * it to later.
+				 * Don't close here because we're in the wrong affinity.
+				 */
+				if ((errno == EINVAL || errno == EBADF) &&
+				    counter->leader != counter &&
+				    counter->weak_group) {
+					perf_evlist__reset_weak_group(evsel_list, counter, false);
+					assert(counter->reset_group);
+					second_pass = true;
+					continue;
+				}
+
+				switch (stat_handle_error(counter)) {
+				case COUNTER_FATAL:
+					return -1;
+				case COUNTER_RETRY:
+					goto try_again;
+				case COUNTER_SKIP:
+					continue;
+				default:
+					break;
+				}
+
 			}
+			counter->supported = true;
+		}
+	}
 
-			switch (stat_handle_error(counter)) {
-			case COUNTER_FATAL:
-				return -1;
-			case COUNTER_RETRY:
-				goto try_again;
-			case COUNTER_SKIP:
-				continue;
-			default:
-				break;
+	if (second_pass) {
+		/*
+		 * Now redo all the weak group after closing them,
+		 * and also close errored counters.
+		 */
+
+		evlist__cpu_iter_start(evsel_list);
+		evlist__for_each_cpu (evsel_list, i, cpu) {
+			affinity__set(&affinity, cpu);
+			/* First close errored or weak retry */
+			evlist__for_each_entry(evsel_list, counter) {
+				if (!counter->reset_group && !counter->errored)
+					continue;
+				if (evsel__cpu_iter_skip_no_inc(counter, cpu))
+					continue;
+				perf_evsel__close_cpu(&counter->core, counter->cpu_iter);
 			}
+			/* Now reopen weak */
+			evlist__for_each_entry(evsel_list, counter) {
+				if (!counter->reset_group && !counter->errored)
+					continue;
+				if (evsel__cpu_iter_skip(counter, cpu))
+					continue;
+				if (!counter->reset_group)
+					continue;
+try_again_reset:
+				pr_debug2("reopening weak %s\n", perf_evsel__name(counter));
+				if (create_perf_stat_counter(counter, &stat_config, &target,
+							     counter->cpu_iter - 1) < 0) {
+
+					switch (stat_handle_error(counter)) {
+					case COUNTER_FATAL:
+						return -1;
+					case COUNTER_RETRY:
+						goto try_again_reset;
+					case COUNTER_SKIP:
+						continue;
+					default:
+						break;
+					}
+				}
+				counter->supported = true;
+			}
+		}
+	}
+	affinity__cleanup(&affinity);
+
+	evlist__for_each_entry(evsel_list, counter) {
+		if (!counter->supported) {
+			perf_evsel__free_fd(&counter->core);
+			continue;
 		}
-		counter->supported = true;
 
 		l = strlen(counter->unit);
 		if (l > stat_config.unit_width)
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 0dcea66329e2..33080f79b977 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1632,7 +1632,8 @@ void perf_evlist__force_leader(struct evlist *evlist)
 }
 
 struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
-						 struct evsel *evsel)
+						 struct evsel *evsel,
+						bool close)
 {
 	struct evsel *c2, *leader;
 	bool is_open = true;
@@ -1649,10 +1650,11 @@ struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
 		if (c2 == evsel)
 			is_open = false;
 		if (c2->leader == leader) {
-			if (is_open)
+			if (is_open && close)
 				perf_evsel__close(&c2->core);
 			c2->leader = c2;
 			c2->core.nr_members = 0;
+			c2->reset_group = true;
 		}
 	}
 	return leader;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 12606efc1f7c..ad77091d1e1e 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -355,5 +355,6 @@ bool perf_evlist__exclude_kernel(struct evlist *evlist);
 void perf_evlist__force_leader(struct evlist *evlist);
 
 struct evsel *perf_evlist__reset_weak_group(struct evlist *evlist,
-						 struct evsel *evsel);
+						 struct evsel *evsel,
+						bool close);
 #endif /* __PERF_EVLIST_H */
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 54513d70c109..ca82a93960cd 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -94,6 +94,8 @@ struct evsel {
 	struct evsel		*metric_leader;
 	bool			collect_stat;
 	bool			weak_group;
+	bool			reset_group;
+	bool			errored;
 	bool			percore;
 	int			cpu_iter;
 	const char		*pmu_name;
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index 36dc95032e4c..3aebe732e886 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -463,7 +463,8 @@ size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp)
 
 int create_perf_stat_counter(struct evsel *evsel,
 			     struct perf_stat_config *config,
-			     struct target *target)
+			     struct target *target,
+			     int cpu)
 {
 	struct perf_event_attr *attr = &evsel->core.attr;
 	struct evsel *leader = evsel->leader;
@@ -517,7 +518,7 @@ int create_perf_stat_counter(struct evsel *evsel,
 	}
 
 	if (target__has_cpu(target) && !target__has_per_thread(target))
-		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), -1);
+		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), cpu);
 
 	return perf_evsel__open_per_thread(evsel, evsel->core.threads);
 }
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index 081c4a5113c6..4c9a7b68c3e7 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -213,7 +213,8 @@ size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp);
 
 int create_perf_stat_counter(struct evsel *evsel,
 			     struct perf_stat_config *config,
-			     struct target *target);
+			     struct target *target,
+			     int cpu);
 void
 perf_evlist__print_counters(struct evlist *evlist,
 			    struct perf_stat_config *config,
-- 
2.23.0



* [PATCH v5 11/13] perf stat: Use affinity for reading
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (9 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:31   ` Jiri Olsa
  2019-11-07 18:16 ` [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU Andi Kleen
  2019-11-07 18:16 ` [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events Andi Kleen
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure event reading to use affinity to minimize the number
of IPIs needed.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  3.16    0.106079           4     22082           read

After:

  3.43    0.081295           3     22082           read

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use new iterator macros
v3: Use new iterator macros
v4: Change iterator macros even more
---
 tools/perf/builtin-stat.c | 94 ++++++++++++++++++++++-----------------
 tools/perf/util/evsel.h   |  1 +
 2 files changed, 54 insertions(+), 41 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 7f9ec41d8f62..dff817d714f6 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -266,15 +266,10 @@ static int read_single_counter(struct evsel *counter, int cpu,
  * Read out the results of a single counter:
  * do not aggregate counts across CPUs in system-wide mode
  */
-static int read_counter(struct evsel *counter, struct timespec *rs)
+static int read_counter(struct evsel *counter, struct timespec *rs, int cpu)
 {
 	int nthreads = perf_thread_map__nr(evsel_list->core.threads);
-	int ncpus, cpu, thread;
-
-	if (target__has_cpu(&target) && !target__has_per_thread(&target))
-		ncpus = perf_evsel__nr_cpus(counter);
-	else
-		ncpus = 1;
+	int thread;
 
 	if (!counter->supported)
 		return -ENOENT;
@@ -283,40 +278,38 @@ static int read_counter(struct evsel *counter, struct timespec *rs)
 		nthreads = 1;
 
 	for (thread = 0; thread < nthreads; thread++) {
-		for (cpu = 0; cpu < ncpus; cpu++) {
-			struct perf_counts_values *count;
-
-			count = perf_counts(counter->counts, cpu, thread);
-
-			/*
-			 * The leader's group read loads data into its group members
-			 * (via perf_evsel__read_counter) and sets threir count->loaded.
-			 */
-			if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
-			    read_single_counter(counter, cpu, thread, rs)) {
-				counter->counts->scaled = -1;
-				perf_counts(counter->counts, cpu, thread)->ena = 0;
-				perf_counts(counter->counts, cpu, thread)->run = 0;
-				return -1;
-			}
+		struct perf_counts_values *count;
 
-			perf_counts__set_loaded(counter->counts, cpu, thread, false);
+		count = perf_counts(counter->counts, cpu, thread);
 
-			if (STAT_RECORD) {
-				if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
-					pr_err("failed to write stat event\n");
-					return -1;
-				}
-			}
+		/*
+		 * The leader's group read loads data into its group members
+		 * (via perf_evsel__read_counter) and sets threir count->loaded.
+		 */
+		if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
+		    read_single_counter(counter, cpu, thread, rs)) {
+			counter->counts->scaled = -1;
+			perf_counts(counter->counts, cpu, thread)->ena = 0;
+			perf_counts(counter->counts, cpu, thread)->run = 0;
+			return -1;
+		}
+
+		perf_counts__set_loaded(counter->counts, cpu, thread, false);
 
-			if (verbose > 1) {
-				fprintf(stat_config.output,
-					"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
-						perf_evsel__name(counter),
-						cpu,
-						count->val, count->ena, count->run);
+		if (STAT_RECORD) {
+			if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
+				pr_err("failed to write stat event\n");
+				return -1;
 			}
 		}
+
+		if (verbose > 1) {
+			fprintf(stat_config.output,
+				"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+					perf_evsel__name(counter),
+					cpu,
+					count->val, count->ena, count->run);
+		}
 	}
 
 	return 0;
@@ -325,15 +318,34 @@ static int read_counter(struct evsel *counter, struct timespec *rs)
 static void read_counters(struct timespec *rs)
 {
 	struct evsel *counter;
-	int ret;
+	struct affinity affinity;
+	int i, ncpus, cpu;
+
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	ncpus = evsel_list->core.all_cpus->nr;
+	if (!(target__has_cpu(&target) && !target__has_per_thread(&target)))
+		ncpus = 1;
+	evlist__for_each_cpu (evsel_list, i, cpu) {
+		if (i >= ncpus)
+			break;
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry(evsel_list, counter) {
+			if (evsel__cpu_iter_skip(counter, cpu))
+				continue;
+			counter->err = read_counter(counter, rs, counter->cpu_iter - 1);
+		}
+	}
+	affinity__cleanup(&affinity);
 
 	evlist__for_each_entry(evsel_list, counter) {
-		ret = read_counter(counter, rs);
-		if (ret)
+		if (counter->err)
 			pr_debug("failed to read counter %s\n", counter->name);
-
-		if (ret == 0 && perf_stat_process_counter(&stat_config, counter))
+		if (counter->err == 0 && perf_stat_process_counter(&stat_config, counter))
 			pr_warning("failed to process counter %s\n", counter->name);
+		counter->err = 0;
 	}
 }
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index ca82a93960cd..c8af4bc23f8f 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -86,6 +86,7 @@ struct evsel {
 	struct list_head	config_terms;
 	struct bpf_object	*bpf_obj;
 	int			bpf_fd;
+	int			err;
 	bool			auto_merge_stats;
 	bool			merged_stat;
 	const char *		metric_expr;
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (10 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 11/13] perf stat: Use affinity for reading Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
  2019-11-07 18:16 ` [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events Andi Kleen
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Refactor the existing functions to use these functions internally.

Used in the next patch.
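
A sketch of the intended use in the next patch, built on the iterators
and affinity helpers added earlier in this series (leader/fd checks
omitted for brevity):

	evlist__for_each_cpu (evlist, i, cpu) {
		affinity__set(&affinity, cpu);
		evlist__for_each_entry(evlist, pos) {
			if (evsel__cpu_iter_skip(pos, cpu))
				continue;
			evsel__enable_cpu(pos, pos->cpu_iter - 1);
		}
	}
	/* Flip the cached state once per event, after all CPUs are done: */
	evlist__for_each_entry(evlist, pos)
		pos->disabled = false;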

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/evsel.c              | 49 +++++++++++++++++++++--------
 tools/perf/lib/include/perf/evsel.h |  2 ++
 tools/perf/util/evsel.c             | 13 ++++++++
 tools/perf/util/evsel.h             |  2 ++
 4 files changed, 53 insertions(+), 13 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index ea775dacbd2d..89ddfade0b96 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -198,38 +198,61 @@ int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 }
 
 static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
-				 int ioc,  void *arg)
+				 int ioc,  void *arg,
+				 int cpu)
 {
-	int cpu, thread;
+	int thread;
 
-	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
-		for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
-			int fd = FD(evsel, cpu, thread),
-			    err = ioctl(fd, ioc, arg);
+	for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
+		int fd = FD(evsel, cpu, thread),
+		    err = ioctl(fd, ioc, arg);
 
-			if (err)
-				return err;
-		}
+		if (err)
+			return err;
 	}
 
 	return 0;
 }
 
+int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu)
+{
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, cpu);
+}
+
 int perf_evsel__enable(struct perf_evsel *evsel)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0);
+	int i;
+	int err = 0;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, i);
+	return err;
+}
+
+int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu)
+{
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, cpu);
 }
 
 int perf_evsel__disable(struct perf_evsel *evsel)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0);
+	int i;
+	int err = 0;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, i);
+	return err;
 }
 
 int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter)
 {
-	return perf_evsel__run_ioctl(evsel,
+	int err = 0, i;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel,
 				     PERF_EVENT_IOC_SET_FILTER,
-				     (void *)filter);
+				     (void *)filter, i);
+	return err;
 }
 
 struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel)
diff --git a/tools/perf/lib/include/perf/evsel.h b/tools/perf/lib/include/perf/evsel.h
index e7add554f861..c82ec39a4ad0 100644
--- a/tools/perf/lib/include/perf/evsel.h
+++ b/tools/perf/lib/include/perf/evsel.h
@@ -30,7 +30,9 @@ LIBPERF_API void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 				 struct perf_counts_values *count);
 LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
+LIBPERF_API int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__disable(struct perf_evsel *evsel);
+LIBPERF_API int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel);
 LIBPERF_API struct perf_thread_map *perf_evsel__threads(struct perf_evsel *evsel);
 LIBPERF_API struct perf_event_attr *perf_evsel__attr(struct perf_evsel *evsel);
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 7106f9a067df..79050a6f4991 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1205,13 +1205,26 @@ int perf_evsel__append_addr_filter(struct evsel *evsel, const char *filter)
 	return perf_evsel__append_filter(evsel, "%s,%s", filter);
 }
 
+/* Caller has to clear disabled after going through all CPUs. */
+int evsel__enable_cpu(struct evsel *evsel, int cpu)
+{
+	int err = perf_evsel__enable_cpu(&evsel->core, cpu);
+	return err;
+}
+
 int evsel__enable(struct evsel *evsel)
 {
 	int err = perf_evsel__enable(&evsel->core);
 
 	if (!err)
 		evsel->disabled = false;
+	return err;
+}
 
+/* Caller has to set disabled after going through all CPUs. */
+int evsel__disable_cpu(struct evsel *evsel, int cpu)
+{
+	int err = perf_evsel__disable_cpu(&evsel->core, cpu);
 	return err;
 }
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index c8af4bc23f8f..dc14f4a823cd 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -222,8 +222,10 @@ int perf_evsel__set_filter(struct evsel *evsel, const char *filter);
 int perf_evsel__append_tp_filter(struct evsel *evsel, const char *filter);
 int perf_evsel__append_addr_filter(struct evsel *evsel,
 				   const char *filter);
+int evsel__enable_cpu(struct evsel *evsel, int cpu);
 int evsel__enable(struct evsel *evsel);
 int evsel__disable(struct evsel *evsel);
+int evsel__disable_cpu(struct evsel *evsel, int cpu);
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
 			     struct perf_cpu_map *cpus,
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events
  2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
                   ` (11 preceding siblings ...)
  2019-11-07 18:16 ` [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU Andi Kleen
@ 2019-11-07 18:16 ` Andi Kleen
  2019-11-11 14:04   ` Jiri Olsa
  12 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-07 18:16 UTC (permalink / raw)
  To: jolsa; +Cc: acme, linux-kernel, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure event enabling/disabling to use affinity, which
minimizes the number of IPIs needed.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 54.65    1.899986          22     84812       660 ioctl

After:

 39.21    0.930451          10     84796       644 ioctl

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Use new iterator macros
v3: Use new iterator macros
v4: Update iterators again
---
 tools/perf/util/evlist.c | 40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 33080f79b977..571bb102b432 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -378,11 +378,28 @@ bool evsel__cpu_iter_skip(struct evsel *ev, int cpu)
 void evlist__disable(struct evlist *evlist)
 {
 	struct evsel *pos;
+	struct affinity affinity;
+	int cpu, i;
+
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	evlist__for_each_cpu (evlist, i, cpu) {
+		affinity__set(&affinity, cpu);
 
+		evlist__for_each_entry(evlist, pos) {
+			if (evsel__cpu_iter_skip(pos, cpu))
+				continue;
+			if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__disable_cpu(pos, pos->cpu_iter - 1);
+		}
+	}
+	affinity__cleanup(&affinity);
 	evlist__for_each_entry(evlist, pos) {
-		if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
+		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
 			continue;
-		evsel__disable(pos);
+		pos->disabled = true;
 	}
 
 	evlist->enabled = false;
@@ -391,11 +408,28 @@ void evlist__disable(struct evlist *evlist)
 void evlist__enable(struct evlist *evlist)
 {
 	struct evsel *pos;
+	struct affinity affinity;
+	int cpu, i;
 
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	evlist__for_each_cpu (evlist, i, cpu) {
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry(evlist, pos) {
+			if (evsel__cpu_iter_skip(pos, cpu))
+				continue;
+			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__enable_cpu(pos, pos->cpu_iter - 1);
+		}
+	}
+	affinity__cleanup(&affinity);
 	evlist__for_each_entry(evlist, pos) {
 		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
 			continue;
-		evsel__enable(pos);
+		pos->disabled = false;
 	}
 
 	evlist->enabled = true;
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 10/13] perf stat: Use affinity for opening events
  2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
@ 2019-11-11 13:30   ` Jiri Olsa
  2019-11-11 13:30   ` Jiri Olsa
  2019-11-11 13:31   ` Jiri Olsa
  2 siblings, 0 replies; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:30 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:43AM -0800, Andi Kleen wrote:

SNIP

> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -1632,7 +1632,8 @@ void perf_evlist__force_leader(struct evlist *evlist)
>  }
>  
>  struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
> -						 struct evsel *evsel)
> +						 struct evsel *evsel,
> +						bool close)
>  {
>  	struct evsel *c2, *leader;
>  	bool is_open = true;
> @@ -1649,10 +1650,11 @@ struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
>  		if (c2 == evsel)
>  			is_open = false;
>  		if (c2->leader == leader) {
> -			if (is_open)
> +			if (is_open && close)
>  				perf_evsel__close(&c2->core);
>  			c2->leader = c2;
>  			c2->core.nr_members = 0;
> +			c2->reset_group = true;

so it's only ever set to true and stays that way.. please explain the
logic in a comment.. together with errored

thanks,
jirka


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 10/13] perf stat: Use affinity for opening events
  2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
@ 2019-11-11 13:30   ` Jiri Olsa
  2019-11-11 13:31   ` Jiri Olsa
  2 siblings, 0 replies; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:30 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:43AM -0800, Andi Kleen wrote:

SNIP

> +				 * Don't close here because we're in the wrong affinity.
> +				 */
> +				if ((errno == EINVAL || errno == EBADF) &&
> +				    counter->leader != counter &&
> +				    counter->weak_group) {
> +					perf_evlist__reset_weak_group(evsel_list, counter, false);
> +					assert(counter->reset_group);
> +					second_pass = true;
> +					continue;
> +				}
> +
> +				switch (stat_handle_error(counter)) {
> +				case COUNTER_FATAL:
> +					return -1;
> +				case COUNTER_RETRY:
> +					goto try_again;
> +				case COUNTER_SKIP:
> +					continue;
> +				default:
> +					break;
> +				}
> +
>  			}
> +			counter->supported = true;
> +		}
> +	}
>  
> -			switch (stat_handle_error(counter)) {
> -			case COUNTER_FATAL:
> -				return -1;
> -			case COUNTER_RETRY:
> -				goto try_again;
> -			case COUNTER_SKIP:
> -				continue;
> -			default:
> -				break;
> +	if (second_pass) {
> +		/*
> +		 * Now redo all the weak group after closing them,
> +		 * and also close errored counters.
> +		 */
> +
> +		evlist__cpu_iter_start(evsel_list);
> +		evlist__for_each_cpu (evsel_list, i, cpu) {

no need for an evlist__cpu_iter_start call in here?

jirka

> +			affinity__set(&affinity, cpu);
> +			/* First close errored or weak retry */
> +			evlist__for_each_entry(evsel_list, counter) {
> +				if (!counter->reset_group && !counter->errored)
> +					continue;
> +				if (evsel__cpu_iter_skip_no_inc(counter, cpu))
> +					continue;
> +				perf_evsel__close_cpu(&counter->core, counter->cpu_iter);
>  			}
> +			/* Now reopen weak */
> +			evlist__for_each_entry(evsel_list, counter) {
> +				if (!counter->reset_group && !counter->errored)

SNIP


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 09/13] perf evsel: Support opening on a specific CPU
  2019-11-07 18:16 ` [PATCH v5 09/13] perf evsel: Support opening on a specific CPU Andi Kleen
@ 2019-11-11 13:30   ` Jiri Olsa
  2019-11-12  0:41     ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:30 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:42AM -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>

SNIP

>  int perf_evsel__open_per_thread(struct evsel *evsel,
> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> index b10d5ba21966..54513d70c109 100644
> --- a/tools/perf/util/evsel.h
> +++ b/tools/perf/util/evsel.h
> @@ -223,7 +223,8 @@ int evsel__enable(struct evsel *evsel);
>  int evsel__disable(struct evsel *evsel);
>  
>  int perf_evsel__open_per_cpu(struct evsel *evsel,
> -			     struct perf_cpu_map *cpus);
> +			     struct perf_cpu_map *cpus,
> +			     int cpu);
>  int perf_evsel__open_per_thread(struct evsel *evsel,
>  				struct perf_thread_map *threads);
>  int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
> diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
> index 6822e4ffe224..36dc95032e4c 100644
> --- a/tools/perf/util/stat.c
> +++ b/tools/perf/util/stat.c
> @@ -517,7 +517,7 @@ int create_perf_stat_counter(struct evsel *evsel,
>  	}
>  
>  	if (target__has_cpu(target) && !target__has_per_thread(target))
> -		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel));
> +		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), -1);

how will -1 work in here? it will end up as:

   perf_evsel__open_per_cpu
    evsel__open_cpu( ...., start_cpu = -1, end_cpu = -1 + 1)
      for (cpu = start_cpu; cpu < end_cpu; cpu++) {

?

jirka


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU
  2019-11-07 18:16 ` [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU Andi Kleen
@ 2019-11-11 13:30   ` Jiri Olsa
  0 siblings, 0 replies; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:30 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:45AM -0800, Andi Kleen wrote:

SNIP

> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index 7106f9a067df..79050a6f4991 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -1205,13 +1205,26 @@ int perf_evsel__append_addr_filter(struct evsel *evsel, const char *filter)
>  	return perf_evsel__append_filter(evsel, "%s,%s", filter);
>  }
>  
> +/* Caller has to clear disabled after going through all CPUs. */
> +int evsel__enable_cpu(struct evsel *evsel, int cpu)
> +{
> +	int err = perf_evsel__enable_cpu(&evsel->core, cpu);
> +	return err;

	return perf_evsel__enable_cpu(... ?

> +}
> +
>  int evsel__enable(struct evsel *evsel)
>  {
>  	int err = perf_evsel__enable(&evsel->core);
>  
>  	if (!err)
>  		evsel->disabled = false;
> +	return err;
> +}
>  
> +/* Caller has to set disabled after going through all CPUs. */
> +int evsel__disable_cpu(struct evsel *evsel, int cpu)
> +{
> +	int err = perf_evsel__disable_cpu(&evsel->core, cpu);
>  	return err;

	return perf_evsel__disable_cpu(... ?

jirka


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors
  2019-11-07 18:16 ` [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors Andi Kleen
@ 2019-11-11 13:30   ` Jiri Olsa
  2019-11-11 16:56     ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:30 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:40AM -0800, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> Closing a perf fd can also trigger an IPI to the target CPU.
> Use the same affinity technique as we use for reading/enabling events
> > when closing, to optimize the CPU transitions.
> 
> Before on a large test case with 94 CPUs:
> 
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  32.56    3.085463          50     61483           close
> 
> After:
> 
>  10.54    0.735704          11     61485           close
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> 
> ---
> 
> v2: Use new iterator macros
> v3: Use new iterator macros
> Add missing affinity__cleanup
> v4:
> Update iterators again
> ---
>  tools/perf/util/evlist.c | 27 +++++++++++++++++++++++++--
>  1 file changed, 25 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index dae6e846b2f8..0dcea66329e2 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -18,6 +18,7 @@
>  #include "debug.h"
>  #include "units.h"
>  #include <internal/lib.h> // page_size
> +#include "affinity.h"
>  #include "../perf.h"
>  #include "asm/bug.h"
>  #include "bpf-event.h"
> @@ -1169,9 +1170,31 @@ void perf_evlist__set_selected(struct evlist *evlist,
>  void evlist__close(struct evlist *evlist)
>  {
>  	struct evsel *evsel;
> +	struct affinity affinity;
> +	int cpu, i;
>  
> -	evlist__for_each_entry_reverse(evlist, evsel)
> -		evsel__close(evsel);
> +	if (!evlist->core.cpus) {

should this be evlist->all_cpus?

jirka

> +		evlist__for_each_entry_reverse(evlist, evsel)
> +			evsel__close(evsel);
> +		return;
> +	}
> +
> +	if (affinity__setup(&affinity) < 0)
> +		return;
> +	evlist__for_each_cpu (evlist, i, cpu) {
> +		affinity__set(&affinity, cpu);
> +
> +		evlist__for_each_entry_reverse(evlist, evsel) {
> +			if (evsel__cpu_iter_skip(evsel, cpu))
> +			    continue;
> +			perf_evsel__close_cpu(&evsel->core, evsel->cpu_iter - 1);
> +		}
> +	}
> +	affinity__cleanup(&affinity);
> +	evlist__for_each_entry_reverse(evlist, evsel) {
> +		perf_evsel__free_fd(&evsel->core);
> +		perf_evsel__free_id(&evsel->core);
> +	}
>  }
>  
>  static int perf_evlist__create_syswide_maps(struct evlist *evlist)
> -- 
> 2.23.0
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 11/13] perf stat: Use affinity for reading
  2019-11-07 18:16 ` [PATCH v5 11/13] perf stat: Use affinity for reading Andi Kleen
@ 2019-11-11 13:31   ` Jiri Olsa
  0 siblings, 0 replies; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:31 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:44AM -0800, Andi Kleen wrote:

SNIP

> @@ -325,15 +318,34 @@ static int read_counter(struct evsel *counter, struct timespec *rs)
>  static void read_counters(struct timespec *rs)
>  {
>  	struct evsel *counter;
> -	int ret;
> +	struct affinity affinity;
> +	int i, ncpus, cpu;
> +
> +	if (affinity__setup(&affinity) < 0)
> +		return;
> +
> +	ncpus = evsel_list->core.all_cpus->nr;
> +	if (!(target__has_cpu(&target) && !target__has_per_thread(&target)))
> +		ncpus = 1;
> +	evlist__for_each_cpu (evsel_list, i, cpu) {
> +		if (i >= ncpus)
> +			break;
> +		affinity__set(&affinity, cpu);
> +
> +		evlist__for_each_entry(evsel_list, counter) {
> +			if (evsel__cpu_iter_skip(counter, cpu))
> +				continue;
> +			counter->err = read_counter(counter, rs, counter->cpu_iter - 1);

this will be overwritten by another cpu attempt,
so the check below won't get triggered properly

should we just break in case of error in here?
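
One minimal alternative that keeps the first failure visible (a sketch):

	/* Keep the first error; don't let a later CPU overwrite it. */
	if (!counter->err)
		counter->err = read_counter(counter, rs,
					    counter->cpu_iter - 1);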

> +		}
> +	}
> +	affinity__cleanup(&affinity);
>  
>  	evlist__for_each_entry(evsel_list, counter) {
> -		ret = read_counter(counter, rs);
> -		if (ret)
> +		if (counter->err)
>  			pr_debug("failed to read counter %s\n", counter->name);
> -
> -		if (ret == 0 && perf_stat_process_counter(&stat_config, counter))
> +		if (counter->err == 0 && perf_stat_process_counter(&stat_config, counter))
>  			pr_warning("failed to process counter %s\n", counter->name);
> +		counter->err = 0;
>  	}
>  }
>  

SNIP


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 10/13] perf stat: Use affinity for opening events
  2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
  2019-11-11 13:30   ` Jiri Olsa
  2019-11-11 13:30   ` Jiri Olsa
@ 2019-11-11 13:31   ` Jiri Olsa
  2019-11-11 17:02     ` Andi Kleen
  2 siblings, 1 reply; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:31 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:43AM -0800, Andi Kleen wrote:

SNIP

> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 2fb83aabbef5..9f8a9393ce4a 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -776,7 +776,7 @@ static int record__open(struct record *rec)
>  			if ((errno == EINVAL || errno == EBADF) &&
>  			    pos->leader != pos &&
>  			    pos->weak_group) {
> -			        pos = perf_evlist__reset_weak_group(evlist, pos);
> +			        pos = perf_evlist__reset_weak_group(evlist, pos, true);
>  				goto try_again;
>  			}
>  			rc = -errno;
> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
> index 1a586009e5a7..7f9ec41d8f62 100644
> --- a/tools/perf/builtin-stat.c
> +++ b/tools/perf/builtin-stat.c
> @@ -65,6 +65,7 @@
>  #include "util/target.h"
>  #include "util/time-utils.h"
>  #include "util/top.h"
> +#include "util/affinity.h"
>  #include "asm/bug.h"
>  
>  #include <linux/time64.h>
> @@ -440,6 +441,7 @@ static enum counter_recovery stat_handle_error(struct evsel *counter)
>  			ui__warning("%s event is not supported by the kernel.\n",
>  				    perf_evsel__name(counter));
>  		counter->supported = false;
> +		counter->errored = true;

how is errored different from supported?
why can't you use it?

jirka


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus
  2019-11-07 18:16 ` [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus Andi Kleen
@ 2019-11-11 13:31   ` Jiri Olsa
  0 siblings, 0 replies; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 13:31 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:37AM -0800, Andi Kleen wrote:

SNIP

> diff --git a/tools/perf/lib/cpumap.c b/tools/perf/lib/cpumap.c
> index d81656b4635e..b9f573438b93 100644
> --- a/tools/perf/lib/cpumap.c
> +++ b/tools/perf/lib/cpumap.c
> @@ -286,3 +286,46 @@ int perf_cpu_map__max(struct perf_cpu_map *map)
>  
>  	return max;
>  }
> +
> +struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
> +					 struct perf_cpu_map *other)
> +{
> +	int *tmp_cpus;
> +	int tmp_len;
> +	int i, j, k;
> +	struct perf_cpu_map *merged;
> +
> +	if (!orig && !other)
> +		return NULL;
> +	if (!orig) {
> +		perf_cpu_map__get(other);
> +		return other;
> +	}
> +	if (!other)
> +		return orig;

so we bump refcnt for other but not for orig?

could you please put the expected refcnt results for the above
cases into a comment?
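
Something along these lines, derived from the quoted code (wording
illustrative only):

	/*
	 * perf_cpu_map__merge() reference counting:
	 *  - !orig && !other: return NULL
	 *  - !orig:  return other, with its refcnt bumped for the caller
	 *  - !other: return orig unchanged, no refcnt bump
	 *  - orig and other identical: return orig unchanged
	 *  - otherwise: build a new sorted map without duplicates, drop
	 *    the reference to orig and return the new map, whose single
	 *    reference is owned by the caller.
	 */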

> +	if (orig->nr == other->nr &&
> +	    !memcmp(orig->map, other->map, orig->nr * sizeof(int)))
> +		return orig;
> +	tmp_len = orig->nr + other->nr;
> +	tmp_cpus = malloc(tmp_len * sizeof(int));
> +	if (!tmp_cpus)
> +		return NULL;
> +	i = j = k = 0;
> +	while (i < orig->nr && j < other->nr) {
> +		if (orig->map[i] <= other->map[j]) {
> +			if (orig->map[i] == other->map[j])
> +				j++;
> +			tmp_cpus[k++] = orig->map[i++];
> +		} else
> +			tmp_cpus[k++] = other->map[j++];
> +	}
> +	while (i < orig->nr)
> +		tmp_cpus[k++] = orig->map[i++];
> +	while (j < other->nr)
> +		tmp_cpus[k++] = other->map[j++];
> +	assert(k <= tmp_len);
> +	merged = cpu_map__trim_new(k, tmp_cpus);
> +	free(tmp_cpus);
> +	perf_cpu_map__put(orig);
> +	return merged;

please use some comments and blank lines to break up
the code above, it's too much in one block.. ;-)
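
For illustration, the kind of commenting being asked for might look
like this (same logic as the quoted loop):

	i = j = k = 0;

	/* Merge the two sorted maps, taking the smaller CPU first. */
	while (i < orig->nr && j < other->nr) {
		if (orig->map[i] <= other->map[j]) {
			/* Drop other's copy when both maps have the CPU. */
			if (orig->map[i] == other->map[j])
				j++;
			tmp_cpus[k++] = orig->map[i++];
		} else {
			tmp_cpus[k++] = other->map[j++];
		}
	}

	/* Append whatever remains of the longer map. */
	while (i < orig->nr)
		tmp_cpus[k++] = orig->map[i++];
	while (j < other->nr)
		tmp_cpus[k++] = other->map[j++];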

SNIP

>  LIBPERF_API int perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx);
>  LIBPERF_API int perf_cpu_map__nr(const struct perf_cpu_map *cpus);
> diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
> index 8b286e9b7549..5fa37cf7f283 100644
> --- a/tools/perf/tests/builtin-test.c
> +++ b/tools/perf/tests/builtin-test.c
> @@ -259,6 +259,11 @@ static struct test generic_tests[] = {
>  		.desc = "Print cpu map",
>  		.func = test__cpu_map_print,
>  	},
> +	{
> +		.desc = "Merge cpu map",
> +		.func = test__cpu_map_merge,
> +	},

awesome, thanks

jirka


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events
  2019-11-07 18:16 ` [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events Andi Kleen
@ 2019-11-11 14:04   ` Jiri Olsa
  2019-11-11 16:50     ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 14:04 UTC (permalink / raw)
  To: Andi Kleen; +Cc: jolsa, acme, linux-kernel, Andi Kleen

On Thu, Nov 07, 2019 at 10:16:46AM -0800, Andi Kleen wrote:

SNIP

> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index 33080f79b977..571bb102b432 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -378,11 +378,28 @@ bool evsel__cpu_iter_skip(struct evsel *ev, int cpu)
>  void evlist__disable(struct evlist *evlist)
>  {
>  	struct evsel *pos;
> +	struct affinity affinity;
> +	int cpu, i;

should we have the fallback to current code in here (and below) as well?
also for reading/opening?

jirka

> +
> +	if (affinity__setup(&affinity) < 0)
> +		return;
> +
> +	evlist__for_each_cpu (evlist, i, cpu) {
> +		affinity__set(&affinity, cpu);
>  
> +		evlist__for_each_entry(evlist, pos) {
> +			if (evsel__cpu_iter_skip(pos, cpu))
> +				continue;
> +			if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
> +				continue;
> +			evsel__disable_cpu(pos, pos->cpu_iter - 1);
> +		}
> +	}
> +	affinity__cleanup(&affinity);
>  	evlist__for_each_entry(evlist, pos) {
> -		if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
> +		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
>  			continue;
> -		evsel__disable(pos);
> +		pos->disabled = true;
>  	}
>  
>  	evlist->enabled = false;
> @@ -391,11 +408,28 @@ void evlist__disable(struct evlist *evlist)
>  void evlist__enable(struct evlist *evlist)
>  {
>  	struct evsel *pos;
> +	struct affinity affinity;
> +	int cpu, i;
>  
> +	if (affinity__setup(&affinity) < 0)
> +		return;
> +
> +	evlist__for_each_cpu (evlist, i, cpu) {
> +		affinity__set(&affinity, cpu);
> +
> +		evlist__for_each_entry(evlist, pos) {
> +			if (evsel__cpu_iter_skip(pos, cpu))
> +				continue;
> +			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
> +				continue;
> +			evsel__enable_cpu(pos, pos->cpu_iter - 1);
> +		}
> +	}
> +	affinity__cleanup(&affinity);
>  	evlist__for_each_entry(evlist, pos) {
>  		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
>  			continue;
> -		evsel__enable(pos);
> +		pos->disabled = false;
>  	}
>  
>  	evlist->enabled = true;
> -- 
> 2.23.0
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events
  2019-11-11 14:04   ` Jiri Olsa
@ 2019-11-11 16:50     ` Andi Kleen
  2019-11-11 20:06       ` Jiri Olsa
  0 siblings, 1 reply; 29+ messages in thread
From: Andi Kleen @ 2019-11-11 16:50 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, jolsa, acme, linux-kernel

On Mon, Nov 11, 2019 at 03:04:15PM +0100, Jiri Olsa wrote:
> On Thu, Nov 07, 2019 at 10:16:46AM -0800, Andi Kleen wrote:
> 
> SNIP
> 
> > diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> > index 33080f79b977..571bb102b432 100644
> > --- a/tools/perf/util/evlist.c
> > +++ b/tools/perf/util/evlist.c
> > @@ -378,11 +378,28 @@ bool evsel__cpu_iter_skip(struct evsel *ev, int cpu)
> >  void evlist__disable(struct evlist *evlist)
> >  {
> >  	struct evsel *pos;
> > +	struct affinity affinity;
> > +	int cpu, i;
> 
> should we have the fallback to current code in here (and below) as well?
> also for reading/opening?

The return only happens when you're out of memory, in which case nothing
will work anyway.

-Andi

> 
> jirka
> 
> > +
> > +	if (affinity__setup(&affinity) < 0)
> > +		return;

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors
  2019-11-11 13:30   ` Jiri Olsa
@ 2019-11-11 16:56     ` Andi Kleen
  0 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-11 16:56 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, jolsa, acme, linux-kernel

On Mon, Nov 11, 2019 at 02:30:52PM +0100, Jiri Olsa wrote:
> On Thu, Nov 07, 2019 at 10:16:40AM -0800, Andi Kleen wrote:
> > From: Andi Kleen <ak@linux.intel.com>
> > 
> > Closing a perf fd can also trigger an IPI to the target CPU.
> > Use the same affinity technique as we use for reading/enabling events
> > to closing to optimize the CPU transitions.
> > 
> > Before on a large test case with 94 CPUs:
> > 
> > % time     seconds  usecs/call     calls    errors syscall
> > ------ ----------- ----------- --------- --------- ----------------
> >  32.56    3.085463          50     61483           close
> > 
> > After:
> > 
> >  10.54    0.735704          11     61485           close
> > 
> > Signed-off-by: Andi Kleen <ak@linux.intel.com>
> > 
> > ---
> > 
> > v2: Use new iterator macros
> > v3: Use new iterator macros
> > Add missing affinity__cleanup
> > v4:
> > Update iterators again
> > ---
> >  tools/perf/util/evlist.c | 27 +++++++++++++++++++++++++--
> >  1 file changed, 25 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> > index dae6e846b2f8..0dcea66329e2 100644
> > --- a/tools/perf/util/evlist.c
> > +++ b/tools/perf/util/evlist.c
> > @@ -18,6 +18,7 @@
> >  #include "debug.h"
> >  #include "units.h"
> >  #include <internal/lib.h> // page_size
> > +#include "affinity.h"
> >  #include "../perf.h"
> >  #include "asm/bug.h"
> >  #include "bpf-event.h"
> > @@ -1169,9 +1170,31 @@ void perf_evlist__set_selected(struct evlist *evlist,
> >  void evlist__close(struct evlist *evlist)
> >  {
> >  	struct evsel *evsel;
> > +	struct affinity affinity;
> > +	int cpu, i;
> >  
> > -	evlist__for_each_entry_reverse(evlist, evsel)
> > -		evsel__close(evsel);
> > +	if (!evlist->core.cpus) {
> 
> should this be evlist->all_cpus?

This essentially detects perf record. I had some problems with perf record
in early testing, so I just disabled the affinity path for it, since I was
focusing on stat. all_cpus would be set for perf record too.

-Andi

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 10/13] perf stat: Use affinity for opening events
  2019-11-11 13:31   ` Jiri Olsa
@ 2019-11-11 17:02     ` Andi Kleen
  0 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-11 17:02 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, jolsa, acme, linux-kernel

> >  #include <linux/time64.h>
> > @@ -440,6 +441,7 @@ static enum counter_recovery stat_handle_error(struct evsel *counter)
> >  			ui__warning("%s event is not supported by the kernel.\n",
> >  				    perf_evsel__name(counter));
> >  		counter->supported = false;
> > +		counter->errored = true;
> 
> how is errored different from supported?
> why can't you use it?

errored means that the event is still partially open, while clearing
supported means it is fully closed. While I guess the two flags could be
combined, it seems cleaner to keep them separate.
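
In comment form, the distinction could be documented roughly as
(a sketch, not verbatim source):

	/*
	 * supported == false: the event cannot be used at all; its fds
	 *                     are closed and it is skipped from now on.
	 * errored == true:    opening failed part way (e.g. for a weak
	 *                     group member); fds may still be open on
	 *                     some CPUs and need the second
	 *                     close/reopen pass.
	 */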

-Andi

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events
  2019-11-11 16:50     ` Andi Kleen
@ 2019-11-11 20:06       ` Jiri Olsa
  2019-11-11 23:31         ` Andi Kleen
  0 siblings, 1 reply; 29+ messages in thread
From: Jiri Olsa @ 2019-11-11 20:06 UTC (permalink / raw)
  To: Andi Kleen; +Cc: Andi Kleen, jolsa, acme, linux-kernel

On Mon, Nov 11, 2019 at 08:50:28AM -0800, Andi Kleen wrote:
> On Mon, Nov 11, 2019 at 03:04:15PM +0100, Jiri Olsa wrote:
> > On Thu, Nov 07, 2019 at 10:16:46AM -0800, Andi Kleen wrote:
> > 
> > SNIP
> > 
> > > diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> > > index 33080f79b977..571bb102b432 100644
> > > --- a/tools/perf/util/evlist.c
> > > +++ b/tools/perf/util/evlist.c
> > > @@ -378,11 +378,28 @@ bool evsel__cpu_iter_skip(struct evsel *ev, int cpu)
> > >  void evlist__disable(struct evlist *evlist)
> > >  {
> > >  	struct evsel *pos;
> > > +	struct affinity affinity;
> > > +	int cpu, i;
> > 
> > should we have the fallback to current code in here (and below) as well?
> > also for reading/opening?
> 
> The return only happens when you're out of memory, when nothing
> will work anyways.

then let's have some assert or BUG_ON on !all_cpus
and remove the fallback code from the close path

jirka

> 
> -Andi
> 
> > 
> > jirka
> > 
> > > +
> > > +	if (affinity__setup(&affinity) < 0)
> > > +		return;
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events
  2019-11-11 20:06       ` Jiri Olsa
@ 2019-11-11 23:31         ` Andi Kleen
  0 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-11 23:31 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, Andi Kleen, jolsa, acme, linux-kernel

> then let's have some assert or BUG_ON on !all_cpus
> and remove the fallback code from close path

I tried it again, but in record mode evsel->cpus is usually NULL,
resulting in various crashes.

I think fixing this is beyond the scope of this patch kit, so I will
keep the fallback checks for now. I'll add better comments though.

-Andi

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v5 09/13] perf evsel: Support opening on a specific CPU
  2019-11-11 13:30   ` Jiri Olsa
@ 2019-11-12  0:41     ` Andi Kleen
  0 siblings, 0 replies; 29+ messages in thread
From: Andi Kleen @ 2019-11-12  0:41 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, jolsa, acme, linux-kernel

On Mon, Nov 11, 2019 at 02:30:33PM +0100, Jiri Olsa wrote:
> On Thu, Nov 07, 2019 at 10:16:42AM -0800, Andi Kleen wrote:
> > From: Andi Kleen <ak@linux.intel.com>
> 
> SNIP
> 
> >  int perf_evsel__open_per_thread(struct evsel *evsel,
> > diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
> > index b10d5ba21966..54513d70c109 100644
> > --- a/tools/perf/util/evsel.h
> > +++ b/tools/perf/util/evsel.h
> > @@ -223,7 +223,8 @@ int evsel__enable(struct evsel *evsel);
> >  int evsel__disable(struct evsel *evsel);
> >  
> >  int perf_evsel__open_per_cpu(struct evsel *evsel,
> > -			     struct perf_cpu_map *cpus);
> > +			     struct perf_cpu_map *cpus,
> > +			     int cpu);
> >  int perf_evsel__open_per_thread(struct evsel *evsel,
> >  				struct perf_thread_map *threads);
> >  int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
> > diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
> > index 6822e4ffe224..36dc95032e4c 100644
> > --- a/tools/perf/util/stat.c
> > +++ b/tools/perf/util/stat.c
> > @@ -517,7 +517,7 @@ int create_perf_stat_counter(struct evsel *evsel,
> >  	}
> >  
> >  	if (target__has_cpu(target) && !target__has_per_thread(target))
> > -		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel));
> > +		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), -1);
> 
> how will -1 owrk in here? it will end up as:
> 
>    perf_evsel__open_per_cpu
>     evsel__open_cpu( ...., start_cpu = -1, end_cpu = -1 + 1)
>       for (cpu = start_cpu; cpu < end_cpu; cpu++) {

Yes, you're right. The problem was the splitting of the patches.
With the two patches combined it works. So the end result is good,
just a bad intermediate step.

I will merge them again.

It seems better than creating something complicated here that
will just be undone in the next patch anyway.
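
For reference, one way the combined patch can keep the -1 ("open on
every CPU in the map") convention; that evsel__open_cpu() takes a
[start_cpu, end_cpu) range and a threads argument is assumed here from
the quoted trace, so treat this as a sketch, not the final code:

	int perf_evsel__open_per_cpu(struct evsel *evsel,
				     struct perf_cpu_map *cpus, int cpu)
	{
		if (cpu == -1)	/* open on every CPU in the map */
			return evsel__open_cpu(evsel, cpus, NULL,
					       0, cpus->nr);

		return evsel__open_cpu(evsel, cpus, NULL, cpu, cpu + 1);
	}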

-Andi

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2019-11-12  0:41 UTC | newest]

Thread overview: 29+ messages
2019-11-07 18:16 Optimize perf stat for large number of events/cpus Andi Kleen
2019-11-07 18:16 ` [PATCH v5 01/13] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
2019-11-07 18:16 ` [PATCH v5 02/13] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
2019-11-07 18:16 ` [PATCH v5 03/13] perf cpumap: Maintain cpumaps ordered and without dups Andi Kleen
2019-11-07 18:16 ` [PATCH v5 04/13] perf evlist: Maintain evlist->all_cpus Andi Kleen
2019-11-11 13:31   ` Jiri Olsa
2019-11-07 18:16 ` [PATCH v5 05/13] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
2019-11-07 18:16 ` [PATCH v5 06/13] perf evsel: Add functions to close evsel on a CPU Andi Kleen
2019-11-07 18:16 ` [PATCH v5 07/13] perf stat: Use affinity for closing file descriptors Andi Kleen
2019-11-11 13:30   ` Jiri Olsa
2019-11-11 16:56     ` Andi Kleen
2019-11-07 18:16 ` [PATCH v5 08/13] perf stat: Factor out open error handling Andi Kleen
2019-11-07 18:16 ` [PATCH v5 09/13] perf evsel: Support opening on a specific CPU Andi Kleen
2019-11-11 13:30   ` Jiri Olsa
2019-11-12  0:41     ` Andi Kleen
2019-11-07 18:16 ` [PATCH v5 10/13] perf stat: Use affinity for opening events Andi Kleen
2019-11-11 13:30   ` Jiri Olsa
2019-11-11 13:30   ` Jiri Olsa
2019-11-11 13:31   ` Jiri Olsa
2019-11-11 17:02     ` Andi Kleen
2019-11-07 18:16 ` [PATCH v5 11/13] perf stat: Use affinity for reading Andi Kleen
2019-11-11 13:31   ` Jiri Olsa
2019-11-07 18:16 ` [PATCH v5 12/13] perf evsel: Add functions to enable/disable for a specific CPU Andi Kleen
2019-11-11 13:30   ` Jiri Olsa
2019-11-07 18:16 ` [PATCH v5 13/13] perf stat: Use affinity for enabling/disabling events Andi Kleen
2019-11-11 14:04   ` Jiri Olsa
2019-11-11 16:50     ` Andi Kleen
2019-11-11 20:06       ` Jiri Olsa
2019-11-11 23:31         ` Andi Kleen
