linux-kernel.vger.kernel.org archive mirror
* Optimize perf stat for large number of events/cpus v2
@ 2019-10-20 17:51 Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
                   ` (9 more replies)
  0 siblings, 10 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz

[The earlier v1 version had a lot of conflicts against some
recent libperf changes in tip/perf/core. Resolve that and
also fix some minor issues.]

This patch kit optimizes perf stat for a large number of events 
on systems with many CPUs and PMUs.

Some profiling shows that most of the overhead is doing IPIs to
all the target CPUs. We can optimize this by using sched_setaffinity
to set the affinity to a target CPU once and then doing
the perf operations for all events on that CPU. This requires
some restructuring, but cuts the setup time quite a bit.

In theory we could go further and parallelize these setups too,
but that would be much more complicated; for now batching per CPU
seems to be sufficient. At some point, with many more cores,
parallelization or a better bulk perf setup API might be needed.
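
For illustration, a minimal sketch of the batching idea (not the actual
perf code; assumes _GNU_SOURCE and <sched.h>, with ncpus standing for the
number of target CPUs -- the series wraps this logic in
tools/perf/util/affinity.c):

	cpu_set_t orig, target;

	sched_getaffinity(0, sizeof(orig), &orig);	/* remember original mask */
	for (int cpu = 0; cpu < ncpus; cpu++) {
		CPU_ZERO(&target);
		CPU_SET(cpu, &target);
		/* migrate perf itself; errors ignored, it is only an optimization */
		sched_setaffinity(0, sizeof(target), &target);
		/* ... perf_event_open/ioctl/close for all events on this CPU ... */
	}
	sched_setaffinity(0, sizeof(orig), &orig);	/* restore the original mask */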

In addition perf does a lot of redundant /sys accesses with
many PMUs, which can also be expensive. This is also optimized.

On a large test case (>700 events with many weak groups) on a 94 CPU
system I go from

real	0m8.607s
user	0m0.550s
sys	0m8.041s

to 

real	0m3.269s
user	0m0.760s
sys	0m1.694s

so shaving ~6 seconds of system time, at slightly more cost
in perf stat itself. On a 4 socket system the savings
are more dramatic:

real	0m15.641s
user	0m0.873s
sys	0m14.729s

to 

real	0m4.493s
user	0m1.578s
sys	0m2.444s

so about an 11s difference in the user-visible setup time.

Also available in 

git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/stat-scale-4

v1: Initial post.
v2: Rebase. Fix some minor issues.

-Andi


^ permalink raw reply	[flat|nested] 28+ messages in thread

* [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-22  8:01   ` Jiri Olsa
  2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 2/9] perf evsel: Avoid close(-1) Andi Kleen
                   ` (8 subsequent siblings)
  9 siblings, 2 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

In some cases when perf_event_open fails, it may do some closes to clean
up. In special cases these closes can fail too, overwriting the errno of
the perf_event_open, which is then reported incorrectly.

Save/restore errno around closes.
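
The idiom, in isolation (the change below applies it around the cleanup
loop at the end of evsel__open):

	old_errno = errno;	/* errno set by the failing perf_event_open */
	close(fd);		/* may clobber errno if it fails */
	errno = old_errno;	/* caller still sees the original error */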

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/evsel.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index abc7fda4a0fe..d831038b55f2 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1574,7 +1574,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 {
 	int cpu, thread, nthreads;
 	unsigned long flags = PERF_FLAG_FD_CLOEXEC;
-	int pid = -1, err;
+	int pid = -1, err, old_errno;
 	enum { NO_CHANGE, SET_TO_MAX, INCREASED_MAX } set_rlimit = NO_CHANGE;
 
 	if ((perf_missing_features.write_backward && evsel->core.attr.write_backward) ||
@@ -1727,8 +1727,8 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 	 */
 	if (err == -EMFILE && set_rlimit < INCREASED_MAX) {
 		struct rlimit l;
-		int old_errno = errno;
 
+		old_errno = errno;
 		if (getrlimit(RLIMIT_NOFILE, &l) == 0) {
 			if (set_rlimit == NO_CHANGE)
 				l.rlim_cur = l.rlim_max;
@@ -1812,6 +1812,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 	if (err)
 		threads->err_thread = thread;
 
+	old_errno = errno;
 	do {
 		while (--thread >= 0) {
 			close(FD(evsel, cpu, thread));
@@ -1819,6 +1820,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 		}
 		thread = nthreads;
 	} while (--cpu >= 0);
+	errno = old_errno;
 	return err;
 }
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 2/9] perf evsel: Avoid close(-1)
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-22  8:01   ` Jiri Olsa
  2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

In some weak group fallback cases close can be called many times with
-1. Check for this case and avoid calling close then.

This is mainly to silence valgrind, which complains about this case.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/evsel.c  | 3 ++-
 tools/perf/util/evsel.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index a8cb582e2721..5a89857b0381 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -120,7 +120,8 @@ void perf_evsel__close_fd(struct perf_evsel *evsel)
 
 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
 		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
-			close(FD(evsel, cpu, thread));
+			if (FD(evsel, cpu, thread) >= 0)
+				close(FD(evsel, cpu, thread));
 			FD(evsel, cpu, thread) = -1;
 		}
 }
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index d831038b55f2..d4451846af93 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1815,7 +1815,8 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 	old_errno = errno;
 	do {
 		while (--thread >= 0) {
-			close(FD(evsel, cpu, thread));
+			if (FD(evsel, cpu, thread) >= 0)
+				close(FD(evsel, cpu, thread));
 			FD(evsel, cpu, thread) = -1;
 		}
 		thread = nthreads;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 2/9] perf evsel: Avoid close(-1) Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-23  9:47   ` Jiri Olsa
  2019-10-20 17:51 ` [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

pmu.c does a lot of redundant /sys accesses while parsing aliases
and probing for PMUs. On large systems with a lot of PMUs this
can get expensive (>2s):

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 27.25    1.227847           8    160888     16976 openat
 26.42    1.190481           7    164224    164077 stat

Add a cache to remember if specific file names exist or don't
exist, which eliminates most of this overhead.

Also optimize some stat() calls into slightly cheaper access() calls.

Resulting in:

  0.18    0.004166           2      1851       305 open
  0.08    0.001970           2       829       622 access
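
The resulting caller pattern in pmu.c (condensed from the diff below):

	bool res = false;

	if (lookup_fncache(path, &res) && !res)
		return 0;			/* cached: known to not exist */
	if (!res && access(path, R_OK) < 0)
		return 0;			/* first probe: file is missing */
	update_fncache(path, true);		/* remember the positive result */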

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/Build     |  1 +
 tools/perf/util/fncache.c | 52 ++++++++++++++++++++++++++++++++++++++
 tools/perf/util/fncache.h |  8 ++++++
 tools/perf/util/pmu.c     | 53 ++++++++++++++++++++++++---------------
 tools/perf/util/srccode.c |  9 +------
 5 files changed, 95 insertions(+), 28 deletions(-)
 create mode 100644 tools/perf/util/fncache.c
 create mode 100644 tools/perf/util/fncache.h

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 39814b1806a6..2c1504fe924c 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -48,6 +48,7 @@ perf-y += header.o
 perf-y += callchain.o
 perf-y += values.o
 perf-y += debug.o
+perf-y += fncache.o
 perf-y += machine.o
 perf-y += map.o
 perf-y += pstack.o
diff --git a/tools/perf/util/fncache.c b/tools/perf/util/fncache.c
new file mode 100644
index 000000000000..0e6e2370b3af
--- /dev/null
+++ b/tools/perf/util/fncache.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Manage a cache of file names' existence */
+#include <stdlib.h>
+#include <unistd.h>
+#include <string.h>
+#include <linux/list.h>
+#include "fncache.h"
+
+struct fncache {
+	struct hlist_node nd;
+	bool res;
+	char name[];
+};
+
+#define FNHSIZE 61
+
+static struct hlist_head fncache_hash[FNHSIZE];
+
+unsigned shash(const unsigned char *s)
+{
+	unsigned h = 0;
+	while (*s)
+		h = 65599 * h + *s++;
+	return h ^ (h >> 16);
+}
+
+bool lookup_fncache(const char *name, bool *res)
+{
+	int h = shash((const unsigned char *)name) % FNHSIZE;
+	struct fncache *n;
+
+	hlist_for_each_entry (n, &fncache_hash[h], nd) {
+		if (!strcmp(n->name, name)) {
+			*res = n->res;
+			return true;
+		}
+	}
+	return false;
+}
+
+/* No LRU, only use when bounded in some other way. */
+void update_fncache(const char *name, bool res)
+{
+	struct fncache *n = malloc(sizeof(struct fncache) + strlen(name) + 1);
+	int h = shash((const unsigned char *)name) % FNHSIZE;
+
+	if (!n)
+		return;
+	strcpy(n->name, name);
+	n->res = res;
+	hlist_add_head(&n->nd, &fncache_hash[h]);
+}
diff --git a/tools/perf/util/fncache.h b/tools/perf/util/fncache.h
new file mode 100644
index 000000000000..93ca473f5357
--- /dev/null
+++ b/tools/perf/util/fncache.h
@@ -0,0 +1,8 @@
+#ifndef _FCACHE_H
+#define _FCACHE_H 1
+
+unsigned shash(const unsigned char *s);
+void update_fncache(const char *name, bool res);
+bool lookup_fncache(const char *name, bool *res);
+
+#endif
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 5608da82ad23..ae5e6e894e79 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -24,6 +24,7 @@
 #include "pmu-events/pmu-events.h"
 #include "string2.h"
 #include "strbuf.h"
+#include "fncache.h"
 
 struct perf_pmu_format {
 	char *name;
@@ -82,9 +83,9 @@ int perf_pmu__format_parse(char *dir, struct list_head *head)
  */
 static int pmu_format(const char *name, struct list_head *format)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
+	bool res = false;
 
 	if (!sysfs)
 		return -1;
@@ -92,8 +93,12 @@ static int pmu_format(const char *name, struct list_head *format)
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/format", sysfs, name);
 
-	if (stat(path, &st) < 0)
+	if (lookup_fncache(path, &res) && !res)
+		return 0;
+
+	if (!res && access(path, R_OK) < 0)
 		return 0;	/* no error if format does not exist */
+	update_fncache(path, true);
 
 	if (perf_pmu__format_parse(path, format))
 		return -1;
@@ -470,9 +475,9 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
  */
 static int pmu_aliases(const char *name, struct list_head *head)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
+	bool res = false;
 
 	if (!sysfs)
 		return -1;
@@ -480,8 +485,11 @@ static int pmu_aliases(const char *name, struct list_head *head)
 	snprintf(path, PATH_MAX,
 		 "%s/bus/event_source/devices/%s/events", sysfs, name);
 
-	if (stat(path, &st) < 0)
-		return 0;	 /* no error if 'events' does not exist */
+	if (lookup_fncache(path, &res) && !res)
+		return 0;
+	if (!res && access(path, R_OK) < 0)
+		return 0;
+	update_fncache(path, true);
 
 	if (pmu_aliases_parse(path, head))
 		return -1;
@@ -520,7 +528,6 @@ static int pmu_alias_terms(struct perf_pmu_alias *alias,
  */
 static int pmu_type(const char *name, __u32 *type)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	FILE *file;
 	int ret = 0;
@@ -532,7 +539,7 @@ static int pmu_type(const char *name, __u32 *type)
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/type", sysfs, name);
 
-	if (stat(path, &st) < 0)
+	if (access(path, R_OK) < 0)
 		return -1;
 
 	file = fopen(path, "r");
@@ -623,14 +630,16 @@ static struct perf_cpu_map *pmu_cpumask(const char *name)
 static bool pmu_is_uncore(const char *name)
 {
 	char path[PATH_MAX];
-	struct perf_cpu_map *cpus;
-	const char *sysfs = sysfs__mountpoint();
+	const char *sysfs;
+	bool res;
 
+	sysfs = sysfs__mountpoint();
 	snprintf(path, PATH_MAX, CPUS_TEMPLATE_UNCORE, sysfs, name);
-	cpus = __pmu_cpumask(path);
-	perf_cpu_map__put(cpus);
-
-	return !!cpus;
+	if (lookup_fncache(path, &res))
+		return res;
+	res = access(path, R_OK) == 0;
+	update_fncache(path, res);
+	return res;
 }
 
 /*
@@ -640,9 +649,9 @@ static bool pmu_is_uncore(const char *name)
  */
 static int is_arm_pmu_core(const char *name)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs = sysfs__mountpoint();
+	bool res;
 
 	if (!sysfs)
 		return 0;
@@ -650,10 +659,11 @@ static int is_arm_pmu_core(const char *name)
 	/* Look for cpu sysfs (specific to arm) */
 	scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/cpus",
 				sysfs, name);
-	if (stat(path, &st) == 0)
-		return 1;
-
-	return 0;
+	if (lookup_fncache(path, &res))
+		return res;
+	res = access(path, R_OK) == 0;
+	update_fncache(path, res);
+	return res;
 }
 
 static char *perf_pmu__getcpuid(struct perf_pmu *pmu)
@@ -1519,9 +1529,9 @@ bool pmu_have_event(const char *pname, const char *name)
 
 static FILE *perf_pmu__open_file(struct perf_pmu *pmu, const char *name)
 {
-	struct stat st;
 	char path[PATH_MAX];
 	const char *sysfs;
+	bool res = false;
 
 	sysfs = sysfs__mountpoint();
 	if (!sysfs)
@@ -1530,8 +1540,11 @@ static FILE *perf_pmu__open_file(struct perf_pmu *pmu, const char *name)
 	snprintf(path, PATH_MAX,
 		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/%s", sysfs, pmu->name, name);
 
-	if (stat(path, &st) < 0)
+	if (lookup_fncache(path, &res) && !res)
+		return NULL;
+	if (!res && access(path, R_OK) < 0)
 		return NULL;
+	update_fncache(path, true);
 
 	return fopen(path, "r");
 }
diff --git a/tools/perf/util/srccode.c b/tools/perf/util/srccode.c
index d84ed8b6caaa..c29edaaca863 100644
--- a/tools/perf/util/srccode.c
+++ b/tools/perf/util/srccode.c
@@ -16,6 +16,7 @@
 #include "srccode.h"
 #include "debug.h"
 #include <internal/lib.h> // page_size
+#include "fncache.h"
 
 #define MAXSRCCACHE (32*1024*1024)
 #define MAXSRCFILES     64
@@ -36,14 +37,6 @@ static LIST_HEAD(srcfile_list);
 static long map_total_sz;
 static int num_srcfiles;
 
-static unsigned shash(unsigned char *s)
-{
-	unsigned h = 0;
-	while (*s)
-		h = 65599 * h + *s++;
-	return h ^ (h >> 16);
-}
-
 static int countlines(char *map, int maplen)
 {
 	int numl;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (2 preceding siblings ...)
  2019-10-20 17:51 ` [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-23  9:59   ` Jiri Olsa
  2019-10-20 17:51 ` [PATCH v2 5/9] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

The kernel perf subsystem has to IPI to the target CPU for many
operations. On systems with many CPUs and when managing many events the
overhead can be dominated by lots of IPIs.

An alternative is to set up CPU affinity in the perf tool, then set up
all the events for that CPU, and then move on to the next CPU.

Add some affinity management infrastructure to enable such a model.
Used in follow-on patches.
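
A sketch of the intended usage, as wired up by the follow-on patches in
this series:

	struct affinity affinity;
	int i;

	if (affinity__setup(&affinity) < 0)
		return;
	for (i = 0; i < cpus->nr; i++) {
		affinity__set(&affinity, cpus->map[i]);
		/* open/enable/read/close all events for this CPU here */
	}
	affinity__cleanup(&affinity);	/* restore the original affinity mask */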

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/Build      |  1 +
 tools/perf/util/affinity.c | 71 ++++++++++++++++++++++++++++++++++++++
 tools/perf/util/affinity.h | 15 ++++++++
 3 files changed, 87 insertions(+)
 create mode 100644 tools/perf/util/affinity.c
 create mode 100644 tools/perf/util/affinity.h

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 2c1504fe924c..c7d4eab017e5 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -76,6 +76,7 @@ perf-y += sort.o
 perf-y += hist.o
 perf-y += util.o
 perf-y += cpumap.o
+perf-y += affinity.o
 perf-y += cputopo.o
 perf-y += cgroup.o
 perf-y += target.o
diff --git a/tools/perf/util/affinity.c b/tools/perf/util/affinity.c
new file mode 100644
index 000000000000..c42a6b9d63f0
--- /dev/null
+++ b/tools/perf/util/affinity.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Manage affinity to optimize IPIs inside the kernel perf API. */
+#define _GNU_SOURCE 1
+#include <sched.h>
+#include <stdlib.h>
+#include <linux/zalloc.h>
+#include "perf.h"
+#include "cpumap.h"
+#include "affinity.h"
+
+static int get_cpu_set_size(void)
+{
+	int sz = (cpu__max_cpu() + 64 - 1) / 64;
+	/*
+	 * sched_getaffinity doesn't like masks smaller than the kernel.
+	 * Hopefully that's big enough.
+	 */
+	if (sz < 4096/8)
+		sz = 4096/8;
+	return sz;
+}
+
+int affinity__setup(struct affinity *a)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	a->orig_cpus = malloc(cpu_set_size);
+	if (!a->orig_cpus)
+		return -1;
+	sched_getaffinity(0, cpu_set_size, (cpu_set_t *)a->orig_cpus);
+	a->sched_cpus = zalloc(cpu_set_size);
+	if (!a->sched_cpus) {
+		free(a->orig_cpus);
+		return -1;
+	}
+	a->changed = false;
+	return 0;
+}
+
+/*
+ * perf_event_open does an IPI internally to the target CPU.
+ * It is more efficient to change perf's affinity to the target
+ * CPU and then set up all events on that CPU, so we amortize
+ * CPU communication.
+ */
+void affinity__set(struct affinity *a, int cpu)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	if (cpu == -1)
+		return;
+	a->changed = true;
+	a->sched_cpus[cpu / 8] |= 1 << (cpu % 8);
+	/*
+	 * We ignore errors because affinity is just an optimization.
+	 * This could happen for example with isolated CPUs or cpusets.
+	 * In this case the IPIs inside the kernel's perf API still work.
+	 */
+	sched_setaffinity(0, cpu_set_size, (cpu_set_t *)a->sched_cpus);
+	a->sched_cpus[cpu / 8] ^= 1 << (cpu % 8);
+}
+
+void affinity__cleanup(struct affinity *a)
+{
+	int cpu_set_size = get_cpu_set_size();
+
+	if (a->changed)
+		sched_setaffinity(0, cpu_set_size, (cpu_set_t *)a->orig_cpus);
+	free(a->sched_cpus);
+	free(a->orig_cpus);
+}
diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
new file mode 100644
index 000000000000..e56148607e33
--- /dev/null
+++ b/tools/perf/util/affinity.h
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef AFFINITY_H
+#define AFFINITY_H 1
+
+struct affinity {
+	unsigned char *orig_cpus;
+	unsigned char *sched_cpus;
+	bool changed;
+};
+
+void affinity__cleanup(struct affinity *a);
+void affinity__set(struct affinity *a, int cpu);
+int affinity__setup(struct affinity *a);
+
+#endif
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 5/9] perf evsel: Add iterator to iterate over events ordered by CPU
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (3 preceding siblings ...)
  2019-10-20 17:51 ` [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-20 17:51 ` [PATCH v2 6/9] perf stat: Use affinity for closing file descriptors Andi Kleen
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Add some common code that is needed to iterate over all events
in CPU order. Used in follow-on patches.
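
A sketch of how the iterator is meant to be used (see the follow-on
patches):

	struct perf_cpu_map *cpus = evlist__cpu_iter_start(evlist);
	int i;

	for (i = 0; i < cpus->nr; i++) {
		int cpu = cpus->map[i];

		evlist__for_each_entry(evlist, evsel) {
			if (evlist__cpu_iter_skip(evsel, cpu))
				continue;	/* event does not count on this CPU */
			/* operate on evsel for this CPU (index evsel->cpu_index) */
			evlist__cpu_iter_next(evsel);
		}
	}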

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/util/evlist.c | 33 +++++++++++++++++++++++++++++++++
 tools/perf/util/evlist.h |  4 ++++
 tools/perf/util/evsel.h  |  1 +
 3 files changed, 38 insertions(+)

diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 21b77efa802c..27b4b958eddd 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -341,6 +341,39 @@ static int perf_evlist__nr_threads(struct evlist *evlist,
 		return perf_thread_map__nr(evlist->core.threads);
 }
 
+struct perf_cpu_map *evlist__cpu_iter_start(struct evlist *evlist)
+{
+	struct perf_cpu_map *cpus;
+	struct evsel *pos;
+
+	/*
+	 * evlist->cpus is not necessarily a superset of all the
+	 * event's cpus, so compute our own super set. This
+	 * assume that there is a super set
+	 */
+	cpus = evlist->core.cpus;
+	evlist__for_each_entry(evlist, pos) {
+		pos->cpu_index = 0;
+		if (pos->core.cpus->nr > cpus->nr)
+			cpus = pos->core.cpus;
+	}
+	return cpus;
+}
+
+bool evlist__cpu_iter_skip(struct evsel *ev, int cpu)
+{
+	if (ev->cpu_index >= ev->core.cpus->nr)
+		return true;
+	if (cpu >= 0 && ev->core.cpus->map[ev->cpu_index] != cpu)
+		return true;
+	return false;
+}
+
+void evlist__cpu_iter_next(struct evsel *ev)
+{
+	ev->cpu_index++;
+}
+
 void evlist__disable(struct evlist *evlist)
 {
 	struct evsel *pos;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 13051409fd22..c1deb8ebdcea 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -336,6 +336,10 @@ void perf_evlist__to_front(struct evlist *evlist,
 void perf_evlist__set_tracking_event(struct evlist *evlist,
 				     struct evsel *tracking_evsel);
 
+struct perf_cpu_map *evlist__cpu_iter_start(struct evlist *evlist);
+bool evlist__cpu_iter_skip(struct evsel *ev, int cpu);
+void evlist__cpu_iter_next(struct evsel *ev);
+
 struct evsel *
 perf_evlist__find_evsel_by_str(struct evlist *evlist, const char *str);
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index ddc5ee6f6592..cf90019ae744 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -95,6 +95,7 @@ struct evsel {
 	bool			collect_stat;
 	bool			weak_group;
 	bool			percore;
+	int			cpu_index;
 	const char		*pmu_name;
 	struct {
 		perf_evsel__sb_cb_t	*cb;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 6/9] perf stat: Use affinity for closing file descriptors
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (4 preceding siblings ...)
  2019-10-20 17:51 ` [PATCH v2 5/9] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
@ 2019-10-20 17:51 ` Andi Kleen
  2019-10-20 17:52 ` [PATCH v2 7/9] perf stat: Use affinity for opening events Andi Kleen
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:51 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Closing a perf fd can also trigger an IPI to the target CPU.
Use the same affinity technique as used for reading/enabling events
when closing, to optimize the CPU transitions.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 32.56    3.085463          50     61483           close

After:

 10.54    0.735704          11     61485           close

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/evsel.c              | 27 +++++++++++++++++++------
 tools/perf/lib/include/perf/evsel.h |  1 +
 tools/perf/util/evlist.c            | 31 +++++++++++++++++++++++++++--
 tools/perf/util/evsel.h             |  1 +
 4 files changed, 52 insertions(+), 8 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index 5a89857b0381..ea775dacbd2d 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -114,16 +114,23 @@ int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 	return err;
 }
 
+static void perf_evsel__close_fd_cpu(struct perf_evsel *evsel, int cpu)
+{
+	int thread;
+
+	for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
+		if (FD(evsel, cpu, thread) >= 0)
+			close(FD(evsel, cpu, thread));
+		FD(evsel, cpu, thread) = -1;
+	}
+}
+
 void perf_evsel__close_fd(struct perf_evsel *evsel)
 {
-	int cpu, thread;
+	int cpu;
 
 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
-		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
-			if (FD(evsel, cpu, thread) >= 0)
-				close(FD(evsel, cpu, thread));
-			FD(evsel, cpu, thread) = -1;
-		}
+		perf_evsel__close_fd_cpu(evsel, cpu);
 }
 
 void perf_evsel__free_fd(struct perf_evsel *evsel)
@@ -141,6 +148,14 @@ void perf_evsel__close(struct perf_evsel *evsel)
 	perf_evsel__free_fd(evsel);
 }
 
+void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu)
+{
+	if (evsel->fd == NULL)
+		return;
+
+	perf_evsel__close_fd_cpu(evsel, cpu);
+}
+
 int perf_evsel__read_size(struct perf_evsel *evsel)
 {
 	u64 read_format = evsel->attr.read_format;
diff --git a/tools/perf/lib/include/perf/evsel.h b/tools/perf/lib/include/perf/evsel.h
index 4388667f265c..ed10a914cd3f 100644
--- a/tools/perf/lib/include/perf/evsel.h
+++ b/tools/perf/lib/include/perf/evsel.h
@@ -28,6 +28,7 @@ LIBPERF_API void perf_evsel__delete(struct perf_evsel *evsel);
 LIBPERF_API int perf_evsel__open(struct perf_evsel *evsel, struct perf_cpu_map *cpus,
 				 struct perf_thread_map *threads);
 LIBPERF_API void perf_evsel__close(struct perf_evsel *evsel);
+LIBPERF_API void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 				 struct perf_counts_values *count);
 LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 27b4b958eddd..b1b29d473a9f 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -18,6 +18,7 @@
 #include "debug.h"
 #include "units.h"
 #include <internal/lib.h> // page_size
+#include "affinity.h"
 #include "../perf.h"
 #include "asm/bug.h"
 #include "bpf-event.h"
@@ -1174,9 +1175,35 @@ void perf_evlist__set_selected(struct evlist *evlist,
 void evlist__close(struct evlist *evlist)
 {
 	struct evsel *evsel;
+	struct affinity affinity;
+	struct perf_cpu_map *cpus;
+	int i;
+
+	/* So far record doesn't set this up */
+	if (!evlist->core.cpus) {
+		evlist__for_each_entry_reverse(evlist, evsel)
+			evsel__close(evsel);
+		return;
+	}
 
-	evlist__for_each_entry_reverse(evlist, evsel)
-		evsel__close(evsel);
+	if (affinity__setup(&affinity) < 0)
+		return;
+	cpus = evlist__cpu_iter_start(evlist);
+	for (i = 0; i < cpus->nr; i++) {
+		int cpu = cpus->map[i];
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry_reverse(evlist, evsel) {
+			if (evlist__cpu_iter_skip(evsel, cpu))
+			    continue;
+			perf_evsel__close_cpu(&evsel->core, evsel->cpu_index);
+			evlist__cpu_iter_next(evsel);
+		}
+	}
+	evlist__for_each_entry_reverse(evlist, evsel) {
+		perf_evsel__free_fd(&evsel->core);
+		perf_evsel__free_id(&evsel->core);
+	}
 }
 
 static int perf_evlist__create_syswide_maps(struct evlist *evlist)
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index cf90019ae744..2e3b011ed09e 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -391,4 +391,5 @@ static inline bool evsel__has_callchain(const struct evsel *evsel)
 struct perf_env *perf_evsel__env(struct evsel *evsel);
 
 int perf_evsel__store_ids(struct evsel *evsel, struct evlist *evlist);
+
 #endif /* __PERF_EVSEL_H */
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 7/9] perf stat: Use affinity for opening events
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (5 preceding siblings ...)
  2019-10-20 17:51 ` [PATCH v2 6/9] perf stat: Use affinity for closing file descriptors Andi Kleen
@ 2019-10-20 17:52 ` Andi Kleen
  2019-10-20 17:52 ` [PATCH v2 8/9] perf stat: Use affinity for reading Andi Kleen
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:52 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure the event opening in perf stat to cycle through
the events by CPU after setting affinity to that CPU.
This eliminates IPI overhead in the perf API.

We have to loop through the CPUs in the outer builtin-stat
code instead of leaving that to the low level functions.

This changes the weak group fallback strategy slightly:
since we cannot easily undo the opens for other CPUs,
move the weak group retry to a separate second pass.

Before with a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 42.75    4.050910          67     60046       110 perf_event_open

After:

 26.86    0.944396          16     58069       110 perf_event_open

(the number changes slightly because the weak group retries
work differently and the test case relies on weak groups)

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-record.c    |   2 +-
 tools/perf/builtin-stat.c      | 194 +++++++++++++++++++++++++--------
 tools/perf/tests/event-times.c |   4 +-
 tools/perf/util/evlist.c       |   8 +-
 tools/perf/util/evlist.h       |   3 +-
 tools/perf/util/evsel.c        |  18 ++-
 tools/perf/util/evsel.h        |   5 +-
 tools/perf/util/stat.c         |   5 +-
 tools/perf/util/stat.h         |   3 +-
 9 files changed, 182 insertions(+), 60 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 2fb83aabbef5..9f8a9393ce4a 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -776,7 +776,7 @@ static int record__open(struct record *rec)
 			if ((errno == EINVAL || errno == EBADF) &&
 			    pos->leader != pos &&
 			    pos->weak_group) {
-			        pos = perf_evlist__reset_weak_group(evlist, pos);
+			        pos = perf_evlist__reset_weak_group(evlist, pos, true);
 				goto try_again;
 			}
 			rc = -errno;
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 468fc49420ce..330a36023494 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -65,6 +65,7 @@
 #include "util/target.h"
 #include "util/time-utils.h"
 #include "util/top.h"
+#include "util/affinity.h"
 #include "asm/bug.h"
 
 #include <linux/time64.h>
@@ -420,6 +421,58 @@ static bool is_target_alive(struct target *_target,
 	return false;
 }
 
+enum counter_recovery {
+	COUNTER_SKIP,
+	COUNTER_RETRY,
+	COUNTER_FATAL,
+};
+
+static enum counter_recovery stat_handle_error(struct evsel *counter)
+{
+	char msg[BUFSIZ];
+	/*
+	 * PPC returns ENXIO for HW counters until 2.6.37
+	 * (behavior changed with commit b0a873e).
+	 */
+	if (errno == EINVAL || errno == ENOSYS ||
+	    errno == ENOENT || errno == EOPNOTSUPP ||
+	    errno == ENXIO) {
+		if (verbose > 0)
+			ui__warning("%s event is not supported by the kernel.\n",
+				    perf_evsel__name(counter));
+		counter->supported = false;
+		counter->errored = true;
+
+		if ((counter->leader != counter) ||
+		    !(counter->leader->core.nr_members > 1))
+			return COUNTER_SKIP;
+	} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
+		if (verbose > 0)
+			ui__warning("%s\n", msg);
+		return COUNTER_RETRY;
+	} else if (target__has_per_thread(&target) &&
+		   evsel_list->core.threads &&
+		   evsel_list->core.threads->err_thread != -1) {
+		/*
+		 * For global --per-thread case, skip current
+		 * error thread.
+		 */
+		if (!thread_map__remove(evsel_list->core.threads,
+					evsel_list->core.threads->err_thread)) {
+			evsel_list->core.threads->err_thread = -1;
+			return COUNTER_RETRY;
+		}
+	}
+
+	perf_evsel__open_strerror(counter, &target,
+				  errno, msg, sizeof(msg));
+	ui__error("%s\n", msg);
+
+	if (child_pid != -1)
+		kill(child_pid, SIGTERM);
+	return COUNTER_FATAL;
+}
+
 static int __run_perf_stat(int argc, const char **argv, int run_idx)
 {
 	int interval = stat_config.interval;
@@ -428,11 +481,15 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	char msg[BUFSIZ];
 	unsigned long long t0, t1;
 	struct evsel *counter;
+	struct perf_cpu_map *cpus;
 	struct timespec ts;
 	size_t l;
 	int status = 0;
 	const bool forks = (argc > 0);
 	bool is_pipe = STAT_RECORD ? perf_stat.data.is_pipe : false;
+	struct affinity affinity;
+	int i;
+	bool second_pass = false;
 
 	if (interval) {
 		ts.tv_sec  = interval / USEC_PER_MSEC;
@@ -457,61 +514,110 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 	if (group)
 		perf_evlist__set_leader(evsel_list);
 
-	evlist__for_each_entry(evsel_list, counter) {
+	if (affinity__setup(&affinity) < 0)
+		return -1;
+
+	cpus = evlist__cpu_iter_start(evsel_list);
+
+	for (i = 0; i < cpus->nr; i++) {
+		int cpu = cpus->map[i];
+
+		affinity__set(&affinity, cpu);
+		evlist__for_each_entry(evsel_list, counter) {
+			if (evlist__cpu_iter_skip(counter, cpu))
+				continue;
+			if (counter->reset_group || counter->errored)
+				continue;
+			evlist__cpu_iter_next(counter);
 try_again:
-		if (create_perf_stat_counter(counter, &stat_config, &target) < 0) {
-
-			/* Weak group failed. Reset the group. */
-			if ((errno == EINVAL || errno == EBADF) &&
-			    counter->leader != counter &&
-			    counter->weak_group) {
-				counter = perf_evlist__reset_weak_group(evsel_list, counter);
-				goto try_again;
-			}
+			if (create_perf_stat_counter(counter, &stat_config, &target,
+						     counter->cpu_index - 1) < 0) {
 
-			/*
-			 * PPC returns ENXIO for HW counters until 2.6.37
-			 * (behavior changed with commit b0a873e).
-			 */
-			if (errno == EINVAL || errno == ENOSYS ||
-			    errno == ENOENT || errno == EOPNOTSUPP ||
-			    errno == ENXIO) {
-				if (verbose > 0)
-					ui__warning("%s event is not supported by the kernel.\n",
-						    perf_evsel__name(counter));
-				counter->supported = false;
-
-				if ((counter->leader != counter) ||
-				    !(counter->leader->core.nr_members > 1))
-					continue;
-			} else if (perf_evsel__fallback(counter, errno, msg, sizeof(msg))) {
-                                if (verbose > 0)
-                                        ui__warning("%s\n", msg);
-                                goto try_again;
-			} else if (target__has_per_thread(&target) &&
-				   evsel_list->core.threads &&
-				   evsel_list->core.threads->err_thread != -1) {
 				/*
-				 * For global --per-thread case, skip current
-				 * error thread.
+				 * Weak group failed. We cannot just undo this here
+				 * because earlier CPUs might be in group mode, and the kernel
+				 * doesn't support mixing group and non group reads. Defer
+				 * it to later.
+				 * Don't close here because we're in the wrong affinity.
 				 */
-				if (!thread_map__remove(evsel_list->core.threads,
-							evsel_list->core.threads->err_thread)) {
-					evsel_list->core.threads->err_thread = -1;
+				if ((errno == EINVAL || errno == EBADF) &&
+				    counter->leader != counter &&
+				    counter->weak_group) {
+					perf_evlist__reset_weak_group(evsel_list, counter, false);
+					assert(counter->reset_group);
+					second_pass = true;
+					continue;
+				}
+
+				switch (stat_handle_error(counter)) {
+				case COUNTER_FATAL:
+					return -1;
+				case COUNTER_RETRY:
 					goto try_again;
+				case COUNTER_SKIP:
+					continue;
+				default:
+					break;
 				}
+
 			}
+			counter->supported = true;
+		}
+	}
 
-			perf_evsel__open_strerror(counter, &target,
-						  errno, msg, sizeof(msg));
-			ui__error("%s\n", msg);
+	if (second_pass) {
+		/*
+		 * Now redo all the weak group after closing them,
+		 * and also close errored counters.
+		 */
 
-			if (child_pid != -1)
-				kill(child_pid, SIGTERM);
+		cpus = evlist__cpu_iter_start(evsel_list);
+		for (i = 0; i < cpus->nr; i++) {
+			int cpu = cpus->map[i];
 
-			return -1;
+			affinity__set(&affinity, cpu);
+			/* First close errored or weak retry */
+			evlist__for_each_entry(evsel_list, counter) {
+				if (!counter->reset_group && !counter->errored)
+					continue;
+				if (evlist__cpu_iter_skip(counter, cpu))
+					continue;
+				perf_evsel__close_cpu(&counter->core, counter->cpu_index);
+			}
+			/* Now reopen */
+			evlist__for_each_entry(evsel_list, counter) {
+				if (!counter->reset_group || counter->errored)
+					continue;
+				if (evlist__cpu_iter_skip(counter, cpu))
+					continue;
+				evlist__cpu_iter_next(counter);
+try_again_reset:
+				pr_debug2("reopening weak %s\n", perf_evsel__name(counter));
+				if (create_perf_stat_counter(counter, &stat_config, &target,
+							     counter->cpu_index - 1) < 0) {
+
+					switch (stat_handle_error(counter)) {
+					case COUNTER_FATAL:
+						return -1;
+					case COUNTER_RETRY:
+						goto try_again_reset;
+					case COUNTER_SKIP:
+						continue;
+					default:
+						break;
+					}
+				}
+				counter->supported = true;
+			}
+		}
+	}
+	affinity__cleanup(&affinity);
+
+	evlist__for_each_entry(evsel_list, counter) {
+		if (!counter->supported) {
+			perf_evsel__free_fd(&counter->core);
+			continue;
 		}
-		counter->supported = true;
 
 		l = strlen(counter->unit);
 		if (l > stat_config.unit_width)
diff --git a/tools/perf/tests/event-times.c b/tools/perf/tests/event-times.c
index 1ee8704e2284..1e8a9f5c356d 100644
--- a/tools/perf/tests/event-times.c
+++ b/tools/perf/tests/event-times.c
@@ -125,7 +125,7 @@ static int attach__cpu_disabled(struct evlist *evlist)
 
 	evsel->core.attr.disabled = 1;
 
-	err = perf_evsel__open_per_cpu(evsel, cpus);
+	err = perf_evsel__open_per_cpu(evsel, cpus, -1);
 	if (err) {
 		if (err == -EACCES)
 			return TEST_SKIP;
@@ -152,7 +152,7 @@ static int attach__cpu_enabled(struct evlist *evlist)
 		return -1;
 	}
 
-	err = perf_evsel__open_per_cpu(evsel, cpus);
+	err = perf_evsel__open_per_cpu(evsel, cpus, -1);
 	if (err == -EACCES)
 		return TEST_SKIP;
 
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index b1b29d473a9f..bcb8a3670f3f 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1641,7 +1641,8 @@ void perf_evlist__force_leader(struct evlist *evlist)
 }
 
 struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
-						 struct evsel *evsel)
+						 struct evsel *evsel,
+						bool close)
 {
 	struct evsel *c2, *leader;
 	bool is_open = true;
@@ -1658,10 +1659,11 @@ struct evsel *perf_evlist__reset_weak_group(struct evlist *evsel_list,
 		if (c2 == evsel)
 			is_open = false;
 		if (c2->leader == leader) {
-			if (is_open)
-				perf_evsel__close(&evsel->core);
+			if (is_open && close)
+				perf_evsel__close(&c2->core);
 			c2->leader = c2;
 			c2->core.nr_members = 0;
+			c2->reset_group = true;
 		}
 	}
 	return leader;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index c1deb8ebdcea..d9174d565db3 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -351,5 +351,6 @@ bool perf_evlist__exclude_kernel(struct evlist *evlist);
 void perf_evlist__force_leader(struct evlist *evlist);
 
 struct evsel *perf_evlist__reset_weak_group(struct evlist *evlist,
-						 struct evsel *evsel);
+						 struct evsel *evsel,
+						bool close);
 #endif /* __PERF_EVLIST_H */
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index d4451846af93..7106f9a067df 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1569,8 +1569,9 @@ static int perf_event_open(struct evsel *evsel,
 	return fd;
 }
 
-int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
-		struct perf_thread_map *threads)
+static int evsel__open_cpu(struct evsel *evsel, struct perf_cpu_map *cpus,
+		struct perf_thread_map *threads,
+		int start_cpu, int end_cpu)
 {
 	int cpu, thread, nthreads;
 	unsigned long flags = PERF_FLAG_FD_CLOEXEC;
@@ -1647,7 +1648,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 
 	display_attr(&evsel->core.attr);
 
-	for (cpu = 0; cpu < cpus->nr; cpu++) {
+	for (cpu = start_cpu; cpu < end_cpu; cpu++) {
 
 		for (thread = 0; thread < nthreads; thread++) {
 			int fd, group_fd;
@@ -1825,6 +1826,12 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 	return err;
 }
 
+int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
+		struct perf_thread_map *threads)
+{
+	return evsel__open_cpu(evsel, cpus, threads, 0, cpus ? cpus->nr : 1);
+}
+
 void evsel__close(struct evsel *evsel)
 {
 	perf_evsel__close(&evsel->core);
@@ -1832,9 +1839,10 @@ void evsel__close(struct evsel *evsel)
 }
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
-			     struct perf_cpu_map *cpus)
+			     struct perf_cpu_map *cpus,
+			     int cpu)
 {
-	return evsel__open(evsel, cpus, NULL);
+	return evsel__open_cpu(evsel, cpus, NULL, cpu, cpu + 1);
 }
 
 int perf_evsel__open_per_thread(struct evsel *evsel,
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 2e3b011ed09e..d5440a928745 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -94,6 +94,8 @@ struct evsel {
 	struct evsel		*metric_leader;
 	bool			collect_stat;
 	bool			weak_group;
+	bool			reset_group;
+	bool			errored;
 	bool			percore;
 	int			cpu_index;
 	const char		*pmu_name;
@@ -223,7 +225,8 @@ int evsel__enable(struct evsel *evsel);
 int evsel__disable(struct evsel *evsel);
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
-			     struct perf_cpu_map *cpus);
+			     struct perf_cpu_map *cpus,
+			     int cpu);
 int perf_evsel__open_per_thread(struct evsel *evsel,
 				struct perf_thread_map *threads);
 int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
diff --git a/tools/perf/util/stat.c b/tools/perf/util/stat.c
index ebdd130557fb..4b9e196e5cd0 100644
--- a/tools/perf/util/stat.c
+++ b/tools/perf/util/stat.c
@@ -463,7 +463,8 @@ size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp)
 
 int create_perf_stat_counter(struct evsel *evsel,
 			     struct perf_stat_config *config,
-			     struct target *target)
+			     struct target *target,
+			     int cpu)
 {
 	struct perf_event_attr *attr = &evsel->core.attr;
 	struct evsel *leader = evsel->leader;
@@ -507,7 +508,7 @@ int create_perf_stat_counter(struct evsel *evsel,
 	}
 
 	if (target__has_cpu(target) && !target__has_per_thread(target))
-		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel));
+		return perf_evsel__open_per_cpu(evsel, evsel__cpus(evsel), cpu);
 
 	return perf_evsel__open_per_thread(evsel, evsel->core.threads);
 }
diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
index edbeb2f63e8d..0773e4b7ec44 100644
--- a/tools/perf/util/stat.h
+++ b/tools/perf/util/stat.h
@@ -211,7 +211,8 @@ size_t perf_event__fprintf_stat_config(union perf_event *event, FILE *fp);
 
 int create_perf_stat_counter(struct evsel *evsel,
 			     struct perf_stat_config *config,
-			     struct target *target);
+			     struct target *target,
+			     int cpu);
 void
 perf_evlist__print_counters(struct evlist *evlist,
 			    struct perf_stat_config *config,
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 8/9] perf stat: Use affinity for reading
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (6 preceding siblings ...)
  2019-10-20 17:52 ` [PATCH v2 7/9] perf stat: Use affinity for opening events Andi Kleen
@ 2019-10-20 17:52 ` Andi Kleen
  2019-10-20 17:52 ` [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events Andi Kleen
  2019-10-22  8:02 ` Optimize perf stat for large number of events/cpus v2 Jiri Olsa
  9 siblings, 0 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:52 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure event reading to use affinity to minimize the number
of IPIs needed.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
  3.16    0.106079           4     22082           read

After:

  3.43    0.081295           3     22082           read

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/builtin-stat.c | 97 ++++++++++++++++++++++-----------------
 tools/perf/util/evsel.h   |  1 +
 2 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 330a36023494..dd648f6f77ee 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -266,15 +266,10 @@ static int read_single_counter(struct evsel *counter, int cpu,
  * Read out the results of a single counter:
  * do not aggregate counts across CPUs in system-wide mode
  */
-static int read_counter(struct evsel *counter, struct timespec *rs)
+static int read_counter(struct evsel *counter, struct timespec *rs, int cpu)
 {
 	int nthreads = perf_thread_map__nr(evsel_list->core.threads);
-	int ncpus, cpu, thread;
-
-	if (target__has_cpu(&target) && !target__has_per_thread(&target))
-		ncpus = perf_evsel__nr_cpus(counter);
-	else
-		ncpus = 1;
+	int thread;
 
 	if (!counter->supported)
 		return -ENOENT;
@@ -283,40 +278,38 @@ static int read_counter(struct evsel *counter, struct timespec *rs)
 		nthreads = 1;
 
 	for (thread = 0; thread < nthreads; thread++) {
-		for (cpu = 0; cpu < ncpus; cpu++) {
-			struct perf_counts_values *count;
-
-			count = perf_counts(counter->counts, cpu, thread);
-
-			/*
-			 * The leader's group read loads data into its group members
-			 * (via perf_evsel__read_counter) and sets threir count->loaded.
-			 */
-			if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
-			    read_single_counter(counter, cpu, thread, rs)) {
-				counter->counts->scaled = -1;
-				perf_counts(counter->counts, cpu, thread)->ena = 0;
-				perf_counts(counter->counts, cpu, thread)->run = 0;
-				return -1;
-			}
+		struct perf_counts_values *count;
 
-			perf_counts__set_loaded(counter->counts, cpu, thread, false);
+		count = perf_counts(counter->counts, cpu, thread);
 
-			if (STAT_RECORD) {
-				if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
-					pr_err("failed to write stat event\n");
-					return -1;
-				}
-			}
+		/*
+		 * The leader's group read loads data into its group members
+		 * (via perf_evsel__read_counter) and sets threir count->loaded.
+		 */
+		if (!perf_counts__is_loaded(counter->counts, cpu, thread) &&
+		    read_single_counter(counter, cpu, thread, rs)) {
+			counter->counts->scaled = -1;
+			perf_counts(counter->counts, cpu, thread)->ena = 0;
+			perf_counts(counter->counts, cpu, thread)->run = 0;
+			return -1;
+		}
+
+		perf_counts__set_loaded(counter->counts, cpu, thread, false);
 
-			if (verbose > 1) {
-				fprintf(stat_config.output,
-					"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
-						perf_evsel__name(counter),
-						cpu,
-						count->val, count->ena, count->run);
+		if (STAT_RECORD) {
+			if (perf_evsel__write_stat_event(counter, cpu, thread, count)) {
+				pr_err("failed to write stat event\n");
+				return -1;
 			}
 		}
+
+		if (verbose > 1) {
+			fprintf(stat_config.output,
+				"%s: %d: %" PRIu64 " %" PRIu64 " %" PRIu64 "\n",
+					perf_evsel__name(counter),
+					cpu,
+					count->val, count->ena, count->run);
+		}
 	}
 
 	return 0;
@@ -325,15 +318,37 @@ static int read_counter(struct evsel *counter, struct timespec *rs)
 static void read_counters(struct timespec *rs)
 {
 	struct evsel *counter;
-	int ret;
+	struct affinity affinity;
+	int i, ncpus;
+	struct perf_cpu_map *cpus;
+
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	cpus = evlist__cpu_iter_start(evsel_list);
+
+	ncpus = cpus->nr;
+	if (!(target__has_cpu(&target) && !target__has_per_thread(&target)))
+		ncpus = 1;
+	for (i = 0; i < ncpus; i++) {
+		int cpu = cpus->map[i];
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry(evsel_list, counter) {
+			if (evlist__cpu_iter_skip(counter, cpu))
+				continue;
+			counter->err = read_counter(counter, rs, counter->cpu_index);
+			evlist__cpu_iter_next(counter);
+		}
+	}
+	affinity__cleanup(&affinity);
 
 	evlist__for_each_entry(evsel_list, counter) {
-		ret = read_counter(counter, rs);
-		if (ret)
+		if (counter->err)
 			pr_debug("failed to read counter %s\n", counter->name);
-
-		if (ret == 0 && perf_stat_process_counter(&stat_config, counter))
+		if (counter->err == 0 && perf_stat_process_counter(&stat_config, counter))
 			pr_warning("failed to process counter %s\n", counter->name);
+		counter->err = 0;
 	}
 }
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index d5440a928745..9fc9f6698aa4 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -86,6 +86,7 @@ struct evsel {
 	struct list_head	config_terms;
 	struct bpf_object	*bpf_obj;
 	int			bpf_fd;
+	int			err;
 	bool			auto_merge_stats;
 	bool			merged_stat;
 	const char *		metric_expr;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (7 preceding siblings ...)
  2019-10-20 17:52 ` [PATCH v2 8/9] perf stat: Use affinity for reading Andi Kleen
@ 2019-10-20 17:52 ` Andi Kleen
  2019-10-23 10:30   ` Jiri Olsa
  2019-10-22  8:02 ` Optimize perf stat for large number of events/cpus v2 Jiri Olsa
  9 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-20 17:52 UTC (permalink / raw)
  To: acme; +Cc: linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

From: Andi Kleen <ak@linux.intel.com>

Restructure event enabling/disabling to use affinity, which
minimizes the number of IPIs needed.

Before on a large test case with 94 CPUs:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 54.65    1.899986          22     84812       660 ioctl

after:

 39.21    0.930451          10     84796       644 ioctl

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
 tools/perf/lib/evsel.c              | 49 ++++++++++++++++++++--------
 tools/perf/lib/include/perf/evsel.h |  2 ++
 tools/perf/util/evlist.c            | 50 ++++++++++++++++++++++++++---
 tools/perf/util/evsel.c             | 13 ++++++++
 tools/perf/util/evsel.h             |  2 ++
 5 files changed, 98 insertions(+), 18 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index ea775dacbd2d..89ddfade0b96 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -198,38 +198,61 @@ int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 }
 
 static int perf_evsel__run_ioctl(struct perf_evsel *evsel,
-				 int ioc,  void *arg)
+				 int ioc,  void *arg,
+				 int cpu)
 {
-	int cpu, thread;
+	int thread;
 
-	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++) {
-		for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
-			int fd = FD(evsel, cpu, thread),
-			    err = ioctl(fd, ioc, arg);
+	for (thread = 0; thread < xyarray__max_y(evsel->fd); thread++) {
+		int fd = FD(evsel, cpu, thread),
+		    err = ioctl(fd, ioc, arg);
 
-			if (err)
-				return err;
-		}
+		if (err)
+			return err;
 	}
 
 	return 0;
 }
 
+int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu)
+{
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, cpu);
+}
+
 int perf_evsel__enable(struct perf_evsel *evsel)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0);
+	int i;
+	int err = 0;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_ENABLE, 0, i);
+	return err;
+}
+
+int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu)
+{
+	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, cpu);
 }
 
 int perf_evsel__disable(struct perf_evsel *evsel)
 {
-	return perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0);
+	int i;
+	int err = 0;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel, PERF_EVENT_IOC_DISABLE, 0, i);
+	return err;
 }
 
 int perf_evsel__apply_filter(struct perf_evsel *evsel, const char *filter)
 {
-	return perf_evsel__run_ioctl(evsel,
+	int err = 0, i;
+
+	for (i = 0; i < evsel->cpus->nr && !err; i++)
+		err = perf_evsel__run_ioctl(evsel,
 				     PERF_EVENT_IOC_SET_FILTER,
-				     (void *)filter);
+				     (void *)filter, i);
+	return err;
 }
 
 struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel)
diff --git a/tools/perf/lib/include/perf/evsel.h b/tools/perf/lib/include/perf/evsel.h
index ed10a914cd3f..db31e512a120 100644
--- a/tools/perf/lib/include/perf/evsel.h
+++ b/tools/perf/lib/include/perf/evsel.h
@@ -32,7 +32,9 @@ LIBPERF_API void perf_evsel__close_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__read(struct perf_evsel *evsel, int cpu, int thread,
 				 struct perf_counts_values *count);
 LIBPERF_API int perf_evsel__enable(struct perf_evsel *evsel);
+LIBPERF_API int perf_evsel__enable_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API int perf_evsel__disable(struct perf_evsel *evsel);
+LIBPERF_API int perf_evsel__disable_cpu(struct perf_evsel *evsel, int cpu);
 LIBPERF_API struct perf_cpu_map *perf_evsel__cpus(struct perf_evsel *evsel);
 LIBPERF_API struct perf_thread_map *perf_evsel__threads(struct perf_evsel *evsel);
 LIBPERF_API struct perf_event_attr *perf_evsel__attr(struct perf_evsel *evsel);
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index bcb8a3670f3f..55f38a71ad30 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -378,26 +378,66 @@ void evlist__cpu_iter_next(struct evsel *ev)
 void evlist__disable(struct evlist *evlist)
 {
 	struct evsel *pos;
+	struct affinity affinity;
+	struct perf_cpu_map *cpus;
+	int i;
 
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	cpus = evlist__cpu_iter_start(evlist);
+	for (i = 0; i < cpus->nr; i++) {
+		int cpu = cpus->map[i];
+		affinity__set(&affinity, cpu);
+
+		evlist__for_each_entry(evlist, pos) {
+			if (evlist__cpu_iter_skip(pos, cpu))
+				continue;
+			if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__disable_cpu(pos, pos->cpu_index);
+			evlist__cpu_iter_next(pos);
+		}
+	}
+	affinity__cleanup(&affinity);
 	evlist__for_each_entry(evlist, pos) {
-		if (pos->disabled || !perf_evsel__is_group_leader(pos) || !pos->core.fd)
+		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
 			continue;
-		evsel__disable(pos);
+		pos->disabled = true;
 	}
-
 	evlist->enabled = false;
 }
 
 void evlist__enable(struct evlist *evlist)
 {
 	struct evsel *pos;
+	struct affinity affinity;
+	struct perf_cpu_map *cpus;
+	int i;
+
+	if (affinity__setup(&affinity) < 0)
+		return;
+
+	cpus = evlist__cpu_iter_start(evlist);
+	for (i = 0; i < cpus->nr; i++) {
+		int cpu = cpus->map[i];
+		affinity__set(&affinity, cpu);
 
+		evlist__for_each_entry(evlist, pos) {
+			if (evlist__cpu_iter_skip(pos, cpu))
+				continue;
+			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
+				continue;
+			evsel__enable_cpu(pos, pos->cpu_index);
+			evlist__cpu_iter_next(pos);
+		}
+	}
+	affinity__cleanup(&affinity);
 	evlist__for_each_entry(evlist, pos) {
 		if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
 			continue;
-		evsel__enable(pos);
+		pos->disabled = false;
 	}
-
 	evlist->enabled = true;
 }
 
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 7106f9a067df..79050a6f4991 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1205,13 +1205,26 @@ int perf_evsel__append_addr_filter(struct evsel *evsel, const char *filter)
 	return perf_evsel__append_filter(evsel, "%s,%s", filter);
 }
 
+/* Caller has to clear disabled after going through all CPUs. */
+int evsel__enable_cpu(struct evsel *evsel, int cpu)
+{
+	int err = perf_evsel__enable_cpu(&evsel->core, cpu);
+	return err;
+}
+
 int evsel__enable(struct evsel *evsel)
 {
 	int err = perf_evsel__enable(&evsel->core);
 
 	if (!err)
 		evsel->disabled = false;
+	return err;
+}
 
+/* Caller has to set disabled after going through all CPUs. */
+int evsel__disable_cpu(struct evsel *evsel, int cpu)
+{
+	int err = perf_evsel__disable_cpu(&evsel->core, cpu);
 	return err;
 }
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 9fc9f6698aa4..15977bbe7b63 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -222,8 +222,10 @@ int perf_evsel__set_filter(struct evsel *evsel, const char *filter);
 int perf_evsel__append_tp_filter(struct evsel *evsel, const char *filter);
 int perf_evsel__append_addr_filter(struct evsel *evsel,
 				   const char *filter);
+int evsel__enable_cpu(struct evsel *evsel, int cpu);
 int evsel__enable(struct evsel *evsel);
 int evsel__disable(struct evsel *evsel);
+int evsel__disable_cpu(struct evsel *evsel, int cpu);
 
 int perf_evsel__open_per_cpu(struct evsel *evsel,
 			     struct perf_cpu_map *cpus,
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures
  2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
@ 2019-10-22  8:01   ` Jiri Olsa
  2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
  1 sibling, 0 replies; 28+ messages in thread
From: Jiri Olsa @ 2019-10-22  8:01 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

On Sun, Oct 20, 2019 at 10:51:54AM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> In some cases when perf_event_open fails, it may do some closes to clean
> up. In special cases these closes can fail too, which overwrites the
> errno of the perf_event_open, which is then incorrectly reported.
> 
> Save/restore errno around closes.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>

Acked-by: Jiri Olsa <jolsa@kernel.org>

thanks,
jirka

> ---
>  tools/perf/util/evsel.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index abc7fda4a0fe..d831038b55f2 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -1574,7 +1574,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
>  {
>  	int cpu, thread, nthreads;
>  	unsigned long flags = PERF_FLAG_FD_CLOEXEC;
> -	int pid = -1, err;
> +	int pid = -1, err, old_errno;
>  	enum { NO_CHANGE, SET_TO_MAX, INCREASED_MAX } set_rlimit = NO_CHANGE;
>  
>  	if ((perf_missing_features.write_backward && evsel->core.attr.write_backward) ||
> @@ -1727,8 +1727,8 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
>  	 */
>  	if (err == -EMFILE && set_rlimit < INCREASED_MAX) {
>  		struct rlimit l;
> -		int old_errno = errno;
>  
> +		old_errno = errno;
>  		if (getrlimit(RLIMIT_NOFILE, &l) == 0) {
>  			if (set_rlimit == NO_CHANGE)
>  				l.rlim_cur = l.rlim_max;
> @@ -1812,6 +1812,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
>  	if (err)
>  		threads->err_thread = thread;
>  
> +	old_errno = errno;
>  	do {
>  		while (--thread >= 0) {
>  			close(FD(evsel, cpu, thread));
> @@ -1819,6 +1820,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
>  		}
>  		thread = nthreads;
>  	} while (--cpu >= 0);
> +	errno = old_errno;
>  	return err;
>  }
>  
> -- 
> 2.21.0
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 2/9] perf evsel: Avoid close(-1)
  2019-10-20 17:51 ` [PATCH v2 2/9] perf evsel: Avoid close(-1) Andi Kleen
@ 2019-10-22  8:01   ` Jiri Olsa
  2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
  1 sibling, 0 replies; 28+ messages in thread
From: Jiri Olsa @ 2019-10-22  8:01 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

On Sun, Oct 20, 2019 at 10:51:55AM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
> 
> In some weak fallback cases close can be called a lot with -1. Check
> for this case and avoid calling close then.
> 
> This is mainly to shut up valgrind which complains about this case.
> 
> Signed-off-by: Andi Kleen <ak@linux.intel.com>

Acked-by: Jiri Olsa <jolsa@kernel.org>

thanks,
jirka

> ---
>  tools/perf/lib/evsel.c  | 3 ++-
>  tools/perf/util/evsel.c | 3 ++-
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
> index a8cb582e2721..5a89857b0381 100644
> --- a/tools/perf/lib/evsel.c
> +++ b/tools/perf/lib/evsel.c
> @@ -120,7 +120,8 @@ void perf_evsel__close_fd(struct perf_evsel *evsel)
>  
>  	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
>  		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
> -			close(FD(evsel, cpu, thread));
> +			if (FD(evsel, cpu, thread) >= 0)
> +				close(FD(evsel, cpu, thread));
>  			FD(evsel, cpu, thread) = -1;
>  		}
>  }
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index d831038b55f2..d4451846af93 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -1815,7 +1815,8 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
>  	old_errno = errno;
>  	do {
>  		while (--thread >= 0) {
> -			close(FD(evsel, cpu, thread));
> +			if (FD(evsel, cpu, thread) >= 0)
> +				close(FD(evsel, cpu, thread));
>  			FD(evsel, cpu, thread) = -1;
>  		}
>  		thread = nthreads;
> -- 
> 2.21.0
> 


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Optimize perf stat for large number of events/cpus v2
  2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
                   ` (8 preceding siblings ...)
  2019-10-20 17:52 ` [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events Andi Kleen
@ 2019-10-22  8:02 ` Jiri Olsa
  2019-10-22 14:11   ` Arnaldo Carvalho de Melo
  9 siblings, 1 reply; 28+ messages in thread
From: Jiri Olsa @ 2019-10-22  8:02 UTC (permalink / raw)
  To: Andi Kleen; +Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz

On Sun, Oct 20, 2019 at 10:51:53AM -0700, Andi Kleen wrote:
> [The earlier v1 version had a lot of conflicts against some
> recent libperf changes in tip/perf/core. Resolve that and
> also fix some minor issues.]
> 
> This patch kit optimizes perf stat for a large number of events 
> on systems with many CPUs and PMUs.
> 
> Some profiling shows that the most overhead is doing IPIs to
> all the target CPUs. We can optimize this by using sched_setaffinity
> to set the affinity to a target CPU once and then doing
> the perf operation for all events on that CPU. This requires
> some restructuring, but cuts the set up time quite a bit.
> 
> In theory we could go further by parallelizing these setups
> too, but that would be much more complicated and for now just batching it
> per CPU seems to be sufficient. At some point with many more cores 
> parallelization or a better bulk perf setup API might be needed though.
> 
> In addition perf does a lot of redundant /sys accesses with
> many PMUs, which can be also expensve. This is also optimized.
> 
> On a large test case (>700 events with many weak groups) on a 94 CPU
> system I go from
> 
> real	0m8.607s
> user	0m0.550s
> sys	0m8.041s
> 
> to 
> 
> real	0m3.269s
> user	0m0.760s
> sys	0m1.694s
> 
> so shaving ~6 seconds of system time, at slightly more cost
> in perf stat itself. On a 4 socket system with the savings
> are more dramatic:
> 
> real	0m15.641s
> user	0m0.873s
> sys	0m14.729s
> 
> to 
> 
> real	0m4.493s
> user	0m1.578s
> sys	0m2.444s
> 
> so 11s difference in the user visible set up time.
> 
> Also available in 
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc perf/stat-scale-4
> 
> v1: Initial post.
> v2: Rebase. Fix some minor issues.

looks really helpful, I ack-ed 1st 2 patches,
I'll need more time for the rest

thanks,
jirka


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: Optimize perf stat for large number of events/cpus v2
  2019-10-22  8:02 ` Optimize perf stat for large number of events/cpus v2 Jiri Olsa
@ 2019-10-22 14:11   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 28+ messages in thread
From: Arnaldo Carvalho de Melo @ 2019-10-22 14:11 UTC (permalink / raw)
  To: Jiri Olsa; +Cc: Andi Kleen, linux-kernel, jolsa, eranian, kan.liang, peterz

Em Tue, Oct 22, 2019 at 10:02:23AM +0200, Jiri Olsa escreveu:
> On Sun, Oct 20, 2019 at 10:51:53AM -0700, Andi Kleen wrote:
> > v1: Initial post.
> > v2: Rebase. Fix some minor issues.
 
> looks really helpful, I ack-ed 1st 2 patches,
> I'll need more time for the rest

Thanks, applied the first two, will go through the others.

- Arnaldo

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access
  2019-10-20 17:51 ` [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
@ 2019-10-23  9:47   ` Jiri Olsa
  0 siblings, 0 replies; 28+ messages in thread
From: Jiri Olsa @ 2019-10-23  9:47 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

On Sun, Oct 20, 2019 at 10:51:56AM -0700, Andi Kleen wrote:

SNIP

> @@ -92,8 +93,12 @@ static int pmu_format(const char *name, struct list_head *format)
>  	snprintf(path, PATH_MAX,
>  		 "%s" EVENT_SOURCE_DEVICE_PATH "%s/format", sysfs, name);
>  
> -	if (stat(path, &st) < 0)
> +	if (lookup_fncache(path, &res) && !res)
> +		return 0;
> +
> +	if (!res && access(path, R_OK) < 0)
>  		return 0;	/* no error if format does not exist */
> +	update_fncache(path, true);
>  
>  	if (perf_pmu__format_parse(path, format))
>  		return -1;
> @@ -470,9 +475,9 @@ static int pmu_aliases_parse(char *dir, struct list_head *head)
>   */
>  static int pmu_aliases(const char *name, struct list_head *head)
>  {
> -	struct stat st;
>  	char path[PATH_MAX];
>  	const char *sysfs = sysfs__mountpoint();
> +	bool res = false;
>  
>  	if (!sysfs)
>  		return -1;
> @@ -480,8 +485,11 @@ static int pmu_aliases(const char *name, struct list_head *head)
>  	snprintf(path, PATH_MAX,
>  		 "%s/bus/event_source/devices/%s/events", sysfs, name);
>  
> -	if (stat(path, &st) < 0)
> -		return 0;	 /* no error if 'events' does not exist */
> +	if (lookup_fncache(path, &res) && !res)
> +		return 0;
> +	if (!res && access(path, R_OK) < 0)
> +		return 0;
> +	update_fncache(path, true);

I was thinking that maybe you don't need to have the fncache::res,
but then I realized we have two kinds of information in here:
  - we processed this file
  - is the file present

so I think you should update the result on each update_fncache call,
not only when it's successful


also, could you please make a single-function API for this?
something like:

  is_the_file_there(path)

that would encapsulate those calls

thanks,
jirka
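
For illustration, a minimal sketch of the single-function API asked for
above, built on the lookup_fncache()/update_fncache() helpers from this
patch (illustrative only, not necessarily the final interface); note that
it records negative results too, per the comment about updating on every
call:

#include <stdbool.h>
#include <unistd.h>

static bool is_the_file_there(const char *path)
{
	bool res;

	/* answer repeated queries from the cache */
	if (lookup_fncache(path, &res))
		return res;

	/* first query: touch the file system once and remember the result */
	res = access(path, R_OK) == 0;
	update_fncache(path, res);
	return res;
}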


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-20 17:51 ` [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
@ 2019-10-23  9:59   ` Jiri Olsa
  2019-10-23 13:02     ` Andi Kleen
  0 siblings, 1 reply; 28+ messages in thread
From: Jiri Olsa @ 2019-10-23  9:59 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:

SNIP

> +}
> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> new file mode 100644
> index 000000000000..e56148607e33
> --- /dev/null
> +++ b/tools/perf/util/affinity.h
> @@ -0,0 +1,15 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#ifndef AFFINITY_H
> +#define AFFINITY_H 1
> +
> +struct affinity {
> +	unsigned char *orig_cpus;
> +	unsigned char *sched_cpus;

why not use cpu_set_t directly?

jirka


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events
  2019-10-20 17:52 ` [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events Andi Kleen
@ 2019-10-23 10:30   ` Jiri Olsa
  2019-10-23 13:07     ` Andi Kleen
  0 siblings, 1 reply; 28+ messages in thread
From: Jiri Olsa @ 2019-10-23 10:30 UTC (permalink / raw)
  To: Andi Kleen
  Cc: acme, linux-kernel, jolsa, eranian, kan.liang, peterz, Andi Kleen

On Sun, Oct 20, 2019 at 10:52:02AM -0700, Andi Kleen wrote:

SNIP

>  
>  void evlist__enable(struct evlist *evlist)
>  {
>  	struct evsel *pos;
> +	struct affinity affinity;
> +	struct perf_cpu_map *cpus;
> +	int i;
> +
> +	if (affinity__setup(&affinity) < 0)
> +		return;
> +
> +	cpus = evlist__cpu_iter_start(evlist);
> +	for (i = 0; i < cpus->nr; i++) {
> +		int cpu = cpus->map[i];
> +		affinity__set(&affinity, cpu);
>  
> +		evlist__for_each_entry(evlist, pos) {
> +			if (evlist__cpu_iter_skip(pos, cpu))
> +				continue;
> +			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
> +				continue;

all the previous patches and this one have this code in common,
could we make this a single function, that would call a callback
that would have affinity set.. sort of like what we do in 
cpu_function_call in the kernel

thanks,
jirka

> +			evsel__enable_cpu(pos, pos->cpu_index);
> +			evlist__cpu_iter_next(pos);
> +		}
> +	}
> +	affinity__cleanup(&affinity);

SNIP
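
For illustration, the callback-style helper being suggested could look
roughly like this, reusing the affinity and CPU-iterator helpers from this
series (the typedef and function name here are made up, not an existing
perf API, and each call site would still need its own per-event checks
such as the group-leader and fd tests above):

typedef void (*evlist_cpu_fn_t)(struct evsel *evsel, int cpu_index);

/* Pin to each CPU of the evlist's map once, then call fn for every event on it. */
static void evlist__for_each_cpu_call(struct evlist *evlist, evlist_cpu_fn_t fn)
{
	struct perf_cpu_map *cpus;
	struct affinity affinity;
	struct evsel *pos;
	int i;

	if (affinity__setup(&affinity) < 0)
		return;

	cpus = evlist__cpu_iter_start(evlist);
	for (i = 0; i < cpus->nr; i++) {
		int cpu = cpus->map[i];

		affinity__set(&affinity, cpu);
		evlist__for_each_entry(evlist, pos) {
			if (evlist__cpu_iter_skip(pos, cpu))
				continue;
			/* e.g. fn == evsel__enable_cpu or evsel__disable_cpu */
			fn(pos, pos->cpu_index);
			evlist__cpu_iter_next(pos);
		}
	}
	affinity__cleanup(&affinity);
}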


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23  9:59   ` Jiri Olsa
@ 2019-10-23 13:02     ` Andi Kleen
  2019-10-23 14:30       ` Jiri Olsa
  0 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-23 13:02 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, linux-kernel, jolsa, eranian, kan.liang, peterz

On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
> 
> SNIP
> 
> > +}
> > diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> > new file mode 100644
> > index 000000000000..e56148607e33
> > --- /dev/null
> > +++ b/tools/perf/util/affinity.h
> > @@ -0,0 +1,15 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#ifndef AFFINITY_H
> > +#define AFFINITY_H 1
> > +
> > +struct affinity {
> > +	unsigned char *orig_cpus;
> > +	unsigned char *sched_cpus;
> 
> why not use cpu_set_t directly?

Because it's too small in glibc (only 1024 CPUs) and perf already 
supports more.

-andi

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events
  2019-10-23 10:30   ` Jiri Olsa
@ 2019-10-23 13:07     ` Andi Kleen
  0 siblings, 0 replies; 28+ messages in thread
From: Andi Kleen @ 2019-10-23 13:07 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, linux-kernel, jolsa, eranian, kan.liang, peterz

On Wed, Oct 23, 2019 at 12:30:48PM +0200, Jiri Olsa wrote:
> On Sun, Oct 20, 2019 at 10:52:02AM -0700, Andi Kleen wrote:
> 
> SNIP
> 
> >  
> >  void evlist__enable(struct evlist *evlist)
> >  {
> >  	struct evsel *pos;
> > +	struct affinity affinity;
> > +	struct perf_cpu_map *cpus;
> > +	int i;
> > +
> > +	if (affinity__setup(&affinity) < 0)
> > +		return;
> > +
> > +	cpus = evlist__cpu_iter_start(evlist);
> > +	for (i = 0; i < cpus->nr; i++) {
> > +		int cpu = cpus->map[i];
> > +		affinity__set(&affinity, cpu);
> >  
> > +		evlist__for_each_entry(evlist, pos) {
> > +			if (evlist__cpu_iter_skip(pos, cpu))
> > +				continue;
> > +			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
> > +				continue;
> 
> all the previous patches and this one have this code in common,
> could we make this a single function, that would call a callback
> that would have affinity set.. sort of like what we do in 
> cpu_function_call in the kernel

I'm personally not a big fan of callbacks. They usually make
the code harder to read and reason about.

I prefer callable libraries of common code.

Also the event open code has some more complex variants of this pattern
which would need multiple callbacks.

I already factored the common code into the iterator.

I guess the for loop could be a macro, and affinity__set() could
perhaps accept the map and return the cpu. I'll add that in
the next version. This will reduce the common code by a few more
lines.

-Andi
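
A possible shape for that macro, sketched from the helpers already visible
in this series (illustrative only, not the code that was eventually
merged):

/* Iterate over all CPUs of the map with affinity pinned to each in turn. */
#define evlist__for_each_cpu(cpus, i, cpu, affinity)			\
	for ((i) = 0;							\
	     (i) < (cpus)->nr &&					\
		((cpu) = (cpus)->map[(i)],				\
		 affinity__set((affinity), (cpu)), 1);			\
	     (i)++)

With that, the body of evlist__enable() would boil down to (i and cpu
being plain ints):

	cpus = evlist__cpu_iter_start(evlist);
	evlist__for_each_cpu(cpus, i, cpu, &affinity) {
		evlist__for_each_entry(evlist, pos) {
			if (evlist__cpu_iter_skip(pos, cpu))
				continue;
			if (!perf_evsel__is_group_leader(pos) || !pos->core.fd)
				continue;
			evsel__enable_cpu(pos, pos->cpu_index);
			evlist__cpu_iter_next(pos);
		}
	}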

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 13:02     ` Andi Kleen
@ 2019-10-23 14:30       ` Jiri Olsa
  2019-10-23 14:52         ` Andi Kleen
  0 siblings, 1 reply; 28+ messages in thread
From: Jiri Olsa @ 2019-10-23 14:30 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, acme, linux-kernel, jolsa, eranian, kan.liang, peterz

On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
> > On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
> > 
> > SNIP
> > 
> > > +}
> > > diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> > > new file mode 100644
> > > index 000000000000..e56148607e33
> > > --- /dev/null
> > > +++ b/tools/perf/util/affinity.h
> > > @@ -0,0 +1,15 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#ifndef AFFINITY_H
> > > +#define AFFINITY_H 1
> > > +
> > > +struct affinity {
> > > +	unsigned char *orig_cpus;
> > > +	unsigned char *sched_cpus;
> > 
> > why not use cpu_set_t directly?
> 
> Because it's too small in glibc (only 1024 CPUs) and perf already 
> supports more.

nice, we're using it all over the place.. how about using bitmap_alloc?

jirka


^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 14:30       ` Jiri Olsa
@ 2019-10-23 14:52         ` Andi Kleen
  2019-10-23 16:16           ` Alexey Budankov
  0 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-23 14:52 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andi Kleen, acme, linux-kernel, jolsa, eranian, kan.liang,
	peterz, alexey.budankov

On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
> > On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
> > > On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
> > > 
> > > SNIP
> > > 
> > > > +}
> > > > diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> > > > new file mode 100644
> > > > index 000000000000..e56148607e33
> > > > --- /dev/null
> > > > +++ b/tools/perf/util/affinity.h
> > > > @@ -0,0 +1,15 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +#ifndef AFFINITY_H
> > > > +#define AFFINITY_H 1
> > > > +
> > > > +struct affinity {
> > > > +	unsigned char *orig_cpus;
> > > > +	unsigned char *sched_cpus;
> > > 
> > > why not use cpu_set_t directly?
> > 
> > Because it's too small in glibc (only 1024 CPUs) and perf already 
> > supports more.
> 
> nice, we're using it all over the place.. how about using bitmap_alloc?

Okay.

The other places are mainly perf record from Alexey's recent affinity changes.
These probably need to be fixed.

+Alexey

And some stuff in bench/*. That's more of a nice-to-have.

-Andi

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 14:52         ` Andi Kleen
@ 2019-10-23 16:16           ` Alexey Budankov
  2019-10-23 17:19             ` Andi Kleen
  0 siblings, 1 reply; 28+ messages in thread
From: Alexey Budankov @ 2019-10-23 16:16 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Jiri Olsa, Andi Kleen, acme, linux-kernel, jolsa, eranian,
	kan.liang, peterz


On 23.10.2019 17:52, Andi Kleen wrote:
> On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
>> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
>>> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
>>>> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
>>>>
>>>> SNIP
>>>>
>>>>> +}
>>>>> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
>>>>> new file mode 100644
>>>>> index 000000000000..e56148607e33
>>>>> --- /dev/null
>>>>> +++ b/tools/perf/util/affinity.h
>>>>> @@ -0,0 +1,15 @@
>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>> +#ifndef AFFINITY_H
>>>>> +#define AFFINITY_H 1
>>>>> +
>>>>> +struct affinity {
>>>>> +	unsigned char *orig_cpus;
>>>>> +	unsigned char *sched_cpus;
>>>>
>>>> why not use cpu_set_t directly?
>>>
>>> Because it's too small in glibc (only 1024 CPUs) and perf already 
>>> supports more.
>>
>> nice, we're using it all over the place.. how about using bitmap_alloc?
> 
> Okay.
> 
> The other places are mainly perf record from Alexey's recent affinity changes.
> These probably need to be fixed.
> 
> +Alexey

Although the issue indeed looks generic for stat and record modes,
have you already observed record startup overhead somewhere in your setups?
I would, first, prefer to reproduce the overhead, to have a stable use case
for evaluation and then, possibly, improvement.

~Alexey

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 16:16           ` Alexey Budankov
@ 2019-10-23 17:19             ` Andi Kleen
  2019-10-23 18:08               ` Alexey Budankov
  0 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-23 17:19 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Andi Kleen, Jiri Olsa, Andi Kleen, acme, linux-kernel, jolsa,
	eranian, kan.liang, peterz

On Wed, Oct 23, 2019 at 07:16:13PM +0300, Alexey Budankov wrote:
> 
> On 23.10.2019 17:52, Andi Kleen wrote:
> > On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
> >> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
> >>> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
> >>>> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
> >>>>
> >>>> SNIP
> >>>>
> >>>>> +}
> >>>>> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> >>>>> new file mode 100644
> >>>>> index 000000000000..e56148607e33
> >>>>> --- /dev/null
> >>>>> +++ b/tools/perf/util/affinity.h
> >>>>> @@ -0,0 +1,15 @@
> >>>>> +// SPDX-License-Identifier: GPL-2.0
> >>>>> +#ifndef AFFINITY_H
> >>>>> +#define AFFINITY_H 1
> >>>>> +
> >>>>> +struct affinity {
> >>>>> +	unsigned char *orig_cpus;
> >>>>> +	unsigned char *sched_cpus;
> >>>>
> >>>> why not use cpu_set_t directly?
> >>>
> >>> Because it's too small in glibc (only 1024 CPUs) and perf already 
> >>> supports more.
> >>
> >> nice, we're using it all over the place.. how about using bitmap_alloc?
> > 
> > Okay.
> > 
> > The other places are mainly perf record from Alexey's recent affinity changes.
> > These probably need to be fixed.
> > 
> > +Alexey
> 
> Although the issue indeed looks generic for stat and record modes,
> have you already observed record startup overhead somewhere in your setups?
> I would, first, prefer to reproduce the overhead, to have a stable use case
> for evaluation and then, possibly, improvement.

What I meant is that the cpu_set usages you added in

commit 9d2ed64587c045304efe8872b0258c30803d370c
Author: Alexey Budankov <alexey.budankov@linux.intel.com>
Date:   Tue Jan 22 20:47:43 2019 +0300

    perf record: Allocate affinity masks

need to be fixed to allocate dynamically, or at least use MAX_NR_CPUS to
support systems with >1024 CPUs. That's an independent functionality
problem.

I haven't seen any large enough perf record usage to run
into the IPI problems for record.

-Andi

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 17:19             ` Andi Kleen
@ 2019-10-23 18:08               ` Alexey Budankov
  2019-10-23 22:37                 ` Andi Kleen
  0 siblings, 1 reply; 28+ messages in thread
From: Alexey Budankov @ 2019-10-23 18:08 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, Jiri Olsa, acme, linux-kernel, jolsa, eranian,
	kan.liang, peterz

On 23.10.2019 20:19, Andi Kleen wrote:
> On Wed, Oct 23, 2019 at 07:16:13PM +0300, Alexey Budankov wrote:
>>
>> On 23.10.2019 17:52, Andi Kleen wrote:
>>> On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
>>>> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
>>>>> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
>>>>>> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
>>>>>>
>>>>>> SNIP
>>>>>>
>>>>>>> +}
>>>>>>> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
>>>>>>> new file mode 100644
>>>>>>> index 000000000000..e56148607e33
>>>>>>> --- /dev/null
>>>>>>> +++ b/tools/perf/util/affinity.h
>>>>>>> @@ -0,0 +1,15 @@
>>>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>>>> +#ifndef AFFINITY_H
>>>>>>> +#define AFFINITY_H 1
>>>>>>> +
>>>>>>> +struct affinity {
>>>>>>> +	unsigned char *orig_cpus;
>>>>>>> +	unsigned char *sched_cpus;
>>>>>>
>>>>>> why not use cpu_set_t directly?
>>>>>
>>>>> Because it's too small in glibc (only 1024 CPUs) and perf already 
>>>>> supports more.
>>>>
>>>> nice, we're using it all over the place.. how about using bitmap_alloc?
>>>
>>> Okay.
>>>
>>> The other places are mainly perf record from Alexey's recent affinity changes.
>>> These probably need to be fixed.
>>>
>>> +Alexey
>>
>> Although the issue indeed looks generic for stat and record modes,
>> have you already observed record startup overhead somewhere in your setups?
>> I would, first, prefer to reproduce the overhead, to have a stable use case
>> for evaluation and then, possibly, improvement.
> 
> What I meant is that the cpu_set usages you added in
> 
> commit 9d2ed64587c045304efe8872b0258c30803d370c
> Author: Alexey Budankov <alexey.budankov@linux.intel.com>
> Date:   Tue Jan 22 20:47:43 2019 +0300
> 
>     perf record: Allocate affinity masks
> 
> need to be fixed to allocate dynamically, or at least use MAX_NR_CPUS to
> support systems with >1024 CPUs. That's an independent functionality
> problem.

Oh, it is clear now. Thanks for pointing this out. To move from
cpu_set_t to the new custom struct affinity type, its API needs to be
extended with mask operations similar to the ones that cpu_set_t provides:
CPU_ZERO(), CPU_SET(), CPU_EQUAL(), CPU_OR().

For example they could be: affinity__mask_zero(), affinity__mask_set(),
affinity__mask_equal(), affinity__mask_or(), and then the collecting part
of record could also be moved to the struct affinity type and overcome the
>1024 CPUs limitation.
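
A rough sketch of what those wrappers could look like on top of the
kernel-style bitmap helpers under tools/ (the affinity__mask_*() names and
the struct are only the proposal above, not an existing API, and the
availability of bitmap_or()/set_bit() in tools/include is assumed):

#include <linux/bitmap.h>
#include <string.h>
#include <stdbool.h>

struct affinity_mask {
	unsigned long *bits;	/* from bitmap_alloc(nbits), so zero-filled */
	int nbits;		/* number of possible CPUs */
};

static inline void affinity__mask_zero(struct affinity_mask *m)
{
	bitmap_zero(m->bits, m->nbits);
}

static inline void affinity__mask_set(struct affinity_mask *m, int cpu)
{
	set_bit(cpu, m->bits);
}

static inline bool affinity__mask_equal(const struct affinity_mask *a,
					const struct affinity_mask *b)
{
	/* both masks must have been allocated with the same nbits */
	return !memcmp(a->bits, b->bits,
		       BITS_TO_LONGS(a->nbits) * sizeof(unsigned long));
}

static inline void affinity__mask_or(struct affinity_mask *dst,
				     const struct affinity_mask *src)
{
	bitmap_or(dst->bits, dst->bits, src->bits, dst->nbits);
}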

~Alexey

> 
> I haven't seen any large enough perf record usage to run
> into the IPI problems for record.
> 
> -Andi
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 18:08               ` Alexey Budankov
@ 2019-10-23 22:37                 ` Andi Kleen
  2019-10-24  8:46                   ` Alexey Budankov
  0 siblings, 1 reply; 28+ messages in thread
From: Andi Kleen @ 2019-10-23 22:37 UTC (permalink / raw)
  To: Alexey Budankov
  Cc: Andi Kleen, Jiri Olsa, acme, linux-kernel, jolsa, eranian,
	kan.liang, peterz

On Wed, Oct 23, 2019 at 09:08:47PM +0300, Alexey Budankov wrote:
> On 23.10.2019 20:19, Andi Kleen wrote:
> > On Wed, Oct 23, 2019 at 07:16:13PM +0300, Alexey Budankov wrote:
> >>
> >> On 23.10.2019 17:52, Andi Kleen wrote:
> >>> On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
> >>>> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
> >>>>> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
> >>>>>> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
> >>>>>>
> >>>>>> SNIP
> >>>>>>
> >>>>>>> +}
> >>>>>>> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
> >>>>>>> new file mode 100644
> >>>>>>> index 000000000000..e56148607e33
> >>>>>>> --- /dev/null
> >>>>>>> +++ b/tools/perf/util/affinity.h
> >>>>>>> @@ -0,0 +1,15 @@
> >>>>>>> +// SPDX-License-Identifier: GPL-2.0
> >>>>>>> +#ifndef AFFINITY_H
> >>>>>>> +#define AFFINITY_H 1
> >>>>>>> +
> >>>>>>> +struct affinity {
> >>>>>>> +	unsigned char *orig_cpus;
> >>>>>>> +	unsigned char *sched_cpus;
> >>>>>>
> >>>>>> why not use cpu_set_t directly?
> >>>>>
> >>>>> Because it's too small in glibc (only 1024 CPUs) and perf already 
> >>>>> supports more.
> >>>>
> >>>> nice, we're using it all over the place.. how about using bitmap_alloc?
> >>>
> >>> Okay.
> >>>
> >>> The other places are mainly perf record from Alexey's recent affinity changes.
> >>> These probably need to be fixed.
> >>>
> >>> +Alexey
> >>
> >> Although the issue indeed looks generic for stat and record modes,
> >> have you already observed record startup overhead somewhere in your setups?
> >> I would, first, prefer to reproduce the overhead, to have a stable use case
> >> for evaluation and then, possibly, improvement.
> > 
> > What I meant is that the cpu_set usages you added in
> > 
> > commit 9d2ed64587c045304efe8872b0258c30803d370c
> > Author: Alexey Budankov <alexey.budankov@linux.intel.com>
> > Date:   Tue Jan 22 20:47:43 2019 +0300
> > 
> >     perf record: Allocate affinity masks
> > 
> > need to be fixed to allocate dynamically, or at least use MAX_NR_CPUS to
> > support systems with >1024 CPUs. That's an independent functionality
> > problem.
> 
> Oh, it is clear now. Thanks for pointing this out. For that to move from 
> cpu_mask_t to new custom struct affinity type its API requires extension 
> to provide mask operations similar to the ones that cpu_mask_t provides: 
> CPU_ZERO(), CPU_SET(), CPU_EQUAL(), CPU_OR().
> 
> For example it could be like: affinity__mask_zero(), affinity__mask_set(), 
> affinity__mask_equal(), affinity__mask_or() and then the collecting part 
> of record could also be moved to struct affinity type and overcome >1024CPUs 
> limitation.

Not sure you need to use my library, except perhaps the get_cpu_set_size()
function. It is somewhat specialized.

For everything else you can use the normal Linux bitmap functions,
or call the syscall directly.

-Andi
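
A minimal sketch of that direct route, combining the get_cpu_set_size()
helper mentioned above (assumed to return the mask size in bytes) with the
bitmap_alloc() suggestion from earlier in the thread; illustrative only:

#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <linux/bitmap.h>

/* Save the current affinity mask into a dynamically sized bitmap. */
static unsigned long *save_affinity(void)
{
	int size = get_cpu_set_size();			/* bytes */
	unsigned long *bits = bitmap_alloc(size * 8);	/* bits */

	if (bits && sched_getaffinity(0, size, (cpu_set_t *)bits)) {
		free(bits);
		return NULL;
	}
	return bits;
}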

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity
  2019-10-23 22:37                 ` Andi Kleen
@ 2019-10-24  8:46                   ` Alexey Budankov
  0 siblings, 0 replies; 28+ messages in thread
From: Alexey Budankov @ 2019-10-24  8:46 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andi Kleen, Jiri Olsa, acme, linux-kernel, jolsa, eranian,
	kan.liang, peterz

On 24.10.2019 1:37, Andi Kleen wrote:
> On Wed, Oct 23, 2019 at 09:08:47PM +0300, Alexey Budankov wrote:
>> On 23.10.2019 20:19, Andi Kleen wrote:
>>> On Wed, Oct 23, 2019 at 07:16:13PM +0300, Alexey Budankov wrote:
>>>>
>>>> On 23.10.2019 17:52, Andi Kleen wrote:
>>>>> On Wed, Oct 23, 2019 at 04:30:49PM +0200, Jiri Olsa wrote:
>>>>>> On Wed, Oct 23, 2019 at 06:02:35AM -0700, Andi Kleen wrote:
>>>>>>> On Wed, Oct 23, 2019 at 11:59:11AM +0200, Jiri Olsa wrote:
>>>>>>>> On Sun, Oct 20, 2019 at 10:51:57AM -0700, Andi Kleen wrote:
>>>>>>>>
>>>>>>>> SNIP
>>>>>>>>
>>>>>>>>> +}
>>>>>>>>> diff --git a/tools/perf/util/affinity.h b/tools/perf/util/affinity.h
>>>>>>>>> new file mode 100644
>>>>>>>>> index 000000000000..e56148607e33
>>>>>>>>> --- /dev/null
>>>>>>>>> +++ b/tools/perf/util/affinity.h
>>>>>>>>> @@ -0,0 +1,15 @@
>>>>>>>>> +// SPDX-License-Identifier: GPL-2.0
>>>>>>>>> +#ifndef AFFINITY_H
>>>>>>>>> +#define AFFINITY_H 1
>>>>>>>>> +
>>>>>>>>> +struct affinity {
>>>>>>>>> +	unsigned char *orig_cpus;
>>>>>>>>> +	unsigned char *sched_cpus;
>>>>>>>>
>>>>>>>> why not use cpu_set_t directly?
>>>>>>>
>>>>>>> Because it's too small in glibc (only 1024 CPUs) and perf already 
>>>>>>> supports more.
>>>>>>
>>>>>> nice, we're using it all over the place.. how about using bitmap_alloc?
>>>>>
>>>>> Okay.
>>>>>
>>>>> The other places are mainly perf record from Alexey's recent affinity changes.
>>>>> These probably need to be fixed.
>>>>>
>>>>> +Alexey
>>>>
>>>> Although the issue indeed looks generic for stat and record modes,
>>>> have you already observed record startup overhead somewhere in your setups?
>>>> I would, first, prefer to reproduce the overhead, to have a stable use case
>>>> for evaluation and then, possibly, improvement.
>>>
>>> What I meant is that the cpu_set usages you added in
>>>
>>> commit 9d2ed64587c045304efe8872b0258c30803d370c
>>> Author: Alexey Budankov <alexey.budankov@linux.intel.com>
>>> Date:   Tue Jan 22 20:47:43 2019 +0300
>>>
>>>     perf record: Allocate affinity masks
>>>
>>> need to be fixed to allocate dynamically, or at least use MAX_NR_CPUS to
>>> support systems with >1024 CPUs. That's an independent functionality
>>> problem.
>>
>> Oh, it is clear now. Thanks for pointing this out. For that to move from 
>> cpu_mask_t to new custom struct affinity type its API requires extension 
>> to provide mask operations similar to the ones that cpu_mask_t provides: 
>> CPU_ZERO(), CPU_SET(), CPU_EQUAL(), CPU_OR().
>>
>> For example it could be like: affinity__mask_zero(), affinity__mask_set(), 
>> affinity__mask_equal(), affinity__mask_or() and then the collecting part 
>> of record could also be moved to struct affinity type and overcome >1024CPUs 
>> limitation.
> 
> Not sure you need to use my library, except perhaps the get_cpu_set_size()
> function. It is somewhat specialized.

Ok, I see.

> 
> Everything else you can use normal Linux bitmap functions,
> or call the sys call directly.

Thanks,
Alexey

> 
> -Andi
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* [tip: perf/core] perf evsel: Avoid close(-1)
  2019-10-20 17:51 ` [PATCH v2 2/9] perf evsel: Avoid close(-1) Andi Kleen
  2019-10-22  8:01   ` Jiri Olsa
@ 2019-11-12 11:18   ` tip-bot2 for Andi Kleen
  1 sibling, 0 replies; 28+ messages in thread
From: tip-bot2 for Andi Kleen @ 2019-11-12 11:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Andi Kleen, Jiri Olsa, Kan Liang, Peter Zijlstra,
	Stephane Eranian, Arnaldo Carvalho de Melo, Ingo Molnar,
	Borislav Petkov, linux-kernel

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     2ccfb8bc2143ca347609d1d4434176d73a78d805
Gitweb:        https://git.kernel.org/tip/2ccfb8bc2143ca347609d1d4434176d73a78d805
Author:        Andi Kleen <ak@linux.intel.com>
AuthorDate:    Sun, 20 Oct 2019 10:51:55 -07:00
Committer:     Arnaldo Carvalho de Melo <acme@redhat.com>
CommitterDate: Wed, 06 Nov 2019 15:43:05 -03:00

perf evsel: Avoid close(-1)

In some weak fallback cases close can be called a lot with -1. Check for
this case and avoid calling close then.

This is mainly to shut up valgrind which complains about this case.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20191020175202.32456-3-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 tools/perf/lib/evsel.c  | 3 ++-
 tools/perf/util/evsel.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/perf/lib/evsel.c b/tools/perf/lib/evsel.c
index a8cb582..5a89857 100644
--- a/tools/perf/lib/evsel.c
+++ b/tools/perf/lib/evsel.c
@@ -120,7 +120,8 @@ void perf_evsel__close_fd(struct perf_evsel *evsel)
 
 	for (cpu = 0; cpu < xyarray__max_x(evsel->fd); cpu++)
 		for (thread = 0; thread < xyarray__max_y(evsel->fd); ++thread) {
-			close(FD(evsel, cpu, thread));
+			if (FD(evsel, cpu, thread) >= 0)
+				close(FD(evsel, cpu, thread));
 			FD(evsel, cpu, thread) = -1;
 		}
 }
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index d831038..d445184 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1815,7 +1815,8 @@ out_close:
 	old_errno = errno;
 	do {
 		while (--thread >= 0) {
-			close(FD(evsel, cpu, thread));
+			if (FD(evsel, cpu, thread) >= 0)
+				close(FD(evsel, cpu, thread));
 			FD(evsel, cpu, thread) = -1;
 		}
 		thread = nthreads;

^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [tip: perf/core] perf evsel: Always preserve errno while cleaning up perf_event_open failures
  2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
  2019-10-22  8:01   ` Jiri Olsa
@ 2019-11-12 11:18   ` tip-bot2 for Andi Kleen
  1 sibling, 0 replies; 28+ messages in thread
From: tip-bot2 for Andi Kleen @ 2019-11-12 11:18 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Andi Kleen, Jiri Olsa, Kan Liang, Peter Zijlstra,
	Stephane Eranian, Arnaldo Carvalho de Melo, Ingo Molnar,
	Borislav Petkov, linux-kernel

The following commit has been merged into the perf/core branch of tip:

Commit-ID:     796c01a4bfb4b35ec6d1bd1cd5d520515d078b51
Gitweb:        https://git.kernel.org/tip/796c01a4bfb4b35ec6d1bd1cd5d520515d078b51
Author:        Andi Kleen <ak@linux.intel.com>
AuthorDate:    Sun, 20 Oct 2019 10:51:54 -07:00
Committer:     Arnaldo Carvalho de Melo <acme@redhat.com>
CommitterDate: Wed, 06 Nov 2019 15:43:05 -03:00

perf evsel: Always preserve errno while cleaning up perf_event_open failures

In some cases when perf_event_open fails, it may do some closes to clean
up. In special cases these closes can fail too, which overwrites the
errno of the perf_event_open, which is then incorrectly reported.

Save/restore errno around closes.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Cc: Kan Liang <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lore.kernel.org/lkml/20191020175202.32456-2-andi@firstfloor.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 tools/perf/util/evsel.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index abc7fda..d831038 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1574,7 +1574,7 @@ int evsel__open(struct evsel *evsel, struct perf_cpu_map *cpus,
 {
 	int cpu, thread, nthreads;
 	unsigned long flags = PERF_FLAG_FD_CLOEXEC;
-	int pid = -1, err;
+	int pid = -1, err, old_errno;
 	enum { NO_CHANGE, SET_TO_MAX, INCREASED_MAX } set_rlimit = NO_CHANGE;
 
 	if ((perf_missing_features.write_backward && evsel->core.attr.write_backward) ||
@@ -1727,8 +1727,8 @@ try_fallback:
 	 */
 	if (err == -EMFILE && set_rlimit < INCREASED_MAX) {
 		struct rlimit l;
-		int old_errno = errno;
 
+		old_errno = errno;
 		if (getrlimit(RLIMIT_NOFILE, &l) == 0) {
 			if (set_rlimit == NO_CHANGE)
 				l.rlim_cur = l.rlim_max;
@@ -1812,6 +1812,7 @@ out_close:
 	if (err)
 		threads->err_thread = thread;
 
+	old_errno = errno;
 	do {
 		while (--thread >= 0) {
 			close(FD(evsel, cpu, thread));
@@ -1819,6 +1820,7 @@ out_close:
 		}
 		thread = nthreads;
 	} while (--cpu >= 0);
+	errno = old_errno;
 	return err;
 }
 

^ permalink raw reply related	[flat|nested] 28+ messages in thread

end of thread

Thread overview: 28+ messages
2019-10-20 17:51 Optimize perf stat for large number of events/cpus v2 Andi Kleen
2019-10-20 17:51 ` [PATCH v2 1/9] perf evsel: Always preserve errno while cleaning up perf_event_open failures Andi Kleen
2019-10-22  8:01   ` Jiri Olsa
2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
2019-10-20 17:51 ` [PATCH v2 2/9] perf evsel: Avoid close(-1) Andi Kleen
2019-10-22  8:01   ` Jiri Olsa
2019-11-12 11:18   ` [tip: perf/core] " tip-bot2 for Andi Kleen
2019-10-20 17:51 ` [PATCH v2 3/9] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
2019-10-23  9:47   ` Jiri Olsa
2019-10-20 17:51 ` [PATCH v2 4/9] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
2019-10-23  9:59   ` Jiri Olsa
2019-10-23 13:02     ` Andi Kleen
2019-10-23 14:30       ` Jiri Olsa
2019-10-23 14:52         ` Andi Kleen
2019-10-23 16:16           ` Alexey Budankov
2019-10-23 17:19             ` Andi Kleen
2019-10-23 18:08               ` Alexey Budankov
2019-10-23 22:37                 ` Andi Kleen
2019-10-24  8:46                   ` Alexey Budankov
2019-10-20 17:51 ` [PATCH v2 5/9] perf evsel: Add iterator to iterate over events ordered by CPU Andi Kleen
2019-10-20 17:51 ` [PATCH v2 6/9] perf stat: Use affinity for closing file descriptors Andi Kleen
2019-10-20 17:52 ` [PATCH v2 7/9] perf stat: Use affinity for opening events Andi Kleen
2019-10-20 17:52 ` [PATCH v2 8/9] perf stat: Use affinity for reading Andi Kleen
2019-10-20 17:52 ` [PATCH v2 9/9] perf stat: Use affinity for enabling/disabling events Andi Kleen
2019-10-23 10:30   ` Jiri Olsa
2019-10-23 13:07     ` Andi Kleen
2019-10-22  8:02 ` Optimize perf stat for large number of events/cpus v2 Jiri Olsa
2019-10-22 14:11   ` Arnaldo Carvalho de Melo
