From: Andi Kleen <andi@firstfloor.org>
To: acme@kernel.org
Cc: jolsa@kernel.org, eranian@google.com,
	linux-kernel@vger.kernel.org, Andi Kleen <ak@linux.intel.com>
Subject: [PATCH v3 3/7] perf evsel: Add iterator to iterate over events ordered by CPU
Date: Fri, 25 Oct 2019 11:14:13 -0700
Message-ID: <20191025181417.10670-4-andi@firstfloor.org>
In-Reply-To: <20191025181417.10670-1-andi@firstfloor.org>

From: Andi Kleen <ak@linux.intel.com>

Add some common code that is needed to iterate over all events
in CPU order. It is used in the follow-on patches.
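
The intended usage pattern in the follow-on patches looks roughly
like this (sketch only; do_something_on_cpu() is a placeholder for
the actual per-event work, not an existing helper):

	struct perf_cpu_map *cpus = evlist__cpu_iter_start(evlist);
	struct evsel *pos;
	int i, cpu;

	/* Walk the superset of all event CPUs in CPU order */
	cpumap__for_each_cpu(cpus, i, cpu) {
		/* Visit each event whose next own CPU matches this CPU */
		evlist__for_each_entry(evlist, pos) {
			if (evlist__cpu_iter_skip(pos, cpu))
				continue;
			do_something_on_cpu(pos, pos->cpu_index);
			evlist__cpu_iter_next(pos);
		}
	}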

Signed-off-by: Andi Kleen <ak@linux.intel.com>

---

v2: Add cpumap__for_each_cpu macro to factor out some common code
---
 tools/perf/util/cpumap.h |  8 ++++++++
 tools/perf/util/evlist.c | 33 +++++++++++++++++++++++++++++++++
 tools/perf/util/evlist.h |  4 ++++
 tools/perf/util/evsel.h  |  1 +
 4 files changed, 46 insertions(+)

diff --git a/tools/perf/util/cpumap.h b/tools/perf/util/cpumap.h
index 2553bef1279d..a9b13d72fd29 100644
--- a/tools/perf/util/cpumap.h
+++ b/tools/perf/util/cpumap.h
@@ -60,4 +60,12 @@ int cpu_map__build_map(struct perf_cpu_map *cpus, struct perf_cpu_map **res,
 
 int cpu_map__cpu(struct perf_cpu_map *cpus, int idx);
 bool cpu_map__has(struct perf_cpu_map *cpus, int cpu);
+
+#define __cpumap__for_each_cpu(cpus, index, cpu, maxcpu)\
+	for ((index) = 0; 				\
+	     (cpu) = (index) < (maxcpu) ? (cpus)->map[index] : -1, (index) < (maxcpu); \
+	     (index)++)
+#define cpumap__for_each_cpu(cpus, index, cpu) \
+	__cpumap__for_each_cpu(cpus, index, cpu, (cpus)->nr)
+
 #endif /* __PERF_CPUMAP_H */
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index fdce590d2278..da3c8f8ef68e 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -342,6 +342,39 @@ static int perf_evlist__nr_threads(struct evlist *evlist,
 		return perf_thread_map__nr(evlist->core.threads);
 }
 
+struct perf_cpu_map *evlist__cpu_iter_start(struct evlist *evlist)
+{
+	struct perf_cpu_map *cpus;
+	struct evsel *pos;
+
+	/*
+	 * evlist->cpus is not necessarily a superset of all the
+	 * events' cpus, so compute our own superset. This
+	 * assumes that a superset exists.
+	 */
+	cpus = evlist->core.cpus;
+	evlist__for_each_entry(evlist, pos) {
+		pos->cpu_index = 0;
+		if (pos->core.cpus->nr > cpus->nr)
+			cpus = pos->core.cpus;
+	}
+	return cpus;
+}
+
+bool evlist__cpu_iter_skip(struct evsel *ev, int cpu)
+{
+	if (ev->cpu_index >= ev->core.cpus->nr)
+		return true;
+	if (cpu >= 0 && ev->core.cpus->map[ev->cpu_index] != cpu)
+		return true;
+	return false;
+}
+
+void evlist__cpu_iter_next(struct evsel *ev)
+{
+	ev->cpu_index++;
+}
+
 void evlist__disable(struct evlist *evlist)
 {
 	struct evsel *pos;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 13051409fd22..c1deb8ebdcea 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -336,6 +336,10 @@ void perf_evlist__to_front(struct evlist *evlist,
 void perf_evlist__set_tracking_event(struct evlist *evlist,
 				     struct evsel *tracking_evsel);
 
+struct perf_cpu_map *evlist__cpu_iter_start(struct evlist *evlist);
+bool evlist__cpu_iter_skip(struct evsel *ev, int cpu);
+void evlist__cpu_iter_next(struct evsel *ev);
+
 struct evsel *
 perf_evlist__find_evsel_by_str(struct evlist *evlist, const char *str);
 
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index ddc5ee6f6592..cf90019ae744 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -95,6 +95,7 @@ struct evsel {
 	bool			collect_stat;
 	bool			weak_group;
 	bool			percore;
+	int			cpu_index;
 	const char		*pmu_name;
 	struct {
 		perf_evsel__sb_cb_t	*cb;
-- 
2.21.0


Thread overview: 19+ messages
2019-10-25 18:14 Optimize perf stat for large number of events/cpus v3 Andi Kleen
2019-10-25 18:14 ` [PATCH v3 1/7] perf pmu: Use file system cache to optimize sysfs access Andi Kleen
2019-10-28 22:01   ` Jiri Olsa
2019-10-29  2:14     ` Andi Kleen
2019-10-25 18:14 ` [PATCH v3 2/7] perf affinity: Add infrastructure to save/restore affinity Andi Kleen
2019-10-25 18:14 ` Andi Kleen [this message]
2019-10-30 10:05   ` [PATCH v3 3/7] perf evsel: Add iterator to iterate over events ordered by CPU Jiri Olsa
2019-10-30 10:06   ` Jiri Olsa
2019-10-30 15:51     ` Andi Kleen
2019-10-30 18:15       ` Jiri Olsa
2019-10-30 19:03         ` Andi Kleen
2019-11-01  8:38           ` Jiri Olsa
2019-10-25 18:14 ` [PATCH v3 4/7] perf stat: Use affinity for closing file descriptors Andi Kleen
2019-10-30 10:05   ` Jiri Olsa
2019-11-04 23:35     ` Andi Kleen
2019-10-25 18:14 ` [PATCH v3 5/7] perf stat: Use affinity for opening events Andi Kleen
2019-10-30 10:06   ` Jiri Olsa
2019-10-25 18:14 ` [PATCH v3 6/7] perf stat: Use affinity for reading Andi Kleen
2019-10-25 18:14 ` [PATCH v3 7/7] perf stat: Use affinity for enabling/disabling events Andi Kleen
