linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 00/22] Reference count checker and related fixes
@ 2022-02-11 10:33 Ian Rogers
  2022-02-11 10:33 ` [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api Ian Rogers
                   ` (21 more replies)
  0 siblings, 22 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

The perf tool has a class of memory problems where reference counts
are used incorrectly. Memory/address sanitizers and valgrind don't
provide useful ways to debug these problems: you see a memory leak
where the only pertinent information is the original allocation
site. What would be more useful is knowing where a get fails to have a
corresponding put, where there are double puts, etc.

This work was motivated by the roll-back of:
https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
where fixing a missed put resulted in a use-after-free in a different
context. Fixing the issue gave the sense that a game of whack-a-mole
had been embarked upon, adding missed gets and puts one at a time.

The basic approach of the change is to add a level of indirection at
the get and put calls. Get allocates a level of indirection that, if
no corresponding put is called, becomes a memory leak (and associated
stack trace) that leak sanitizer can report. Similarly, if two puts are
called for the same get, then a double free can be detected by address
sanitizer. Use after put is also detected and should yield a SEGV even
without a sanitizer.
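
As a minimal sketch of that indirection (the names below are
illustrative only, not the actual rc_check.h API): each get allocates a
small wrapper that forwards to the real object and each put frees the
wrapper, so the sanitizers see missed puts as leaks, double puts as
double frees, and use after put as a use of freed memory.

  /* Illustrative sketch; the real checker uses macros in rc_check.h. */
  #include <stdlib.h>

  struct obj {
          int refcnt;
          /* ... payload ... */
  };

  /* Checked handle that callers hold instead of a bare struct obj *. */
  struct obj_handle {
          struct obj *orig;
  };

  static struct obj_handle *obj__get(struct obj *o)
  {
          struct obj_handle *h = malloc(sizeof(*h));

          if (h) {
                  o->refcnt++;
                  h->orig = o;  /* a missed put leaks h: leak sanitizer
                                   reports the get's allocation stack */
          }
          return h;
  }

  static void obj__put(struct obj_handle *h)
  {
          if (!h)
                  return;
          if (--h->orig->refcnt == 0)
                  free(h->orig);
          free(h);              /* a second put is a double free; any
                                   later use of h is use-after-free */
  }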

Adding reference count checking to cpu map was done as a proof of
concept; it yielded little beyond a location where the use of get
could be made cleaner by using its result. Reference count checking on
nsinfo identified a double free of the indirection layer and the
related threads, thereby identifying a data race as discussed here:
 https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
Accordingly the dso->lock was extended and used to cover the race.

The v3 version addresses problems in v2, in particular using macros to
avoid #ifdefs. It also applies the reference count checking approach
to two more data structures, maps and map. While maps was
straightforward, struct map exposed a problem: reference counted
objects can sit on lists and rb-trees that are oblivious to the
reference count. To sanitize this, struct map is changed so that it is
referenced by a list or rb-tree node rather than being part of one, as
sketched below. This simplifies the reference counting, and the
patches have caught and fixed a number of missed or mismatched
reference counts relating to struct map.
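
A rough sketch of the change (field and struct names are illustrative,
not the final layouts; the types come from the tools/ copies of the
kernel headers):

  #include <linux/rbtree.h>
  #include <linux/refcount.h>
  #include <linux/types.h>

  /*
   * Before: the rb-tree node is embedded in the reference counted
   * object, so membership in the tree is invisible to the refcount.
   */
  struct map_before {
          struct rb_node rb_node; /* intrusive: the tree points into map */
          u64 start, end;
          refcount_t refcnt;
  };

  /*
   * After: the container owns a small node holding a counted reference,
   * so inserting into maps does a get and removing does a put.
   */
  struct map_after {
          u64 start, end;
          refcount_t refcnt;
  };

  struct map_rb_node {
          struct rb_node rb_node;
          struct map_after *map;  /* node holds its own map__get() */
  };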

The patches are arranged so that API refactors and bug fixes appear
first, then the reference count checker itself appears. This allows
for the refactor and fixes to be applied upstream first, as has
already happened with cpumap.

A wider discussion of the approach is on the mailing list:
 https://lore.kernel.org/linux-perf-users/YffqnynWcc5oFkI5@kernel.org/T/#mf25ccd7a2e03de92cec29d36e2999a8ab5ec7f88
Comparing it to a past approach:
 https://lore.kernel.org/all/20151209021047.10245.8918.stgit@localhost.localdomain/
and to ref_tracker:
 https://lwn.net/Articles/877603/

Ian Rogers (22):
  perf cpumap: Migrate to libperf cpumap api
  perf cpumap: Use for each loop
  perf dso: Make lock error check and add BUG_ONs
  perf dso: Hold lock when accessing nsinfo
  perf maps: Use a pointer for kmaps
  perf test: Use pointer for maps
  perf maps: Reduce scope of init and exit
  perf maps: Move maps code to own C file
  perf map: Add const to map_ip and unmap_ip
  perf map: Make map__contains_symbol args const
  perf map: Move map list node into symbol
  perf maps: Remove rb_node from struct map
  perf namespaces: Add functions to access nsinfo
  perf maps: Add functions to access maps
  perf map: Use functions to access the variables in map
  perf test: Add extra diagnostics to maps test
  perf map: Changes to reference counting
  libperf: Add reference count checking macros.
  perf cpumap: Add reference count checking
  perf namespaces: Add reference count checking
  perf maps: Add reference count checking.
  perf map: Add reference count checking

 tools/lib/perf/cpumap.c                       |  93 +--
 tools/lib/perf/include/internal/cpumap.h      |   4 +-
 tools/lib/perf/include/internal/rc_check.h    |  94 +++
 tools/perf/arch/s390/annotate/instructions.c  |   4 +-
 tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
 tools/perf/arch/x86/util/event.c              |  15 +-
 tools/perf/builtin-annotate.c                 |   8 +-
 tools/perf/builtin-inject.c                   |  14 +-
 tools/perf/builtin-kallsyms.c                 |   6 +-
 tools/perf/builtin-kmem.c                     |   4 +-
 tools/perf/builtin-mem.c                      |   4 +-
 tools/perf/builtin-probe.c                    |   2 +-
 tools/perf/builtin-report.c                   |  26 +-
 tools/perf/builtin-script.c                   |  26 +-
 tools/perf/builtin-top.c                      |  16 +-
 tools/perf/builtin-trace.c                    |   2 +-
 .../scripts/python/Perf-Trace-Util/Context.c  |  14 +-
 tools/perf/tests/code-reading.c               |  32 +-
 tools/perf/tests/cpumap.c                     |  14 +-
 tools/perf/tests/hists_common.c               |   4 +-
 tools/perf/tests/hists_cumulate.c             |  14 +-
 tools/perf/tests/hists_filter.c               |  14 +-
 tools/perf/tests/hists_link.c                 |  18 +-
 tools/perf/tests/hists_output.c               |  12 +-
 tools/perf/tests/maps.c                       |  87 ++-
 tools/perf/tests/mmap-thread-lookup.c         |   3 +-
 tools/perf/tests/thread-maps-share.c          |  29 +-
 tools/perf/tests/vmlinux-kallsyms.c           |  56 +-
 tools/perf/ui/browsers/annotate.c             |   7 +-
 tools/perf/ui/browsers/hists.c                |  21 +-
 tools/perf/ui/browsers/map.c                  |   4 +-
 tools/perf/util/Build                         |   1 +
 tools/perf/util/annotate.c                    |  38 +-
 tools/perf/util/auxtrace.c                    |   2 +-
 tools/perf/util/block-info.c                  |   4 +-
 tools/perf/util/bpf-event.c                   |  10 +-
 tools/perf/util/build-id.c                    |   6 +-
 tools/perf/util/callchain.c                   |  28 +-
 tools/perf/util/cpumap.c                      |  36 +-
 tools/perf/util/data-convert-json.c           |   4 +-
 tools/perf/util/db-export.c                   |  16 +-
 tools/perf/util/dlfilter.c                    |  29 +-
 tools/perf/util/dso.c                         |  21 +-
 tools/perf/util/event.c                       |  30 +-
 tools/perf/util/evsel_fprintf.c               |   4 +-
 tools/perf/util/hist.c                        |  22 +-
 tools/perf/util/intel-pt.c                    |  48 +-
 tools/perf/util/jitdump.c                     |  10 +-
 tools/perf/util/machine.c                     | 252 ++++---
 tools/perf/util/machine.h                     |   8 +-
 tools/perf/util/map.c                         | 629 ++++--------------
 tools/perf/util/map.h                         |  80 ++-
 tools/perf/util/maps.c                        | 475 +++++++++++++
 tools/perf/util/maps.h                        |  69 +-
 tools/perf/util/namespaces.c                  | 158 +++--
 tools/perf/util/namespaces.h                  |  13 +-
 tools/perf/util/pmu.c                         |  18 +-
 tools/perf/util/probe-event.c                 |  58 +-
 .../util/scripting-engines/trace-event-perl.c |   9 +-
 .../scripting-engines/trace-event-python.c    |  14 +-
 tools/perf/util/sort.c                        |  48 +-
 tools/perf/util/symbol-elf.c                  |  59 +-
 tools/perf/util/symbol.c                      | 280 +++++---
 tools/perf/util/symbol_fprintf.c              |   2 +-
 tools/perf/util/synthetic-events.c            |  34 +-
 tools/perf/util/thread-stack.c                |   4 +-
 tools/perf/util/thread.c                      |  40 +-
 tools/perf/util/unwind-libunwind-local.c      |  50 +-
 tools/perf/util/unwind-libunwind.c            |  34 +-
 tools/perf/util/vdso.c                        |   7 +-
 70 files changed, 1941 insertions(+), 1358 deletions(-)
 create mode 100644 tools/lib/perf/include/internal/rc_check.h
 create mode 100644 tools/perf/util/maps.c

-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply	[flat|nested] 58+ messages in thread

* [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:02   ` Arnaldo Carvalho de Melo
  2022-02-11 10:33 ` [PATCH v3 02/22] perf cpumap: Use for each loop Ian Rogers
                   ` (20 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Switch from directly accessing the perf_cpu_map to using the appropriate
libperf API when possible. Using the API simplifies the job of
refactoring use of perf_cpu_map.
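
As a standalone illustration of the accessors used here (a hedged
sketch assuming libperf's public <perf/cpumap.h> header and linking
against libperf; the "0-3" CPU list is just an example):

  #include <perf/cpumap.h>
  #include <stdio.h>

  int main(void)
  {
          struct perf_cpu_map *map = perf_cpu_map__new("0-3");
          struct perf_cpu cpu;
          int idx;

          /* Iterate without touching the struct's internals. */
          perf_cpu_map__for_each_cpu(cpu, idx, map)
                  printf("idx %d -> cpu %d\n", idx, cpu.cpu);

          printf("nr = %d\n", perf_cpu_map__nr(map));
          perf_cpu_map__put(map);
          return 0;
  }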

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/cpumap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/perf/tests/cpumap.c b/tools/perf/tests/cpumap.c
index 84e87e31f119..f94929ebb54b 100644
--- a/tools/perf/tests/cpumap.c
+++ b/tools/perf/tests/cpumap.c
@@ -35,10 +35,10 @@ static int process_event_mask(struct perf_tool *tool __maybe_unused,
 	}
 
 	map = cpu_map__new_data(data);
-	TEST_ASSERT_VAL("wrong nr",  map->nr == 20);
+	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 20);
 
 	for (i = 0; i < 20; i++) {
-		TEST_ASSERT_VAL("wrong cpu", map->map[i].cpu == i);
+		TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, i).cpu == i);
 	}
 
 	perf_cpu_map__put(map);
@@ -66,9 +66,9 @@ static int process_event_cpus(struct perf_tool *tool __maybe_unused,
 	TEST_ASSERT_VAL("wrong cpu",  cpus->cpu[1] == 256);
 
 	map = cpu_map__new_data(data);
-	TEST_ASSERT_VAL("wrong nr",  map->nr == 2);
-	TEST_ASSERT_VAL("wrong cpu", map->map[0].cpu == 1);
-	TEST_ASSERT_VAL("wrong cpu", map->map[1].cpu == 256);
+	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 2);
+	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 0).cpu == 1);
+	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 1).cpu == 256);
 	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&map->refcnt) == 1);
 	perf_cpu_map__put(map);
 	return 0;
@@ -130,7 +130,7 @@ static int test__cpu_map_merge(struct test_suite *test __maybe_unused, int subte
 	struct perf_cpu_map *c = perf_cpu_map__merge(a, b);
 	char buf[100];
 
-	TEST_ASSERT_VAL("failed to merge map: bad nr", c->nr == 5);
+	TEST_ASSERT_VAL("failed to merge map: bad nr", perf_cpu_map__nr(c) == 5);
 	cpu_map__snprint(c, buf, sizeof(buf));
 	TEST_ASSERT_VAL("failed to merge map: bad result", !strcmp(buf, "1-2,4-5,7"));
 	perf_cpu_map__put(b);
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 02/22] perf cpumap: Use for each loop
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
  2022-02-11 10:33 ` [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:04   ` Arnaldo Carvalho de Melo
  2022-02-11 10:33 ` [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs Ian Rogers
                   ` (19 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Improve readability in perf_pmu__cpus_match by using
perf_cpu_map__for_each_cpu.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/pmu.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 8dfbba15aeb8..9a1c7e63e663 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -1998,7 +1998,8 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
 {
 	struct perf_cpu_map *pmu_cpus = pmu->cpus;
 	struct perf_cpu_map *matched_cpus, *unmatched_cpus;
-	int matched_nr = 0, unmatched_nr = 0;
+	struct perf_cpu cpu;
+	int i, matched_nr = 0, unmatched_nr = 0;
 
 	matched_cpus = perf_cpu_map__default_new();
 	if (!matched_cpus)
@@ -2010,14 +2011,11 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
 		return -1;
 	}
 
-	for (int i = 0; i < cpus->nr; i++) {
-		int cpu;
-
-		cpu = perf_cpu_map__idx(pmu_cpus, cpus->map[i]);
-		if (cpu == -1)
-			unmatched_cpus->map[unmatched_nr++] = cpus->map[i];
+	perf_cpu_map__for_each_cpu(cpu, i, cpus) {
+		if (!perf_cpu_map__has(pmu_cpus, cpu))
+			unmatched_cpus->map[unmatched_nr++] = cpu;
 		else
-			matched_cpus->map[matched_nr++] = cpus->map[i];
+			matched_cpus->map[matched_nr++] = cpu;
 	}
 
 	unmatched_cpus->nr = unmatched_nr;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
  2022-02-11 10:33 ` [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api Ian Rogers
  2022-02-11 10:33 ` [PATCH v3 02/22] perf cpumap: Use for each loop Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:13   ` Arnaldo Carvalho de Melo
  2022-02-11 10:33 ` [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo Ian Rogers
                   ` (18 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Make the pthread mutex on dso use the error check type. This allows
deadlock checking via the return value of pthread_mutex_lock(). Assert
that the returned value is always 0.
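
As a small standalone illustration (not the dso code itself) of what
the error check type buys: relocking from the owning thread returns
EDEADLK instead of hanging, so asserting on the return value catches
the bug.

  #include <assert.h>
  #include <errno.h>
  #include <pthread.h>

  int main(void)
  {
          pthread_mutexattr_t attr;
          pthread_mutex_t lock;

          pthread_mutexattr_init(&attr);
          pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
          pthread_mutex_init(&lock, &attr);
          pthread_mutexattr_destroy(&attr);

          assert(pthread_mutex_lock(&lock) == 0);
          /* A second lock from the same thread is reported, not a hang. */
          assert(pthread_mutex_lock(&lock) == EDEADLK);

          pthread_mutex_unlock(&lock);
          pthread_mutex_destroy(&lock);
          return 0;
  }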

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/dso.c    | 12 +++++++++---
 tools/perf/util/symbol.c |  2 +-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
index 9cc8a1772b4b..6beccffeef7b 100644
--- a/tools/perf/util/dso.c
+++ b/tools/perf/util/dso.c
@@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
 	struct rb_root *root = &dso->data.cache;
 	struct rb_node *next = rb_first(root);
 
-	pthread_mutex_lock(&dso->lock);
+	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 	while (next) {
 		struct dso_cache *cache;
 
@@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
 	struct dso_cache *cache;
 	u64 offset = new->offset;
 
-	pthread_mutex_lock(&dso->lock);
+	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 	while (*p != NULL) {
 		u64 end;
 
@@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
 	struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
 
 	if (dso != NULL) {
+		pthread_mutexattr_t lock_attr;
+
 		strcpy(dso->name, name);
 		if (id)
 			dso->id = *id;
@@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
 		dso->root = NULL;
 		INIT_LIST_HEAD(&dso->node);
 		INIT_LIST_HEAD(&dso->data.open_entry);
-		pthread_mutex_init(&dso->lock, NULL);
+		pthread_mutexattr_init(&lock_attr);
+		pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
+		pthread_mutex_init(&dso->lock, &lock_attr);
+		pthread_mutexattr_destroy(&lock_attr);
 		refcount_set(&dso->refcnt, 1);
+
 	}
 
 	return dso;
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index b2ed3140a1fa..43f47532696f 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
 	}
 
 	nsinfo__mountns_enter(dso->nsinfo, &nsc);
-	pthread_mutex_lock(&dso->lock);
+	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 
 	/* check again under the dso->lock */
 	if (dso__loaded(dso)) {
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (2 preceding siblings ...)
  2022-02-11 10:33 ` [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:14   ` Arnaldo Carvalho de Melo
  2022-02-12 11:30   ` Jiri Olsa
  2022-02-11 10:33 ` [PATCH v3 05/22] perf maps: Use a pointer for kmaps Ian Rogers
                   ` (17 subsequent siblings)
  21 siblings, 2 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

There may be threads racing to update dso->nsinfo:
https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
Holding the dso->lock avoids use-after-free, memory leaks and other
such bugs. Apply the fix from:
https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
for the missing nsinfo__put, now that the accesses are free of data
races.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-inject.c   | 4 ++++
 tools/perf/util/dso.c         | 5 ++++-
 tools/perf/util/map.c         | 3 +++
 tools/perf/util/probe-event.c | 2 ++
 tools/perf/util/symbol.c      | 2 +-
 5 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index fbf43a454cba..bede332bf0e2 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -363,8 +363,10 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
 	}
 
 	if (dso) {
+		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 		nsinfo__put(dso->nsinfo);
 		dso->nsinfo = nsi;
+		pthread_mutex_unlock(&dso->lock);
 	} else
 		nsinfo__put(nsi);
 
@@ -547,7 +549,9 @@ static int dso__read_build_id(struct dso *dso)
 	if (dso->has_build_id)
 		return 0;
 
+	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 	nsinfo__mountns_enter(dso->nsinfo, &nsc);
+	pthread_mutex_unlock(&dso->lock);
 	if (filename__read_build_id(dso->long_name, &dso->bid) > 0)
 		dso->has_build_id = true;
 	nsinfo__mountns_exit(&nsc);
diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
index 6beccffeef7b..b2f570adba35 100644
--- a/tools/perf/util/dso.c
+++ b/tools/perf/util/dso.c
@@ -548,8 +548,11 @@ static int open_dso(struct dso *dso, struct machine *machine)
 	int fd;
 	struct nscookie nsc;
 
-	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
+	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE) {
+		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 		nsinfo__mountns_enter(dso->nsinfo, &nsc);
+		pthread_mutex_unlock(&dso->lock);
+	}
 	fd = __open_dso(dso, machine);
 	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
 		nsinfo__mountns_exit(&nsc);
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 8af693d9678c..ae99b52502d5 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -192,7 +192,10 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 			if (!(prot & PROT_EXEC))
 				dso__set_loaded(dso);
 		}
+		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
+		nsinfo__put(dso->nsinfo);
 		dso->nsinfo = nsi;
+		pthread_mutex_unlock(&dso->lock);
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(dso, bid);
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index a834918a0a0d..7444e689ece7 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -180,8 +180,10 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
 
 		map = dso__new_map(target);
 		if (map && map->dso) {
+			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
 			nsinfo__put(map->dso->nsinfo);
 			map->dso->nsinfo = nsinfo__get(nsi);
+			pthread_mutex_unlock(&map->dso->lock);
 		}
 		return map;
 	} else {
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 43f47532696f..a504346feb05 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1774,6 +1774,7 @@ int dso__load(struct dso *dso, struct map *map)
 	char newmapname[PATH_MAX];
 	const char *map_path = dso->long_name;
 
+	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 	perfmap = strncmp(dso->name, "/tmp/perf-", 10) == 0;
 	if (perfmap) {
 		if (dso->nsinfo && (dso__find_perf_map(newmapname,
@@ -1783,7 +1784,6 @@ int dso__load(struct dso *dso, struct map *map)
 	}
 
 	nsinfo__mountns_enter(dso->nsinfo, &nsc);
-	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
 
 	/* check again under the dso->lock */
 	if (dso__loaded(dso)) {
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 05/22] perf maps: Use a pointer for kmaps
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (3 preceding siblings ...)
  2022-02-11 10:33 ` [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:23   ` Arnaldo Carvalho de Melo
  2022-02-11 10:33 ` [PATCH v3 06/22] perf test: Use pointer for maps Ian Rogers
                   ` (16 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

struct maps is reference counted; using a pointer to it is more idiomatic.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/x86/util/event.c    |  2 +-
 tools/perf/tests/vmlinux-kallsyms.c |  4 +--
 tools/perf/util/bpf-event.c         |  2 +-
 tools/perf/util/callchain.c         |  2 +-
 tools/perf/util/event.c             |  6 ++---
 tools/perf/util/machine.c           | 38 ++++++++++++++++-------------
 tools/perf/util/machine.h           |  8 +++---
 tools/perf/util/probe-event.c       |  2 +-
 8 files changed, 34 insertions(+), 30 deletions(-)

diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index 9b31734ee968..e670f3547581 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -18,7 +18,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 {
 	int rc = 0;
 	struct map *pos;
-	struct maps *kmaps = &machine->kmaps;
+	struct maps *kmaps = machine__kernel_maps(machine);
 	union perf_event *event = zalloc(sizeof(event->mmap) +
 					 machine->id_hdr_size);
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index e80df13c0420..84bf5f640065 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -293,7 +293,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		 * so use the short name, less descriptive but the same ("[kernel]" in
 		 * both cases.
 		 */
-		pair = maps__find_by_name(&kallsyms.kmaps, (map->dso->kernel ?
+		pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
 								map->dso->short_name :
 								map->dso->name));
 		if (pair) {
@@ -315,7 +315,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
 		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
 
-		pair = maps__find(&kallsyms.kmaps, mem_start);
+		pair = maps__find(kallsyms.kmaps, mem_start);
 		if (pair == NULL || pair->priv)
 			continue;
 
diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
index a517eaa51eb3..33257b594a71 100644
--- a/tools/perf/util/bpf-event.c
+++ b/tools/perf/util/bpf-event.c
@@ -92,7 +92,7 @@ static int machine__process_bpf_event_load(struct machine *machine,
 	for (i = 0; i < info_linear->info.nr_jited_ksyms; i++) {
 		u64 *addrs = (u64 *)(uintptr_t)(info_linear->info.jited_ksyms);
 		u64 addr = addrs[i];
-		struct map *map = maps__find(&machine->kmaps, addr);
+		struct map *map = maps__find(machine__kernel_maps(machine), addr);
 
 		if (map) {
 			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 131207b91d15..5c27a4b2e7a7 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -1119,7 +1119,7 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 			goto out;
 	}
 
-	if (al->maps == &al->maps->machine->kmaps) {
+	if (al->maps == machine__kernel_maps(al->maps->machine)) {
 		if (machine__is_host(al->maps->machine)) {
 			al->cpumode = PERF_RECORD_MISC_KERNEL;
 			al->level = 'k';
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index fe24801f8e9f..6439c888ae38 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -484,7 +484,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
 	if (machine) {
 		struct addr_location al;
 
-		al.map = maps__find(&machine->kmaps, tp->addr);
+		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
 		if (al.map && map__load(al.map) >= 0) {
 			al.addr = al.map->map_ip(al.map, tp->addr);
 			al.sym = map__find_symbol(al.map, al.addr);
@@ -587,13 +587,13 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 
 	if (cpumode == PERF_RECORD_MISC_KERNEL && perf_host) {
 		al->level = 'k';
-		al->maps = maps = &machine->kmaps;
+		al->maps = maps = machine__kernel_maps(machine);
 		load_map = true;
 	} else if (cpumode == PERF_RECORD_MISC_USER && perf_host) {
 		al->level = '.';
 	} else if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL && perf_guest) {
 		al->level = 'g';
-		al->maps = maps = &machine->kmaps;
+		al->maps = maps = machine__kernel_maps(machine);
 		load_map = true;
 	} else if (cpumode == PERF_RECORD_MISC_GUEST_USER && perf_guest) {
 		al->level = 'u';
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index f70ba56912d4..57fbdba66425 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -89,7 +89,10 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 	int err = -ENOMEM;
 
 	memset(machine, 0, sizeof(*machine));
-	maps__init(&machine->kmaps, machine);
+	machine->kmaps = maps__new(machine);
+	if (machine->kmaps == NULL)
+		return -ENOMEM;
+
 	RB_CLEAR_NODE(&machine->rb_node);
 	dsos__init(&machine->dsos);
 
@@ -108,7 +111,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 
 	machine->root_dir = strdup(root_dir);
 	if (machine->root_dir == NULL)
-		return -ENOMEM;
+		goto out;
 
 	if (machine__set_mmap_name(machine))
 		goto out;
@@ -131,6 +134,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
 
 out:
 	if (err) {
+		zfree(&machine->kmaps);
 		zfree(&machine->root_dir);
 		zfree(&machine->mmap_name);
 	}
@@ -220,7 +224,7 @@ void machine__exit(struct machine *machine)
 		return;
 
 	machine__destroy_kernel_maps(machine);
-	maps__exit(&machine->kmaps);
+	maps__delete(machine->kmaps);
 	dsos__exit(&machine->dsos);
 	machine__exit_vdso(machine);
 	zfree(&machine->root_dir);
@@ -778,7 +782,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 					     struct perf_sample *sample __maybe_unused)
 {
 	struct symbol *sym;
-	struct map *map = maps__find(&machine->kmaps, event->ksymbol.addr);
+	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
 
 	if (!map) {
 		struct dso *dso = dso__new(event->ksymbol.name);
@@ -801,7 +805,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 
 		map->start = event->ksymbol.addr;
 		map->end = map->start + event->ksymbol.len;
-		maps__insert(&machine->kmaps, map);
+		maps__insert(machine__kernel_maps(machine), map);
 		map__put(map);
 		dso__set_loaded(dso);
 
@@ -827,12 +831,12 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	struct symbol *sym;
 	struct map *map;
 
-	map = maps__find(&machine->kmaps, event->ksymbol.addr);
+	map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
 	if (!map)
 		return 0;
 
 	if (map != machine->vmlinux_map)
-		maps__remove(&machine->kmaps, map);
+		maps__remove(machine__kernel_maps(machine), map);
 	else {
 		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
 		if (sym)
@@ -858,7 +862,7 @@ int machine__process_ksymbol(struct machine *machine __maybe_unused,
 int machine__process_text_poke(struct machine *machine, union perf_event *event,
 			       struct perf_sample *sample __maybe_unused)
 {
-	struct map *map = maps__find(&machine->kmaps, event->text_poke.addr);
+	struct map *map = maps__find(machine__kernel_maps(machine), event->text_poke.addr);
 	u8 cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
 
 	if (dump_trace)
@@ -914,7 +918,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	if (map == NULL)
 		goto out;
 
-	maps__insert(&machine->kmaps, map);
+	maps__insert(machine__kernel_maps(machine), map);
 
 	/* Put the map here because maps__insert already got it */
 	map__put(map);
@@ -1100,7 +1104,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
 
 	strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
 
-	maps__insert(&machine->kmaps, map);
+	maps__insert(machine__kernel_maps(machine), map);
 
 	pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
 		  kmap->name, map->start, map->end);
@@ -1145,7 +1149,7 @@ static u64 find_entry_trampoline(struct dso *dso)
 int machine__map_x86_64_entry_trampolines(struct machine *machine,
 					  struct dso *kernel)
 {
-	struct maps *kmaps = &machine->kmaps;
+	struct maps *kmaps = machine__kernel_maps(machine);
 	int nr_cpus_avail, cpu;
 	bool found = false;
 	struct map *map;
@@ -1215,7 +1219,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 		return -1;
 
 	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
-	maps__insert(&machine->kmaps, machine->vmlinux_map);
+	maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 	return 0;
 }
 
@@ -1228,7 +1232,7 @@ void machine__destroy_kernel_maps(struct machine *machine)
 		return;
 
 	kmap = map__kmap(map);
-	maps__remove(&machine->kmaps, map);
+	maps__remove(machine__kernel_maps(machine), map);
 	if (kmap && kmap->ref_reloc_sym) {
 		zfree((char **)&kmap->ref_reloc_sym->name);
 		zfree(&kmap->ref_reloc_sym);
@@ -1323,7 +1327,7 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
 		 * kernel, with modules between them, fixup the end of all
 		 * sections.
 		 */
-		maps__fixup_end(&machine->kmaps);
+		maps__fixup_end(machine__kernel_maps(machine));
 	}
 
 	return ret;
@@ -1471,7 +1475,7 @@ static int machine__set_modules_path(struct machine *machine)
 		 machine->root_dir, version);
 	free(version);
 
-	return maps__set_modules_path_dir(&machine->kmaps, modules_path, 0);
+	return maps__set_modules_path_dir(machine__kernel_maps(machine), modules_path, 0);
 }
 int __weak arch__fix_module_text_start(u64 *start __maybe_unused,
 				u64 *size __maybe_unused,
@@ -1544,11 +1548,11 @@ static void machine__update_kernel_mmap(struct machine *machine,
 	struct map *map = machine__kernel_map(machine);
 
 	map__get(map);
-	maps__remove(&machine->kmaps, map);
+	maps__remove(machine__kernel_maps(machine), map);
 
 	machine__set_kernel_mmap(machine, start, end);
 
-	maps__insert(&machine->kmaps, map);
+	maps__insert(machine__kernel_maps(machine), map);
 	map__put(map);
 }
 
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index c5a45dc8df4c..0023165422aa 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -51,7 +51,7 @@ struct machine {
 	struct vdso_info  *vdso_info;
 	struct perf_env   *env;
 	struct dsos	  dsos;
-	struct maps	  kmaps;
+	struct maps	  *kmaps;
 	struct map	  *vmlinux_map;
 	u64		  kernel_start;
 	pid_t		  *current_tid;
@@ -83,7 +83,7 @@ struct map *machine__kernel_map(struct machine *machine)
 static inline
 struct maps *machine__kernel_maps(struct machine *machine)
 {
-	return &machine->kmaps;
+	return machine->kmaps;
 }
 
 int machine__get_kernel_start(struct machine *machine);
@@ -223,7 +223,7 @@ static inline
 struct symbol *machine__find_kernel_symbol(struct machine *machine, u64 addr,
 					   struct map **mapp)
 {
-	return maps__find_symbol(&machine->kmaps, addr, mapp);
+	return maps__find_symbol(machine->kmaps, addr, mapp);
 }
 
 static inline
@@ -231,7 +231,7 @@ struct symbol *machine__find_kernel_symbol_by_name(struct machine *machine,
 						   const char *name,
 						   struct map **mapp)
 {
-	return maps__find_symbol_by_name(&machine->kmaps, name, mapp);
+	return maps__find_symbol_by_name(machine->kmaps, name, mapp);
 }
 
 int arch__fix_module_text_start(u64 *start, u64 *size, const char *name);
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 7444e689ece7..bc5ab782ace5 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -334,7 +334,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 		char module_name[128];
 
 		snprintf(module_name, sizeof(module_name), "[%s]", module);
-		map = maps__find_by_name(&host_machine->kmaps, module_name);
+		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
 		if (map) {
 			dso = map->dso;
 			goto found;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 06/22] perf test: Use pointer for maps
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (4 preceding siblings ...)
  2022-02-11 10:33 ` [PATCH v3 05/22] perf maps: Use a pointer for kmaps Ian Rogers
@ 2022-02-11 10:33 ` Ian Rogers
  2022-02-11 17:24   ` Arnaldo Carvalho de Melo
  2022-02-14 19:48   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 07/22] perf maps: Reduce scope of init and exit Ian Rogers
                   ` (15 subsequent siblings)
  21 siblings, 2 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:33 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

struct maps is reference counted; using a pointer to it is more idiomatic.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/maps.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index e308a3296cef..6f53f17f788e 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -35,7 +35,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 
 static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
 {
-	struct maps maps;
+	struct maps *maps;
 	unsigned int i;
 	struct map_def bpf_progs[] = {
 		{ "bpf_prog_1", 200, 300 },
@@ -64,7 +64,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 	struct map *map_kcore1, *map_kcore2, *map_kcore3;
 	int ret;
 
-	maps__init(&maps, NULL);
+	maps = maps__new(NULL);
 
 	for (i = 0; i < ARRAY_SIZE(bpf_progs); i++) {
 		struct map *map;
@@ -74,7 +74,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 
 		map->start = bpf_progs[i].start;
 		map->end   = bpf_progs[i].end;
-		maps__insert(&maps, map);
+		maps__insert(maps, map);
 		map__put(map);
 	}
 
@@ -99,25 +99,25 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 	map_kcore3->start = 880;
 	map_kcore3->end   = 1100;
 
-	ret = maps__merge_in(&maps, map_kcore1);
+	ret = maps__merge_in(maps, map_kcore1);
 	TEST_ASSERT_VAL("failed to merge map", !ret);
 
-	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
+	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
 	TEST_ASSERT_VAL("merge check failed", !ret);
 
-	ret = maps__merge_in(&maps, map_kcore2);
+	ret = maps__merge_in(maps, map_kcore2);
 	TEST_ASSERT_VAL("failed to merge map", !ret);
 
-	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
+	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
 	TEST_ASSERT_VAL("merge check failed", !ret);
 
-	ret = maps__merge_in(&maps, map_kcore3);
+	ret = maps__merge_in(maps, map_kcore3);
 	TEST_ASSERT_VAL("failed to merge map", !ret);
 
-	ret = check_maps(merged3, ARRAY_SIZE(merged3), &maps);
+	ret = check_maps(merged3, ARRAY_SIZE(merged3), maps);
 	TEST_ASSERT_VAL("merge check failed", !ret);
 
-	maps__exit(&maps);
+	maps__delete(maps);
 	return TEST_OK;
 }
 
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 07/22] perf maps: Reduce scope of init and exit
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (5 preceding siblings ...)
  2022-02-11 10:33 ` [PATCH v3 06/22] perf test: Use pointer for maps Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:26   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 08/22] perf maps: Move maps code to own C file Ian Rogers
                   ` (14 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

maps__init() and maps__exit() are now only called via maps__new() and
maps__delete(), so reduce them to file scope.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/map.c  | 4 ++--
 tools/perf/util/maps.h | 2 --
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index ae99b52502d5..4d1de363c19a 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -527,7 +527,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 	return ip + map->reloc;
 }
 
-void maps__init(struct maps *maps, struct machine *machine)
+static void maps__init(struct maps *maps, struct machine *machine)
 {
 	maps->entries = RB_ROOT;
 	init_rwsem(&maps->lock);
@@ -616,7 +616,7 @@ static void __maps__purge(struct maps *maps)
 	}
 }
 
-void maps__exit(struct maps *maps)
+static void maps__exit(struct maps *maps)
 {
 	down_write(&maps->lock);
 	__maps__purge(maps);
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index 3dd000ddf925..7e729ff42749 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -60,8 +60,6 @@ static inline struct maps *maps__get(struct maps *maps)
 }
 
 void maps__put(struct maps *maps);
-void maps__init(struct maps *maps, struct machine *machine);
-void maps__exit(struct maps *maps);
 int maps__clone(struct thread *thread, struct maps *parent);
 size_t maps__fprintf(struct maps *maps, FILE *fp);
 
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 08/22] perf maps: Move maps code to own C file
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (6 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 07/22] perf maps: Reduce scope of init and exit Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:27   ` Arnaldo Carvalho de Melo
  2022-02-14 19:58   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip Ian Rogers
                   ` (13 subsequent siblings)
  21 siblings, 2 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

The maps code has its own header; move the corresponding C function
definitions to their own C file. In the process, tidy and minimize the
includes.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/Build  |   1 +
 tools/perf/util/map.c  | 417 +----------------------------------------
 tools/perf/util/map.h  |   2 +
 tools/perf/util/maps.c | 403 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 414 insertions(+), 409 deletions(-)
 create mode 100644 tools/perf/util/maps.c

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 2a403cefcaf2..9a7209a99e16 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -56,6 +56,7 @@ perf-y += debug.o
 perf-y += fncache.o
 perf-y += machine.o
 perf-y += map.o
+perf-y += maps.o
 perf-y += pstack.o
 perf-y += session.o
 perf-y += sample-raw.o
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 4d1de363c19a..2cfe5744b86c 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -1,31 +1,20 @@
 // SPDX-License-Identifier: GPL-2.0
-#include "symbol.h"
-#include <assert.h>
-#include <errno.h>
 #include <inttypes.h>
 #include <limits.h>
+#include <stdio.h>
 #include <stdlib.h>
 #include <string.h>
-#include <stdio.h>
-#include <unistd.h>
+#include <linux/string.h>
+#include <linux/zalloc.h>
 #include <uapi/linux/mman.h> /* To get things like MAP_HUGETLB even on older libc headers */
+#include "debug.h"
 #include "dso.h"
 #include "map.h"
-#include "map_symbol.h"
+#include "namespaces.h"
+#include "srcline.h"
+#include "symbol.h"
 #include "thread.h"
 #include "vdso.h"
-#include "build-id.h"
-#include "debug.h"
-#include "machine.h"
-#include <linux/string.h>
-#include <linux/zalloc.h>
-#include "srcline.h"
-#include "namespaces.h"
-#include "unwind.h"
-#include "srccode.h"
-#include "ui/ui.h"
-
-static void __maps__insert(struct maps *maps, struct map *map);
 
 static inline int is_android_lib(const char *filename)
 {
@@ -527,403 +516,13 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 	return ip + map->reloc;
 }
 
-static void maps__init(struct maps *maps, struct machine *machine)
-{
-	maps->entries = RB_ROOT;
-	init_rwsem(&maps->lock);
-	maps->machine = machine;
-	maps->last_search_by_name = NULL;
-	maps->nr_maps = 0;
-	maps->maps_by_name = NULL;
-	refcount_set(&maps->refcnt, 1);
-}
-
-static void __maps__free_maps_by_name(struct maps *maps)
-{
-	/*
-	 * Free everything to try to do it from the rbtree in the next search
-	 */
-	zfree(&maps->maps_by_name);
-	maps->nr_maps_allocated = 0;
-}
-
-void maps__insert(struct maps *maps, struct map *map)
-{
-	down_write(&maps->lock);
-	__maps__insert(maps, map);
-	++maps->nr_maps;
-
-	if (map->dso && map->dso->kernel) {
-		struct kmap *kmap = map__kmap(map);
-
-		if (kmap)
-			kmap->kmaps = maps;
-		else
-			pr_err("Internal error: kernel dso with non kernel map\n");
-	}
-
-
-	/*
-	 * If we already performed some search by name, then we need to add the just
-	 * inserted map and resort.
-	 */
-	if (maps->maps_by_name) {
-		if (maps->nr_maps > maps->nr_maps_allocated) {
-			int nr_allocate = maps->nr_maps * 2;
-			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
-
-			if (maps_by_name == NULL) {
-				__maps__free_maps_by_name(maps);
-				up_write(&maps->lock);
-				return;
-			}
-
-			maps->maps_by_name = maps_by_name;
-			maps->nr_maps_allocated = nr_allocate;
-		}
-		maps->maps_by_name[maps->nr_maps - 1] = map;
-		__maps__sort_by_name(maps);
-	}
-	up_write(&maps->lock);
-}
-
-static void __maps__remove(struct maps *maps, struct map *map)
-{
-	rb_erase_init(&map->rb_node, &maps->entries);
-	map__put(map);
-}
-
-void maps__remove(struct maps *maps, struct map *map)
-{
-	down_write(&maps->lock);
-	if (maps->last_search_by_name == map)
-		maps->last_search_by_name = NULL;
-
-	__maps__remove(maps, map);
-	--maps->nr_maps;
-	if (maps->maps_by_name)
-		__maps__free_maps_by_name(maps);
-	up_write(&maps->lock);
-}
-
-static void __maps__purge(struct maps *maps)
-{
-	struct map *pos, *next;
-
-	maps__for_each_entry_safe(maps, pos, next) {
-		rb_erase_init(&pos->rb_node,  &maps->entries);
-		map__put(pos);
-	}
-}
-
-static void maps__exit(struct maps *maps)
-{
-	down_write(&maps->lock);
-	__maps__purge(maps);
-	up_write(&maps->lock);
-}
-
-bool maps__empty(struct maps *maps)
-{
-	return !maps__first(maps);
-}
-
-struct maps *maps__new(struct machine *machine)
-{
-	struct maps *maps = zalloc(sizeof(*maps));
-
-	if (maps != NULL)
-		maps__init(maps, machine);
-
-	return maps;
-}
-
-void maps__delete(struct maps *maps)
-{
-	maps__exit(maps);
-	unwind__finish_access(maps);
-	free(maps);
-}
-
-void maps__put(struct maps *maps)
-{
-	if (maps && refcount_dec_and_test(&maps->refcnt))
-		maps__delete(maps);
-}
-
-struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
-{
-	struct map *map = maps__find(maps, addr);
-
-	/* Ensure map is loaded before using map->map_ip */
-	if (map != NULL && map__load(map) >= 0) {
-		if (mapp != NULL)
-			*mapp = map;
-		return map__find_symbol(map, map->map_ip(map, addr));
-	}
-
-	return NULL;
-}
-
-static bool map__contains_symbol(struct map *map, struct symbol *sym)
+bool map__contains_symbol(struct map *map, struct symbol *sym)
 {
 	u64 ip = map->unmap_ip(map, sym->start);
 
 	return ip >= map->start && ip < map->end;
 }
 
-struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
-{
-	struct symbol *sym;
-	struct map *pos;
-
-	down_read(&maps->lock);
-
-	maps__for_each_entry(maps, pos) {
-		sym = map__find_symbol_by_name(pos, name);
-
-		if (sym == NULL)
-			continue;
-		if (!map__contains_symbol(pos, sym)) {
-			sym = NULL;
-			continue;
-		}
-		if (mapp != NULL)
-			*mapp = pos;
-		goto out;
-	}
-
-	sym = NULL;
-out:
-	up_read(&maps->lock);
-	return sym;
-}
-
-int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
-{
-	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
-		if (maps == NULL)
-			return -1;
-		ams->ms.map = maps__find(maps, ams->addr);
-		if (ams->ms.map == NULL)
-			return -1;
-	}
-
-	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
-	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
-
-	return ams->ms.sym ? 0 : -1;
-}
-
-size_t maps__fprintf(struct maps *maps, FILE *fp)
-{
-	size_t printed = 0;
-	struct map *pos;
-
-	down_read(&maps->lock);
-
-	maps__for_each_entry(maps, pos) {
-		printed += fprintf(fp, "Map:");
-		printed += map__fprintf(pos, fp);
-		if (verbose > 2) {
-			printed += dso__fprintf(pos->dso, fp);
-			printed += fprintf(fp, "--\n");
-		}
-	}
-
-	up_read(&maps->lock);
-
-	return printed;
-}
-
-int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
-{
-	struct rb_root *root;
-	struct rb_node *next, *first;
-	int err = 0;
-
-	down_write(&maps->lock);
-
-	root = &maps->entries;
-
-	/*
-	 * Find first map where end > map->start.
-	 * Same as find_vma() in kernel.
-	 */
-	next = root->rb_node;
-	first = NULL;
-	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
-
-		if (pos->end > map->start) {
-			first = next;
-			if (pos->start <= map->start)
-				break;
-			next = next->rb_left;
-		} else
-			next = next->rb_right;
-	}
-
-	next = first;
-	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
-		next = rb_next(&pos->rb_node);
-
-		/*
-		 * Stop if current map starts after map->end.
-		 * Maps are ordered by start: next will not overlap for sure.
-		 */
-		if (pos->start >= map->end)
-			break;
-
-		if (verbose >= 2) {
-
-			if (use_browser) {
-				pr_debug("overlapping maps in %s (disable tui for more info)\n",
-					   map->dso->name);
-			} else {
-				fputs("overlapping maps:\n", fp);
-				map__fprintf(map, fp);
-				map__fprintf(pos, fp);
-			}
-		}
-
-		rb_erase_init(&pos->rb_node, root);
-		/*
-		 * Now check if we need to create new maps for areas not
-		 * overlapped by the new map:
-		 */
-		if (map->start > pos->start) {
-			struct map *before = map__clone(pos);
-
-			if (before == NULL) {
-				err = -ENOMEM;
-				goto put_map;
-			}
-
-			before->end = map->start;
-			__maps__insert(maps, before);
-			if (verbose >= 2 && !use_browser)
-				map__fprintf(before, fp);
-			map__put(before);
-		}
-
-		if (map->end < pos->end) {
-			struct map *after = map__clone(pos);
-
-			if (after == NULL) {
-				err = -ENOMEM;
-				goto put_map;
-			}
-
-			after->start = map->end;
-			after->pgoff += map->end - pos->start;
-			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
-			__maps__insert(maps, after);
-			if (verbose >= 2 && !use_browser)
-				map__fprintf(after, fp);
-			map__put(after);
-		}
-put_map:
-		map__put(pos);
-
-		if (err)
-			goto out;
-	}
-
-	err = 0;
-out:
-	up_write(&maps->lock);
-	return err;
-}
-
-/*
- * XXX This should not really _copy_ te maps, but refcount them.
- */
-int maps__clone(struct thread *thread, struct maps *parent)
-{
-	struct maps *maps = thread->maps;
-	int err;
-	struct map *map;
-
-	down_read(&parent->lock);
-
-	maps__for_each_entry(parent, map) {
-		struct map *new = map__clone(map);
-
-		if (new == NULL) {
-			err = -ENOMEM;
-			goto out_unlock;
-		}
-
-		err = unwind__prepare_access(maps, new, NULL);
-		if (err)
-			goto out_unlock;
-
-		maps__insert(maps, new);
-		map__put(new);
-	}
-
-	err = 0;
-out_unlock:
-	up_read(&parent->lock);
-	return err;
-}
-
-static void __maps__insert(struct maps *maps, struct map *map)
-{
-	struct rb_node **p = &maps->entries.rb_node;
-	struct rb_node *parent = NULL;
-	const u64 ip = map->start;
-	struct map *m;
-
-	while (*p != NULL) {
-		parent = *p;
-		m = rb_entry(parent, struct map, rb_node);
-		if (ip < m->start)
-			p = &(*p)->rb_left;
-		else
-			p = &(*p)->rb_right;
-	}
-
-	rb_link_node(&map->rb_node, parent, p);
-	rb_insert_color(&map->rb_node, &maps->entries);
-	map__get(map);
-}
-
-struct map *maps__find(struct maps *maps, u64 ip)
-{
-	struct rb_node *p;
-	struct map *m;
-
-	down_read(&maps->lock);
-
-	p = maps->entries.rb_node;
-	while (p != NULL) {
-		m = rb_entry(p, struct map, rb_node);
-		if (ip < m->start)
-			p = p->rb_left;
-		else if (ip >= m->end)
-			p = p->rb_right;
-		else
-			goto out;
-	}
-
-	m = NULL;
-out:
-	up_read(&maps->lock);
-	return m;
-}
-
-struct map *maps__first(struct maps *maps)
-{
-	struct rb_node *first = rb_first(&maps->entries);
-
-	if (first)
-		return rb_entry(first, struct map, rb_node);
-	return NULL;
-}
-
 static struct map *__map__next(struct map *map)
 {
 	struct rb_node *next = rb_next(&map->rb_node);
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index d32f5b28c1fb..973dce27b253 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -160,6 +160,8 @@ static inline bool __map__is_kmodule(const struct map *map)
 
 bool map__has_symbols(const struct map *map);
 
+bool map__contains_symbol(struct map *map, struct symbol *sym);
+
 #define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
 
 static inline bool is_entry_trampoline(const char *name)
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
new file mode 100644
index 000000000000..ededabf0a230
--- /dev/null
+++ b/tools/perf/util/maps.c
@@ -0,0 +1,403 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <errno.h>
+#include <stdlib.h>
+#include <linux/zalloc.h>
+#include "debug.h"
+#include "dso.h"
+#include "map.h"
+#include "maps.h"
+#include "thread.h"
+#include "ui/ui.h"
+#include "unwind.h"
+
+static void __maps__insert(struct maps *maps, struct map *map);
+
+void maps__init(struct maps *maps, struct machine *machine)
+{
+	maps->entries = RB_ROOT;
+	init_rwsem(&maps->lock);
+	maps->machine = machine;
+	maps->last_search_by_name = NULL;
+	maps->nr_maps = 0;
+	maps->maps_by_name = NULL;
+	refcount_set(&maps->refcnt, 1);
+}
+
+static void __maps__free_maps_by_name(struct maps *maps)
+{
+	/*
+	 * Free everything to try to do it from the rbtree in the next search
+	 */
+	zfree(&maps->maps_by_name);
+	maps->nr_maps_allocated = 0;
+}
+
+void maps__insert(struct maps *maps, struct map *map)
+{
+	down_write(&maps->lock);
+	__maps__insert(maps, map);
+	++maps->nr_maps;
+
+	if (map->dso && map->dso->kernel) {
+		struct kmap *kmap = map__kmap(map);
+
+		if (kmap)
+			kmap->kmaps = maps;
+		else
+			pr_err("Internal error: kernel dso with non kernel map\n");
+	}
+
+
+	/*
+	 * If we already performed some search by name, then we need to add the just
+	 * inserted map and resort.
+	 */
+	if (maps->maps_by_name) {
+		if (maps->nr_maps > maps->nr_maps_allocated) {
+			int nr_allocate = maps->nr_maps * 2;
+			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
+
+			if (maps_by_name == NULL) {
+				__maps__free_maps_by_name(maps);
+				up_write(&maps->lock);
+				return;
+			}
+
+			maps->maps_by_name = maps_by_name;
+			maps->nr_maps_allocated = nr_allocate;
+		}
+		maps->maps_by_name[maps->nr_maps - 1] = map;
+		__maps__sort_by_name(maps);
+	}
+	up_write(&maps->lock);
+}
+
+static void __maps__remove(struct maps *maps, struct map *map)
+{
+	rb_erase_init(&map->rb_node, &maps->entries);
+	map__put(map);
+}
+
+void maps__remove(struct maps *maps, struct map *map)
+{
+	down_write(&maps->lock);
+	if (maps->last_search_by_name == map)
+		maps->last_search_by_name = NULL;
+
+	__maps__remove(maps, map);
+	--maps->nr_maps;
+	if (maps->maps_by_name)
+		__maps__free_maps_by_name(maps);
+	up_write(&maps->lock);
+}
+
+static void __maps__purge(struct maps *maps)
+{
+	struct map *pos, *next;
+
+	maps__for_each_entry_safe(maps, pos, next) {
+		rb_erase_init(&pos->rb_node,  &maps->entries);
+		map__put(pos);
+	}
+}
+
+void maps__exit(struct maps *maps)
+{
+	down_write(&maps->lock);
+	__maps__purge(maps);
+	up_write(&maps->lock);
+}
+
+bool maps__empty(struct maps *maps)
+{
+	return !maps__first(maps);
+}
+
+struct maps *maps__new(struct machine *machine)
+{
+	struct maps *maps = zalloc(sizeof(*maps));
+
+	if (maps != NULL)
+		maps__init(maps, machine);
+
+	return maps;
+}
+
+void maps__delete(struct maps *maps)
+{
+	maps__exit(maps);
+	unwind__finish_access(maps);
+	free(maps);
+}
+
+void maps__put(struct maps *maps)
+{
+	if (maps && refcount_dec_and_test(&maps->refcnt))
+		maps__delete(maps);
+}
+
+struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
+{
+	struct map *map = maps__find(maps, addr);
+
+	/* Ensure map is loaded before using map->map_ip */
+	if (map != NULL && map__load(map) >= 0) {
+		if (mapp != NULL)
+			*mapp = map;
+		return map__find_symbol(map, map->map_ip(map, addr));
+	}
+
+	return NULL;
+}
+
+struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
+{
+	struct symbol *sym;
+	struct map *pos;
+
+	down_read(&maps->lock);
+
+	maps__for_each_entry(maps, pos) {
+		sym = map__find_symbol_by_name(pos, name);
+
+		if (sym == NULL)
+			continue;
+		if (!map__contains_symbol(pos, sym)) {
+			sym = NULL;
+			continue;
+		}
+		if (mapp != NULL)
+			*mapp = pos;
+		goto out;
+	}
+
+	sym = NULL;
+out:
+	up_read(&maps->lock);
+	return sym;
+}
+
+int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
+{
+	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
+		if (maps == NULL)
+			return -1;
+		ams->ms.map = maps__find(maps, ams->addr);
+		if (ams->ms.map == NULL)
+			return -1;
+	}
+
+	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
+	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
+
+	return ams->ms.sym ? 0 : -1;
+}
+
+size_t maps__fprintf(struct maps *maps, FILE *fp)
+{
+	size_t printed = 0;
+	struct map *pos;
+
+	down_read(&maps->lock);
+
+	maps__for_each_entry(maps, pos) {
+		printed += fprintf(fp, "Map:");
+		printed += map__fprintf(pos, fp);
+		if (verbose > 2) {
+			printed += dso__fprintf(pos->dso, fp);
+			printed += fprintf(fp, "--\n");
+		}
+	}
+
+	up_read(&maps->lock);
+
+	return printed;
+}
+
+int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
+{
+	struct rb_root *root;
+	struct rb_node *next, *first;
+	int err = 0;
+
+	down_write(&maps->lock);
+
+	root = &maps->entries;
+
+	/*
+	 * Find first map where end > map->start.
+	 * Same as find_vma() in kernel.
+	 */
+	next = root->rb_node;
+	first = NULL;
+	while (next) {
+		struct map *pos = rb_entry(next, struct map, rb_node);
+
+		if (pos->end > map->start) {
+			first = next;
+			if (pos->start <= map->start)
+				break;
+			next = next->rb_left;
+		} else
+			next = next->rb_right;
+	}
+
+	next = first;
+	while (next) {
+		struct map *pos = rb_entry(next, struct map, rb_node);
+		next = rb_next(&pos->rb_node);
+
+		/*
+		 * Stop if current map starts after map->end.
+		 * Maps are ordered by start: next will not overlap for sure.
+		 */
+		if (pos->start >= map->end)
+			break;
+
+		if (verbose >= 2) {
+
+			if (use_browser) {
+				pr_debug("overlapping maps in %s (disable tui for more info)\n",
+					   map->dso->name);
+			} else {
+				fputs("overlapping maps:\n", fp);
+				map__fprintf(map, fp);
+				map__fprintf(pos, fp);
+			}
+		}
+
+		rb_erase_init(&pos->rb_node, root);
+		/*
+		 * Now check if we need to create new maps for areas not
+		 * overlapped by the new map:
+		 */
+		if (map->start > pos->start) {
+			struct map *before = map__clone(pos);
+
+			if (before == NULL) {
+				err = -ENOMEM;
+				goto put_map;
+			}
+
+			before->end = map->start;
+			__maps__insert(maps, before);
+			if (verbose >= 2 && !use_browser)
+				map__fprintf(before, fp);
+			map__put(before);
+		}
+
+		if (map->end < pos->end) {
+			struct map *after = map__clone(pos);
+
+			if (after == NULL) {
+				err = -ENOMEM;
+				goto put_map;
+			}
+
+			after->start = map->end;
+			after->pgoff += map->end - pos->start;
+			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
+			__maps__insert(maps, after);
+			if (verbose >= 2 && !use_browser)
+				map__fprintf(after, fp);
+			map__put(after);
+		}
+put_map:
+		map__put(pos);
+
+		if (err)
+			goto out;
+	}
+
+	err = 0;
+out:
+	up_write(&maps->lock);
+	return err;
+}
+
+/*
+ * XXX This should not really _copy_ te maps, but refcount them.
+ */
+int maps__clone(struct thread *thread, struct maps *parent)
+{
+	struct maps *maps = thread->maps;
+	int err;
+	struct map *map;
+
+	down_read(&parent->lock);
+
+	maps__for_each_entry(parent, map) {
+		struct map *new = map__clone(map);
+
+		if (new == NULL) {
+			err = -ENOMEM;
+			goto out_unlock;
+		}
+
+		err = unwind__prepare_access(maps, new, NULL);
+		if (err)
+			goto out_unlock;
+
+		maps__insert(maps, new);
+		map__put(new);
+	}
+
+	err = 0;
+out_unlock:
+	up_read(&parent->lock);
+	return err;
+}
+
+static void __maps__insert(struct maps *maps, struct map *map)
+{
+	struct rb_node **p = &maps->entries.rb_node;
+	struct rb_node *parent = NULL;
+	const u64 ip = map->start;
+	struct map *m;
+
+	while (*p != NULL) {
+		parent = *p;
+		m = rb_entry(parent, struct map, rb_node);
+		if (ip < m->start)
+			p = &(*p)->rb_left;
+		else
+			p = &(*p)->rb_right;
+	}
+
+	rb_link_node(&map->rb_node, parent, p);
+	rb_insert_color(&map->rb_node, &maps->entries);
+	map__get(map);
+}
+
+struct map *maps__find(struct maps *maps, u64 ip)
+{
+	struct rb_node *p;
+	struct map *m;
+
+	down_read(&maps->lock);
+
+	p = maps->entries.rb_node;
+	while (p != NULL) {
+		m = rb_entry(p, struct map, rb_node);
+		if (ip < m->start)
+			p = p->rb_left;
+		else if (ip >= m->end)
+			p = p->rb_right;
+		else
+			goto out;
+	}
+
+	m = NULL;
+out:
+	up_read(&maps->lock);
+	return m;
+}
+
+struct map *maps__first(struct maps *maps)
+{
+	struct rb_node *first = rb_first(&maps->entries);
+
+	if (first)
+		return rb_entry(first, struct map, rb_node);
+	return NULL;
+}
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (7 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 08/22] perf maps: Move maps code to own C file Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:28   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 10/22] perf map: Make map__contains_symbol args const Ian Rogers
                   ` (12 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

These functions purely compute a value from the map and don't need to
modify it, so make their map argument const. Move the functions to the
C file as they are most commonly used via a function pointer.
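
For illustration only (not part of the change), the two translations are
a fixed offset between a runtime address and a dso-relative address; a
freestanding sketch with hypothetical example_* names:

#include <stdint.h>

struct example_map { uint64_t start, pgoff; };

/* ip -> dso rip: strip the load address, add the file offset. */
static uint64_t example_map_ip(const struct example_map *m, uint64_t ip)
{
	return ip - m->start + m->pgoff;
}

/* dso rip -> ip: the inverse of the translation above. */
static uint64_t example_unmap_ip(const struct example_map *m, uint64_t rip)
{
	return rip + m->start - m->pgoff;
}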

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/map.c | 15 +++++++++++++++
 tools/perf/util/map.h | 24 ++++++++----------------
 2 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 2cfe5744b86c..b98fb000eb5c 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -563,3 +563,18 @@ struct maps *map__kmaps(struct map *map)
 	}
 	return kmap->kmaps;
 }
+
+u64 map__map_ip(const struct map *map, u64 ip)
+{
+	return ip - map->start + map->pgoff;
+}
+
+u64 map__unmap_ip(const struct map *map, u64 ip)
+{
+	return ip + map->start - map->pgoff;
+}
+
+u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
+{
+	return ip;
+}
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 973dce27b253..212a9468d5e1 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -29,9 +29,9 @@ struct map {
 	u64			reloc;
 
 	/* ip -> dso rip */
-	u64			(*map_ip)(struct map *, u64);
+	u64			(*map_ip)(const struct map *, u64);
 	/* dso rip -> ip */
-	u64			(*unmap_ip)(struct map *, u64);
+	u64			(*unmap_ip)(const struct map *, u64);
 
 	struct dso		*dso;
 	refcount_t		refcnt;
@@ -44,20 +44,12 @@ struct kmap *__map__kmap(struct map *map);
 struct kmap *map__kmap(struct map *map);
 struct maps *map__kmaps(struct map *map);
 
-static inline u64 map__map_ip(struct map *map, u64 ip)
-{
-	return ip - map->start + map->pgoff;
-}
-
-static inline u64 map__unmap_ip(struct map *map, u64 ip)
-{
-	return ip + map->start - map->pgoff;
-}
-
-static inline u64 identity__map_ip(struct map *map __maybe_unused, u64 ip)
-{
-	return ip;
-}
+/* ip -> dso rip */
+u64 map__map_ip(const struct map *map, u64 ip);
+/* dso rip -> ip */
+u64 map__unmap_ip(const struct map *map, u64 ip);
+/* Returns ip */
+u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
 
 static inline size_t map__size(const struct map *map)
 {
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 10/22] perf map: Make map__contains_symbol args const
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (8 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:28   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 11/22] perf map: Move map list node into symbol Ian Rogers
                   ` (11 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Now that unmap_ip takes a const map, make map__contains_symbol's
arguments const as well.
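
As a rough sketch of what is being constified (hypothetical example_*
names, mirroring the hunk below):

#include <stdbool.h>
#include <stdint.h>

struct example_map { uint64_t start, end, pgoff; };
struct example_symbol { uint64_t start; };

static bool example_contains_symbol(const struct example_map *map,
				    const struct example_symbol *sym)
{
	/* sym->start is a dso rip; translate it back to a runtime address. */
	uint64_t ip = sym->start + map->start - map->pgoff;

	return ip >= map->start && ip < map->end;
}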

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/map.c | 2 +-
 tools/perf/util/map.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index b98fb000eb5c..8bbf9246a3cf 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -516,7 +516,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 	return ip + map->reloc;
 }
 
-bool map__contains_symbol(struct map *map, struct symbol *sym)
+bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 {
 	u64 ip = map->unmap_ip(map, sym->start);
 
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 212a9468d5e1..3dcfe06db6b3 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -152,7 +152,7 @@ static inline bool __map__is_kmodule(const struct map *map)
 
 bool map__has_symbols(const struct map *map);
 
-bool map__contains_symbol(struct map *map, struct symbol *sym);
+bool map__contains_symbol(const struct map *map, const struct symbol *sym);
 
 #define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
 
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 11/22] perf map: Move map list node into symbol
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (9 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 10/22] perf map: Make map__contains_symbol args const Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 12/22] perf maps: Remove rb_node from struct map Ian Rogers
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Using a perf map as a list node is only done in symbol.c. Move the list
node into symbol.c as a struct holding a single pointer to the map. This
makes the reference count behavior more obvious and easier to check.
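
A freestanding sketch of the indirection (the struct mirrors the one
added to symbol.c below; the example_add_map helper is hypothetical):

#include <errno.h>
#include <stdlib.h>
#include <linux/list.h>

struct map;			/* reference counted elsewhere */

/* Mirrors the struct added to symbol.c below. */
struct map_list_node {
	struct list_head node;
	struct map *map;
};

/* Each node is assumed to hold one reference to its map. */
static int example_add_map(struct list_head *head, struct map *map)
{
	struct map_list_node *n = malloc(sizeof(*n));

	if (n == NULL)
		return -ENOMEM;
	n->map = map;
	list_add(&n->node, head);
	return 0;
}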

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/map.h    |  5 +--
 tools/perf/util/symbol.c | 89 ++++++++++++++++++++++++++--------------
 2 files changed, 60 insertions(+), 34 deletions(-)

diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 3dcfe06db6b3..2879cae05ee0 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -16,10 +16,7 @@ struct maps;
 struct machine;
 
 struct map {
-	union {
-		struct rb_node	rb_node;
-		struct list_head node;
-	};
+	struct rb_node		rb_node;
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index a504346feb05..99accae7d3b8 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -48,6 +48,11 @@ static bool symbol__is_idle(const char *name);
 int vmlinux_path__nr_entries;
 char **vmlinux_path;
 
+struct map_list_node {
+	struct list_head node;
+	struct map *map;
+};
+
 struct symbol_conf symbol_conf = {
 	.nanosecs		= false,
 	.use_modules		= true,
@@ -1193,16 +1198,22 @@ struct kcore_mapfn_data {
 static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 {
 	struct kcore_mapfn_data *md = data;
-	struct map *map;
+	struct map_list_node *list_node;
 
-	map = map__new2(start, md->dso);
-	if (map == NULL)
+	list_node = malloc(sizeof(*list_node));
+	if (list_node == NULL)
 		return -ENOMEM;
 
-	map->end = map->start + len;
-	map->pgoff = pgoff;
+	list_node->map = map__new2(start, md->dso);
+	if (list_node->map == NULL) {
+		free(list_node);
+		return -ENOMEM;
+	}
+
+	list_node->map->end = list_node->map->start + len;
+	list_node->map->pgoff = pgoff;
 
-	list_add(&map->node, &md->maps);
+	list_add(&list_node->node, &md->maps);
 
 	return 0;
 }
@@ -1238,12 +1249,19 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 * |new.............| -> |new..|       |new..|
 				 *       |old....|    ->       |old....|
 				 */
-				struct map *m = map__clone(new_map);
+				struct map_list_node *m;
 
+				m = malloc(sizeof(*m));
 				if (!m)
 					return -ENOMEM;
 
-				m->end = old_map->start;
+				m->map = map__clone(new_map);
+				if (!m->map) {
+					free(m);
+					return -ENOMEM;
+				}
+
+				m->map->end = old_map->start;
 				list_add_tail(&m->node, &merged);
 				new_map->pgoff += old_map->end - new_map->start;
 				new_map->start = old_map->end;
@@ -1273,10 +1291,13 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 	}
 
 	while (!list_empty(&merged)) {
-		old_map = list_entry(merged.next, struct map, node);
-		list_del_init(&old_map->node);
-		maps__insert(kmaps, old_map);
-		map__put(old_map);
+		struct map_list_node *old_node;
+
+		old_node = list_entry(merged.next, struct map_list_node, node);
+		list_del_init(&old_node->node);
+		maps__insert(kmaps, old_node->map);
+		map__put(old_node->map);
+		free(old_node);
 	}
 
 	if (new_map) {
@@ -1291,7 +1312,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 {
 	struct maps *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
-	struct map *old_map, *new_map, *replacement_map = NULL, *next;
+	struct map *old_map, *replacement_map = NULL, *next;
 	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
@@ -1351,42 +1372,47 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 	/* Find the kernel map using the '_stext' symbol */
 	if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
-		list_for_each_entry(new_map, &md.maps, node) {
-			if (stext >= new_map->start && stext < new_map->end) {
-				replacement_map = new_map;
+		struct map_list_node *new_node;
+
+		list_for_each_entry(new_node, &md.maps, node) {
+			if (stext >= new_node->map->start && stext < new_node->map->end) {
+				replacement_map = new_node->map;
 				break;
 			}
 		}
 	}
 
 	if (!replacement_map)
-		replacement_map = list_entry(md.maps.next, struct map, node);
+		replacement_map = list_entry(md.maps.next, struct map_list_node, node)->map;
 
 	/* Add new maps */
 	while (!list_empty(&md.maps)) {
-		new_map = list_entry(md.maps.next, struct map, node);
-		list_del_init(&new_map->node);
-		if (new_map == replacement_map) {
-			map->start	= new_map->start;
-			map->end	= new_map->end;
-			map->pgoff	= new_map->pgoff;
-			map->map_ip	= new_map->map_ip;
-			map->unmap_ip	= new_map->unmap_ip;
+		struct map_list_node *new_node;
+
+		new_node = list_entry(md.maps.next, struct map_list_node, node);
+		list_del_init(&new_node->node);
+		if (new_node->map == replacement_map) {
+			map->start	= new_node->map->start;
+			map->end	= new_node->map->end;
+			map->pgoff	= new_node->map->pgoff;
+			map->map_ip	= new_node->map->map_ip;
+			map->unmap_ip	= new_node->map->unmap_ip;
 			/* Ensure maps are correctly ordered */
 			map__get(map);
 			maps__remove(kmaps, map);
 			maps__insert(kmaps, map);
 			map__put(map);
-			map__put(new_map);
+			map__put(new_node->map);
 		} else {
 			/*
 			 * Merge kcore map into existing maps,
 			 * and ensure that current maps (eBPF)
 			 * stay intact.
 			 */
-			if (maps__merge_in(kmaps, new_map))
+			if (maps__merge_in(kmaps, new_node->map))
 				goto out_err;
 		}
+		free(new_node);
 	}
 
 	if (machine__is(machine, "x86_64")) {
@@ -1423,9 +1449,12 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 out_err:
 	while (!list_empty(&md.maps)) {
-		map = list_entry(md.maps.next, struct map, node);
-		list_del_init(&map->node);
-		map__put(map);
+		struct map_list_node *list_node;
+
+		list_node = list_entry(md.maps.next, struct map_list_node, node);
+		list_del_init(&list_node->node);
+		map__put(list_node->map);
+		free(list_node);
 	}
 	close(fd);
 	return -EINVAL;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 12/22] perf maps: Remove rb_node from struct map
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (10 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 11/22] perf map: Move map list node into symbol Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-16 14:08   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo Ian Rogers
                   ` (9 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

struct map is reference counted, so having it also be a node in a
red-black tree complicates the reference counting. Switch to a
map_rb_node, which is a red-black tree node that points at the
reference counted struct map. Each node is responsible for a single
reference count on the map it points at.
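
In outline (a hypothetical condensation of the hunks below, not the
patch itself), the tree node becomes a separate wrapper that owns one
counted reference; example_first_map sketches what maps__first does:

#include <linux/rbtree.h>

struct map;			/* reference counted, defined in map.h */

/* The rb-tree node is now separate from the map and owns one reference. */
struct map_rb_node {
	struct rb_node rb_node;
	struct map *map;
};

/* Iteration yields nodes; callers follow ->map to reach the counted object. */
static struct map *example_first_map(struct rb_root *entries)
{
	struct rb_node *first = rb_first(entries);

	return first ? rb_entry(first, struct map_rb_node, rb_node)->map : NULL;
}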

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/x86/util/event.c    |  13 +-
 tools/perf/builtin-report.c         |   6 +-
 tools/perf/tests/maps.c             |   8 +-
 tools/perf/tests/vmlinux-kallsyms.c |  17 +--
 tools/perf/util/machine.c           |  62 ++++++----
 tools/perf/util/map.c               |  16 ---
 tools/perf/util/map.h               |   1 -
 tools/perf/util/maps.c              | 182 ++++++++++++++++++----------
 tools/perf/util/maps.h              |  17 ++-
 tools/perf/util/probe-event.c       |  18 +--
 tools/perf/util/symbol-elf.c        |   9 +-
 tools/perf/util/symbol.c            |  77 +++++++-----
 tools/perf/util/synthetic-events.c  |  26 ++--
 tools/perf/util/thread.c            |  10 +-
 tools/perf/util/vdso.c              |   7 +-
 15 files changed, 288 insertions(+), 181 deletions(-)

diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index e670f3547581..7b6b0c98fb36 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -17,7 +17,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 				       struct machine *machine)
 {
 	int rc = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 	struct maps *kmaps = machine__kernel_maps(machine);
 	union perf_event *event = zalloc(sizeof(event->mmap) +
 					 machine->id_hdr_size);
@@ -31,11 +31,12 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 	maps__for_each_entry(kmaps, pos) {
 		struct kmap *kmap;
 		size_t size;
+		struct map *map = pos->map;
 
-		if (!__map__is_extra_kernel_map(pos))
+		if (!__map__is_extra_kernel_map(map))
 			continue;
 
-		kmap = map__kmap(pos);
+		kmap = map__kmap(map);
 
 		size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
 		       PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
@@ -56,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 
 		event->mmap.header.size = size;
 
-		event->mmap.start = pos->start;
-		event->mmap.len   = pos->end - pos->start;
-		event->mmap.pgoff = pos->pgoff;
+		event->mmap.start = map->start;
+		event->mmap.len   = map->end - map->start;
+		event->mmap.pgoff = map->pgoff;
 		event->mmap.pid   = machine->pid;
 
 		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 1dd92d8c9279..57611ef725c3 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -799,9 +799,11 @@ static struct task *tasks_list(struct task *task, struct machine *machine)
 static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 {
 	size_t printed = 0;
-	struct map *map;
+	struct map_rb_node *rb_node;
+
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
 
-	maps__for_each_entry(maps, map) {
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
 				   indent, "", map->start, map->end,
 				   map->prot & PROT_READ ? 'r' : '-',
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index 6f53f17f788e..a58274598587 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -15,10 +15,12 @@ struct map_def {
 
 static int check_maps(struct map_def *merged, unsigned int size, struct maps *maps)
 {
-	struct map *map;
+	struct map_rb_node *rb_node;
 	unsigned int i = 0;
 
-	maps__for_each_entry(maps, map) {
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
+
 		if (i > 0)
 			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
 
@@ -74,7 +76,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 
 		map->start = bpf_progs[i].start;
 		map->end   = bpf_progs[i].end;
-		maps__insert(maps, map);
+		TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
 		map__put(map);
 	}
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 84bf5f640065..11a230ee5894 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -117,7 +117,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	int err = -1;
 	struct rb_node *nd;
 	struct symbol *sym;
-	struct map *kallsyms_map, *vmlinux_map, *map;
+	struct map *kallsyms_map, *vmlinux_map;
+	struct map_rb_node *rb_node;
 	struct machine kallsyms, vmlinux;
 	struct maps *maps = machine__kernel_maps(&vmlinux);
 	u64 mem_start, mem_end;
@@ -285,15 +286,15 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	header_printed = false;
 
-	maps__for_each_entry(maps, map) {
-		struct map *
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
 		/*
 		 * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
 		 * the kernel will have the path for the vmlinux file being used,
 		 * so use the short name, less descriptive but the same ("[kernel]" in
 		 * both cases.
 		 */
-		pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
+		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
 								map->dso->short_name :
 								map->dso->name));
 		if (pair) {
@@ -309,8 +310,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	header_printed = false;
 
-	maps__for_each_entry(maps, map) {
-		struct map *pair;
+	maps__for_each_entry(maps, rb_node) {
+		struct map *pair, *map = rb_node->map;
 
 		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
 		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
@@ -339,7 +340,9 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	maps = machine__kernel_maps(&kallsyms);
 
-	maps__for_each_entry(maps, map) {
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
+
 		if (!map->priv) {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in kallsyms:\n");
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 57fbdba66425..fa25174cabf7 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -786,6 +786,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 
 	if (!map) {
 		struct dso *dso = dso__new(event->ksymbol.name);
+		int err;
 
 		if (dso) {
 			dso->kernel = DSO_SPACE__KERNEL;
@@ -805,8 +806,11 @@ static int machine__process_ksymbol_register(struct machine *machine,
 
 		map->start = event->ksymbol.addr;
 		map->end = map->start + event->ksymbol.len;
-		maps__insert(machine__kernel_maps(machine), map);
+		err = maps__insert(machine__kernel_maps(machine), map);
 		map__put(map);
+		if (err)
+			return err;
+
 		dso__set_loaded(dso);
 
 		if (is_bpf_image(event->ksymbol.name)) {
@@ -906,6 +910,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	struct map *map = NULL;
 	struct kmod_path m;
 	struct dso *dso;
+	int err;
 
 	if (kmod_path__parse_name(&m, filename))
 		return NULL;
@@ -918,10 +923,14 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	if (map == NULL)
 		goto out;
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 
 	/* Put the map here because maps__insert already got it */
 	map__put(map);
+
+	/* If maps__insert failed, return NULL. */
+	if (err)
+		map = NULL;
 out:
 	/* put the dso here, corresponding to  machine__findnew_module_dso */
 	dso__put(dso);
@@ -1092,10 +1101,11 @@ int machine__create_extra_kernel_map(struct machine *machine,
 {
 	struct kmap *kmap;
 	struct map *map;
+	int err;
 
 	map = map__new2(xm->start, kernel);
 	if (!map)
-		return -1;
+		return -ENOMEM;
 
 	map->end   = xm->end;
 	map->pgoff = xm->pgoff;
@@ -1104,14 +1114,16 @@ int machine__create_extra_kernel_map(struct machine *machine,
 
 	strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 
-	pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
-		  kmap->name, map->start, map->end);
+	if (!err) {
+		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
+			kmap->name, map->start, map->end);
+	}
 
 	map__put(map);
 
-	return 0;
+	return err;
 }
 
 static u64 find_entry_trampoline(struct dso *dso)
@@ -1152,16 +1164,16 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 	struct maps *kmaps = machine__kernel_maps(machine);
 	int nr_cpus_avail, cpu;
 	bool found = false;
-	struct map *map;
+	struct map_rb_node *rb_node;
 	u64 pgoff;
 
 	/*
 	 * In the vmlinux case, pgoff is a virtual address which must now be
 	 * mapped to a vmlinux offset.
 	 */
-	maps__for_each_entry(kmaps, map) {
+	maps__for_each_entry(kmaps, rb_node) {
+		struct map *dest_map, *map = rb_node->map;
 		struct kmap *kmap = __map__kmap(map);
-		struct map *dest_map;
 
 		if (!kmap || !is_entry_trampoline(kmap->name))
 			continue;
@@ -1216,11 +1228,10 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 
 	machine->vmlinux_map = map__new2(0, kernel);
 	if (machine->vmlinux_map == NULL)
-		return -1;
+		return -ENOMEM;
 
 	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
-	maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
-	return 0;
+	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 }
 
 void machine__destroy_kernel_maps(struct machine *machine)
@@ -1542,25 +1553,26 @@ static void machine__set_kernel_mmap(struct machine *machine,
 		machine->vmlinux_map->end = ~0ULL;
 }
 
-static void machine__update_kernel_mmap(struct machine *machine,
+static int machine__update_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
 	struct map *map = machine__kernel_map(machine);
+	int err;
 
 	map__get(map);
 	maps__remove(machine__kernel_maps(machine), map);
 
 	machine__set_kernel_mmap(machine, start, end);
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 	map__put(map);
+	return err;
 }
 
 int machine__create_kernel_maps(struct machine *machine)
 {
 	struct dso *kernel = machine__get_kernel(machine);
 	const char *name = NULL;
-	struct map *map;
 	u64 start = 0, end = ~0ULL;
 	int ret;
 
@@ -1592,7 +1604,9 @@ int machine__create_kernel_maps(struct machine *machine)
 		 * we have a real start address now, so re-order the kmaps
 		 * assume it's the last in the kmaps
 		 */
-		machine__update_kernel_mmap(machine, start, end);
+		ret = machine__update_kernel_mmap(machine, start, end);
+		if (ret < 0)
+			goto out_put;
 	}
 
 	if (machine__create_extra_kernel_maps(machine, kernel))
@@ -1600,9 +1614,12 @@ int machine__create_kernel_maps(struct machine *machine)
 
 	if (end == ~0ULL) {
 		/* update end address of the kernel map using adjacent module address */
-		map = map__next(machine__kernel_map(machine));
-		if (map)
-			machine__set_kernel_mmap(machine, start, map->start);
+		struct map_rb_node *rb_node = maps__find_node(machine__kernel_maps(machine),
+							machine__kernel_map(machine));
+		struct map_rb_node *next = map_rb_node__next(rb_node);
+
+		if (next)
+			machine__set_kernel_mmap(machine, start, next->map->start);
 	}
 
 out_put:
@@ -1726,7 +1743,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (strstr(kernel->long_name, "vmlinux"))
 			dso__set_short_name(kernel, "[kernel.vmlinux]", false);
 
-		machine__update_kernel_mmap(machine, xm->start, xm->end);
+		if (machine__update_kernel_mmap(machine, xm->start, xm->end) < 0) {
+			dso__put(kernel);
+			goto out_problem;
+		}
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(kernel, bid);
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 8bbf9246a3cf..dfa5f6b7381f 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -111,7 +111,6 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 	map->dso      = dso__get(dso);
 	map->map_ip   = map__map_ip;
 	map->unmap_ip = map__unmap_ip;
-	RB_CLEAR_NODE(&map->rb_node);
 	map->erange_warned = false;
 	refcount_set(&map->refcnt, 1);
 }
@@ -383,7 +382,6 @@ struct map *map__clone(struct map *from)
 	map = memdup(from, size);
 	if (map != NULL) {
 		refcount_set(&map->refcnt, 1);
-		RB_CLEAR_NODE(&map->rb_node);
 		dso__get(map->dso);
 	}
 
@@ -523,20 +521,6 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 	return ip >= map->start && ip < map->end;
 }
 
-static struct map *__map__next(struct map *map)
-{
-	struct rb_node *next = rb_next(&map->rb_node);
-
-	if (next)
-		return rb_entry(next, struct map, rb_node);
-	return NULL;
-}
-
-struct map *map__next(struct map *map)
-{
-	return map ? __map__next(map) : NULL;
-}
-
 struct kmap *__map__kmap(struct map *map)
 {
 	if (!map->dso || !map->dso->kernel)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 2879cae05ee0..d1a6f85fd31d 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -16,7 +16,6 @@ struct maps;
 struct machine;
 
 struct map {
-	struct rb_node		rb_node;
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index ededabf0a230..beb09b9a122c 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -10,9 +10,7 @@
 #include "ui/ui.h"
 #include "unwind.h"
 
-static void __maps__insert(struct maps *maps, struct map *map);
-
-void maps__init(struct maps *maps, struct machine *machine)
+static void maps__init(struct maps *maps, struct machine *machine)
 {
 	maps->entries = RB_ROOT;
 	init_rwsem(&maps->lock);
@@ -32,10 +30,44 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	maps->nr_maps_allocated = 0;
 }
 
-void maps__insert(struct maps *maps, struct map *map)
+static int __maps__insert(struct maps *maps, struct map *map)
+{
+	struct rb_node **p = &maps->entries.rb_node;
+	struct rb_node *parent = NULL;
+	const u64 ip = map->start;
+	struct map_rb_node *m, *new_rb_node;
+
+	new_rb_node = malloc(sizeof(*new_rb_node));
+	if (!new_rb_node)
+		return -ENOMEM;
+
+	RB_CLEAR_NODE(&new_rb_node->rb_node);
+	new_rb_node->map = map;
+
+	while (*p != NULL) {
+		parent = *p;
+		m = rb_entry(parent, struct map_rb_node, rb_node);
+		if (ip < m->map->start)
+			p = &(*p)->rb_left;
+		else
+			p = &(*p)->rb_right;
+	}
+
+	rb_link_node(&new_rb_node->rb_node, parent, p);
+	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
+	map__get(map);
+	return 0;
+}
+
+int maps__insert(struct maps *maps, struct map *map)
 {
+	int err;
+
 	down_write(&maps->lock);
-	__maps__insert(maps, map);
+	err = __maps__insert(maps, map);
+	if (err)
+		goto out;
+
 	++maps->nr_maps;
 
 	if (map->dso && map->dso->kernel) {
@@ -59,8 +91,8 @@ void maps__insert(struct maps *maps, struct map *map)
 
 			if (maps_by_name == NULL) {
 				__maps__free_maps_by_name(maps);
-				up_write(&maps->lock);
-				return;
+				err = -ENOMEM;
+				goto out;
 			}
 
 			maps->maps_by_name = maps_by_name;
@@ -69,22 +101,29 @@ void maps__insert(struct maps *maps, struct map *map)
 		maps->maps_by_name[maps->nr_maps - 1] = map;
 		__maps__sort_by_name(maps);
 	}
+out:
 	up_write(&maps->lock);
+	return err;
 }
 
-static void __maps__remove(struct maps *maps, struct map *map)
+static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
 {
-	rb_erase_init(&map->rb_node, &maps->entries);
-	map__put(map);
+	rb_erase_init(&rb_node->rb_node, &maps->entries);
+	map__put(rb_node->map);
+	free(rb_node);
 }
 
 void maps__remove(struct maps *maps, struct map *map)
 {
+	struct map_rb_node *rb_node;
+
 	down_write(&maps->lock);
 	if (maps->last_search_by_name == map)
 		maps->last_search_by_name = NULL;
 
-	__maps__remove(maps, map);
+	rb_node = maps__find_node(maps, map);
+	assert(rb_node->map == map);
+	__maps__remove(maps, rb_node);
 	--maps->nr_maps;
 	if (maps->maps_by_name)
 		__maps__free_maps_by_name(maps);
@@ -93,15 +132,16 @@ void maps__remove(struct maps *maps, struct map *map)
 
 static void __maps__purge(struct maps *maps)
 {
-	struct map *pos, *next;
+	struct map_rb_node *pos, *next;
 
 	maps__for_each_entry_safe(maps, pos, next) {
 		rb_erase_init(&pos->rb_node,  &maps->entries);
-		map__put(pos);
+		map__put(pos->map);
+		free(pos);
 	}
 }
 
-void maps__exit(struct maps *maps)
+static void maps__exit(struct maps *maps)
 {
 	down_write(&maps->lock);
 	__maps__purge(maps);
@@ -153,21 +193,21 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
 struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
 {
 	struct symbol *sym;
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	down_read(&maps->lock);
 
 	maps__for_each_entry(maps, pos) {
-		sym = map__find_symbol_by_name(pos, name);
+		sym = map__find_symbol_by_name(pos->map, name);
 
 		if (sym == NULL)
 			continue;
-		if (!map__contains_symbol(pos, sym)) {
+		if (!map__contains_symbol(pos->map, sym)) {
 			sym = NULL;
 			continue;
 		}
 		if (mapp != NULL)
-			*mapp = pos;
+			*mapp = pos->map;
 		goto out;
 	}
 
@@ -196,15 +236,15 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 size_t maps__fprintf(struct maps *maps, FILE *fp)
 {
 	size_t printed = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	down_read(&maps->lock);
 
 	maps__for_each_entry(maps, pos) {
 		printed += fprintf(fp, "Map:");
-		printed += map__fprintf(pos, fp);
+		printed += map__fprintf(pos->map, fp);
 		if (verbose > 2) {
-			printed += dso__fprintf(pos->dso, fp);
+			printed += dso__fprintf(pos->map->dso, fp);
 			printed += fprintf(fp, "--\n");
 		}
 	}
@@ -231,11 +271,11 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	next = root->rb_node;
 	first = NULL;
 	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
+		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 
-		if (pos->end > map->start) {
+		if (pos->map->end > map->start) {
 			first = next;
-			if (pos->start <= map->start)
+			if (pos->map->start <= map->start)
 				break;
 			next = next->rb_left;
 		} else
@@ -244,14 +284,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 	next = first;
 	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
+		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 		next = rb_next(&pos->rb_node);
 
 		/*
 		 * Stop if current map starts after map->end.
 		 * Maps are ordered by start: next will not overlap for sure.
 		 */
-		if (pos->start >= map->end)
+		if (pos->map->start >= map->end)
 			break;
 
 		if (verbose >= 2) {
@@ -262,7 +302,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			} else {
 				fputs("overlapping maps:\n", fp);
 				map__fprintf(map, fp);
-				map__fprintf(pos, fp);
+				map__fprintf(pos->map, fp);
 			}
 		}
 
@@ -271,8 +311,8 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Now check if we need to create new maps for areas not
 		 * overlapped by the new map:
 		 */
-		if (map->start > pos->start) {
-			struct map *before = map__clone(pos);
+		if (map->start > pos->map->start) {
+			struct map *before = map__clone(pos->map);
 
 			if (before == NULL) {
 				err = -ENOMEM;
@@ -280,14 +320,17 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			}
 
 			before->end = map->start;
-			__maps__insert(maps, before);
+			err = __maps__insert(maps, before);
+			if (err)
+				goto put_map;
+
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(before, fp);
 			map__put(before);
 		}
 
-		if (map->end < pos->end) {
-			struct map *after = map__clone(pos);
+		if (map->end < pos->map->end) {
+			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
 				err = -ENOMEM;
@@ -295,15 +338,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			}
 
 			after->start = map->end;
-			after->pgoff += map->end - pos->start;
-			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
-			__maps__insert(maps, after);
+			after->pgoff += map->end - pos->map->start;
+			assert(pos->map->map_ip(pos->map, map->end) ==
+				after->map_ip(after, map->end));
+			err = __maps__insert(maps, after);
+			if (err)
+				goto put_map;
+
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(after, fp);
 			map__put(after);
 		}
 put_map:
-		map__put(pos);
+		map__put(pos->map);
 
 		if (err)
 			goto out;
@@ -322,12 +369,12 @@ int maps__clone(struct thread *thread, struct maps *parent)
 {
 	struct maps *maps = thread->maps;
 	int err;
-	struct map *map;
+	struct map_rb_node *rb_node;
 
 	down_read(&parent->lock);
 
-	maps__for_each_entry(parent, map) {
-		struct map *new = map__clone(map);
+	maps__for_each_entry(parent, rb_node) {
+		struct map *new = map__clone(rb_node->map);
 
 		if (new == NULL) {
 			err = -ENOMEM;
@@ -338,7 +385,10 @@ int maps__clone(struct thread *thread, struct maps *parent)
 		if (err)
 			goto out_unlock;
 
-		maps__insert(maps, new);
+		err = maps__insert(maps, new);
+		if (err)
+			goto out_unlock;
+
 		map__put(new);
 	}
 
@@ -348,40 +398,30 @@ int maps__clone(struct thread *thread, struct maps *parent)
 	return err;
 }
 
-static void __maps__insert(struct maps *maps, struct map *map)
+struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
 {
-	struct rb_node **p = &maps->entries.rb_node;
-	struct rb_node *parent = NULL;
-	const u64 ip = map->start;
-	struct map *m;
+	struct map_rb_node *rb_node;
 
-	while (*p != NULL) {
-		parent = *p;
-		m = rb_entry(parent, struct map, rb_node);
-		if (ip < m->start)
-			p = &(*p)->rb_left;
-		else
-			p = &(*p)->rb_right;
+	maps__for_each_entry(maps, rb_node) {
+		if (rb_node->map == map)
+			return rb_node;
 	}
-
-	rb_link_node(&map->rb_node, parent, p);
-	rb_insert_color(&map->rb_node, &maps->entries);
-	map__get(map);
+	return NULL;
 }
 
 struct map *maps__find(struct maps *maps, u64 ip)
 {
 	struct rb_node *p;
-	struct map *m;
+	struct map_rb_node *m;
 
 	down_read(&maps->lock);
 
 	p = maps->entries.rb_node;
 	while (p != NULL) {
-		m = rb_entry(p, struct map, rb_node);
-		if (ip < m->start)
+		m = rb_entry(p, struct map_rb_node, rb_node);
+		if (ip < m->map->start)
 			p = p->rb_left;
-		else if (ip >= m->end)
+		else if (ip >= m->map->end)
 			p = p->rb_right;
 		else
 			goto out;
@@ -390,14 +430,30 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	m = NULL;
 out:
 	up_read(&maps->lock);
-	return m;
+
+	return m ? m->map : NULL;
 }
 
-struct map *maps__first(struct maps *maps)
+struct map_rb_node *maps__first(struct maps *maps)
 {
 	struct rb_node *first = rb_first(&maps->entries);
 
 	if (first)
-		return rb_entry(first, struct map, rb_node);
+		return rb_entry(first, struct map_rb_node, rb_node);
 	return NULL;
 }
+
+struct map_rb_node *map_rb_node__next(struct map_rb_node *node)
+{
+	struct rb_node *next;
+
+	if (!node)
+		return NULL;
+
+	next = rb_next(&node->rb_node);
+
+	if (!next)
+		return NULL;
+
+	return rb_entry(next, struct map_rb_node, rb_node);
+}
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index 7e729ff42749..512746ec0f9a 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -15,15 +15,22 @@ struct map;
 struct maps;
 struct thread;
 
+struct map_rb_node {
+	struct rb_node rb_node;
+	struct map *map;
+};
+
+struct map_rb_node *maps__first(struct maps *maps);
+struct map_rb_node *map_rb_node__next(struct map_rb_node *node);
+struct map_rb_node *maps__find_node(struct maps *maps, struct map *map);
 struct map *maps__find(struct maps *maps, u64 addr);
-struct map *maps__first(struct maps *maps);
-struct map *map__next(struct map *map);
 
 #define maps__for_each_entry(maps, map) \
-	for (map = maps__first(maps); map; map = map__next(map))
+	for (map = maps__first(maps); map; map = map_rb_node__next(map))
 
 #define maps__for_each_entry_safe(maps, map, next) \
-	for (map = maps__first(maps), next = map__next(map); map; map = next, next = map__next(map))
+	for (map = maps__first(maps), next = map_rb_node__next(map); map; \
+	     map = next, next = map_rb_node__next(map))
 
 struct maps {
 	struct rb_root      entries;
@@ -63,7 +70,7 @@ void maps__put(struct maps *maps);
 int maps__clone(struct thread *thread, struct maps *parent);
 size_t maps__fprintf(struct maps *maps, FILE *fp);
 
-void maps__insert(struct maps *maps, struct map *map);
+int maps__insert(struct maps *maps, struct map *map);
 
 void maps__remove(struct maps *maps, struct map *map);
 
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index bc5ab782ace5..f9fbf611f2bf 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -150,23 +150,27 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 static struct map *kernel_get_module_map(const char *module)
 {
 	struct maps *maps = machine__kernel_maps(host_machine);
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	/* A file path -- this is an offline module */
 	if (module && strchr(module, '/'))
 		return dso__new_map(module);
 
 	if (!module) {
-		pos = machine__kernel_map(host_machine);
-		return map__get(pos);
+		struct map *map = machine__kernel_map(host_machine);
+
+		return map__get(map);
 	}
 
 	maps__for_each_entry(maps, pos) {
 		/* short_name is "[module]" */
-		if (strncmp(pos->dso->short_name + 1, module,
-			    pos->dso->short_name_len - 2) == 0 &&
-		    module[pos->dso->short_name_len - 2] == '\0') {
-			return map__get(pos);
+		const char *short_name = pos->map->dso->short_name;
+		u16 short_name_len =  pos->map->dso->short_name_len;
+
+		if (strncmp(short_name + 1, module,
+			    short_name_len - 2) == 0 &&
+		    module[short_name_len - 2] == '\0') {
+			return map__get(pos->map);
 		}
 	}
 	return NULL;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 31cd59a2b66e..4607c9438866 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1000,10 +1000,14 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map->unmap_ip = map__unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
+				int err;
+
 				map__get(map);
 				maps__remove(kmaps, map);
-				maps__insert(kmaps, map);
+				err = maps__insert(kmaps, map);
 				map__put(map);
+				if (err)
+					return err;
 			}
 		}
 
@@ -1056,7 +1060,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
 		}
 		curr_dso->symtab_type = dso->symtab_type;
-		maps__insert(kmaps, curr_map);
+		if (maps__insert(kmaps, curr_map))
+			return -1;
 		/*
 		 * Add it before we drop the reference to curr_map, i.e. while
 		 * we still are sure to have a reference to this DSO via
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 99accae7d3b8..266c65bb8bbb 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -247,13 +247,13 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
 
 void maps__fixup_end(struct maps *maps)
 {
-	struct map *prev = NULL, *curr;
+	struct map_rb_node *prev = NULL, *curr;
 
 	down_write(&maps->lock);
 
 	maps__for_each_entry(maps, curr) {
-		if (prev != NULL && !prev->end)
-			prev->end = curr->start;
+		if (prev != NULL && !prev->map->end)
+			prev->map->end = curr->map->start;
 
 		prev = curr;
 	}
@@ -262,8 +262,8 @@ void maps__fixup_end(struct maps *maps)
 	 * We still haven't the actual symbols, so guess the
 	 * last map final address.
 	 */
-	if (curr && !curr->end)
-		curr->end = ~0ULL;
+	if (curr && !curr->map->end)
+		curr->map->end = ~0ULL;
 
 	up_write(&maps->lock);
 }
@@ -911,7 +911,10 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			}
 
 			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
-			maps__insert(kmaps, curr_map);
+			if (maps__insert(kmaps, curr_map)) {
+				dso__put(ndso);
+				return -1;
+			}
 			++kernel_range;
 		} else if (delta) {
 			/* Kernel was relocated at boot time */
@@ -1099,14 +1102,15 @@ int compare_proc_modules(const char *from, const char *to)
 static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
 {
 	struct rb_root modules = RB_ROOT;
-	struct map *old_map;
+	struct map_rb_node *old_node;
 	int err;
 
 	err = read_proc_modules(filename, &modules);
 	if (err)
 		return err;
 
-	maps__for_each_entry(kmaps, old_map) {
+	maps__for_each_entry(kmaps, old_node) {
+		struct map *old_map = old_node->map;
 		struct module_info *mi;
 
 		if (!__map__is_kmodule(old_map)) {
@@ -1224,10 +1228,13 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
  */
 int maps__merge_in(struct maps *kmaps, struct map *new_map)
 {
-	struct map *old_map;
+	struct map_rb_node *rb_node;
 	LIST_HEAD(merged);
+	int err = 0;
+
+	maps__for_each_entry(kmaps, rb_node) {
+		struct map *old_map = rb_node->map;
 
-	maps__for_each_entry(kmaps, old_map) {
 		/* no overload with this one */
 		if (new_map->end < old_map->start ||
 		    new_map->start >= old_map->end)
@@ -1252,13 +1259,16 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				struct map_list_node *m;
 
 				m = malloc(sizeof(*m));
-				if (!m)
-					return -ENOMEM;
+				if (!m) {
+					err = -ENOMEM;
+					goto out;
+				}
 
 				m->map = map__clone(new_map);
 				if (!m->map) {
 					free(m);
-					return -ENOMEM;
+					err = -ENOMEM;
+					goto out;
 				}
 
 				m->map->end = old_map->start;
@@ -1290,21 +1300,24 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 		}
 	}
 
+out:
 	while (!list_empty(&merged)) {
 		struct map_list_node *old_node;
 
 		old_node = list_entry(merged.next, struct map_list_node, node);
 		list_del_init(&old_node->node);
-		maps__insert(kmaps, old_node->map);
+		if (!err)
+			err = maps__insert(kmaps, old_node->map);
 		map__put(old_node->map);
 		free(old_node);
 	}
 
 	if (new_map) {
-		maps__insert(kmaps, new_map);
+		if (!err)
+			err = maps__insert(kmaps, new_map);
 		map__put(new_map);
 	}
-	return 0;
+	return err;
 }
 
 static int dso__load_kcore(struct dso *dso, struct map *map,
@@ -1312,7 +1325,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 {
 	struct maps *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
-	struct map *old_map, *replacement_map = NULL, *next;
+	struct map *replacement_map = NULL;
+	struct map_rb_node *old_node, *next;
 	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
@@ -1359,7 +1373,9 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	}
 
 	/* Remove old maps */
-	maps__for_each_entry_safe(kmaps, old_map, next) {
+	maps__for_each_entry_safe(kmaps, old_node, next) {
+		struct map *old_map = old_node->map;
+
 		/*
 		 * We need to preserve eBPF maps even if they are
 		 * covered by kcore, because we need to access
@@ -1400,17 +1416,21 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 			/* Ensure maps are correctly ordered */
 			map__get(map);
 			maps__remove(kmaps, map);
-			maps__insert(kmaps, map);
+			err = maps__insert(kmaps, map);
 			map__put(map);
 			map__put(new_node->map);
+			if (err)
+				goto out_err;
 		} else {
 			/*
 			 * Merge kcore map into existing maps,
 			 * and ensure that current maps (eBPF)
 			 * stay intact.
 			 */
-			if (maps__merge_in(kmaps, new_node->map))
+			if (maps__merge_in(kmaps, new_node->map)) {
+				err = -EINVAL;
 				goto out_err;
+			}
 		}
 		free(new_node);
 	}
@@ -1457,7 +1477,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		free(list_node);
 	}
 	close(fd);
-	return -EINVAL;
+	return err;
 }
 
 /*
@@ -1991,8 +2011,9 @@ void __maps__sort_by_name(struct maps *maps)
 
 static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 {
-	struct map *map;
-	struct map **maps_by_name = realloc(maps->maps_by_name, maps->nr_maps * sizeof(map));
+	struct map_rb_node *rb_node;
+	struct map **maps_by_name = realloc(maps->maps_by_name,
+					    maps->nr_maps * sizeof(struct map *));
 	int i = 0;
 
 	if (maps_by_name == NULL)
@@ -2001,8 +2022,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 	maps->maps_by_name = maps_by_name;
 	maps->nr_maps_allocated = maps->nr_maps;
 
-	maps__for_each_entry(maps, map)
-		maps_by_name[i++] = map;
+	maps__for_each_entry(maps, rb_node)
+		maps_by_name[i++] = rb_node->map;
 
 	__maps__sort_by_name(maps);
 	return 0;
@@ -2024,6 +2045,7 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
 
 struct map *maps__find_by_name(struct maps *maps, const char *name)
 {
+	struct map_rb_node *rb_node;
 	struct map *map;
 
 	down_read(&maps->lock);
@@ -2042,12 +2064,13 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 		goto out_unlock;
 
 	/* Fallback to traversing the rbtree... */
-	maps__for_each_entry(maps, map)
+	maps__for_each_entry(maps, rb_node) {
+		map = rb_node->map;
 		if (strcmp(map->dso->short_name, name) == 0) {
 			maps->last_search_by_name = map;
 			goto out_unlock;
 		}
-
+	}
 	map = NULL;
 
 out_unlock:
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index 70f095624a0b..ed2d55d224aa 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -639,7 +639,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 				   struct machine *machine)
 {
 	int rc = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 	struct maps *maps = machine__kernel_maps(machine);
 	union perf_event *event;
 	size_t size = symbol_conf.buildid_mmap2 ?
@@ -662,37 +662,39 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 		event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
 
 	maps__for_each_entry(maps, pos) {
-		if (!__map__is_kmodule(pos))
+		struct map *map = pos->map;
+
+		if (!__map__is_kmodule(map))
 			continue;
 
 		if (symbol_conf.buildid_mmap2) {
-			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
 			event->mmap2.header.type = PERF_RECORD_MMAP2;
 			event->mmap2.header.size = (sizeof(event->mmap2) -
 						(sizeof(event->mmap2.filename) - size));
 			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
 			event->mmap2.header.size += machine->id_hdr_size;
-			event->mmap2.start = pos->start;
-			event->mmap2.len   = pos->end - pos->start;
+			event->mmap2.start = map->start;
+			event->mmap2.len   = map->end - map->start;
 			event->mmap2.pid   = machine->pid;
 
-			memcpy(event->mmap2.filename, pos->dso->long_name,
-			       pos->dso->long_name_len + 1);
+			memcpy(event->mmap2.filename, map->dso->long_name,
+			       map->dso->long_name_len + 1);
 
 			perf_record_mmap2__read_build_id(&event->mmap2, false);
 		} else {
-			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
 			event->mmap.header.type = PERF_RECORD_MMAP;
 			event->mmap.header.size = (sizeof(event->mmap) -
 						(sizeof(event->mmap.filename) - size));
 			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
 			event->mmap.header.size += machine->id_hdr_size;
-			event->mmap.start = pos->start;
-			event->mmap.len   = pos->end - pos->start;
+			event->mmap.start = map->start;
+			event->mmap.len   = map->end - map->start;
 			event->mmap.pid   = machine->pid;
 
-			memcpy(event->mmap.filename, pos->dso->long_name,
-			       pos->dso->long_name_len + 1);
+			memcpy(event->mmap.filename, map->dso->long_name,
+			       map->dso->long_name_len + 1);
 		}
 
 		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 665e5c0618ed..4baf4db8af65 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -338,9 +338,7 @@ int thread__insert_map(struct thread *thread, struct map *map)
 		return ret;
 
 	maps__fixup_overlappings(thread->maps, map, stderr);
-	maps__insert(thread->maps, map);
-
-	return 0;
+	return maps__insert(thread->maps, map);
 }
 
 static int __thread__prepare_access(struct thread *thread)
@@ -348,12 +346,12 @@ static int __thread__prepare_access(struct thread *thread)
 	bool initialized = false;
 	int err = 0;
 	struct maps *maps = thread->maps;
-	struct map *map;
+	struct map_rb_node *rb_node;
 
 	down_read(&maps->lock);
 
-	maps__for_each_entry(maps, map) {
-		err = unwind__prepare_access(thread->maps, map, &initialized);
+	maps__for_each_entry(maps, rb_node) {
+		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
 		if (err || initialized)
 			break;
 	}
diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
index 43beb169631d..835c39efb80d 100644
--- a/tools/perf/util/vdso.c
+++ b/tools/perf/util/vdso.c
@@ -144,10 +144,11 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
 					      struct thread *thread)
 {
 	enum dso_type dso_type = DSO__TYPE_UNKNOWN;
-	struct map *map;
+	struct map_rb_node *rb_node;
+
+	maps__for_each_entry(thread->maps, rb_node) {
+		struct dso *dso = rb_node->map->dso;
 
-	maps__for_each_entry(thread->maps, map) {
-		struct dso *dso = map->dso;
 		if (!dso || dso->long_name[0] != '/')
 			continue;
 		dso_type = dso__type(dso, machine);
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (11 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 12/22] perf maps: Remove rb_node from struct map Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:31   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 14/22] perf maps: Add functions to access maps Ian Rogers
                   ` (8 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Having functions to access nsinfo reduces the number of places where
reference count checking needs to be added.
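
For illustration only, the shape of the change is: readers take a const
pointer and go through a small accessor instead of dereferencing struct
nsinfo directly, so a reference count checking build can later hide an
indirection behind the accessor. A minimal, self-contained sketch of that
pattern, using hypothetical names rather than the real perf API:

#include <stdbool.h>
#include <sys/types.h>

/* Sketch only: fields are reached via accessors so the representation can
 * change (e.g. gain a checking indirection) without touching callers. */
struct ns_info_example {
	bool need_setns;
	pid_t pid;
};

static inline bool ns_info_example__need_setns(const struct ns_info_example *nsi)
{
	return nsi->need_setns;
}

static inline void ns_info_example__clear_need_setns(struct ns_info_example *nsi)
{
	nsi->need_setns = false;
}

Callers then write ns_info_example__clear_need_setns(nnsi) rather than
nnsi->need_setns = false, which is the shape of the builtin-inject.c and
map.c hunks below.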

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-inject.c  |  2 +-
 tools/perf/builtin-probe.c   |  2 +-
 tools/perf/util/build-id.c   |  4 +--
 tools/perf/util/jitdump.c    | 10 ++++----
 tools/perf/util/map.c        |  4 +--
 tools/perf/util/namespaces.c | 50 ++++++++++++++++++++++++++++--------
 tools/perf/util/namespaces.h | 10 ++++++--
 tools/perf/util/symbol.c     |  8 +++---
 8 files changed, 63 insertions(+), 27 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index bede332bf0e2..f7917c390e96 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -354,7 +354,7 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
 		nnsi = nsinfo__copy(nsi);
 		if (nnsi) {
 			nsinfo__put(nsi);
-			nnsi->need_setns = false;
+			nsinfo__clear_need_setns(nnsi);
 			nsi = nnsi;
 		}
 		dso = machine__findnew_vdso(machine, thread);
diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
index c31627af75d4..f62298f5db3b 100644
--- a/tools/perf/builtin-probe.c
+++ b/tools/perf/builtin-probe.c
@@ -217,7 +217,7 @@ static int opt_set_target_ns(const struct option *opt __maybe_unused,
 			return ret;
 		}
 		nsip = nsinfo__new(ns_pid);
-		if (nsip && nsip->need_setns)
+		if (nsip && nsinfo__need_setns(nsip))
 			params.nsi = nsinfo__get(nsip);
 		nsinfo__put(nsip);
 
diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
index e32e8f2ff3bd..7a5821c87f94 100644
--- a/tools/perf/util/build-id.c
+++ b/tools/perf/util/build-id.c
@@ -706,7 +706,7 @@ build_id_cache__add(const char *sbuild_id, const char *name, const char *realnam
 		if (is_kallsyms) {
 			if (copyfile("/proc/kallsyms", filename))
 				goto out_free;
-		} else if (nsi && nsi->need_setns) {
+		} else if (nsi && nsinfo__need_setns(nsi)) {
 			if (copyfile_ns(name, filename, nsi))
 				goto out_free;
 		} else if (link(realname, filename) && errno != EEXIST &&
@@ -730,7 +730,7 @@ build_id_cache__add(const char *sbuild_id, const char *name, const char *realnam
 				goto out_free;
 			}
 			if (access(filename, F_OK)) {
-				if (nsi && nsi->need_setns) {
+				if (nsi && nsinfo__need_setns(nsi)) {
 					if (copyfile_ns(debugfile, filename,
 							nsi))
 						goto out_free;
diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
index 917a9c707371..a23255773c60 100644
--- a/tools/perf/util/jitdump.c
+++ b/tools/perf/util/jitdump.c
@@ -382,15 +382,15 @@ jit_inject_event(struct jit_buf_desc *jd, union perf_event *event)
 
 static pid_t jr_entry_pid(struct jit_buf_desc *jd, union jr_entry *jr)
 {
-	if (jd->nsi && jd->nsi->in_pidns)
-		return jd->nsi->tgid;
+	if (jd->nsi && nsinfo__in_pidns(jd->nsi))
+		return nsinfo__tgid(jd->nsi);
 	return jr->load.pid;
 }
 
 static pid_t jr_entry_tid(struct jit_buf_desc *jd, union jr_entry *jr)
 {
-	if (jd->nsi && jd->nsi->in_pidns)
-		return jd->nsi->pid;
+	if (jd->nsi && nsinfo__in_pidns(jd->nsi))
+		return nsinfo__pid(jd->nsi);
 	return jr->load.tid;
 }
 
@@ -779,7 +779,7 @@ jit_detect(char *mmap_name, pid_t pid, struct nsinfo *nsi)
 	 * pid does not match mmap pid
 	 * pid==0 in system-wide mode (synthesized)
 	 */
-	if (pid && pid2 != nsi->nstgid)
+	if (pid && pid2 != nsinfo__nstgid(nsi))
 		return -1;
 	/*
 	 * validate suffix
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index dfa5f6b7381f..166c84c829f6 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -139,7 +139,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 
 		if ((anon || no_dso) && nsi && (prot & PROT_EXEC)) {
 			snprintf(newfilename, sizeof(newfilename),
-				 "/tmp/perf-%d.map", nsi->pid);
+				 "/tmp/perf-%d.map", nsinfo__pid(nsi));
 			filename = newfilename;
 		}
 
@@ -156,7 +156,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 			nnsi = nsinfo__copy(nsi);
 			if (nnsi) {
 				nsinfo__put(nsi);
-				nnsi->need_setns = false;
+				nsinfo__clear_need_setns(nnsi);
 				nsi = nnsi;
 			}
 			pgoff = 0;
diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
index 48aa3217300b..dd536220cdb9 100644
--- a/tools/perf/util/namespaces.c
+++ b/tools/perf/util/namespaces.c
@@ -76,7 +76,7 @@ static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
 		if (strstr(statln, "Tgid:") != NULL) {
 			nsi->tgid = (pid_t)strtol(strrchr(statln, '\t'),
 						     NULL, 10);
-			nsi->nstgid = nsi->tgid;
+			nsi->nstgid = nsinfo__tgid(nsi);
 		}
 
 		if (strstr(statln, "NStgid:") != NULL) {
@@ -108,7 +108,7 @@ int nsinfo__init(struct nsinfo *nsi)
 	if (snprintf(oldns, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX)
 		return rv;
 
-	if (asprintf(&newns, "/proc/%d/ns/mnt", nsi->pid) == -1)
+	if (asprintf(&newns, "/proc/%d/ns/mnt", nsinfo__pid(nsi)) == -1)
 		return rv;
 
 	if (stat(oldns, &old_stat) < 0)
@@ -129,7 +129,7 @@ int nsinfo__init(struct nsinfo *nsi)
 	/* If we're dealing with a process that is in a different PID namespace,
 	 * attempt to work out the innermost tgid for the process.
 	 */
-	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsi->pid) >= PATH_MAX)
+	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsinfo__pid(nsi)) >= PATH_MAX)
 		goto out;
 
 	rv = nsinfo__get_nspid(nsi, spath);
@@ -166,7 +166,7 @@ struct nsinfo *nsinfo__new(pid_t pid)
 	return nsi;
 }
 
-struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
+struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
 {
 	struct nsinfo *nnsi;
 
@@ -175,11 +175,11 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
 
 	nnsi = calloc(1, sizeof(*nnsi));
 	if (nnsi != NULL) {
-		nnsi->pid = nsi->pid;
-		nnsi->tgid = nsi->tgid;
-		nnsi->nstgid = nsi->nstgid;
-		nnsi->need_setns = nsi->need_setns;
-		nnsi->in_pidns = nsi->in_pidns;
+		nnsi->pid = nsinfo__pid(nsi);
+		nnsi->tgid = nsinfo__tgid(nsi);
+		nnsi->nstgid = nsinfo__nstgid(nsi);
+		nnsi->need_setns = nsinfo__need_setns(nsi);
+		nnsi->in_pidns = nsinfo__in_pidns(nsi);
 		if (nsi->mntns_path) {
 			nnsi->mntns_path = strdup(nsi->mntns_path);
 			if (!nnsi->mntns_path) {
@@ -193,7 +193,7 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
 	return nnsi;
 }
 
-void nsinfo__delete(struct nsinfo *nsi)
+static void nsinfo__delete(struct nsinfo *nsi)
 {
 	zfree(&nsi->mntns_path);
 	free(nsi);
@@ -212,6 +212,36 @@ void nsinfo__put(struct nsinfo *nsi)
 		nsinfo__delete(nsi);
 }
 
+bool nsinfo__need_setns(const struct nsinfo *nsi)
+{
+        return nsi->need_setns;
+}
+
+void nsinfo__clear_need_setns(struct nsinfo *nsi)
+{
+        nsi->need_setns = false;
+}
+
+pid_t nsinfo__tgid(const struct nsinfo  *nsi)
+{
+        return nsi->tgid;
+}
+
+pid_t nsinfo__nstgid(const struct nsinfo  *nsi)
+{
+        return nsi->nstgid;
+}
+
+pid_t nsinfo__pid(const struct nsinfo  *nsi)
+{
+        return nsi->pid;
+}
+
+pid_t nsinfo__in_pidns(const struct nsinfo  *nsi)
+{
+        return nsi->in_pidns;
+}
+
 void nsinfo__mountns_enter(struct nsinfo *nsi,
 				  struct nscookie *nc)
 {
diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
index 9ceea9643507..567829262c42 100644
--- a/tools/perf/util/namespaces.h
+++ b/tools/perf/util/namespaces.h
@@ -47,12 +47,18 @@ struct nscookie {
 
 int nsinfo__init(struct nsinfo *nsi);
 struct nsinfo *nsinfo__new(pid_t pid);
-struct nsinfo *nsinfo__copy(struct nsinfo *nsi);
-void nsinfo__delete(struct nsinfo *nsi);
+struct nsinfo *nsinfo__copy(const struct nsinfo *nsi);
 
 struct nsinfo *nsinfo__get(struct nsinfo *nsi);
 void nsinfo__put(struct nsinfo *nsi);
 
+bool nsinfo__need_setns(const struct nsinfo *nsi);
+void nsinfo__clear_need_setns(struct nsinfo *nsi);
+pid_t nsinfo__tgid(const struct nsinfo  *nsi);
+pid_t nsinfo__nstgid(const struct nsinfo  *nsi);
+pid_t nsinfo__pid(const struct nsinfo  *nsi);
+pid_t nsinfo__in_pidns(const struct nsinfo  *nsi);
+
 void nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc);
 void nsinfo__mountns_exit(struct nscookie *nc);
 
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 266c65bb8bbb..e8045b1c8700 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1784,8 +1784,8 @@ static int dso__find_perf_map(char *filebuf, size_t bufsz,
 
 	nsi = *nsip;
 
-	if (nsi->need_setns) {
-		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsi->nstgid);
+	if (nsinfo__need_setns(nsi)) {
+		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__nstgid(nsi));
 		nsinfo__mountns_enter(nsi, &nsc);
 		rc = access(filebuf, R_OK);
 		nsinfo__mountns_exit(&nsc);
@@ -1797,8 +1797,8 @@ static int dso__find_perf_map(char *filebuf, size_t bufsz,
 	if (nnsi) {
 		nsinfo__put(nsi);
 
-		nnsi->need_setns = false;
-		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nnsi->tgid);
+		nsinfo__clear_need_setns(nnsi);
+		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__tgid(nnsi));
 		*nsip = nnsi;
 		rc = 0;
 	}
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 14/22] perf maps: Add functions to access maps
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (12 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:33   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 15/22] perf map: Use functions to access the variables in map Ian Rogers
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Introduce functions to access struct maps. These functions reduce the
number of places where reference counting is necessary. While tidying the
APIs, do some small const-ification, in particular to
unwind_libunwind_ops.
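
As a rough illustration of why the getters help (hypothetical, simplified
names; not the real struct maps): once callers reach the lock, machine and
map count through helpers, a later change to the representation, such as
the reference count checking indirection this series builds towards, only
has to touch the helpers.

struct machine_example;

struct rw_semaphore_example {
	int placeholder;
};

struct maps_example {
	struct rw_semaphore_example lock;
	struct machine_example *machine;
	unsigned int nr_maps;
};

/* Locking goes through this helper rather than &maps->lock directly. */
static inline struct rw_semaphore_example *maps_example__lock(struct maps_example *maps)
{
	return &maps->lock;
}

static inline struct machine_example *maps_example__machine(struct maps_example *maps)
{
	return maps->machine;
}

static inline unsigned int maps_example__nr_maps(const struct maps_example *maps)
{
	return maps->nr_maps;
}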

Signed-off-by: Ian Rogers <irogers@google.com>
---
 .../scripts/python/Perf-Trace-Util/Context.c  |  7 +-
 tools/perf/tests/code-reading.c               |  2 +-
 tools/perf/ui/browsers/hists.c                |  3 +-
 tools/perf/util/callchain.c                   |  9 +--
 tools/perf/util/db-export.c                   | 12 ++--
 tools/perf/util/dlfilter.c                    |  8 ++-
 tools/perf/util/event.c                       |  4 +-
 tools/perf/util/hist.c                        |  2 +-
 tools/perf/util/machine.c                     |  2 +-
 tools/perf/util/map.c                         | 14 ++--
 tools/perf/util/maps.c                        | 69 +++++++++++--------
 tools/perf/util/maps.h                        | 47 ++++++++++---
 .../scripting-engines/trace-event-python.c    |  2 +-
 tools/perf/util/sort.c                        |  2 +-
 tools/perf/util/symbol-elf.c                  |  2 +-
 tools/perf/util/symbol.c                      | 36 +++++-----
 tools/perf/util/thread-stack.c                |  4 +-
 tools/perf/util/thread.c                      |  4 +-
 tools/perf/util/unwind-libunwind-local.c      | 16 +++--
 tools/perf/util/unwind-libunwind.c            | 30 +++++---
 20 files changed, 170 insertions(+), 105 deletions(-)

diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
index 895f5fc23965..b64013a87c54 100644
--- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
@@ -98,10 +98,11 @@ static PyObject *perf_sample_insn(PyObject *obj, PyObject *args)
 	if (!c)
 		return NULL;
 
-	if (c->sample->ip && !c->sample->insn_len &&
-	    c->al->thread->maps && c->al->thread->maps->machine)
-		script_fetch_insn(c->sample, c->al->thread, c->al->thread->maps->machine);
+	if (c->sample->ip && !c->sample->insn_len && c->al->thread->maps) {
+		struct machine *machine =  maps__machine(c->al->thread->maps);
 
+		script_fetch_insn(c->sample, c->al->thread, machine);
+	}
 	if (!c->sample->insn_len)
 		Py_RETURN_NONE; /* N.B. This is a return statement */
 
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index 5610767b407f..6eafe36a8704 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -268,7 +268,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		len = al.map->end - addr;
 
 	/* Read the object code using perf */
-	ret_len = dso__data_read_offset(al.map->dso, thread->maps->machine,
+	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
 					al.addr, buf1, len);
 	if (ret_len != len) {
 		pr_debug("dso__data_read_offset failed\n");
diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index b72ee6822222..572ff38ceb0f 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -3139,7 +3139,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
 			continue;
 		case 'k':
 			if (browser->selection != NULL)
-				hists_browser__zoom_map(browser, browser->selection->maps->machine->vmlinux_map);
+				hists_browser__zoom_map(browser,
+					      maps__machine(browser->selection->maps)->vmlinux_map);
 			continue;
 		case 'V':
 			verbose = (verbose + 1) % 4;
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 5c27a4b2e7a7..61bb3fb2107a 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -1106,6 +1106,8 @@ int hist_entry__append_callchain(struct hist_entry *he, struct perf_sample *samp
 int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *node,
 			bool hide_unresolved)
 {
+	struct machine *machine = maps__machine(node->ms.maps);
+
 	al->maps = node->ms.maps;
 	al->map = node->ms.map;
 	al->sym = node->ms.sym;
@@ -1118,9 +1120,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 		if (al->map == NULL)
 			goto out;
 	}
-
-	if (al->maps == machine__kernel_maps(al->maps->machine)) {
-		if (machine__is_host(al->maps->machine)) {
+	if (al->maps == machine__kernel_maps(machine)) {
+		if (machine__is_host(machine)) {
 			al->cpumode = PERF_RECORD_MISC_KERNEL;
 			al->level = 'k';
 		} else {
@@ -1128,7 +1129,7 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 			al->level = 'g';
 		}
 	} else {
-		if (machine__is_host(al->maps->machine)) {
+		if (machine__is_host(machine)) {
 			al->cpumode = PERF_RECORD_MISC_USER;
 			al->level = '.';
 		} else if (perf_guest) {
diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
index e0d4f08839fb..1cfcfdd3cf52 100644
--- a/tools/perf/util/db-export.c
+++ b/tools/perf/util/db-export.c
@@ -181,7 +181,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
 	if (al->map) {
 		struct dso *dso = al->map->dso;
 
-		err = db_export__dso(dbe, dso, al->maps->machine);
+		err = db_export__dso(dbe, dso, maps__machine(al->maps));
 		if (err)
 			return err;
 		*dso_db_id = dso->db_id;
@@ -354,19 +354,21 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
 	};
 	struct thread *main_thread;
 	struct comm *comm = NULL;
+	struct machine *machine;
 	int err;
 
 	err = db_export__evsel(dbe, evsel);
 	if (err)
 		return err;
 
-	err = db_export__machine(dbe, al->maps->machine);
+	machine = maps__machine(al->maps);
+	err = db_export__machine(dbe, machine);
 	if (err)
 		return err;
 
-	main_thread = thread__main_thread(al->maps->machine, thread);
+	main_thread = thread__main_thread(machine, thread);
 
-	err = db_export__threads(dbe, thread, main_thread, al->maps->machine, &comm);
+	err = db_export__threads(dbe, thread, main_thread, machine, &comm);
 	if (err)
 		goto out_put;
 
@@ -380,7 +382,7 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
 		goto out_put;
 
 	if (dbe->cpr) {
-		struct call_path *cp = call_path_from_sample(dbe, al->maps->machine,
+		struct call_path *cp = call_path_from_sample(dbe, machine,
 							     thread, sample,
 							     evsel);
 		if (cp) {
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index db964d5a52af..d59462af15f1 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -197,8 +197,12 @@ static const __u8 *dlfilter__insn(void *ctx, __u32 *len)
 		if (!al->thread && machine__resolve(d->machine, al, d->sample) < 0)
 			return NULL;
 
-		if (al->thread->maps && al->thread->maps->machine)
-			script_fetch_insn(d->sample, al->thread, al->thread->maps->machine);
+		if (al->thread->maps) {
+			struct machine *machine = maps__machine(al->thread->maps);
+
+			if (machine)
+				script_fetch_insn(d->sample, al->thread, machine);
+		}
 	}
 
 	if (!d->sample->insn_len)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 6439c888ae38..40a3b1a35613 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -571,7 +571,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 			     struct addr_location *al)
 {
 	struct maps *maps = thread->maps;
-	struct machine *machine = maps->machine;
+	struct machine *machine = maps__machine(maps);
 	bool load_map = false;
 
 	al->maps = maps;
@@ -636,7 +636,7 @@ struct map *thread__find_map_fb(struct thread *thread, u8 cpumode, u64 addr,
 				struct addr_location *al)
 {
 	struct map *map = thread__find_map(thread, cpumode, addr, al);
-	struct machine *machine = thread->maps->machine;
+	struct machine *machine = maps__machine(thread->maps);
 	u8 addr_cpumode = machine__addr_cpumode(machine, cpumode, addr);
 
 	if (map || addr_cpumode == cpumode)
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 0a8033b09e28..78f9fbb925a7 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -237,7 +237,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 
 	if (h->cgroup) {
 		const char *cgrp_name = "unknown";
-		struct cgroup *cgrp = cgroup__find(h->ms.maps->machine->env,
+		struct cgroup *cgrp = cgroup__find(maps__machine(h->ms.maps)->env,
 						   h->cgroup);
 		if (cgrp != NULL)
 			cgrp_name = cgrp->name;
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index fa25174cabf7..88279008e761 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2739,7 +2739,7 @@ static int find_prev_cpumode(struct ip_callchain *chain, struct thread *thread,
 static u64 get_leaf_frame_caller(struct perf_sample *sample,
 		struct thread *thread, int usr_idx)
 {
-	if (machine__normalized_is(thread->maps->machine, "arm64"))
+	if (machine__normalized_is(maps__machine(thread->maps), "arm64"))
 		return get_leaf_frame_caller_aarch64(sample, thread, usr_idx);
 	else
 		return 0;
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 166c84c829f6..57e926ce115f 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -220,7 +220,7 @@ bool __map__is_kernel(const struct map *map)
 {
 	if (!map->dso->kernel)
 		return false;
-	return machine__kernel_map(map__kmaps((struct map *)map)->machine) == map;
+	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
 }
 
 bool __map__is_extra_kernel_map(const struct map *map)
@@ -461,11 +461,15 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 	 * kcore may not either. However the trampoline object code is on the
 	 * main kernel map, so just use that instead.
 	 */
-	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps && kmap->kmaps->machine) {
-		struct map *kernel_map = machine__kernel_map(kmap->kmaps->machine);
+	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps) {
+		struct machine *machine = maps__machine(kmap->kmaps);
 
-		if (kernel_map)
-			map = kernel_map;
+		if (machine) {
+			struct map *kernel_map = machine__kernel_map(machine);
+
+			if (kernel_map)
+				map = kernel_map;
+		}
 	}
 
 	if (!map->dso->adjust_symbols)
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index beb09b9a122c..9fc3e7186b8e 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -13,7 +13,7 @@
 static void maps__init(struct maps *maps, struct machine *machine)
 {
 	maps->entries = RB_ROOT;
-	init_rwsem(&maps->lock);
+	init_rwsem(maps__lock(maps));
 	maps->machine = machine;
 	maps->last_search_by_name = NULL;
 	maps->nr_maps = 0;
@@ -32,7 +32,7 @@ static void __maps__free_maps_by_name(struct maps *maps)
 
 static int __maps__insert(struct maps *maps, struct map *map)
 {
-	struct rb_node **p = &maps->entries.rb_node;
+	struct rb_node **p = &maps__entries(maps)->rb_node;
 	struct rb_node *parent = NULL;
 	const u64 ip = map->start;
 	struct map_rb_node *m, *new_rb_node;
@@ -54,7 +54,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 	}
 
 	rb_link_node(&new_rb_node->rb_node, parent, p);
-	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
+	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
 	map__get(map);
 	return 0;
 }
@@ -63,7 +63,7 @@ int maps__insert(struct maps *maps, struct map *map)
 {
 	int err;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	err = __maps__insert(maps, map);
 	if (err)
 		goto out;
@@ -84,10 +84,11 @@ int maps__insert(struct maps *maps, struct map *map)
 	 * If we already performed some search by name, then we need to add the just
 	 * inserted map and resort.
 	 */
-	if (maps->maps_by_name) {
-		if (maps->nr_maps > maps->nr_maps_allocated) {
-			int nr_allocate = maps->nr_maps * 2;
-			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
+	if (maps__maps_by_name(maps)) {
+		if (maps__nr_maps(maps) > maps->nr_maps_allocated) {
+			int nr_allocate = maps__nr_maps(maps) * 2;
+			struct map **maps_by_name = realloc(maps__maps_by_name(maps),
+							    nr_allocate * sizeof(map));
 
 			if (maps_by_name == NULL) {
 				__maps__free_maps_by_name(maps);
@@ -98,17 +99,17 @@ int maps__insert(struct maps *maps, struct map *map)
 			maps->maps_by_name = maps_by_name;
 			maps->nr_maps_allocated = nr_allocate;
 		}
-		maps->maps_by_name[maps->nr_maps - 1] = map;
+		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map;
 		__maps__sort_by_name(maps);
 	}
 out:
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 	return err;
 }
 
 static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
 {
-	rb_erase_init(&rb_node->rb_node, &maps->entries);
+	rb_erase_init(&rb_node->rb_node, maps__entries(maps));
 	map__put(rb_node->map);
 	free(rb_node);
 }
@@ -117,7 +118,7 @@ void maps__remove(struct maps *maps, struct map *map)
 {
 	struct map_rb_node *rb_node;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	if (maps->last_search_by_name == map)
 		maps->last_search_by_name = NULL;
 
@@ -125,9 +126,9 @@ void maps__remove(struct maps *maps, struct map *map)
 	assert(rb_node->map == map);
 	__maps__remove(maps, rb_node);
 	--maps->nr_maps;
-	if (maps->maps_by_name)
+	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 static void __maps__purge(struct maps *maps)
@@ -135,7 +136,7 @@ static void __maps__purge(struct maps *maps)
 	struct map_rb_node *pos, *next;
 
 	maps__for_each_entry_safe(maps, pos, next) {
-		rb_erase_init(&pos->rb_node,  &maps->entries);
+		rb_erase_init(&pos->rb_node,  maps__entries(maps));
 		map__put(pos->map);
 		free(pos);
 	}
@@ -143,9 +144,9 @@ static void __maps__purge(struct maps *maps)
 
 static void maps__exit(struct maps *maps)
 {
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	__maps__purge(maps);
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 bool maps__empty(struct maps *maps)
@@ -170,6 +171,14 @@ void maps__delete(struct maps *maps)
 	free(maps);
 }
 
+struct maps *maps__get(struct maps *maps)
+{
+	if (maps)
+		refcount_inc(&maps->refcnt);
+
+	return maps;
+}
+
 void maps__put(struct maps *maps)
 {
 	if (maps && refcount_dec_and_test(&maps->refcnt))
@@ -195,7 +204,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 	struct symbol *sym;
 	struct map_rb_node *pos;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, pos) {
 		sym = map__find_symbol_by_name(pos->map, name);
@@ -213,7 +222,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 
 	sym = NULL;
 out:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 	return sym;
 }
 
@@ -238,7 +247,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 	size_t printed = 0;
 	struct map_rb_node *pos;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, pos) {
 		printed += fprintf(fp, "Map:");
@@ -249,7 +258,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 		}
 	}
 
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 
 	return printed;
 }
@@ -260,9 +269,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	struct rb_node *next, *first;
 	int err = 0;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 
-	root = &maps->entries;
+	root = maps__entries(maps);
 
 	/*
 	 * Find first map where end > map->start.
@@ -358,7 +367,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 	err = 0;
 out:
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 	return err;
 }
 
@@ -371,7 +380,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
 	int err;
 	struct map_rb_node *rb_node;
 
-	down_read(&parent->lock);
+	down_read(maps__lock(parent));
 
 	maps__for_each_entry(parent, rb_node) {
 		struct map *new = map__clone(rb_node->map);
@@ -394,7 +403,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
 
 	err = 0;
 out_unlock:
-	up_read(&parent->lock);
+	up_read(maps__lock(parent));
 	return err;
 }
 
@@ -414,9 +423,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	struct rb_node *p;
 	struct map_rb_node *m;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
-	p = maps->entries.rb_node;
+	p = maps__entries(maps)->rb_node;
 	while (p != NULL) {
 		m = rb_entry(p, struct map_rb_node, rb_node);
 		if (ip < m->map->start)
@@ -429,14 +438,14 @@ struct map *maps__find(struct maps *maps, u64 ip)
 
 	m = NULL;
 out:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 
 	return m ? m->map : NULL;
 }
 
 struct map_rb_node *maps__first(struct maps *maps)
 {
-	struct rb_node *first = rb_first(&maps->entries);
+	struct rb_node *first = rb_first(maps__entries(maps));
 
 	if (first)
 		return rb_entry(first, struct map_rb_node, rb_node);
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index 512746ec0f9a..bde3390c7096 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -43,7 +43,7 @@ struct maps {
 	unsigned int	 nr_maps_allocated;
 #ifdef HAVE_LIBUNWIND_SUPPORT
 	void				*addr_space;
-	struct unwind_libunwind_ops	*unwind_libunwind_ops;
+	const struct unwind_libunwind_ops *unwind_libunwind_ops;
 #endif
 };
 
@@ -58,20 +58,51 @@ struct kmap {
 struct maps *maps__new(struct machine *machine);
 void maps__delete(struct maps *maps);
 bool maps__empty(struct maps *maps);
+int maps__clone(struct thread *thread, struct maps *parent);
+
+struct maps *maps__get(struct maps *maps);
+void maps__put(struct maps *maps);
 
-static inline struct maps *maps__get(struct maps *maps)
+static inline struct rb_root *maps__entries(struct maps *maps)
 {
-	if (maps)
-		refcount_inc(&maps->refcnt);
-	return maps;
+	return &maps->entries;
 }
 
-void maps__put(struct maps *maps);
-int maps__clone(struct thread *thread, struct maps *parent);
+static inline struct machine *maps__machine(struct maps *maps)
+{
+	return maps->machine;
+}
+
+static inline struct rw_semaphore *maps__lock(struct maps *maps)
+{
+	return &maps->lock;
+}
+
+static inline struct map **maps__maps_by_name(struct maps *maps)
+{
+	return maps->maps_by_name;
+}
+
+static inline unsigned int maps__nr_maps(const struct maps *maps)
+{
+	return maps->nr_maps;
+}
+
+#ifdef HAVE_LIBUNWIND_SUPPORT
+static inline void *maps__addr_space(struct maps *maps)
+{
+	return maps->addr_space;
+}
+
+static inline const struct unwind_libunwind_ops *maps__unwind_libunwind_ops(const struct maps *maps)
+{
+	return maps->unwind_libunwind_ops;
+}
+#endif
+
 size_t maps__fprintf(struct maps *maps, FILE *fp);
 
 int maps__insert(struct maps *maps, struct map *map);
-
 void maps__remove(struct maps *maps, struct map *map);
 
 struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp);
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index e752e1f4a5f0..0290dc3a6258 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -1220,7 +1220,7 @@ static void python_export_sample_table(struct db_export *dbe,
 
 	tuple_set_d64(t, 0, es->db_id);
 	tuple_set_d64(t, 1, es->evsel->db_id);
-	tuple_set_d64(t, 2, es->al->maps->machine->db_id);
+	tuple_set_d64(t, 2, maps__machine(es->al->maps)->db_id);
 	tuple_set_d64(t, 3, es->al->thread->db_id);
 	tuple_set_d64(t, 4, es->comm_db_id);
 	tuple_set_d64(t, 5, es->dso_db_id);
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index cfba8c337783..25686d67ee6f 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -661,7 +661,7 @@ static int hist_entry__cgroup_snprintf(struct hist_entry *he,
 	const char *cgrp_name = "N/A";
 
 	if (he->cgroup) {
-		struct cgroup *cgrp = cgroup__find(he->ms.maps->machine->env,
+		struct cgroup *cgrp = cgroup__find(maps__machine(he->ms.maps)->env,
 						   he->cgroup);
 		if (cgrp != NULL)
 			cgrp_name = cgrp->name;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 4607c9438866..3ca9a0968345 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1067,7 +1067,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 * we still are sure to have a reference to this DSO via
 		 * *curr_map->dso.
 		 */
-		dsos__add(&kmaps->machine->dsos, curr_dso);
+		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
 		/* kmaps already got it */
 		map__put(curr_map);
 		dso__set_loaded(curr_dso);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index e8045b1c8700..9b51e669a722 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -249,7 +249,7 @@ void maps__fixup_end(struct maps *maps)
 {
 	struct map_rb_node *prev = NULL, *curr;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 
 	maps__for_each_entry(maps, curr) {
 		if (prev != NULL && !prev->map->end)
@@ -265,7 +265,7 @@ void maps__fixup_end(struct maps *maps)
 	if (curr && !curr->map->end)
 		curr->map->end = ~0ULL;
 
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 struct symbol *symbol__new(u64 start, u64 len, u8 binding, u8 type, const char *name)
@@ -813,7 +813,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 	if (!kmaps)
 		return -1;
 
-	machine = kmaps->machine;
+	machine = maps__machine(kmaps);
 
 	x86_64 = machine__is(machine, "x86_64");
 
@@ -937,7 +937,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 
 	if (curr_map != initial_map &&
 	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
-	    machine__is_default_guest(kmaps->machine)) {
+	    machine__is_default_guest(maps__machine(kmaps))) {
 		dso__set_loaded(curr_map->dso);
 	}
 
@@ -1336,7 +1336,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	if (!kmaps)
 		return -EINVAL;
 
-	machine = kmaps->machine;
+	machine = maps__machine(kmaps);
 
 	/* This function requires that the map is the kernel map */
 	if (!__map__is_kernel(map))
@@ -1851,7 +1851,7 @@ int dso__load(struct dso *dso, struct map *map)
 		else if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
 			ret = dso__load_guest_kernel_sym(dso, map);
 
-		machine = map__kmaps(map)->machine;
+		machine = maps__machine(map__kmaps(map));
 		if (machine__is(machine, "x86_64"))
 			machine__map_x86_64_entry_trampolines(machine, dso);
 		goto out;
@@ -2006,21 +2006,21 @@ static int map__strcmp_name(const void *name, const void *b)
 
 void __maps__sort_by_name(struct maps *maps)
 {
-	qsort(maps->maps_by_name, maps->nr_maps, sizeof(struct map *), map__strcmp);
+	qsort(maps__maps_by_name(maps), maps__nr_maps(maps), sizeof(struct map *), map__strcmp);
 }
 
 static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 {
 	struct map_rb_node *rb_node;
-	struct map **maps_by_name = realloc(maps->maps_by_name,
-					    maps->nr_maps * sizeof(struct map *));
+	struct map **maps_by_name = realloc(maps__maps_by_name(maps),
+					    maps__nr_maps(maps) * sizeof(struct map *));
 	int i = 0;
 
 	if (maps_by_name == NULL)
 		return -1;
 
 	maps->maps_by_name = maps_by_name;
-	maps->nr_maps_allocated = maps->nr_maps;
+	maps->nr_maps_allocated = maps__nr_maps(maps);
 
 	maps__for_each_entry(maps, rb_node)
 		maps_by_name[i++] = rb_node->map;
@@ -2033,11 +2033,12 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
 {
 	struct map **mapp;
 
-	if (maps->maps_by_name == NULL &&
+	if (maps__maps_by_name(maps) == NULL &&
 	    map__groups__sort_by_name_from_rbtree(maps))
 		return NULL;
 
-	mapp = bsearch(name, maps->maps_by_name, maps->nr_maps, sizeof(*mapp), map__strcmp_name);
+	mapp = bsearch(name, maps__maps_by_name(maps), maps__nr_maps(maps),
+		       sizeof(*mapp), map__strcmp_name);
 	if (mapp)
 		return *mapp;
 	return NULL;
@@ -2048,9 +2049,10 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	struct map_rb_node *rb_node;
 	struct map *map;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
-	if (maps->last_search_by_name && strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
+	if (maps->last_search_by_name &&
+	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
 		map = maps->last_search_by_name;
 		goto out_unlock;
 	}
@@ -2060,7 +2062,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	 * made.
 	 */
 	map = __maps__find_by_name(maps, name);
-	if (map || maps->maps_by_name != NULL)
+	if (map || maps__maps_by_name(maps) != NULL)
 		goto out_unlock;
 
 	/* Fallback to traversing the rbtree... */
@@ -2074,7 +2076,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	map = NULL;
 
 out_unlock:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 	return map;
 }
 
@@ -2326,7 +2328,7 @@ static int dso__load_guest_kernel_sym(struct dso *dso, struct map *map)
 {
 	int err;
 	const char *kallsyms_filename = NULL;
-	struct machine *machine = map__kmaps(map)->machine;
+	struct machine *machine = maps__machine(map__kmaps(map));
 	char path[PATH_MAX];
 
 	if (machine__is_default_guest(machine)) {
diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
index 1b992bbba4e8..4b85c1728012 100644
--- a/tools/perf/util/thread-stack.c
+++ b/tools/perf/util/thread-stack.c
@@ -155,8 +155,8 @@ static int thread_stack__init(struct thread_stack *ts, struct thread *thread,
 		ts->br_stack_sz = br_stack_sz;
 	}
 
-	if (thread->maps && thread->maps->machine) {
-		struct machine *machine = thread->maps->machine;
+	if (thread->maps && maps__machine(thread->maps)) {
+		struct machine *machine = maps__machine(thread->maps);
 		const char *arch = perf_env__arch(machine->env);
 
 		ts->kernel_start = machine__kernel_start(machine);
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 4baf4db8af65..c2256777b813 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -348,7 +348,7 @@ static int __thread__prepare_access(struct thread *thread)
 	struct maps *maps = thread->maps;
 	struct map_rb_node *rb_node;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, rb_node) {
 		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
@@ -356,7 +356,7 @@ static int __thread__prepare_access(struct thread *thread)
 			break;
 	}
 
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 
 	return err;
 }
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 71a353349181..7e6c59811292 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -618,24 +618,26 @@ static unw_accessors_t accessors = {
 
 static int _unwind__prepare_access(struct maps *maps)
 {
-	maps->addr_space = unw_create_addr_space(&accessors, 0);
-	if (!maps->addr_space) {
+	void *addr_space = unw_create_addr_space(&accessors, 0);
+
+	maps->addr_space = addr_space;
+	if (!addr_space) {
 		pr_err("unwind: Can't create unwind address space.\n");
 		return -ENOMEM;
 	}
 
-	unw_set_caching_policy(maps->addr_space, UNW_CACHE_GLOBAL);
+	unw_set_caching_policy(addr_space, UNW_CACHE_GLOBAL);
 	return 0;
 }
 
 static void _unwind__flush_access(struct maps *maps)
 {
-	unw_flush_cache(maps->addr_space, 0, 0);
+	unw_flush_cache(maps__addr_space(maps), 0, 0);
 }
 
 static void _unwind__finish_access(struct maps *maps)
 {
-	unw_destroy_addr_space(maps->addr_space);
+	unw_destroy_addr_space(maps__addr_space(maps));
 }
 
 static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
@@ -660,7 +662,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
 	 */
 	if (max_stack - 1 > 0) {
 		WARN_ONCE(!ui->thread, "WARNING: ui->thread is NULL");
-		addr_space = ui->thread->maps->addr_space;
+		addr_space = maps__addr_space(ui->thread->maps);
 
 		if (addr_space == NULL)
 			return -1;
@@ -709,7 +711,7 @@ static int _unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 	struct unwind_info ui = {
 		.sample       = data,
 		.thread       = thread,
-		.machine      = thread->maps->machine,
+		.machine      = maps__machine(thread->maps),
 	};
 
 	if (!data->user_regs.regs)
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index e89a5479b361..7b797ffadd19 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -22,12 +22,13 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 	const char *arch;
 	enum dso_type dso_type;
 	struct unwind_libunwind_ops *ops = local_unwind_libunwind_ops;
+	struct machine *machine;
 	int err;
 
 	if (!dwarf_callchain_users)
 		return 0;
 
-	if (maps->addr_space) {
+	if (maps__addr_space(maps)) {
 		pr_debug("unwind: thread map already set, dso=%s\n",
 			 map->dso->name);
 		if (initialized)
@@ -35,15 +36,16 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 		return 0;
 	}
 
+	machine = maps__machine(maps);
 	/* env->arch is NULL for live-mode (i.e. perf top) */
-	if (!maps->machine->env || !maps->machine->env->arch)
+	if (!machine->env || !machine->env->arch)
 		goto out_register;
 
-	dso_type = dso__type(map->dso, maps->machine);
+	dso_type = dso__type(map->dso, machine);
 	if (dso_type == DSO__TYPE_UNKNOWN)
 		return 0;
 
-	arch = perf_env__arch(maps->machine->env);
+	arch = perf_env__arch(machine->env);
 
 	if (!strcmp(arch, "x86")) {
 		if (dso_type != DSO__TYPE_64BIT)
@@ -60,7 +62,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 out_register:
 	unwind__register_ops(maps, ops);
 
-	err = maps->unwind_libunwind_ops->prepare_access(maps);
+	err = maps__unwind_libunwind_ops(maps)->prepare_access(maps);
 	if (initialized)
 		*initialized = err ? false : true;
 	return err;
@@ -68,21 +70,27 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 
 void unwind__flush_access(struct maps *maps)
 {
-	if (maps->unwind_libunwind_ops)
-		maps->unwind_libunwind_ops->flush_access(maps);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
+
+	if (ops)
+		ops->flush_access(maps);
 }
 
 void unwind__finish_access(struct maps *maps)
 {
-	if (maps->unwind_libunwind_ops)
-		maps->unwind_libunwind_ops->finish_access(maps);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
+
+	if (ops)
+		ops->finish_access(maps);
 }
 
 int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 			 struct thread *thread,
 			 struct perf_sample *data, int max_stack)
 {
-	if (thread->maps->unwind_libunwind_ops)
-		return thread->maps->unwind_libunwind_ops->get_entries(cb, arg, thread, data, max_stack);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(thread->maps);
+
+	if (ops)
+		return ops->get_entries(cb, arg, thread, data, max_stack);
 	return 0;
 }
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 15/22] perf map: Use functions to access the variables in map
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (13 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 14/22] perf maps: Add functions to access maps Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 17:35   ` Arnaldo Carvalho de Melo
  2022-02-11 17:36   ` Arnaldo Carvalho de Melo
  2022-02-11 10:34 ` [PATCH v3 16/22] perf test: Add extra diagnostics to maps test Ian Rogers
                   ` (6 subsequent siblings)
  21 siblings, 2 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

The use of functions enables easier reference count checking. Some minor
changes to map_ip and unmap_ip make the naming a little clearer.
__maps__insert is modified to return the inserted map, which simplifies
the reference count checking wrapper. maps__fixup_overlappings has some
minor tweaks so that puts occur on error paths. dso__process_kernel_symbol
has the unused curr_mapp argument removed.
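
A loose sketch of the __maps__insert idea (a hypothetical list-based
container with made-up names, not the real rb-tree code): returning the
inserted element rather than just an error code gives one point where a
reference count checking build could hand back a checked handle, while
ordinary callers simply test for NULL.

#include <stdlib.h>

struct map_example {
	unsigned long start, end;
};

struct map_node_example {
	struct map_node_example *next;
	struct map_example *map;
};

static struct map_example *insert_example(struct map_node_example **head,
					  struct map_example *map)
{
	struct map_node_example *node = malloc(sizeof(*node));

	if (!node)
		return NULL;	/* error path: caller keeps, and may put, its reference */

	node->map = map;
	node->next = *head;
	*head = node;
	return map;		/* success: hand back the element the container now holds */
}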

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/s390/annotate/instructions.c  |   4 +-
 tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
 tools/perf/arch/x86/util/event.c              |   6 +-
 tools/perf/builtin-annotate.c                 |   8 +-
 tools/perf/builtin-inject.c                   |   8 +-
 tools/perf/builtin-kallsyms.c                 |   6 +-
 tools/perf/builtin-kmem.c                     |   4 +-
 tools/perf/builtin-mem.c                      |   4 +-
 tools/perf/builtin-report.c                   |  20 +--
 tools/perf/builtin-script.c                   |  26 ++--
 tools/perf/builtin-top.c                      |  12 +-
 tools/perf/builtin-trace.c                    |   2 +-
 .../scripts/python/Perf-Trace-Util/Context.c  |   7 +-
 tools/perf/tests/code-reading.c               |  32 ++---
 tools/perf/tests/hists_common.c               |   4 +-
 tools/perf/tests/vmlinux-kallsyms.c           |  35 +++---
 tools/perf/ui/browsers/annotate.c             |   7 +-
 tools/perf/ui/browsers/hists.c                |  18 +--
 tools/perf/ui/browsers/map.c                  |   4 +-
 tools/perf/util/annotate.c                    |  38 +++---
 tools/perf/util/auxtrace.c                    |   2 +-
 tools/perf/util/block-info.c                  |   4 +-
 tools/perf/util/bpf-event.c                   |   8 +-
 tools/perf/util/build-id.c                    |   2 +-
 tools/perf/util/callchain.c                   |  10 +-
 tools/perf/util/data-convert-json.c           |   4 +-
 tools/perf/util/db-export.c                   |   4 +-
 tools/perf/util/dlfilter.c                    |  21 ++--
 tools/perf/util/dso.c                         |   4 +-
 tools/perf/util/event.c                       |  14 +--
 tools/perf/util/evsel_fprintf.c               |   4 +-
 tools/perf/util/hist.c                        |  10 +-
 tools/perf/util/intel-pt.c                    |  48 +++----
 tools/perf/util/machine.c                     |  84 +++++++------
 tools/perf/util/map.c                         | 117 +++++++++---------
 tools/perf/util/map.h                         |  58 ++++++++-
 tools/perf/util/maps.c                        |  83 +++++++------
 tools/perf/util/probe-event.c                 |  44 +++----
 .../util/scripting-engines/trace-event-perl.c |   9 +-
 .../scripting-engines/trace-event-python.c    |  12 +-
 tools/perf/util/sort.c                        |  46 +++----
 tools/perf/util/symbol-elf.c                  |  39 +++---
 tools/perf/util/symbol.c                      |  96 +++++++-------
 tools/perf/util/symbol_fprintf.c              |   2 +-
 tools/perf/util/synthetic-events.c            |  28 ++---
 tools/perf/util/thread.c                      |  26 ++--
 tools/perf/util/unwind-libunwind-local.c      |  34 ++---
 tools/perf/util/unwind-libunwind.c            |   4 +-
 tools/perf/util/vdso.c                        |   2 +-
 49 files changed, 577 insertions(+), 489 deletions(-)

diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
index 0e136630659e..740f1a63bc04 100644
--- a/tools/perf/arch/s390/annotate/instructions.c
+++ b/tools/perf/arch/s390/annotate/instructions.c
@@ -39,7 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      map->map_ip(target.ms.map, target.addr)
+			     ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
index a54dea7c112f..497593be80f2 100644
--- a/tools/perf/arch/x86/tests/dwarf-unwind.c
+++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
@@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
 		return -1;
 	}
 
-	stack_size = map->end - sp;
+	stack_size = map__end(map) - sp;
 	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
 
 	memcpy(buf, (void *) sp, stack_size);
diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index 7b6b0c98fb36..c790c682b76e 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -57,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 
 		event->mmap.header.size = size;
 
-		event->mmap.start = map->start;
-		event->mmap.len   = map->end - map->start;
-		event->mmap.pgoff = map->pgoff;
+		event->mmap.start = map__start(map);
+		event->mmap.len   = map__size(map);
+		event->mmap.pgoff = map__pgoff(map);
 		event->mmap.pid   = machine->pid;
 
 		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 490bb9b8cf17..49d3ae36fd89 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -199,7 +199,7 @@ static int process_branch_callback(struct evsel *evsel,
 		return 0;
 
 	if (a.map != NULL)
-		a.map->dso->hit = 1;
+		map__dso(a.map)->hit = 1;
 
 	hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
 
@@ -231,9 +231,9 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
 		 */
 		if (al->sym != NULL) {
 			rb_erase_cached(&al->sym->rb_node,
-				 &al->map->dso->symbols);
+					&map__dso(al->map)->symbols);
 			symbol__delete(al->sym);
-			dso__reset_find_symbol_cache(al->map->dso);
+			dso__reset_find_symbol_cache(map__dso(al->map));
 		}
 		return 0;
 	}
@@ -315,7 +315,7 @@ static void hists__find_annotations(struct hists *hists,
 		struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
 		struct annotation *notes;
 
-		if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
+		if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
 			goto find_next;
 
 		if (ann->sym_hist_filter &&
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index f7917c390e96..92a9dbc3d4cd 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -600,10 +600,10 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
 	}
 
 	if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
-		if (!al.map->dso->hit) {
-			al.map->dso->hit = 1;
-			dso__inject_build_id(al.map->dso, tool, machine,
-					     sample->cpumode, al.map->flags);
+		if (!map__dso(al.map)->hit) {
+			map__dso(al.map)->hit = 1;
+			dso__inject_build_id(map__dso(al.map), tool, machine,
+					     sample->cpumode, map__flags(al.map));
 		}
 	}
 
diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
index c08ee81529e8..d940b60ce812 100644
--- a/tools/perf/builtin-kallsyms.c
+++ b/tools/perf/builtin-kallsyms.c
@@ -36,8 +36,10 @@ static int __cmd_kallsyms(int argc, const char **argv)
 		}
 
 		printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
-			symbol->name, map->dso->short_name, map->dso->long_name,
-			map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
+			symbol->name, map__dso(map)->short_name,
+			map__dso(map)->long_name,
+			map__unmap_ip(map, symbol->start),
+			map__unmap_ip(map, symbol->end),
 			symbol->start, symbol->end);
 	}
 
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 99d7ff9a8eff..d87d9c341a20 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -410,7 +410,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
 		if (!caller) {
 			/* found */
 			if (node->ms.map)
-				addr = map__unmap_ip(node->ms.map, node->ip);
+				addr = map__dso_unmap_ip(node->ms.map, node->ip);
 			else
 				addr = node->ip;
 
@@ -1012,7 +1012,7 @@ static void __print_slab_result(struct rb_root *root,
 
 		if (sym != NULL)
 			snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
-				 addr - map->unmap_ip(map, sym->start));
+				 addr - map__unmap_ip(map, sym->start));
 		else
 			snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
 		printf(" %-34s |", buf);
diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
index fcf65a59bea2..d18083f57303 100644
--- a/tools/perf/builtin-mem.c
+++ b/tools/perf/builtin-mem.c
@@ -200,7 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
 		goto out_put;
 
 	if (al.map != NULL)
-		al.map->dso->hit = 1;
+		map__dso(al.map)->hit = 1;
 
 	field_sep = symbol_conf.field_sep;
 	if (field_sep) {
@@ -241,7 +241,7 @@ dump_raw_samples(struct perf_tool *tool,
 		symbol_conf.field_sep,
 		sample->data_src,
 		symbol_conf.field_sep,
-		al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
+		al.map && map__dso(al.map) ? map__dso(al.map)->long_name : "???",
 		al.sym ? al.sym->name : "???");
 out_put:
 	addr_location__put(&al);
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 57611ef725c3..9b92b2bbd7de 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -304,7 +304,7 @@ static int process_sample_event(struct perf_tool *tool,
 	}
 
 	if (al.map != NULL)
-		al.map->dso->hit = 1;
+		map__dso(al.map)->hit = 1;
 
 	if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
 		hist__account_cycles(sample->branch_stack, &al, sample,
@@ -579,7 +579,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
 		return;
 
 	if (kernel_map == NULL ||
-	    (kernel_map->dso->hit &&
+	    (map__dso(kernel_map)->hit &&
 	     (kernel_kmap->ref_reloc_sym == NULL ||
 	      kernel_kmap->ref_reloc_sym->addr == 0))) {
 		const char *desc =
@@ -805,13 +805,15 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 		struct map *map = rb_node->map;
 
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
-				   indent, "", map->start, map->end,
-				   map->prot & PROT_READ ? 'r' : '-',
-				   map->prot & PROT_WRITE ? 'w' : '-',
-				   map->prot & PROT_EXEC ? 'x' : '-',
-				   map->flags & MAP_SHARED ? 's' : 'p',
-				   map->pgoff,
-				   map->dso->id.ino, map->dso->name);
+				   indent, "",
+				   map__start(map), map__end(map),
+				   map__prot(map) & PROT_READ ? 'r' : '-',
+				   map__prot(map) & PROT_WRITE ? 'w' : '-',
+				   map__prot(map) & PROT_EXEC ? 'x' : '-',
+				   map__flags(map) & MAP_SHARED ? 's' : 'p',
+				   map__pgoff(map),
+				   map__dso(map)->id.ino,
+				   map__dso(map)->name);
 	}
 
 	return printed;
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index abae8184e171..4edfce95e137 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -972,12 +972,12 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
 		to   = entries[i].to;
 
 		if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
-		    !alf.map->dso->adjust_symbols)
-			from = map__map_ip(alf.map, from);
+		    !map__dso(alf.map)->adjust_symbols)
+			from = map__dso_map_ip(alf.map, from);
 
 		if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
-		    !alt.map->dso->adjust_symbols)
-			to = map__map_ip(alt.map, to);
+		    !map__dso(alt.map)->adjust_symbols)
+			to = map__dso_map_ip(alt.map, to);
 
 		printed += fprintf(fp, " 0x%"PRIx64, from);
 		if (PRINT_FIELD(DSO)) {
@@ -1039,11 +1039,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 		return 0;
 	}
 
-	if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
+	if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
 		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
 		return 0;
 	}
-	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
+	if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR) {
 		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
 		return 0;
 	}
@@ -1051,11 +1051,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 	/* Load maps to ensure dso->is_64_bit has been updated */
 	map__load(al.map);
 
-	offset = al.map->map_ip(al.map, start);
-	len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
-				    end - start + MAXINSN);
+	offset = map__map_ip(al.map, start);
+	len = dso__data_read_offset(map__dso(al.map), machine, offset,
+				    (u8 *)buffer, end - start + MAXINSN);
 
-	*is64bit = al.map->dso->is_64_bit;
+	*is64bit = map__dso(al.map)->is_64_bit;
 	if (len <= 0)
 		pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
 			start, end);
@@ -1070,9 +1070,9 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
 	int len;
 	char *srccode;
 
-	if (!map || !map->dso)
+	if (!map || !map__dso(map))
 		return 0;
-	srcfile = get_srcline_split(map->dso,
+	srcfile = get_srcline_split(map__dso(map),
 				    map__rip_2objdump(map, addr),
 				    &line);
 	if (!srcfile)
@@ -1164,7 +1164,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
 	if (al.addr < al.sym->end)
 		off = al.addr - al.sym->start;
 	else
-		off = al.addr - al.map->start - al.sym->start;
+		off = al.addr - map__start(al.map) - al.sym->start;
 	printed += fprintf(fp, "\t%s", al.sym->name);
 	if (off)
 		printed += fprintf(fp, "%+d", off);
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 1fc390f136dd..8db1df7bdabe 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -127,8 +127,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
 	/*
 	 * We can't annotate with just /proc/kallsyms
 	 */
-	if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
-	    !dso__is_kcore(map->dso)) {
+	if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
+	    !dso__is_kcore(map__dso(map))) {
 		pr_err("Can't annotate %s: No vmlinux file was found in the "
 		       "path\n", sym->name);
 		sleep(1);
@@ -180,8 +180,9 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 		    "Tools:  %s\n\n"
 		    "Not all samples will be on the annotation output.\n\n"
 		    "Please report to linux-kernel@vger.kernel.org\n",
-		    ip, map->dso->long_name, dso__symtab_origin(map->dso),
-		    map->start, map->end, sym->start, sym->end,
+		    ip, map__dso(map)->long_name,
+		    dso__symtab_origin(map__dso(map)),
+		    map__start(map), map__end(map), sym->start, sym->end,
 		    sym->binding == STB_GLOBAL ? 'g' :
 		    sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
 		    err ? "[unknown]" : uts.machine,
@@ -810,7 +811,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
 		    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
 			if (symbol_conf.vmlinux_name) {
 				char serr[256];
-				dso__strerror_load(al.map->dso, serr, sizeof(serr));
+				dso__strerror_load(map__dso(al.map),
+						   serr, sizeof(serr));
 				ui__warning("The %s file can't be used: %s\n%s",
 					    symbol_conf.vmlinux_name, serr, msg);
 			} else {
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 32844d8a0ea5..0134f24da3e3 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2862,7 +2862,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
 {
 
 	if ((verbose > 0 || print_dso) && al->map)
-		fprintf(f, "%s@", al->map->dso->long_name);
+		fprintf(f, "%s@", map__dso(al->map)->long_name);
 
 	if ((verbose > 0 || print_sym) && al->sym)
 		fprintf(f, "%s+0x%" PRIx64, al->sym->name,
diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
index b64013a87c54..b83b62d33945 100644
--- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
@@ -152,9 +152,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
 	map = c->al->map;
 	addr = c->al->addr;
 
-	if (map && map->dso)
-		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
-
+	if (map && map__dso(map)) {
+		srcfile = get_srcline_split(map__dso(map),
+					    map__rip_2objdump(map, addr), &line);
+	}
 	if (get_srccode) {
 		if (srcfile)
 			srccode = find_sourceline(srcfile, line, &len);
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index 6eafe36a8704..9cb7d3f577d7 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -240,7 +240,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 
 	pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
 
-	if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
+	if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
 		if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
 			pr_debug("Hypervisor address can not be resolved - skipping\n");
 			return 0;
@@ -250,10 +250,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		return -1;
 	}
 
-	pr_debug("File is: %s\n", al.map->dso->long_name);
+	pr_debug("File is: %s\n", map__dso(al.map)->long_name);
 
-	if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
-	    !dso__is_kcore(al.map->dso)) {
+	if (map__dso(al.map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
+	    !dso__is_kcore(map__dso(al.map))) {
 		pr_debug("Unexpected kernel address - skipping\n");
 		return 0;
 	}
@@ -264,11 +264,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		len = BUFSZ;
 
 	/* Do not go off the map */
-	if (addr + len > al.map->end)
-		len = al.map->end - addr;
+	if (addr + len > map__end(al.map))
+		len = map__end(al.map) - addr;
 
 	/* Read the object code using perf */
-	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
+	ret_len = dso__data_read_offset(map__dso(al.map), maps__machine(thread->maps),
 					al.addr, buf1, len);
 	if (ret_len != len) {
 		pr_debug("dso__data_read_offset failed\n");
@@ -283,11 +283,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		return -1;
 
 	/* objdump struggles with kcore - try each map only once */
-	if (dso__is_kcore(al.map->dso)) {
+	if (dso__is_kcore(map__dso(al.map))) {
 		size_t d;
 
 		for (d = 0; d < state->done_cnt; d++) {
-			if (state->done[d] == al.map->start) {
+			if (state->done[d] == map__start(al.map)) {
 				pr_debug("kcore map tested already");
 				pr_debug(" - skipping\n");
 				return 0;
@@ -297,12 +297,12 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 			pr_debug("Too many kcore maps - skipping\n");
 			return 0;
 		}
-		state->done[state->done_cnt++] = al.map->start;
+		state->done[state->done_cnt++] = map__start(al.map);
 	}
 
-	objdump_name = al.map->dso->long_name;
-	if (dso__needs_decompress(al.map->dso)) {
-		if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
+	objdump_name = map__dso(al.map)->long_name;
+	if (dso__needs_decompress(map__dso(al.map))) {
+		if (dso__decompress_kmodule_path(map__dso(al.map), objdump_name,
 						 decomp_name,
 						 sizeof(decomp_name)) < 0) {
 			pr_debug("decompression failed\n");
@@ -330,7 +330,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 			len -= ret;
 			if (len) {
 				pr_debug("Reducing len to %zu\n", len);
-			} else if (dso__is_kcore(al.map->dso)) {
+			} else if (dso__is_kcore(map__dso(al.map))) {
 				/*
 				 * objdump cannot handle very large segments
 				 * that may be found in kcore.
@@ -588,8 +588,8 @@ static int do_test_code_reading(bool try_kcore)
 		pr_debug("map__load failed\n");
 		goto out_err;
 	}
-	have_vmlinux = dso__is_vmlinux(map->dso);
-	have_kcore = dso__is_kcore(map->dso);
+	have_vmlinux = dso__is_vmlinux(map__dso(map));
+	have_kcore = dso__is_kcore(map__dso(map));
 
 	/* 2nd time through we just try kcore */
 	if (try_kcore && !have_kcore)
diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
index 6f34d08b84e5..40eccc659767 100644
--- a/tools/perf/tests/hists_common.c
+++ b/tools/perf/tests/hists_common.c
@@ -181,7 +181,7 @@ void print_hists_in(struct hists *hists)
 		if (!he->filtered) {
 			pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
 				i, thread__comm_str(he->thread),
-				he->ms.map->dso->short_name,
+				map__dso(he->ms.map)->short_name,
 				he->ms.sym->name, he->stat.period);
 		}
 
@@ -208,7 +208,7 @@ void print_hists_out(struct hists *hists)
 		if (!he->filtered) {
 			pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
 				i, thread__comm_str(he->thread), he->thread->tid,
-				he->ms.map->dso->short_name,
+				map__dso(he->ms.map)->short_name,
 				he->ms.sym->name, he->stat.period,
 				he->stat_acc ? he->stat_acc->period : 0);
 		}
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 11a230ee5894..5afab21455f1 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -13,7 +13,7 @@
 #include "debug.h"
 #include "machine.h"
 
-#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
+#define UM(x) map__unmap_ip(kallsyms_map, (x))
 
 static bool is_ignored_symbol(const char *name, char type)
 {
@@ -216,8 +216,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		if (sym->start == sym->end)
 			continue;
 
-		mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
-		mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
+		mem_start = map__unmap_ip(vmlinux_map, sym->start);
+		mem_end = map__unmap_ip(vmlinux_map, sym->end);
 
 		first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
 		pair = first_pair;
@@ -262,7 +262,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 				continue;
 			}
-		} else if (mem_start == kallsyms.vmlinux_map->end) {
+		} else if (mem_start == map__end(kallsyms.vmlinux_map)) {
 			/*
 			 * Ignore aliases to _etext, i.e. to the end of the kernel text area,
 			 * such as __indirect_thunk_end.
@@ -294,9 +294,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		 * so use the short name, less descriptive but the same ("[kernel]" in
 		 * both cases.
 		 */
-		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
-								map->dso->short_name :
-								map->dso->name));
+		struct map *pair = maps__find_by_name(kallsyms.kmaps,
+						map__dso(map)->kernel
+						? map__dso(map)->short_name
+						: map__dso(map)->name);
 		if (pair) {
 			pair->priv = 1;
 		} else {
@@ -313,25 +314,27 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	maps__for_each_entry(maps, rb_node) {
 		struct map *pair, *map = rb_node->map;
 
-		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
-		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
+		mem_start = map__unmap_ip(vmlinux_map, map__start(map));
+		mem_end = map__unmap_ip(vmlinux_map, map__end(map));
 
 		pair = maps__find(kallsyms.kmaps, mem_start);
-		if (pair == NULL || pair->priv)
+		if (pair == NULL || map__priv(pair))
 			continue;
 
-		if (pair->start == mem_start) {
+		if (map__start(pair) == mem_start) {
 			if (!header_printed) {
 				pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
 				header_printed = true;
 			}
 
 			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
-				map->start, map->end, map->pgoff, map->dso->name);
-			if (mem_end != pair->end)
+				map__start(map), map__end(map),
+				map__pgoff(map), map__dso(map)->name);
+			if (mem_end != map__end(pair))
 				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
-					pair->start, pair->end, pair->pgoff);
-			pr_info(" %s\n", pair->dso->name);
+					map__start(pair), map__end(pair),
+					map__pgoff(pair));
+			pr_info(" %s\n", map__dso(pair)->name);
 			pair->priv = 1;
 		}
 	}
@@ -343,7 +346,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	maps__for_each_entry(maps, rb_node) {
 		struct map *map = rb_node->map;
 
-		if (!map->priv) {
+		if (!map__priv(map)) {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in kallsyms:\n");
 				header_printed = true;
diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index 44ba900828f6..7d51d92302dc 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -446,7 +446,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
 static int sym_title(struct symbol *sym, struct map *map, char *title,
 		     size_t sz, int percent_type)
 {
-	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
+	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
+			map__dso(map)->long_name,
 			percent_type_str(percent_type));
 }
 
@@ -971,14 +972,14 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
 	if (sym == NULL)
 		return -1;
 
-	if (ms->map->dso->annotate_warned)
+	if (map__dso(ms->map)->annotate_warned)
 		return -1;
 
 	if (not_annotated) {
 		err = symbol__annotate2(ms, evsel, opts, &browser.arch);
 		if (err) {
 			char msg[BUFSIZ];
-			ms->map->dso->annotate_warned = true;
+			map__dso(ms->map)->annotate_warned = true;
 			symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
 			ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
 			goto out_free_offsets;
diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index 572ff38ceb0f..2241447e9bfb 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
 			return NULL;
 		}
 
-		dso__insert_symbol(map->dso, sym);
+		dso__insert_symbol(map__dso(map), sym);
 	}
 
 	return sym;
@@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
 		 struct map_symbol *ms,
 		 u64 addr)
 {
-	if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
+	if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
 		return 0;
 
 	if (!ms->sym)
@@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
 		ui_helpline__pop();
 	} else {
 		ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
-				   __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
-		browser->hists->dso_filter = map->dso;
+				   __map__is_kernel(map)
+				   ? "the Kernel"
+				   : map__dso(map)->short_name);
+		browser->hists->dso_filter = map__dso(map);
 		perf_hpp__set_elide(HISTC_DSO, true);
 		pstack__push(browser->pstack, &browser->hists->dso_filter);
 	}
@@ -2616,7 +2618,9 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
 
 	if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
 		     browser->hists->dso_filter ? "out of" : "into",
-		     __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
+		     __map__is_kernel(map)
+		     ? "the Kernel"
+		     : map__dso(map)->short_name) < 0)
 		return 0;
 
 	act->ms.map = map;
@@ -3091,8 +3095,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
 
 			if (!browser->selection ||
 			    !browser->selection->map ||
-			    !browser->selection->map->dso ||
-			    browser->selection->map->dso->annotate_warned) {
+			    !map__dso(browser->selection->map) ||
+			    map__dso(browser->selection->map)->annotate_warned) {
 				continue;
 			}
 
diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
index 3d49b916c9e4..3d1b958d8832 100644
--- a/tools/perf/ui/browsers/map.c
+++ b/tools/perf/ui/browsers/map.c
@@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
 {
 	int key;
 
-	if (ui_browser__show(&browser->b, browser->map->dso->long_name,
+	if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
 			     "Press ESC to exit, %s / to search",
 			     verbose > 0 ? "" : "restart with -v to use") < 0)
 		return -1;
@@ -106,7 +106,7 @@ int map__browse(struct map *map)
 {
 	struct map_browser mb = {
 		.b = {
-			.entries = &map->dso->symbols,
+			.entries = &map__dso(map)->symbols,
 			.refresh = ui_browser__rb_tree_refresh,
 			.seek	 = ui_browser__rb_tree_seek,
 			.write	 = map_browser__write,
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 01900689dc00..3a7433d3e48a 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -280,7 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      map->map_ip(target.ms.map, target.addr)
+			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
@@ -384,8 +386,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	}
 
 	target.addr = map__objdump_2mem(map, ops->target.addr);
-	start = map->unmap_ip(map, sym->start),
-	end = map->unmap_ip(map, sym->end);
+	start = map__unmap_ip(map, sym->start),
+	end = map__unmap_ip(map, sym->end);
 
 	ops->target.outside = target.addr < start || target.addr > end;
 
@@ -408,7 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	 * the symbol searching and disassembly should be done.
 	 */
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      map->map_ip(target.ms.map, target.addr)
+			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	if (!ops->target.outside) {
@@ -889,7 +893,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
 	unsigned offset;
 	struct sym_hist *h;
 
-	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
+	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
 
 	if ((addr < sym->start || addr >= sym->end) &&
 	    (addr != sym->end || sym->start != sym->end)) {
@@ -1016,13 +1020,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
 	if (start &&
 		(start->ms.sym == ams->ms.sym ||
 		 (ams->ms.sym &&
-		   start->addr == ams->ms.sym->start + ams->ms.map->start)))
+		  start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
 		saddr = start->al_addr;
 	if (saddr == 0)
 		pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
 			ams->addr,
 			start ? start->addr : 0,
-			ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
+			ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
 			saddr);
 	err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
 	if (err)
@@ -1593,7 +1597,7 @@ static void delete_last_nop(struct symbol *sym)
 
 int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 
 	BUG_ON(buflen == 0);
 
@@ -1723,7 +1727,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
 	struct map *map = args->ms.map;
 	struct perf_bpil *info_linear;
 	struct disassemble_info info;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	int pc = 0, count, sub_id;
 	struct btf *btf = NULL;
 	char tpath[PATH_MAX];
@@ -1946,7 +1950,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
 {
 	struct annotation_options *opts = args->options;
 	struct map *map = args->ms.map;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	char *command;
 	FILE *file;
 	char symfs_filename[PATH_MAX];
@@ -1973,8 +1977,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
 		return err;
 
 	pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
-		 symfs_filename, sym->name, map->unmap_ip(map, sym->start),
-		 map->unmap_ip(map, sym->end));
+		 symfs_filename, sym->name, map__unmap_ip(map, sym->start),
+		 map__unmap_ip(map, sym->end));
 
 	pr_debug("annotating [%p] %30s : [%p] %30s\n",
 		 dso, dso->long_name, sym, sym->name);
@@ -2386,7 +2390,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
 {
 	struct map *map = ms->map;
 	struct symbol *sym = ms->sym;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	char *filename;
 	const char *d_filename;
 	const char *evsel_name = evsel__name(evsel);
@@ -2569,7 +2573,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
 	}
 
 	fprintf(fp, "%s() %s\nEvent: %s\n\n",
-		ms->sym->name, ms->map->dso->long_name, ev_name);
+		ms->sym->name, map__dso(ms->map)->long_name, ev_name);
 	symbol__annotate_fprintf2(ms->sym, fp, opts);
 
 	fclose(fp);
@@ -2781,7 +2785,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
 		if (percent_max <= 0.5)
 			continue;
 
-		al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
+		al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
 				       false, true, notes->start + al->offset);
 		insert_source_line(&tmp_root, al, opts);
 	}
@@ -2800,7 +2804,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
 int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
 			  struct annotation_options *opts)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 	struct symbol *sym = ms->sym;
 	struct rb_root source_line = RB_ROOT;
 	struct hists *hists = evsel__hists(evsel);
@@ -2836,7 +2840,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
 int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
 			 struct annotation_options *opts)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 	struct symbol *sym = ms->sym;
 	struct rb_root source_line = RB_ROOT;
 	int err;
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 825336304a37..2e864c9bdef3 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -2478,7 +2478,7 @@ static struct dso *load_dso(const char *name)
 	if (map__load(map) < 0)
 		pr_err("File '%s' not found or has no symbols.\n", name);
 
-	dso = dso__get(map->dso);
+	dso = dso__get(map__dso(map));
 
 	map__put(map);
 
diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
index 5ecd4f401f32..16a7b4adcf18 100644
--- a/tools/perf/util/block-info.c
+++ b/tools/perf/util/block-info.c
@@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
 	struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
 	struct map *map = he->ms.map;
 
-	if (map && map->dso) {
+	if (map && map__dso(map)) {
 		return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
-				 map->dso->short_name);
+				 map__dso(map)->short_name);
 	}
 
 	return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
index 33257b594a71..5717933be116 100644
--- a/tools/perf/util/bpf-event.c
+++ b/tools/perf/util/bpf-event.c
@@ -95,10 +95,10 @@ static int machine__process_bpf_event_load(struct machine *machine,
 		struct map *map = maps__find(machine__kernel_maps(machine), addr);
 
 		if (map) {
-			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
-			map->dso->bpf_prog.id = id;
-			map->dso->bpf_prog.sub_id = i;
-			map->dso->bpf_prog.env = env;
+			map__dso(map)->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
+			map__dso(map)->bpf_prog.id = id;
+			map__dso(map)->bpf_prog.sub_id = i;
+			map__dso(map)->bpf_prog.env = env;
 		}
 	}
 	return 0;
diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
index 7a5821c87f94..274b705dd941 100644
--- a/tools/perf/util/build-id.c
+++ b/tools/perf/util/build-id.c
@@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
 	}
 
 	if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
-		al.map->dso->hit = 1;
+		map__dso(al.map)->hit = 1;
 
 	thread__put(thread);
 	return 0;
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 61bb3fb2107a..a8cfd31a3ff0 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -695,8 +695,8 @@ static enum match_result match_chain_strings(const char *left,
 static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
 						   struct map *right_map, u64 right_ip)
 {
-	struct dso *left_dso = left_map ? left_map->dso : NULL;
-	struct dso *right_dso = right_map ? right_map->dso : NULL;
+	struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
+	struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
 
 	if (left_dso != right_dso)
 		return left_dso < right_dso ? MATCH_LT : MATCH_GT;
@@ -1167,9 +1167,9 @@ char *callchain_list__sym_name(struct callchain_list *cl,
 
 	if (show_dso)
 		scnprintf(bf + printed, bfsize - printed, " %s",
-			  cl->ms.map ?
-			  cl->ms.map->dso->short_name :
-			  "unknown");
+			  cl->ms.map
+			  ? map__dso(cl->ms.map)->short_name
+			  : "unknown");
 
 	return bf;
 }
diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
index f1ab6edba446..9c83228bb9f1 100644
--- a/tools/perf/util/data-convert-json.c
+++ b/tools/perf/util/data-convert-json.c
@@ -127,8 +127,8 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
 		fputc(',', out);
 		output_json_key_string(out, false, 5, "symbol", al->sym->name);
 
-		if (al->map && al->map->dso) {
-			const char *dso = al->map->dso->short_name;
+		if (al->map && map__dso(al->map)) {
+			const char *dso = map__dso(al->map)->short_name;
 
 			if (dso && strlen(dso) > 0) {
 				fputc(',', out);
diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
index 1cfcfdd3cf52..84c970c11794 100644
--- a/tools/perf/util/db-export.c
+++ b/tools/perf/util/db-export.c
@@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
 	int err;
 
 	if (al->map) {
-		struct dso *dso = al->map->dso;
+		struct dso *dso = map__dso(al->map);
 
 		err = db_export__dso(dbe, dso, maps__machine(al->maps));
 		if (err)
@@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
 		al.addr = node->ip;
 
 		if (al.map && !al.sym)
-			al.sym = dso__find_symbol(al.map->dso, al.addr);
+			al.sym = dso__find_symbol(map__dso(al.map), al.addr);
 
 		db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
 
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index d59462af15f1..f1d9dd7065e6 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
 
 	d_al->size = sizeof(*d_al);
 	if (al->map) {
-		struct dso *dso = al->map->dso;
+		struct dso *dso = map__dso(al->map);
 
 		if (symbol_conf.show_kernel_path && dso->long_name)
 			d_al->dso = dso->long_name;
@@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
 		if (al->addr < sym->end)
 			d_al->symoff = al->addr - sym->start;
 		else
-			d_al->symoff = al->addr - al->map->start - sym->start;
+			d_al->symoff = al->addr - map__start(al->map) - sym->start;
 		d_al->sym_binding = sym->binding;
 	} else {
 		d_al->sym = NULL;
@@ -232,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
 	map = al->map;
 	addr = al->addr;
 
-	if (map && map->dso)
-		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
-
+	if (map && map__dso(map)) {
+		srcfile = get_srcline_split(map__dso(map),
+					    map__rip_2objdump(map, addr), &line);
+	}
 	*line_no = line;
 	return srcfile;
 }
@@ -266,7 +267,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 
 	map = al->map;
 
-	if (map && ip >= map->start && ip < map->end &&
+	if (map && ip >= map__start(map) && ip < map__end(map) &&
 	    machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
 		goto have_map;
 
@@ -276,10 +277,10 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 
 	map = a.map;
 have_map:
-	offset = map->map_ip(map, ip);
-	if (ip + len >= map->end)
-		len = map->end - ip;
-	return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
+	offset = map__map_ip(map, ip);
+	if (ip + len >= map__end(map))
+		len = map__end(map) - ip;
+	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
 }
 
 static const struct perf_dlfilter_fns perf_dlfilter_fns = {
diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
index b2f570adba35..1115bc51a261 100644
--- a/tools/perf/util/dso.c
+++ b/tools/perf/util/dso.c
@@ -1109,7 +1109,7 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
 			    struct machine *machine, u64 addr,
 			    u8 *data, ssize_t size)
 {
-	u64 offset = map->map_ip(map, addr);
+	u64 offset = map__map_ip(map, addr);
 	return dso__data_read_offset(dso, machine, offset, data, size);
 }
 
@@ -1149,7 +1149,7 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
 				   struct machine *machine, u64 addr,
 				   const u8 *data, ssize_t size)
 {
-	u64 offset = map->map_ip(map, addr);
+	u64 offset = map__map_ip(map, addr);
 	return dso__data_write_cache_offs(dso, machine, offset, data, size);
 }
 
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 40a3b1a35613..54a1d4df5f70 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -486,7 +486,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
 
 		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
 		if (al.map && map__load(al.map) >= 0) {
-			al.addr = al.map->map_ip(al.map, tp->addr);
+			al.addr = map__map_ip(al.map, tp->addr);
 			al.sym = map__find_symbol(al.map, al.addr);
 			if (al.sym)
 				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
@@ -621,7 +621,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 		 */
 		if (load_map)
 			map__load(al->map);
-		al->addr = al->map->map_ip(al->map, al->addr);
+		al->addr = map__map_ip(al->map, al->addr);
 	}
 
 	return al->map;
@@ -692,8 +692,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 	dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
 	thread__find_map(thread, sample->cpumode, sample->ip, al);
 	dump_printf(" ...... dso: %s\n",
-		    al->map ? al->map->dso->long_name :
-			al->level == 'H' ? "[hypervisor]" : "<not found>");
+		    al->map ? map__dso(al->map)->long_name
+			    : al->level == 'H' ? "[hypervisor]" : "<not found>");
 
 	if (thread__is_filtered(thread))
 		al->filtered |= (1 << HIST_FILTER__THREAD);
@@ -711,7 +711,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 	}
 
 	if (al->map) {
-		struct dso *dso = al->map->dso;
+		struct dso *dso = map__dso(al->map);
 
 		if (symbol_conf.dso_list &&
 		    (!dso || !(strlist__has_entry(symbol_conf.dso_list,
@@ -738,12 +738,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 		}
 		if (!ret && al->sym) {
 			snprintf(al_addr_str, sz, "0x%"PRIx64,
-				al->map->unmap_ip(al->map, al->sym->start));
+				 map__unmap_ip(al->map, al->sym->start));
 			ret = strlist__has_entry(symbol_conf.sym_list,
 						al_addr_str);
 		}
 		if (!ret && symbol_conf.addr_list && al->map) {
-			unsigned long addr = al->map->unmap_ip(al->map, al->addr);
+			unsigned long addr = map__unmap_ip(al->map, al->addr);
 
 			ret = intlist__has_entry(symbol_conf.addr_list, addr);
 			if (!ret && symbol_conf.addr_range) {
diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
index 8c2ea8001329..ac6fef9d8906 100644
--- a/tools/perf/util/evsel_fprintf.c
+++ b/tools/perf/util/evsel_fprintf.c
@@ -146,11 +146,11 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
 				printed += fprintf(fp, " <-");
 
 			if (map)
-				addr = map->map_ip(map, node->ip);
+				addr = map__map_ip(map, node->ip);
 
 			if (print_ip) {
 				/* Show binary offset for userspace addr */
-				if (map && !map->dso->kernel)
+				if (map && !map__dso(map)->kernel)
 					printed += fprintf(fp, "%c%16" PRIx64, s, addr);
 				else
 					printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 78f9fbb925a7..f19ac6eb4775 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -105,7 +105,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 		hists__set_col_len(hists, HISTC_THREAD, len + 8);
 
 	if (h->ms.map) {
-		len = dso__name_len(h->ms.map->dso);
+		len = dso__name_len(map__dso(h->ms.map));
 		hists__new_col_len(hists, HISTC_DSO, len);
 	}
 
@@ -119,7 +119,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 				symlen += BITS_PER_LONG / 4 + 2 + 3;
 			hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
 
-			symlen = dso__name_len(h->branch_info->from.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
 			hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
 		} else {
 			symlen = unresolved_col_width + 4 + 2;
@@ -133,7 +133,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 				symlen += BITS_PER_LONG / 4 + 2 + 3;
 			hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
 
-			symlen = dso__name_len(h->branch_info->to.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
 			hists__new_col_len(hists, HISTC_DSO_TO, symlen);
 		} else {
 			symlen = unresolved_col_width + 4 + 2;
@@ -177,7 +177,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 		}
 
 		if (h->mem_info->daddr.ms.map) {
-			symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
 			hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
 					   symlen);
 		} else {
@@ -2096,7 +2096,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
 				       struct hist_entry *he)
 {
 	if (hists->dso_filter != NULL &&
-	    (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
+	    (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
 		he->filtered |= (1 << HIST_FILTER__DSO);
 		return true;
 	}
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index e8613cbda331..c88f112c0a06 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -731,20 +731,20 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 	}
 
 	while (1) {
-		if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso)
+		if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map))
 			return -EINVAL;
 
-		if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
-		    dso__data_status_seen(al.map->dso,
+		if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR &&
+		    dso__data_status_seen(map__dso(al.map),
 					  DSO_DATA_STATUS_SEEN_ITRACE))
 			return -ENOENT;
 
-		offset = al.map->map_ip(al.map, *ip);
+		offset = map__map_ip(al.map, *ip);
 
 		if (!to_ip && one_map) {
 			struct intel_pt_cache_entry *e;
 
-			e = intel_pt_cache_lookup(al.map->dso, machine, offset);
+			e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
 			if (e &&
 			    (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
 				*insn_cnt_ptr = e->insn_cnt;
@@ -766,10 +766,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 		/* Load maps to ensure dso->is_64_bit has been updated */
 		map__load(al.map);
 
-		x86_64 = al.map->dso->is_64_bit;
+		x86_64 = map__dso(al.map)->is_64_bit;
 
 		while (1) {
-			len = dso__data_read_offset(al.map->dso, machine,
+			len = dso__data_read_offset(map__dso(al.map), machine,
 						    offset, buf,
 						    INTEL_PT_INSN_BUF_SZ);
 			if (len <= 0)
@@ -795,7 +795,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 				goto out_no_cache;
 			}
 
-			if (*ip >= al.map->end)
+			if (*ip >= map__end(al.map))
 				break;
 
 			offset += intel_pt_insn->length;
@@ -815,13 +815,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 	if (to_ip) {
 		struct intel_pt_cache_entry *e;
 
-		e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
+		e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
 		if (e)
 			return 0;
 	}
 
 	/* Ignore cache errors */
-	intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
+	intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
 			   *ip - start_ip, intel_pt_insn);
 
 	return 0;
@@ -892,13 +892,13 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
 	if (!thread)
 		return -EINVAL;
 
-	if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
+	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
 		return -EINVAL;
 
-	offset = al.map->map_ip(al.map, ip);
+	offset = map__map_ip(al.map, ip);
 
 	return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
-				     al.map->dso->long_name);
+				     map__dso(al.map)->long_name);
 }
 
 static bool intel_pt_pgd_ip(uint64_t ip, void *data)
@@ -2406,13 +2406,13 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 	if (map__load(map))
 		return 0;
 
-	start = dso__first_symbol(map->dso);
+	start = dso__first_symbol(map__dso(map));
 
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (sym->binding == STB_GLOBAL &&
 		    !strcmp(sym->name, "__switch_to")) {
-			ip = map->unmap_ip(map, sym->start);
-			if (ip >= map->start && ip < map->end) {
+			ip = map__unmap_ip(map, sym->start);
+			if (ip >= map__start(map) && ip < map__end(map)) {
 				switch_ip = ip;
 				break;
 			}
@@ -2429,8 +2429,8 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (!strcmp(sym->name, ptss)) {
-			ip = map->unmap_ip(map, sym->start);
-			if (ip >= map->start && ip < map->end) {
+			ip = map__unmap_ip(map, sym->start);
+			if (ip >= map__start(map) && ip < map__end(map)) {
 				*ptss_ip = ip;
 				break;
 			}
@@ -2965,7 +2965,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
 static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
 			     struct addr_location *al)
 {
-	if (!al->map || addr < al->map->start || addr >= al->map->end) {
+	if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
 		if (!thread__find_map(thread, cpumode, addr, al))
 			return -1;
 	}
@@ -2996,12 +2996,12 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
 			continue;
 		}
 
-		if (!al.map->dso || !al.map->dso->auxtrace_cache)
+		if (!map__dso(al.map) || !map__dso(al.map)->auxtrace_cache)
 			continue;
 
-		offset = al.map->map_ip(al.map, addr);
+		offset = map__map_ip(al.map, addr);
 
-		e = intel_pt_cache_lookup(al.map->dso, machine, offset);
+		e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
 		if (!e)
 			continue;
 
@@ -3014,9 +3014,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
 			if (e->branch != INTEL_PT_BR_NO_BRANCH)
 				return 0;
 		} else {
-			intel_pt_cache_invalidate(al.map->dso, machine, offset);
+			intel_pt_cache_invalidate(map__dso(al.map), machine, offset);
 			intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
-				     al.map->dso->long_name, addr);
+				     map__dso(al.map)->long_name, addr);
 		}
 	}
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 88279008e761..940fb2a50dfd 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
 
 static struct dso *machine__kernel_dso(struct machine *machine)
 {
-	return machine->vmlinux_map->dso;
+	return map__dso(machine->vmlinux_map);
 }
 
 static void dsos__init(struct dsos *dsos)
@@ -842,9 +842,10 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	if (map != machine->vmlinux_map)
 		maps__remove(machine__kernel_maps(machine), map);
 	else {
-		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
+		sym = dso__find_symbol(map__dso(map),
+				map__map_ip(map, map__start(map)));
 		if (sym)
-			dso__delete_symbol(map->dso, sym);
+			dso__delete_symbol(map__dso(map), sym);
 	}
 
 	return 0;
@@ -880,7 +881,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
 		return 0;
 	}
 
-	if (map && map->dso) {
+	if (map && map__dso(map)) {
 		u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
 		int ret;
 
@@ -889,7 +890,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
 		 * must be done prior to using kernel maps.
 		 */
 		map__load(map);
-		ret = dso__data_write_cache_addr(map->dso, map, machine,
+		ret = dso__data_write_cache_addr(map__dso(map), map, machine,
 						 event->text_poke.addr,
 						 new_bytes,
 						 event->text_poke.new_len);
@@ -931,6 +932,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	/* If maps__insert failed, return NULL. */
 	if (err)
 		map = NULL;
+
 out:
 	/* put the dso here, corresponding to  machine__findnew_module_dso */
 	dso__put(dso);
@@ -1118,7 +1120,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
 
 	if (!err) {
 		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
-			kmap->name, map->start, map->end);
+			kmap->name, map__start(map), map__end(map));
 	}
 
 	map__put(map);
@@ -1178,9 +1180,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 		if (!kmap || !is_entry_trampoline(kmap->name))
 			continue;
 
-		dest_map = maps__find(kmaps, map->pgoff);
+		dest_map = maps__find(kmaps, map__pgoff(map));
 		if (dest_map != map)
-			map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
+			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
 		found = true;
 	}
 	if (found || machine->trampolines_mapped)
@@ -1230,7 +1232,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 	if (machine->vmlinux_map == NULL)
 		return -ENOMEM;
 
-	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
+	machine->vmlinux_map->map_ip = map__identity_ip;
+	machine->vmlinux_map->unmap_ip = map__identity_ip;
 	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 }
 
@@ -1329,10 +1332,10 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
 int machine__load_kallsyms(struct machine *machine, const char *filename)
 {
 	struct map *map = machine__kernel_map(machine);
-	int ret = __dso__load_kallsyms(map->dso, filename, map, true);
+	int ret = __dso__load_kallsyms(map__dso(map), filename, map, true);
 
 	if (ret > 0) {
-		dso__set_loaded(map->dso);
+		dso__set_loaded(map__dso(map));
 		/*
 		 * Since /proc/kallsyms will have multiple sessions for the
 		 * kernel, with modules between them, fixup the end of all
@@ -1347,10 +1350,10 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
 int machine__load_vmlinux_path(struct machine *machine)
 {
 	struct map *map = machine__kernel_map(machine);
-	int ret = dso__load_vmlinux_path(map->dso, map);
+	int ret = dso__load_vmlinux_path(map__dso(map), map);
 
 	if (ret > 0)
-		dso__set_loaded(map->dso);
+		dso__set_loaded(map__dso(map));
 
 	return ret;
 }
@@ -1401,16 +1404,16 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
 	if (long_name == NULL)
 		return -ENOMEM;
 
-	dso__set_long_name(map->dso, long_name, true);
-	dso__kernel_module_get_build_id(map->dso, "");
+	dso__set_long_name(map__dso(map), long_name, true);
+	dso__kernel_module_get_build_id(map__dso(map), "");
 
 	/*
 	 * Full name could reveal us kmod compression, so
 	 * we need to update the symtab_type if needed.
 	 */
-	if (m->comp && is_kmod_dso(map->dso)) {
-		map->dso->symtab_type++;
-		map->dso->comp = m->comp;
+	if (m->comp && is_kmod_dso(map__dso(map))) {
+		map__dso(map)->symtab_type++;
+		map__dso(map)->comp = m->comp;
 	}
 
 	return 0;
@@ -1509,8 +1512,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 		return -1;
 	map->end = start + size;
 
-	dso__kernel_module_get_build_id(map->dso, machine->root_dir);
-
+	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
 	return 0;
 }
 
@@ -1619,7 +1621,7 @@ int machine__create_kernel_maps(struct machine *machine)
 		struct map_rb_node *next = map_rb_node__next(rb_node);
 
 		if (next)
-			machine__set_kernel_mmap(machine, start, next->map->start);
+			machine__set_kernel_mmap(machine, start, map__start(next->map));
 	}
 
 out_put:
@@ -1683,10 +1685,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (map == NULL)
 			goto out_problem;
 
-		map->end = map->start + xm->end - xm->start;
+		map->end = map__start(map) + xm->end - xm->start;
 
 		if (build_id__is_defined(bid))
-			dso__set_build_id(map->dso, bid);
+			dso__set_build_id(map__dso(map), bid);
 
 	} else if (is_kernel_mmap) {
 		const char *symbol_name = (xm->name + strlen(machine->mmap_name));
@@ -2148,14 +2150,14 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
 	if (!map || callchain_param.key == CCKEY_FUNCTION)
 		return srcline;
 
-	srcline = srcline__tree_find(&map->dso->srclines, ip);
+	srcline = srcline__tree_find(&map__dso(map)->srclines, ip);
 	if (!srcline) {
 		bool show_sym = false;
 		bool show_addr = callchain_param.key == CCKEY_ADDRESS;
 
-		srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
+		srcline = get_srcline(map__dso(map), map__rip_2objdump(map, ip),
 				      ms->sym, show_sym, show_addr, ip);
-		srcline__tree_insert(&map->dso->srclines, ip, srcline);
+		srcline__tree_insert(&map__dso(map)->srclines, ip, srcline);
 	}
 
 	return srcline;
@@ -2179,7 +2181,7 @@ static int add_callchain_ip(struct thread *thread,
 {
 	struct map_symbol ms;
 	struct addr_location al;
-	int nr_loop_iter = 0;
+	int nr_loop_iter = 0, err;
 	u64 iter_cycles = 0;
 	const char *srcline = NULL;
 
@@ -2228,9 +2230,10 @@ static int add_callchain_ip(struct thread *thread,
 		}
 	}
 
-	if (symbol_conf.hide_unresolved && al.sym == NULL)
+	if (symbol_conf.hide_unresolved && al.sym == NULL) {
+		addr_location__put(&al);
 		return 0;
-
+	}
 	if (iter) {
 		nr_loop_iter = iter->nr_loop_iter;
 		iter_cycles = iter->cycles;
@@ -2240,9 +2243,10 @@ static int add_callchain_ip(struct thread *thread,
 	ms.map = al.map;
 	ms.sym = al.sym;
 	srcline = callchain_srcline(&ms, al.addr);
-	return callchain_cursor_append(cursor, ip, &ms,
-				       branch, flags, nr_loop_iter,
-				       iter_cycles, branch_from, srcline);
+	err = callchain_cursor_append(cursor, ip, &ms,
+				      branch, flags, nr_loop_iter,
+				      iter_cycles, branch_from, srcline);
+	return err;
 }
 
 struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
@@ -2937,15 +2941,15 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
 	if (!symbol_conf.inline_name || !map || !sym)
 		return ret;
 
-	addr = map__map_ip(map, ip);
+	addr = map__dso_map_ip(map, ip);
 	addr = map__rip_2objdump(map, addr);
 
-	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
+	inline_node = inlines__tree_find(&map__dso(map)->inlined_nodes, addr);
 	if (!inline_node) {
-		inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
+		inline_node = dso__parse_addr_inlines(map__dso(map), addr, sym);
 		if (!inline_node)
 			return ret;
-		inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
+		inlines__tree_insert(&map__dso(map)->inlined_nodes, inline_node);
 	}
 
 	list_for_each_entry(ilist, &inline_node->val, list) {
@@ -2981,7 +2985,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
 	 * its corresponding binary.
 	 */
 	if (entry->ms.map)
-		addr = map__map_ip(entry->ms.map, entry->ip);
+		addr = map__dso_map_ip(entry->ms.map, entry->ip);
 
 	srcline = callchain_srcline(&entry->ms, addr);
 	return callchain_cursor_append(cursor, entry->ip, &entry->ms,
@@ -3183,7 +3187,7 @@ int machine__get_kernel_start(struct machine *machine)
 		 * kernel_start = 1ULL << 63 for x86_64.
 		 */
 		if (!err && !machine__is(machine, "x86_64"))
-			machine->kernel_start = map->start;
+			machine->kernel_start = map__start(map);
 	}
 	return err;
 }
@@ -3234,8 +3238,8 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
 	if (sym == NULL)
 		return NULL;
 
-	*modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
-	*addrp = map->unmap_ip(map, sym->start);
+	*modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
+	*addrp = map__unmap_ip(map, sym->start);
 	return sym->name;
 }
 
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 57e926ce115f..47d81e361e29 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 	map->pgoff    = pgoff;
 	map->reloc    = 0;
 	map->dso      = dso__get(dso);
-	map->map_ip   = map__map_ip;
-	map->unmap_ip = map__unmap_ip;
+	map->map_ip   = map__dso_map_ip;
+	map->unmap_ip = map__dso_unmap_ip;
 	map->erange_warned = false;
 	refcount_set(&map->refcnt, 1);
 }
@@ -120,10 +120,11 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		     u32 prot, u32 flags, struct build_id *bid,
 		     char *filename, struct thread *thread)
 {
-	struct map *map = malloc(sizeof(*map));
+	struct map *map;
 	struct nsinfo *nsi = NULL;
 	struct nsinfo *nnsi;
 
+	map = malloc(sizeof(*map));
 	if (map != NULL) {
 		char newfilename[PATH_MAX];
 		struct dso *dso;
@@ -170,7 +171,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		map__init(map, start, start + len, pgoff, dso);
 
 		if (anon || no_dso) {
-			map->map_ip = map->unmap_ip = identity__map_ip;
+			map->map_ip = map->unmap_ip = map__identity_ip;
 
 			/*
 			 * Set memory without DSO as loaded. All map__find_*
@@ -204,8 +205,9 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
  */
 struct map *map__new2(u64 start, struct dso *dso)
 {
-	struct map *map = calloc(1, (sizeof(*map) +
-				     (dso->kernel ? sizeof(struct kmap) : 0)));
+	struct map *map;
+
+	map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
 	if (map != NULL) {
 		/*
 		 * ->end will be filled after we load all the symbols
@@ -218,7 +220,7 @@ struct map *map__new2(u64 start, struct dso *dso)
 
 bool __map__is_kernel(const struct map *map)
 {
-	if (!map->dso->kernel)
+	if (!map__dso(map)->kernel)
 		return false;
 	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
 }
@@ -234,7 +236,7 @@ bool __map__is_bpf_prog(const struct map *map)
 {
 	const char *name;
 
-	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
+	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
 		return true;
 
 	/*
@@ -242,7 +244,7 @@ bool __map__is_bpf_prog(const struct map *map)
 	 * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
 	 * guess the type based on name.
 	 */
-	name = map->dso->short_name;
+	name = map__dso(map)->short_name;
 	return name && (strstr(name, "bpf_prog_") == name);
 }
 
@@ -250,7 +252,7 @@ bool __map__is_bpf_image(const struct map *map)
 {
 	const char *name;
 
-	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
+	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
 		return true;
 
 	/*
@@ -258,18 +260,19 @@ bool __map__is_bpf_image(const struct map *map)
 	 * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
 	 * guess the type based on name.
 	 */
-	name = map->dso->short_name;
+	name = map__dso(map)->short_name;
 	return name && is_bpf_image(name);
 }
 
 bool __map__is_ool(const struct map *map)
 {
-	return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
+	return map__dso(map) &&
+	       map__dso(map)->binary_type == DSO_BINARY_TYPE__OOL;
 }
 
 bool map__has_symbols(const struct map *map)
 {
-	return dso__has_symbols(map->dso);
+	return dso__has_symbols(map__dso(map));
 }
 
 static void map__exit(struct map *map)
@@ -292,7 +295,7 @@ void map__put(struct map *map)
 
 void map__fixup_start(struct map *map)
 {
-	struct rb_root_cached *symbols = &map->dso->symbols;
+	struct rb_root_cached *symbols = &map__dso(map)->symbols;
 	struct rb_node *nd = rb_first_cached(symbols);
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
@@ -302,7 +305,7 @@ void map__fixup_start(struct map *map)
 
 void map__fixup_end(struct map *map)
 {
-	struct rb_root_cached *symbols = &map->dso->symbols;
+	struct rb_root_cached *symbols = &map__dso(map)->symbols;
 	struct rb_node *nd = rb_last(&symbols->rb_root);
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
@@ -314,18 +317,18 @@ void map__fixup_end(struct map *map)
 
 int map__load(struct map *map)
 {
-	const char *name = map->dso->long_name;
+	const char *name = map__dso(map)->long_name;
 	int nr;
 
-	if (dso__loaded(map->dso))
+	if (dso__loaded(map__dso(map)))
 		return 0;
 
-	nr = dso__load(map->dso, map);
+	nr = dso__load(map__dso(map), map);
 	if (nr < 0) {
-		if (map->dso->has_build_id) {
+		if (map__dso(map)->has_build_id) {
 			char sbuild_id[SBUILD_ID_SIZE];
 
-			build_id__sprintf(&map->dso->bid, sbuild_id);
+			build_id__sprintf(&map__dso(map)->bid, sbuild_id);
 			pr_debug("%s with build id %s not found", name, sbuild_id);
 		} else
 			pr_debug("Failed to open %s", name);
@@ -357,7 +360,7 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
 	if (map__load(map) < 0)
 		return NULL;
 
-	return dso__find_symbol(map->dso, addr);
+	return dso__find_symbol(map__dso(map), addr);
 }
 
 struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
@@ -365,24 +368,24 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
 	if (map__load(map) < 0)
 		return NULL;
 
-	if (!dso__sorted_by_name(map->dso))
-		dso__sort_by_name(map->dso);
+	if (!dso__sorted_by_name(map__dso(map)))
+		dso__sort_by_name(map__dso(map));
 
-	return dso__find_symbol_by_name(map->dso, name);
+	return dso__find_symbol_by_name(map__dso(map), name);
 }
 
 struct map *map__clone(struct map *from)
 {
-	size_t size = sizeof(struct map);
 	struct map *map;
+	size_t size = sizeof(struct map);
 
-	if (from->dso && from->dso->kernel)
+	if (map__dso(from) && map__dso(from)->kernel)
 		size += sizeof(struct kmap);
 
 	map = memdup(from, size);
 	if (map != NULL) {
 		refcount_set(&map->refcnt, 1);
-		dso__get(map->dso);
+		map->dso = dso__get(map->dso);
 	}
 
 	return map;
@@ -391,7 +394,8 @@ struct map *map__clone(struct map *from)
 size_t map__fprintf(struct map *map, FILE *fp)
 {
 	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
-		       map->start, map->end, map->pgoff, map->dso->name);
+		       map__start(map), map__end(map),
+		       map__pgoff(map), map__dso(map)->name);
 }
 
 size_t map__fprintf_dsoname(struct map *map, FILE *fp)
@@ -399,11 +403,11 @@ size_t map__fprintf_dsoname(struct map *map, FILE *fp)
 	char buf[symbol_conf.pad_output_len_dso + 1];
 	const char *dsoname = "[unknown]";
 
-	if (map && map->dso) {
-		if (symbol_conf.show_kernel_path && map->dso->long_name)
-			dsoname = map->dso->long_name;
+	if (map && map__dso(map)) {
+		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
+			dsoname = map__dso(map)->long_name;
 		else
-			dsoname = map->dso->name;
+			dsoname = map__dso(map)->name;
 	}
 
 	if (symbol_conf.pad_output_len_dso) {
@@ -418,7 +422,8 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
 {
 	if (map == NULL)
 		return SRCLINE_UNKNOWN;
-	return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
+	return get_srcline(map__dso(map), map__rip_2objdump(map, addr),
+			   sym, true, true, addr);
 }
 
 int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
@@ -426,7 +431,7 @@ int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
 {
 	int ret = 0;
 
-	if (map && map->dso) {
+	if (map && map__dso(map)) {
 		char *srcline = map__srcline(map, addr, NULL);
 		if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
 			ret = fprintf(fp, "%s%s", prefix, srcline);
@@ -472,20 +477,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 		}
 	}
 
-	if (!map->dso->adjust_symbols)
+	if (!map__dso(map)->adjust_symbols)
 		return rip;
 
-	if (map->dso->rel)
-		return rip - map->pgoff;
+	if (map__dso(map)->rel)
+		return rip - map__pgoff(map);
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
 	 * but all kernel modules are ET_REL, so won't get here.
 	 */
-	if (map->dso->kernel == DSO_SPACE__USER)
-		return rip + map->dso->text_offset;
+	if (map__dso(map)->kernel == DSO_SPACE__USER)
+		return rip + map__dso(map)->text_offset;
 
-	return map->unmap_ip(map, rip) - map->reloc;
+	return map__unmap_ip(map, rip) - map__reloc(map);
 }
 
 /**
@@ -502,34 +507,34 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
  */
 u64 map__objdump_2mem(struct map *map, u64 ip)
 {
-	if (!map->dso->adjust_symbols)
-		return map->unmap_ip(map, ip);
+	if (!map__dso(map)->adjust_symbols)
+		return map__unmap_ip(map, ip);
 
-	if (map->dso->rel)
-		return map->unmap_ip(map, ip + map->pgoff);
+	if (map__dso(map)->rel)
+		return map__unmap_ip(map, ip + map__pgoff(map));
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
 	 * but all kernel modules are ET_REL, so won't get here.
 	 */
-	if (map->dso->kernel == DSO_SPACE__USER)
-		return map->unmap_ip(map, ip - map->dso->text_offset);
+	if (map__dso(map)->kernel == DSO_SPACE__USER)
+		return map__unmap_ip(map, ip - map__dso(map)->text_offset);
 
-	return ip + map->reloc;
+	return ip + map__reloc(map);
 }
 
 bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 {
-	u64 ip = map->unmap_ip(map, sym->start);
+	u64 ip = map__unmap_ip(map, sym->start);
 
-	return ip >= map->start && ip < map->end;
+	return ip >= map__start(map) && ip < map__end(map);
 }
 
 struct kmap *__map__kmap(struct map *map)
 {
-	if (!map->dso || !map->dso->kernel)
+	if (!map__dso(map) || !map__dso(map)->kernel)
 		return NULL;
-	return (struct kmap *)(map + 1);
+	return (struct kmap *)(&map[1]);
 }
 
 struct kmap *map__kmap(struct map *map)
@@ -552,17 +557,17 @@ struct maps *map__kmaps(struct map *map)
 	return kmap->kmaps;
 }
 
-u64 map__map_ip(const struct map *map, u64 ip)
+u64 map__dso_map_ip(const struct map *map, u64 ip)
 {
-	return ip - map->start + map->pgoff;
+	return ip - map__start(map) + map__pgoff(map);
 }
 
-u64 map__unmap_ip(const struct map *map, u64 ip)
+u64 map__dso_unmap_ip(const struct map *map, u64 ip)
 {
-	return ip + map->start - map->pgoff;
+	return ip + map__start(map) - map__pgoff(map);
 }
 
-u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
+u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip)
 {
 	return ip;
 }
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index d1a6f85fd31d..99ef0464a357 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -41,15 +41,65 @@ struct kmap *map__kmap(struct map *map);
 struct maps *map__kmaps(struct map *map);
 
 /* ip -> dso rip */
-u64 map__map_ip(const struct map *map, u64 ip);
+u64 map__dso_map_ip(const struct map *map, u64 ip);
 /* dso rip -> ip */
-u64 map__unmap_ip(const struct map *map, u64 ip);
+u64 map__dso_unmap_ip(const struct map *map, u64 ip);
 /* Returns ip */
-u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
+u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
+
+static inline struct dso *map__dso(const struct map *map)
+{
+	return map->dso;
+}
+
+static inline u64 map__map_ip(const struct map *map, u64 ip)
+{
+	return map->map_ip(map, ip);
+}
+
+static inline u64 map__unmap_ip(const struct map *map, u64 ip)
+{
+	return map->unmap_ip(map, ip);
+}
+
+static inline u64 map__start(const struct map *map)
+{
+	return map->start;
+}
+
+static inline u64 map__end(const struct map *map)
+{
+	return map->end;
+}
+
+static inline u64 map__pgoff(const struct map *map)
+{
+	return map->pgoff;
+}
+
+static inline u64 map__reloc(const struct map *map)
+{
+	return map->reloc;
+}
+
+static inline u32 map__flags(const struct map *map)
+{
+	return map->flags;
+}
+
+static inline u32 map__prot(const struct map *map)
+{
+	return map->prot;
+}
+
+static inline bool map__priv(const struct map *map)
+{
+	return map->priv;
+}
 
 static inline size_t map__size(const struct map *map)
 {
-	return map->end - map->start;
+	return map__end(map) - map__start(map);
 }
 
 /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 9fc3e7186b8e..6efbcb79131c 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -30,24 +30,24 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	maps->nr_maps_allocated = 0;
 }
 
-static int __maps__insert(struct maps *maps, struct map *map)
+static struct map *__maps__insert(struct maps *maps, struct map *map)
 {
 	struct rb_node **p = &maps__entries(maps)->rb_node;
 	struct rb_node *parent = NULL;
-	const u64 ip = map->start;
+	const u64 ip = map__start(map);
 	struct map_rb_node *m, *new_rb_node;
 
 	new_rb_node = malloc(sizeof(*new_rb_node));
 	if (!new_rb_node)
-		return -ENOMEM;
+		return NULL;
 
 	RB_CLEAR_NODE(&new_rb_node->rb_node);
-	new_rb_node->map = map;
+	new_rb_node->map = map__get(map);
 
 	while (*p != NULL) {
 		parent = *p;
 		m = rb_entry(parent, struct map_rb_node, rb_node);
-		if (ip < m->map->start)
+		if (ip < map__start(m->map))
 			p = &(*p)->rb_left;
 		else
 			p = &(*p)->rb_right;
@@ -55,22 +55,23 @@ static int __maps__insert(struct maps *maps, struct map *map)
 
 	rb_link_node(&new_rb_node->rb_node, parent, p);
 	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
-	map__get(map);
-	return 0;
+	return new_rb_node->map;
 }
 
 int maps__insert(struct maps *maps, struct map *map)
 {
-	int err;
+	int err = 0;
 
 	down_write(maps__lock(maps));
-	err = __maps__insert(maps, map);
-	if (err)
+	map = __maps__insert(maps, map);
+	if (!map) {
+		err = -ENOMEM;
 		goto out;
+	}
 
 	++maps->nr_maps;
 
-	if (map->dso && map->dso->kernel) {
+	if (map__dso(map) && map__dso(map)->kernel) {
 		struct kmap *kmap = map__kmap(map);
 
 		if (kmap)
@@ -193,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
 	if (map != NULL && map__load(map) >= 0) {
 		if (mapp != NULL)
 			*mapp = map;
-		return map__find_symbol(map, map->map_ip(map, addr));
+		return map__find_symbol(map, map__map_ip(map, addr));
 	}
 
 	return NULL;
@@ -228,7 +229,8 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 
 int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 {
-	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
+	if (ams->addr < map__start(ams->ms.map) ||
+	    ams->addr >= map__end(ams->ms.map)) {
 		if (maps == NULL)
 			return -1;
 		ams->ms.map = maps__find(maps, ams->addr);
@@ -236,7 +238,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 			return -1;
 	}
 
-	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
+	ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
 	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
 
 	return ams->ms.sym ? 0 : -1;
@@ -253,7 +255,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 		printed += fprintf(fp, "Map:");
 		printed += map__fprintf(pos->map, fp);
 		if (verbose > 2) {
-			printed += dso__fprintf(pos->map->dso, fp);
+			printed += dso__fprintf(map__dso(pos->map), fp);
 			printed += fprintf(fp, "--\n");
 		}
 	}
@@ -282,9 +284,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	while (next) {
 		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 
-		if (pos->map->end > map->start) {
+		if (map__end(pos->map) > map__start(map)) {
 			first = next;
-			if (pos->map->start <= map->start)
+			if (map__start(pos->map) <= map__start(map))
 				break;
 			next = next->rb_left;
 		} else
@@ -300,14 +302,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Stop if current map starts after map->end.
 		 * Maps are ordered by start: next will not overlap for sure.
 		 */
-		if (pos->map->start >= map->end)
+		if (map__start(pos->map) >= map__end(map))
 			break;
 
 		if (verbose >= 2) {
 
 			if (use_browser) {
 				pr_debug("overlapping maps in %s (disable tui for more info)\n",
-					   map->dso->name);
+					   map__dso(map)->name);
 			} else {
 				fputs("overlapping maps:\n", fp);
 				map__fprintf(map, fp);
@@ -320,7 +322,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Now check if we need to create new maps for areas not
 		 * overlapped by the new map:
 		 */
-		if (map->start > pos->map->start) {
+		if (map__start(map) > map__start(pos->map)) {
 			struct map *before = map__clone(pos->map);
 
 			if (before == NULL) {
@@ -328,17 +330,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			before->end = map->start;
-			err = __maps__insert(maps, before);
-			if (err)
+			before->end = map__start(map);
+			if (!__maps__insert(maps, before)) {
+				map__put(before);
+				err = -ENOMEM;
 				goto put_map;
+			}
 
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(before, fp);
 			map__put(before);
 		}
 
-		if (map->end < pos->map->end) {
+		if (map__end(map) < map__end(pos->map)) {
 			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
@@ -346,14 +350,15 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			after->start = map->end;
-			after->pgoff += map->end - pos->map->start;
-			assert(pos->map->map_ip(pos->map, map->end) ==
-				after->map_ip(after, map->end));
-			err = __maps__insert(maps, after);
-			if (err)
+			after->start = map__end(map);
+			after->pgoff += map__end(map) - map__start(pos->map);
+			assert(map__map_ip(pos->map, map__end(map)) ==
+				map__map_ip(after, map__end(map)));
+			if (!__maps__insert(maps, after)) {
+				map__put(after);
+				err = -ENOMEM;
 				goto put_map;
-
+			}
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(after, fp);
 			map__put(after);
@@ -377,7 +382,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 int maps__clone(struct thread *thread, struct maps *parent)
 {
 	struct maps *maps = thread->maps;
-	int err;
+	int err = 0;
 	struct map_rb_node *rb_node;
 
 	down_read(maps__lock(parent));
@@ -391,17 +396,13 @@ int maps__clone(struct thread *thread, struct maps *parent)
 		}
 
 		err = unwind__prepare_access(maps, new, NULL);
-		if (err)
-			goto out_unlock;
+		if (!err)
+			err = maps__insert(maps, new);
 
-		err = maps__insert(maps, new);
+		map__put(new);
 		if (err)
 			goto out_unlock;
-
-		map__put(new);
 	}
-
-	err = 0;
 out_unlock:
 	up_read(maps__lock(parent));
 	return err;
@@ -428,9 +429,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	p = maps__entries(maps)->rb_node;
 	while (p != NULL) {
 		m = rb_entry(p, struct map_rb_node, rb_node);
-		if (ip < m->map->start)
+		if (ip < map__start(m->map))
 			p = p->rb_left;
-		else if (ip >= m->map->end)
+		else if (ip >= map__end(m->map))
 			p = p->rb_right;
 		else
 			goto out;
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index f9fbf611f2bf..1a93dca50a4c 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -134,15 +134,15 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 	/* ref_reloc_sym is just a label. Need a special fix*/
 	reloc_sym = kernel_get_ref_reloc_sym(&map);
 	if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
-		*addr = (!map->reloc || reloc) ? reloc_sym->addr :
+		*addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
 			reloc_sym->unrelocated_addr;
 	else {
 		sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
 		if (!sym)
 			return -ENOENT;
-		*addr = map->unmap_ip(map, sym->start) -
-			((reloc) ? 0 : map->reloc) -
-			((reladdr) ? map->start : 0);
+		*addr = map__unmap_ip(map, sym->start) -
+			((reloc) ? 0 : map__reloc(map)) -
+			((reladdr) ? map__start(map) : 0);
 	}
 	return 0;
 }
@@ -164,8 +164,8 @@ static struct map *kernel_get_module_map(const char *module)
 
 	maps__for_each_entry(maps, pos) {
 		/* short_name is "[module]" */
-		const char *short_name = pos->map->dso->short_name;
-		u16 short_name_len =  pos->map->dso->short_name_len;
+		const char *short_name = map__dso(pos->map)->short_name;
+		u16 short_name_len =  map__dso(pos->map)->short_name_len;
 
 		if (strncmp(short_name + 1, module,
 			    short_name_len - 2) == 0 &&
@@ -183,11 +183,11 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
 		struct map *map;
 
 		map = dso__new_map(target);
-		if (map && map->dso) {
-			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
-			nsinfo__put(map->dso->nsinfo);
-			map->dso->nsinfo = nsinfo__get(nsi);
-			pthread_mutex_unlock(&map->dso->lock);
+		if (map && map__dso(map)) {
+			BUG_ON(pthread_mutex_lock(&map__dso(map)->lock) != 0);
+			nsinfo__put(map__dso(map)->nsinfo);
+			map__dso(map)->nsinfo = nsinfo__get(nsi);
+			pthread_mutex_unlock(&map__dso(map)->lock);
 		}
 		return map;
 	} else {
@@ -253,7 +253,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
 
 	map = kernel_get_module_map(NULL);
 	if (map) {
-		ret = address <= map->start || map->end < address;
+		ret = address <= map__start(map) || map__end(map) < address;
 		if (ret)
 			pr_warning("%s is out of .text, skip it.\n", symbol);
 		map__put(map);
@@ -340,7 +340,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 		snprintf(module_name, sizeof(module_name), "[%s]", module);
 		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
 		if (map) {
-			dso = map->dso;
+			dso = map__dso(map);
 			goto found;
 		}
 		pr_debug("Failed to find module %s.\n", module);
@@ -348,7 +348,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 	}
 
 	map = machine__kernel_map(host_machine);
-	dso = map->dso;
+	dso = map__dso(map);
 	if (!dso->has_build_id)
 		dso__read_running_kernel_build_id(dso, host_machine);
 
@@ -396,7 +396,8 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
 					   "Consider identifying the final function used at run time and set the probe directly on that.\n",
 					   pp->function);
 		} else
-			address = map->unmap_ip(map, sym->start) - map->reloc;
+			address = map__unmap_ip(map, sym->start) -
+				  map__reloc(map);
 		break;
 	}
 	if (!address) {
@@ -862,8 +863,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
 			free(tevs[i].point.symbol);
 		tevs[i].point.symbol = tmp;
 		tevs[i].point.offset = tevs[i].point.address -
-			(map->reloc ? reloc_sym->unrelocated_addr :
-				      reloc_sym->addr);
+			(map__reloc(map) ? reloc_sym->unrelocated_addr : reloc_sym->addr);
 	}
 	return skipped;
 }
@@ -2243,7 +2243,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
 		goto out;
 
 	pp->retprobe = tp->retprobe;
-	pp->offset = addr - map->unmap_ip(map, sym->start);
+	pp->offset = addr - map__unmap_ip(map, sym->start);
 	pp->function = strdup(sym->name);
 	ret = pp->function ? 0 : -ENOMEM;
 
@@ -3117,7 +3117,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
 			goto err_out;
 		}
 		/* Add one probe point */
-		tp->address = map->unmap_ip(map, sym->start) + pp->offset;
+		tp->address = map__unmap_ip(map, sym->start) + pp->offset;
 
 		/* Check the kprobe (not in module) is within .text  */
 		if (!pev->uprobes && !pev->target &&
@@ -3759,13 +3759,13 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
 			       (target) ? : "kernel");
 		goto end;
 	}
-	if (!dso__sorted_by_name(map->dso))
-		dso__sort_by_name(map->dso);
+	if (!dso__sorted_by_name(map__dso(map)))
+		dso__sort_by_name(map__dso(map));
 
 	/* Show all (filtered) symbols */
 	setup_pager();
 
-	for (nd = rb_first_cached(&map->dso->symbol_names); nd;
+	for (nd = rb_first_cached(&map__dso(map)->symbol_names); nd;
 	     nd = rb_next(nd)) {
 		struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
 
diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
index a5d945415bbc..1282fb9b45e1 100644
--- a/tools/perf/util/scripting-engines/trace-event-perl.c
+++ b/tools/perf/util/scripting-engines/trace-event-perl.c
@@ -315,11 +315,12 @@ static SV *perl_process_callchain(struct perf_sample *sample,
 		if (node->ms.map) {
 			struct map *map = node->ms.map;
 			const char *dsoname = "[unknown]";
-			if (map && map->dso) {
-				if (symbol_conf.show_kernel_path && map->dso->long_name)
-					dsoname = map->dso->long_name;
+			if (map && map__dso(map)) {
+				if (symbol_conf.show_kernel_path &&
+				    map__dso(map)->long_name)
+					dsoname = map__dso(map)->long_name;
 				else
-					dsoname = map->dso->name;
+					dsoname = map__dso(map)->name;
 			}
 			if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
 				hv_undef(elem);
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index 0290dc3a6258..559b2ac5cac3 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -382,11 +382,11 @@ static const char *get_dsoname(struct map *map)
 {
 	const char *dsoname = "[unknown]";
 
-	if (map && map->dso) {
-		if (symbol_conf.show_kernel_path && map->dso->long_name)
-			dsoname = map->dso->long_name;
+	if (map && map__dso(map)) {
+		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
+			dsoname = map__dso(map)->long_name;
 		else
-			dsoname = map->dso->name;
+			dsoname = map__dso(map)->name;
 	}
 
 	return dsoname;
@@ -527,7 +527,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
 	if (al->addr < sym->end)
 		offset = al->addr - sym->start;
 	else
-		offset = al->addr - al->map->start - sym->start;
+		offset = al->addr - map__start(al->map) - sym->start;
 
 	return offset;
 }
@@ -741,7 +741,7 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
 {
 	if (al->map) {
 		pydict_set_item_string_decref(dict, dso_field,
-			_PyUnicode_FromString(al->map->dso->name));
+			_PyUnicode_FromString(map__dso(al->map)->name));
 	}
 	if (al->sym) {
 		pydict_set_item_string_decref(dict, sym_field,
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 25686d67ee6f..6d19bbcd30df 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -173,8 +173,8 @@ struct sort_entry sort_comm = {
 
 static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
 {
-	struct dso *dso_l = map_l ? map_l->dso : NULL;
-	struct dso *dso_r = map_r ? map_r->dso : NULL;
+	struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
+	struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
 	const char *dso_name_l, *dso_name_r;
 
 	if (!dso_l || !dso_r)
@@ -200,9 +200,9 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
 static int _hist_entry__dso_snprintf(struct map *map, char *bf,
 				     size_t size, unsigned int width)
 {
-	if (map && map->dso) {
-		const char *dso_name = verbose > 0 ? map->dso->long_name :
-			map->dso->short_name;
+	if (map && map__dso(map)) {
+		const char *dso_name = verbose > 0 ? map__dso(map)->long_name :
+			map__dso(map)->short_name;
 		return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
 	}
 
@@ -222,7 +222,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
 	if (type != HIST_FILTER__DSO)
 		return -1;
 
-	return dso && (!he->ms.map || he->ms.map->dso != dso);
+	return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
 }
 
 struct sort_entry sort_dso = {
@@ -302,12 +302,12 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
 	size_t ret = 0;
 
 	if (verbose > 0) {
-		char o = map ? dso__symtab_origin(map->dso) : '!';
+		char o = map ? dso__symtab_origin(map__dso(map)) : '!';
 		u64 rip = ip;
 
-		if (map && map->dso && map->dso->kernel
-		    && map->dso->adjust_symbols)
-			rip = map->unmap_ip(map, ip);
+		if (map && map__dso(map) && map__dso(map)->kernel
+		    && map__dso(map)->adjust_symbols)
+			rip = map__unmap_ip(map, ip);
 
 		ret += repsep_snprintf(bf, size, "%-#*llx %c ",
 				       BITS_PER_LONG / 4 + 2, rip, o);
@@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
 		if (sym->type == STT_OBJECT) {
 			ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
 			ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
-					ip - map->unmap_ip(map, sym->start));
+					ip - map__unmap_ip(map, sym->start));
 		} else {
 			ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
 					       width - ret,
@@ -517,7 +517,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
 	if (!map)
 		return no_srcfile;
 
-	sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
+	sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
 			 e->ms.sym, false, true, true, e->ip);
 	if (!strcmp(sf, SRCLINE_UNKNOWN))
 		return no_srcfile;
@@ -838,7 +838,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
 		return -1;
 
 	return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
-		       he->branch_info->from.ms.map->dso != dso);
+		map__dso(he->branch_info->from.ms.map) != dso);
 }
 
 static int64_t
@@ -870,7 +870,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
 		return -1;
 
 	return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
-		       he->branch_info->to.ms.map->dso != dso);
+		map__dso(he->branch_info->to.ms.map) != dso);
 }
 
 static int64_t
@@ -1259,7 +1259,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 	if (!l_map) return -1;
 	if (!r_map) return 1;
 
-	rc = dso__cmp_id(l_map->dso, r_map->dso);
+	rc = dso__cmp_id(map__dso(l_map), map__dso(r_map));
 	if (rc)
 		return rc;
 	/*
@@ -1271,9 +1271,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 	 */
 
 	if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
-	    (!(l_map->flags & MAP_SHARED)) &&
-	    !l_map->dso->id.maj && !l_map->dso->id.min &&
-	    !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
+	    (!(map__flags(l_map) & MAP_SHARED)) &&
+	    !map__dso(l_map)->id.maj && !map__dso(l_map)->id.min &&
+	    !map__dso(l_map)->id.ino && !map__dso(l_map)->id.ino_generation) {
 		/* userspace anonymous */
 
 		if (left->thread->pid_ > right->thread->pid_) return -1;
@@ -1307,10 +1307,10 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
 
 		/* print [s] for shared data mmaps */
 		if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
-		     map && !(map->prot & PROT_EXEC) &&
-		    (map->flags & MAP_SHARED) &&
-		    (map->dso->id.maj || map->dso->id.min ||
-		     map->dso->id.ino || map->dso->id.ino_generation))
+		    map && !(map__prot(map) & PROT_EXEC) &&
+		    (map__flags(map) & MAP_SHARED) &&
+		    (map__dso(map)->id.maj || map__dso(map)->id.min ||
+		     map__dso(map)->id.ino || map__dso(map)->id.ino_generation))
 			level = 's';
 		else if (!map)
 			level = 'X';
@@ -1806,7 +1806,7 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
 static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
 					  size_t bf_size, unsigned int width)
 {
-	if (map && map->dso)
+	if (map && map__dso(map))
 		return repsep_snprintf(bf, bf_size, "%*d", width,
 				       map__size(map));
 
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 3ca9a0968345..056405d3d655 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -970,7 +970,7 @@ void __weak arch__sym_update(struct symbol *s __maybe_unused,
 static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 				      GElf_Sym *sym, GElf_Shdr *shdr,
 				      struct maps *kmaps, struct kmap *kmap,
-				      struct dso **curr_dsop, struct map **curr_mapp,
+				      struct dso **curr_dsop,
 				      const char *section_name,
 				      bool adjust_kernel_syms, bool kmodule, bool *remap_kernel)
 {
@@ -994,18 +994,18 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		if (*remap_kernel && dso->kernel && !kmodule) {
 			*remap_kernel = false;
 			map->start = shdr->sh_addr + ref_reloc(kmap);
-			map->end = map->start + shdr->sh_size;
+			map->end = map__start(map) + shdr->sh_size;
 			map->pgoff = shdr->sh_offset;
-			map->map_ip = map__map_ip;
-			map->unmap_ip = map__unmap_ip;
+			map->map_ip = map__dso_map_ip;
+			map->unmap_ip = map__dso_unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
 				int err;
+				struct map *updated = map__get(map);
 
-				map__get(map);
 				maps__remove(kmaps, map);
-				err = maps__insert(kmaps, map);
-				map__put(map);
+				err = maps__insert(kmaps, updated);
+				map__put(updated);
 				if (err)
 					return err;
 			}
@@ -1021,7 +1021,6 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map->pgoff = shdr->sh_offset;
 		}
 
-		*curr_mapp = map;
 		*curr_dsop = dso;
 		return 0;
 	}
@@ -1036,7 +1035,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		u64 start = sym->st_value;
 
 		if (kmodule)
-			start += map->start + shdr->sh_offset;
+			start += map__start(map) + shdr->sh_offset;
 
 		curr_dso = dso__new(dso_name);
 		if (curr_dso == NULL)
@@ -1054,10 +1053,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 
 		if (adjust_kernel_syms) {
 			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
-			curr_map->end	 = curr_map->start + shdr->sh_size;
-			curr_map->pgoff	 = shdr->sh_offset;
+			curr_map->end	= map__start(curr_map) + shdr->sh_size;
+			curr_map->pgoff	= shdr->sh_offset;
 		} else {
-			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
+			curr_map->map_ip = map__identity_ip;
+			curr_map->unmap_ip = map__identity_ip;
 		}
 		curr_dso->symtab_type = dso->symtab_type;
 		if (maps__insert(kmaps, curr_map))
@@ -1068,13 +1068,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 * *curr_map->dso.
 		 */
 		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
-		/* kmaps already got it */
-		map__put(curr_map);
 		dso__set_loaded(curr_dso);
-		*curr_mapp = curr_map;
 		*curr_dsop = curr_dso;
+		map__put(curr_map);
 	} else
-		*curr_dsop = curr_map->dso;
+		*curr_dsop = map__dso(curr_map);
 
 	return 0;
 }
@@ -1085,7 +1083,6 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 {
 	struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
 	struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
-	struct map *curr_map = map;
 	struct dso *curr_dso = dso;
 	Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
 	uint32_t nr_syms;
@@ -1175,7 +1172,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 	 * attempted to prelink vdso to its virtual address.
 	 */
 	if (dso__is_vdso(dso))
-		map->reloc = map->start - dso->text_offset;
+		map->reloc = map__start(map) - dso->text_offset;
 
 	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
 	/*
@@ -1262,8 +1259,10 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 			--sym.st_value;
 
 		if (dso->kernel) {
-			if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
-						       section_name, adjust_kernel_syms, kmodule, &remap_kernel))
+			if (dso__process_kernel_symbol(dso, map, &sym, &shdr,
+						       kmaps, kmap, &curr_dso,
+						       section_name, adjust_kernel_syms,
+						       kmodule, &remap_kernel))
 				goto out_elf_end;
 		} else if ((used_opd && runtime_ss->adjust_symbols) ||
 			   (!used_opd && syms_ss->adjust_symbols)) {
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 9b51e669a722..6289b3028b91 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -252,8 +252,8 @@ void maps__fixup_end(struct maps *maps)
 	down_write(maps__lock(maps));
 
 	maps__for_each_entry(maps, curr) {
-		if (prev != NULL && !prev->map->end)
-			prev->map->end = curr->map->start;
+		if (prev != NULL && !map__end(prev->map))
+			prev->map->end = map__start(curr->map);
 
 		prev = curr;
 	}
@@ -262,7 +262,7 @@ void maps__fixup_end(struct maps *maps)
 	 * We still haven't the actual symbols, so guess the
 	 * last map final address.
 	 */
-	if (curr && !curr->map->end)
+	if (curr && !map__end(curr->map))
 		curr->map->end = ~0ULL;
 
 	up_write(maps__lock(maps));
@@ -778,12 +778,12 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
 			continue;
 		}
 
-		pos->start -= curr_map->start - curr_map->pgoff;
-		if (pos->end > curr_map->end)
-			pos->end = curr_map->end;
+		pos->start -= map__start(curr_map) - map__pgoff(curr_map);
+		if (pos->end > map__end(curr_map))
+			pos->end = map__end(curr_map);
 		if (pos->end)
-			pos->end -= curr_map->start - curr_map->pgoff;
-		symbols__insert(&curr_map->dso->symbols, pos);
+			pos->end -= map__start(curr_map) - map__pgoff(curr_map);
+		symbols__insert(&map__dso(curr_map)->symbols, pos);
 		++count;
 	}
 
@@ -830,7 +830,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 
 			*module++ = '\0';
 
-			if (strcmp(curr_map->dso->short_name, module)) {
+			if (strcmp(map__dso(curr_map)->short_name, module)) {
 				if (curr_map != initial_map &&
 				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 				    machine__is_default_guest(machine)) {
@@ -841,7 +841,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 					 * symbols are in its kmap. Mark it as
 					 * loaded.
 					 */
-					dso__set_loaded(curr_map->dso);
+					dso__set_loaded(map__dso(curr_map));
 				}
 
 				curr_map = maps__find_by_name(kmaps, module);
@@ -854,7 +854,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 					goto discard_symbol;
 				}
 
-				if (curr_map->dso->loaded &&
+				if (map__dso(curr_map)->loaded &&
 				    !machine__is_default_guest(machine))
 					goto discard_symbol;
 			}
@@ -862,8 +862,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			 * So that we look just like we get from .ko files,
 			 * i.e. not prelinked, relative to initial_map->start.
 			 */
-			pos->start = curr_map->map_ip(curr_map, pos->start);
-			pos->end   = curr_map->map_ip(curr_map, pos->end);
+			pos->start = map__map_ip(curr_map, pos->start);
+			pos->end   = map__map_ip(curr_map, pos->end);
 		} else if (x86_64 && is_entry_trampoline(pos->name)) {
 			/*
 			 * These symbols are not needed anymore since the
@@ -910,7 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 				return -1;
 			}
 
-			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
+			curr_map->map_ip = map__identity_ip;
+			curr_map->unmap_ip = map__identity_ip;
 			if (maps__insert(kmaps, curr_map)) {
 				dso__put(ndso);
 				return -1;
@@ -924,7 +925,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 add_symbol:
 		if (curr_map != initial_map) {
 			rb_erase_cached(&pos->rb_node, root);
-			symbols__insert(&curr_map->dso->symbols, pos);
+			symbols__insert(&map__dso(curr_map)->symbols, pos);
 			++moved;
 		} else
 			++count;
@@ -938,7 +939,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 	if (curr_map != initial_map &&
 	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 	    machine__is_default_guest(maps__machine(kmaps))) {
-		dso__set_loaded(curr_map->dso);
+		dso__set_loaded(map__dso(curr_map));
 	}
 
 	return count + moved;
@@ -1118,8 +1119,8 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
 		}
 
 		/* Module must be in memory at the same address */
-		mi = find_module(old_map->dso->short_name, &modules);
-		if (!mi || mi->start != old_map->start) {
+		mi = find_module(map__dso(old_map)->short_name, &modules);
+		if (!mi || mi->start != map__start(old_map)) {
 			err = -EINVAL;
 			goto out;
 		}
@@ -1214,7 +1215,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 		return -ENOMEM;
 	}
 
-	list_node->map->end = list_node->map->start + len;
+	list_node->map->end = map__start(list_node->map) + len;
 	list_node->map->pgoff = pgoff;
 
 	list_add(&list_node->node, &md->maps);
@@ -1236,21 +1237,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 		struct map *old_map = rb_node->map;
 
 		/* no overload with this one */
-		if (new_map->end < old_map->start ||
-		    new_map->start >= old_map->end)
+		if (map__end(new_map) < map__start(old_map) ||
+		    map__start(new_map) >= map__end(old_map))
 			continue;
 
-		if (new_map->start < old_map->start) {
+		if (map__start(new_map) < map__start(old_map)) {
 			/*
 			 * |new......
 			 *       |old....
 			 */
-			if (new_map->end < old_map->end) {
+			if (map__end(new_map) < map__end(old_map)) {
 				/*
 				 * |new......|     -> |new..|
 				 *       |old....| ->       |old....|
 				 */
-				new_map->end = old_map->start;
+				new_map->end = map__start(old_map);
 			} else {
 				/*
 				 * |new.............| -> |new..|       |new..|
@@ -1271,17 +1272,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 					goto out;
 				}
 
-				m->map->end = old_map->start;
+				m->map->end = map__start(old_map);
 				list_add_tail(&m->node, &merged);
-				new_map->pgoff += old_map->end - new_map->start;
-				new_map->start = old_map->end;
+				new_map->pgoff +=
+					map__end(old_map) - map__start(new_map);
+				new_map->start = map__end(old_map);
 			}
 		} else {
 			/*
 			 *      |new......
 			 * |old....
 			 */
-			if (new_map->end < old_map->end) {
+			if (map__end(new_map) < map__end(old_map)) {
 				/*
 				 *      |new..|   -> x
 				 * |old.........| -> |old.........|
@@ -1294,8 +1296,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 *      |new......| ->         |new...|
 				 * |old....|        -> |old....|
 				 */
-				new_map->pgoff += old_map->end - new_map->start;
-				new_map->start = old_map->end;
+				new_map->pgoff +=
+					map__end(old_map) - map__start(new_map);
+				new_map->start = map__end(old_map);
 			}
 		}
 	}
@@ -1361,7 +1364,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	}
 
 	/* Read new maps into temporary lists */
-	err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
+	err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
 			      &is_64_bit);
 	if (err)
 		goto out_err;
@@ -1391,7 +1394,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		struct map_list_node *new_node;
 
 		list_for_each_entry(new_node, &md.maps, node) {
-			if (stext >= new_node->map->start && stext < new_node->map->end) {
+			if (stext >= map__start(new_node->map) &&
+			    stext < map__end(new_node->map)) {
 				replacement_map = new_node->map;
 				break;
 			}
@@ -1408,16 +1412,18 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		new_node = list_entry(md.maps.next, struct map_list_node, node);
 		list_del_init(&new_node->node);
 		if (new_node->map == replacement_map) {
-			map->start	= new_node->map->start;
-			map->end	= new_node->map->end;
-			map->pgoff	= new_node->map->pgoff;
-			map->map_ip	= new_node->map->map_ip;
-			map->unmap_ip	= new_node->map->unmap_ip;
+			struct map *updated;
+
+			map->start = map__start(new_node->map);
+			map->end   = map__end(new_node->map);
+			map->pgoff = map__pgoff(new_node->map);
+			map->map_ip = new_node->map->map_ip;
+			map->unmap_ip = new_node->map->unmap_ip;
 			/* Ensure maps are correctly ordered */
-			map__get(map);
+			updated = map__get(map);
 			maps__remove(kmaps, map);
-			err = maps__insert(kmaps, map);
-			map__put(map);
+			err = maps__insert(kmaps, updated);
+			map__put(updated);
 			map__put(new_node->map);
 			if (err)
 				goto out_err;
@@ -1460,7 +1466,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 	close(fd);
 
-	if (map->prot & PROT_EXEC)
+	if (map__prot(map) & PROT_EXEC)
 		pr_debug("Using %s for kernel object code\n", kcore_filename);
 	else
 		pr_debug("Using %s for kernel data\n", kcore_filename);
@@ -1995,13 +2001,13 @@ int dso__load(struct dso *dso, struct map *map)
 static int map__strcmp(const void *a, const void *b)
 {
 	const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
-	return strcmp(ma->dso->short_name, mb->dso->short_name);
+	return strcmp(map__dso(ma)->short_name, map__dso(mb)->short_name);
 }
 
 static int map__strcmp_name(const void *name, const void *b)
 {
 	const struct map *map = *(const struct map **)b;
-	return strcmp(name, map->dso->short_name);
+	return strcmp(name, map__dso(map)->short_name);
 }
 
 void __maps__sort_by_name(struct maps *maps)
@@ -2052,7 +2058,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	down_read(maps__lock(maps));
 
 	if (maps->last_search_by_name &&
-	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
+	    strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
 		map = maps->last_search_by_name;
 		goto out_unlock;
 	}
@@ -2068,7 +2074,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	/* Fallback to traversing the rbtree... */
 	maps__for_each_entry(maps, rb_node) {
 		map = rb_node->map;
-		if (strcmp(map->dso->short_name, name) == 0) {
+		if (strcmp(map__dso(map)->short_name, name) == 0) {
 			maps->last_search_by_name = map;
 			goto out_unlock;
 		}
diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
index 2664fb65e47a..d9e5ad040b6a 100644
--- a/tools/perf/util/symbol_fprintf.c
+++ b/tools/perf/util/symbol_fprintf.c
@@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
 			if (al->addr < sym->end)
 				offset = al->addr - sym->start;
 			else
-				offset = al->addr - al->map->start - sym->start;
+				offset = al->addr - map__start(al->map) - sym->start;
 			length += fprintf(fp, "+0x%lx", offset);
 		}
 		return length;
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index ed2d55d224aa..437fd57c2084 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -668,33 +668,33 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 			continue;
 
 		if (symbol_conf.buildid_mmap2) {
-			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
 			event->mmap2.header.type = PERF_RECORD_MMAP2;
 			event->mmap2.header.size = (sizeof(event->mmap2) -
 						(sizeof(event->mmap2.filename) - size));
 			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
 			event->mmap2.header.size += machine->id_hdr_size;
-			event->mmap2.start = map->start;
-			event->mmap2.len   = map->end - map->start;
+			event->mmap2.start = map__start(map);
+			event->mmap2.len   = map__end(map) - map__start(map);
 			event->mmap2.pid   = machine->pid;
 
-			memcpy(event->mmap2.filename, map->dso->long_name,
-			       map->dso->long_name_len + 1);
+			memcpy(event->mmap2.filename, map__dso(map)->long_name,
+			       map__dso(map)->long_name_len + 1);
 
 			perf_record_mmap2__read_build_id(&event->mmap2, false);
 		} else {
-			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
 			event->mmap.header.type = PERF_RECORD_MMAP;
 			event->mmap.header.size = (sizeof(event->mmap) -
 						(sizeof(event->mmap.filename) - size));
 			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
 			event->mmap.header.size += machine->id_hdr_size;
-			event->mmap.start = map->start;
-			event->mmap.len   = map->end - map->start;
+			event->mmap.start = map__start(map);
+			event->mmap.len   = map__end(map) - map__start(map);
 			event->mmap.pid   = machine->pid;
 
-			memcpy(event->mmap.filename, map->dso->long_name,
-			       map->dso->long_name_len + 1);
+			memcpy(event->mmap.filename, map__dso(map)->long_name,
+			       map__dso(map)->long_name_len + 1);
 		}
 
 		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
@@ -1112,8 +1112,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 		event->mmap2.header.size = (sizeof(event->mmap2) -
 				(sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
 		event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
-		event->mmap2.start = map->start;
-		event->mmap2.len   = map->end - event->mmap.start;
+		event->mmap2.start = map__start(map);
+		event->mmap2.len   = map__end(map) - event->mmap.start;
 		event->mmap2.pid   = machine->pid;
 
 		perf_record_mmap2__read_build_id(&event->mmap2, true);
@@ -1125,8 +1125,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 		event->mmap.header.size = (sizeof(event->mmap) -
 				(sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
 		event->mmap.pgoff = kmap->ref_reloc_sym->addr;
-		event->mmap.start = map->start;
-		event->mmap.len   = map->end - event->mmap.start;
+		event->mmap.start = map__start(map);
+		event->mmap.len   = map__end(map) - event->mmap.start;
 		event->mmap.pid   = machine->pid;
 	}
 
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index c2256777b813..6fbcc115cc6d 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -434,23 +434,23 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
 int thread__memcpy(struct thread *thread, struct machine *machine,
 		   void *buf, u64 ip, int len, bool *is64bit)
 {
-       u8 cpumode = PERF_RECORD_MISC_USER;
-       struct addr_location al;
-       long offset;
+	u8 cpumode = PERF_RECORD_MISC_USER;
+	struct addr_location al;
+	long offset;
 
-       if (machine__kernel_ip(machine, ip))
-               cpumode = PERF_RECORD_MISC_KERNEL;
+	if (machine__kernel_ip(machine, ip))
+		cpumode = PERF_RECORD_MISC_KERNEL;
 
-       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
-	   al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
-	   map__load(al.map) < 0)
-               return -1;
+	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
+		map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR ||
+		map__load(al.map) < 0)
+		return -1;
 
-       offset = al.map->map_ip(al.map, ip);
-       if (is64bit)
-               *is64bit = al.map->dso->is_64_bit;
+	offset = map__map_ip(al.map, ip);
+	if (is64bit)
+		*is64bit = map__dso(al.map)->is_64_bit;
 
-       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
+	return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
 }
 
 void thread__free_stitch_list(struct thread *thread)
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 7e6c59811292..841ac84a93ab 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -381,20 +381,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 	int ret = -EINVAL;
 
 	map = find_map(ip, ui);
-	if (!map || !map->dso)
+	if (!map || !map__dso(map))
 		return -EINVAL;
 
-	pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
+	pr_debug("unwind: %s dso %s\n", __func__, map__dso(map)->name);
 
 	/* Check the .eh_frame section for unwinding info */
-	if (!read_unwind_spec_eh_frame(map->dso, ui->machine,
+	if (!read_unwind_spec_eh_frame(map__dso(map), ui->machine,
 				       &table_data, &segbase, &fde_count)) {
 		memset(&di, 0, sizeof(di));
 		di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
-		di.start_ip = map->start;
-		di.end_ip   = map->end;
-		di.u.rti.segbase    = map->start + segbase - map->pgoff;
-		di.u.rti.table_data = map->start + table_data - map->pgoff;
+		di.start_ip = map__start(map);
+		di.end_ip   = map__end(map);
+		di.u.rti.segbase    = map__start(map) + segbase - map__pgoff(map);
+		di.u.rti.table_data = map__start(map) + table_data - map__pgoff(map);
 		di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
 				      / sizeof(unw_word_t);
 		ret = dwarf_search_unwind_table(as, ip, &di, pi,
@@ -404,20 +404,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 #ifndef NO_LIBUNWIND_DEBUG_FRAME
 	/* Check the .debug_frame section for unwinding info */
 	if (ret < 0 &&
-	    !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
-		int fd = dso__data_get_fd(map->dso, ui->machine);
-		int is_exec = elf_is_exec(fd, map->dso->name);
-		unw_word_t base = is_exec ? 0 : map->start;
+	    !read_unwind_spec_debug_frame(map__dso(map), ui->machine, &segbase)) {
+		int fd = dso__data_get_fd(map__dso(map), ui->machine);
+		int is_exec = elf_is_exec(fd, map__dso(map)->name);
+		unw_word_t base = is_exec ? 0 : map__start(map);
 		const char *symfile;
 
 		if (fd >= 0)
-			dso__data_put_fd(map->dso);
+			dso__data_put_fd(map__dso(map));
 
-		symfile = map->dso->symsrc_filename ?: map->dso->name;
+		symfile = map__dso(map)->symsrc_filename ?: map__dso(map)->name;
 
 		memset(&di, 0, sizeof(di));
 		if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
-					   map->start, map->end))
+					   map__start(map), map__end(map)))
 			return dwarf_search_unwind_table(as, ip, &di, pi,
 							 need_unwind_info, arg);
 	}
@@ -473,10 +473,10 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
 		return -1;
 	}
 
-	if (!map->dso)
+	if (!map__dso(map))
 		return -1;
 
-	size = dso__data_read_addr(map->dso, map, ui->machine,
+	size = dso__data_read_addr(map__dso(map), map, ui->machine,
 				   addr, (u8 *) data, sizeof(*data));
 
 	return !(size == sizeof(*data));
@@ -583,7 +583,7 @@ static int entry(u64 ip, struct thread *thread,
 	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
 		 al.sym ? al.sym->name : "''",
 		 ip,
-		 al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
+		 al.map ? map__map_ip(al.map, ip) : (u64) 0);
 
 	return cb(&e, arg);
 }
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index 7b797ffadd19..cece1ee89031 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -30,7 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 
 	if (maps__addr_space(maps)) {
 		pr_debug("unwind: thread map already set, dso=%s\n",
-			 map->dso->name);
+			 map__dso(map)->name);
 		if (initialized)
 			*initialized = true;
 		return 0;
@@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 	if (!machine->env || !machine->env->arch)
 		goto out_register;
 
-	dso_type = dso__type(map->dso, machine);
+	dso_type = dso__type(map__dso(map), machine);
 	if (dso_type == DSO__TYPE_UNKNOWN)
 		return 0;
 
diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
index 835c39efb80d..ec777ee11493 100644
--- a/tools/perf/util/vdso.c
+++ b/tools/perf/util/vdso.c
@@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
 	struct map_rb_node *rb_node;
 
 	maps__for_each_entry(thread->maps, rb_node) {
-		struct dso *dso = rb_node->map->dso;
+		struct dso *dso = map__dso(rb_node->map);
 
 		if (!dso || dso->long_name[0] != '/')
 			continue;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 16/22] perf test: Add extra diagnostics to maps test
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (14 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 15/22] perf map: Use functions to access the variables in map Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 17/22] perf map: Changes to reference counting Ian Rogers
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Dump the resultant and comparison maps on failure.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/maps.c | 51 +++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index a58274598587..38c1ec0074d1 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -1,4 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <inttypes.h>
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include "tests.h"
@@ -17,22 +18,42 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 {
 	struct map_rb_node *rb_node;
 	unsigned int i = 0;
-
-	maps__for_each_entry(maps, rb_node) {
-		struct map *map = rb_node->map;
-
-		if (i > 0)
-			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
-
-		TEST_ASSERT_VAL("wrong map start",  map->start == merged[i].start);
-		TEST_ASSERT_VAL("wrong map end",    map->end == merged[i].end);
-		TEST_ASSERT_VAL("wrong map name",  !strcmp(map->dso->name, merged[i].name));
-		TEST_ASSERT_VAL("wrong map refcnt", refcount_read(&map->refcnt) == 1);
-
-		i++;
+	bool failed = false;
+
+	if (maps__nr_maps(maps) != size) {
+		pr_debug("Expected %d maps, got %d\n", size, maps__nr_maps(maps));
+		failed = true;
+	} else {
+		maps__for_each_entry(maps, rb_node) {
+			struct map *map = rb_node->map;
+
+			if (map__start(map) != merged[i].start ||
+			    map__end(map) != merged[i].end ||
+			    strcmp(map__dso(map)->name, merged[i].name) ||
+			    refcount_read(&map->refcnt) != 1) {
+				failed = true;
+			}
+			i++;
+		}
 	}
-
-	return TEST_OK;
+	if (failed) {
+		pr_debug("Expected:\n");
+		for (i = 0; i < size; i++) {
+			pr_debug("\tstart: %" PRIu64 " end: %" PRIu64 " name: '%s' refcnt: 1\n",
+				merged[i].start, merged[i].end, merged[i].name);
+		}
+		pr_debug("Got:\n");
+		maps__for_each_entry(maps, rb_node) {
+			struct map *map = rb_node->map;
+
+			pr_debug("\tstart: %" PRIu64 " end: %" PRIu64 " name: '%s' refcnt: %d\n",
+				map__start(map),
+				map__end(map),
+				map__dso(map)->name,
+				refcount_read(&map->refcnt));
+		}
+	}
+	return failed ? TEST_FAIL : TEST_OK;
 }
 
 static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 17/22] perf map: Changes to reference counting
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (15 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 16/22] perf test: Add extra diagnostics to maps test Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-12  8:45   ` Masami Hiramatsu
  2022-02-11 10:34 ` [PATCH v3 18/22] libperf: Add reference count checking macros Ian Rogers
                   ` (4 subsequent siblings)
  21 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

When a pointer to a map exists, do a get; when that pointer is
overwritten or freed, put the map. This avoids the issues that arise
when gets and puts are used inconsistently, such as use after put.
Reference count checking and address sanitizer were used to identify
the issues.
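As an illustration of the discipline (not code from this patch), a
minimal sketch of the assignment and teardown pattern, assuming the
NULL-tolerant map__get()/map__put() helpers from tools/perf/util/map.h;
"map_holder" and its functions are hypothetical names:

	/*
	 * Hypothetical holder of a map reference; the patch applies the
	 * same pattern to hist entries, callchain cursors, samples, etc.
	 */
	struct map_holder {
		struct map *map;	/* owns one reference while non-NULL */
	};

	static void map_holder__set_map(struct map_holder *h, struct map *new_map)
	{
		map__put(h->map);		/* drop the old reference, if any */
		h->map = map__get(new_map);	/* take a reference on the new value */
	}

	static void map_holder__exit(struct map_holder *h)
	{
		map__put(h->map);		/* release the final reference */
		h->map = NULL;
	}

Every pointer assignment is paired with exactly one get, and every
overwrite or free with exactly one put, so the checker can flag any
path that breaks the pairing.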

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/hists_cumulate.c     | 14 ++++-
 tools/perf/tests/hists_filter.c       | 14 ++++-
 tools/perf/tests/hists_link.c         | 18 +++++-
 tools/perf/tests/hists_output.c       | 12 +++-
 tools/perf/tests/mmap-thread-lookup.c |  3 +-
 tools/perf/util/callchain.c           |  9 +--
 tools/perf/util/event.c               |  8 ++-
 tools/perf/util/hist.c                | 10 ++--
 tools/perf/util/machine.c             | 80 ++++++++++++++++-----------
 9 files changed, 118 insertions(+), 50 deletions(-)

diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
index 17f4fcd6bdce..28f5eb41eed9 100644
--- a/tools/perf/tests/hists_cumulate.c
+++ b/tools/perf/tests/hists_cumulate.c
@@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
 		}
 
 		fake_samples[i].thread = al.thread;
+		map__put(fake_samples[i].map);
 		fake_samples[i].map = al.map;
 		fake_samples[i].sym = al.sym;
 	}
@@ -147,15 +148,23 @@ static void del_hist_entries(struct hists *hists)
 	}
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
-#define DSO(he)   (he->ms.map->dso->short_name)
+#define DSO(he)   (map__dso(he->ms.map)->short_name)
 #define SYM(he)   (he->ms.sym->name)
 #define CPU(he)   (he->cpu)
 #define PID(he)   (he->thread->tid)
 #define DEPTH(he) (he->callchain->max_depth)
-#define CDSO(cl)  (cl->ms.map->dso->short_name)
+#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
 #define CSYM(cl)  (cl->ms.sym->name)
 
 struct result {
@@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
 	/* tear down everything */
 	evlist__delete(evlist);
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
index 08cbeb9e39ae..bcd46244182a 100644
--- a/tools/perf/tests/hists_filter.c
+++ b/tools/perf/tests/hists_filter.c
@@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
 			}
 
 			fake_samples[i].thread = al.thread;
+			map__put(fake_samples[i].map);
 			fake_samples[i].map = al.map;
 			fake_samples[i].sym = al.sym;
 		}
@@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
 	return TEST_FAIL;
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
 {
 	int err = TEST_FAIL;
@@ -194,7 +203,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 		hists__filter_by_thread(hists);
 
 		/* now applying dso filter for 'kernel' */
-		hists->dso_filter = fake_samples[0].map->dso;
+		hists->dso_filter = map__dso(fake_samples[0].map);
 		hists__filter_by_dso(hists);
 
 		if (verbose > 2) {
@@ -288,7 +297,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 
 		/* now applying all filters at once. */
 		hists->thread_filter = fake_samples[1].thread;
-		hists->dso_filter = fake_samples[1].map->dso;
+		hists->dso_filter = map__dso(fake_samples[1].map);
 		hists__filter_by_thread(hists);
 		hists__filter_by_dso(hists);
 
@@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 	evlist__delete(evlist);
 	reset_output_field();
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
index c575e13a850d..060e8731feff 100644
--- a/tools/perf/tests/hists_link.c
+++ b/tools/perf/tests/hists_link.c
@@ -6,6 +6,7 @@
 #include "evsel.h"
 #include "evlist.h"
 #include "machine.h"
+#include "map.h"
 #include "parse-events.h"
 #include "hists_common.h"
 #include "util/mmap.h"
@@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
 			}
 
 			fake_common_samples[k].thread = al.thread;
+			map__put(fake_common_samples[k].map);
 			fake_common_samples[k].map = al.map;
 			fake_common_samples[k].sym = al.sym;
 		}
@@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
 	return -1;
 }
 
+static void put_fake_samples(void)
+{
+	size_t i, j;
+
+	for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
+		map__put(fake_common_samples[i].map);
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
+		for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
+			map__put(fake_samples[i][j].map);
+	}
+}
+
 static int find_sample(struct sample *samples, size_t nr_samples,
 		       struct thread *t, struct map *m, struct symbol *s)
 {
 	while (nr_samples--) {
-		if (samples->thread == t && samples->map == m &&
+		if (samples->thread == t &&
+		    samples->map == m &&
 		    samples->sym == s)
 			return 1;
 		samples++;
@@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
 	evlist__delete(evlist);
 	reset_output_field();
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
index 0bde4a768c15..4af6916491e5 100644
--- a/tools/perf/tests/hists_output.c
+++ b/tools/perf/tests/hists_output.c
@@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
 		}
 
 		fake_samples[i].thread = al.thread;
+		map__put(fake_samples[i].map);
 		fake_samples[i].map = al.map;
 		fake_samples[i].sym = al.sym;
 	}
@@ -113,10 +114,18 @@ static void del_hist_entries(struct hists *hists)
 	}
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
-#define DSO(he)   (he->ms.map->dso->short_name)
+#define DSO(he)   (map__dso(he->ms.map)->short_name)
 #define SYM(he)   (he->ms.sym->name)
 #define CPU(he)   (he->cpu)
 #define PID(he)   (he->thread->tid)
@@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
 	/* tear down everything */
 	evlist__delete(evlist);
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
index a4301fc7b770..898eda55b7a8 100644
--- a/tools/perf/tests/mmap-thread-lookup.c
+++ b/tools/perf/tests/mmap-thread-lookup.c
@@ -202,7 +202,8 @@ static int mmap_events(synth_cb synth)
 			break;
 		}
 
-		pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
+		pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
+		map__put(al.map);
 	}
 
 	machine__delete_threads(machine);
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index a8cfd31a3ff0..ae65b7bc9ab7 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -583,7 +583,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
 		}
 		call->ip = cursor_node->ip;
 		call->ms = cursor_node->ms;
-		map__get(call->ms.map);
+		call->ms.map = map__get(call->ms.map);
 		call->srcline = cursor_node->srcline;
 
 		if (cursor_node->branch) {
@@ -1061,7 +1061,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
 	node->ip = ip;
 	map__zput(node->ms.map);
 	node->ms = *ms;
-	map__get(node->ms.map);
+	node->ms.map = map__get(node->ms.map);
 	node->branch = branch;
 	node->nr_loop_iter = nr_loop_iter;
 	node->iter_cycles = iter_cycles;
@@ -1109,7 +1109,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 	struct machine *machine = maps__machine(node->ms.maps);
 
 	al->maps = node->ms.maps;
-	al->map = node->ms.map;
+	map__put(al->map);
+	al->map = map__get(node->ms.map);
 	al->sym = node->ms.sym;
 	al->srcline = node->srcline;
 	al->addr = node->ip;
@@ -1530,7 +1531,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
 				goto out;
 			*new = *chain;
 			new->has_children = false;
-			map__get(new->ms.map);
+			new->ms.map = map__get(new->ms.map);
 			list_add_tail(&new->list, &head);
 		}
 		parent = parent->parent;
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 54a1d4df5f70..266318d5d006 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -484,13 +484,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
 	if (machine) {
 		struct addr_location al;
 
-		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
+		al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
 		if (al.map && map__load(al.map) >= 0) {
 			al.addr = map__map_ip(al.map, tp->addr);
 			al.sym = map__find_symbol(al.map, al.addr);
 			if (al.sym)
 				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
 		}
+		map__put(al.map);
 	}
 	ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
 	old = true;
@@ -581,6 +582,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 	al->filtered = 0;
 
 	if (machine == NULL) {
+		map__put(al->map);
 		al->map = NULL;
 		return NULL;
 	}
@@ -599,6 +601,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 		al->level = 'u';
 	} else {
 		al->level = 'H';
+		map__put(al->map);
 		al->map = NULL;
 
 		if ((cpumode == PERF_RECORD_MISC_GUEST_USER ||
@@ -613,7 +616,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 		return NULL;
 	}
 
-	al->map = maps__find(maps, al->addr);
+	al->map = map__get(maps__find(maps, al->addr));
 	if (al->map != NULL) {
 		/*
 		 * Kernel maps might be changed when loading symbols so loading
@@ -768,6 +771,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
  */
 void addr_location__put(struct addr_location *al)
 {
+	map__zput(al->map);
 	thread__zput(al->thread);
 }
 
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index f19ac6eb4775..4dbb1dbf3679 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -446,7 +446,7 @@ static int hist_entry__init(struct hist_entry *he,
 			memset(&he->stat, 0, sizeof(he->stat));
 	}
 
-	map__get(he->ms.map);
+	he->ms.map = map__get(he->ms.map);
 
 	if (he->branch_info) {
 		/*
@@ -461,13 +461,13 @@ static int hist_entry__init(struct hist_entry *he,
 		memcpy(he->branch_info, template->branch_info,
 		       sizeof(*he->branch_info));
 
-		map__get(he->branch_info->from.ms.map);
-		map__get(he->branch_info->to.ms.map);
+		he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
+		he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
 	}
 
 	if (he->mem_info) {
-		map__get(he->mem_info->iaddr.ms.map);
-		map__get(he->mem_info->daddr.ms.map);
+		he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
+		he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
 	}
 
 	if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 940fb2a50dfd..49e4891e92b7 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -783,33 +783,42 @@ static int machine__process_ksymbol_register(struct machine *machine,
 {
 	struct symbol *sym;
 	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
+	bool put_map = false;
+	int err = 0;
 
 	if (!map) {
 		struct dso *dso = dso__new(event->ksymbol.name);
-		int err;
 
-		if (dso) {
-			dso->kernel = DSO_SPACE__KERNEL;
-			map = map__new2(0, dso);
-			dso__put(dso);
+		if (!dso) {
+			err = -ENOMEM;
+			goto out;
 		}
-
-		if (!dso || !map) {
-			return -ENOMEM;
+		dso->kernel = DSO_SPACE__KERNEL;
+		map = map__new2(0, dso);
+		dso__put(dso);
+		if (!map) {
+			err = -ENOMEM;
+			goto out;
 		}
-
+		/*
+		 * The inserted map has a get on it, we need to put to release
+		 * the reference count here, but do it after all accesses are
+		 * done.
+		 */
+		put_map = true;
 		if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
-			map->dso->binary_type = DSO_BINARY_TYPE__OOL;
-			map->dso->data.file_size = event->ksymbol.len;
-			dso__set_loaded(map->dso);
+			map__dso(map)->binary_type = DSO_BINARY_TYPE__OOL;
+			map__dso(map)->data.file_size = event->ksymbol.len;
+			dso__set_loaded(map__dso(map));
 		}
 
 		map->start = event->ksymbol.addr;
-		map->end = map->start + event->ksymbol.len;
+		map->end = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
-		map__put(map);
-		if (err)
-			return err;
+		if (err) {
+			err = -ENOMEM;
+			goto out;
+		}
 
 		dso__set_loaded(dso);
 
@@ -819,13 +828,18 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		}
 	}
 
-	sym = symbol__new(map->map_ip(map, map->start),
+	sym = symbol__new(map__map_ip(map, map__start(map)),
 			  event->ksymbol.len,
 			  0, 0, event->ksymbol.name);
-	if (!sym)
-		return -ENOMEM;
-	dso__insert_symbol(map->dso, sym);
-	return 0;
+	if (!sym) {
+		err = -ENOMEM;
+		goto out;
+	}
+	dso__insert_symbol(map__dso(map), sym);
+out:
+	if (put_map)
+		map__put(map);
+	return err;
 }
 
 static int machine__process_ksymbol_unregister(struct machine *machine,
@@ -925,14 +939,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 		goto out;
 
 	err = maps__insert(machine__kernel_maps(machine), map);
-
-	/* Put the map here because maps__insert already got it */
-	map__put(map);
-
 	/* If maps__insert failed, return NULL. */
-	if (err)
+	if (err) {
+		map__put(map);
 		map = NULL;
-
+	}
 out:
 	/* put the dso here, corresponding to  machine__findnew_module_dso */
 	dso__put(dso);
@@ -1228,6 +1239,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 	/* In case of renewal the kernel map, destroy previous one */
 	machine__destroy_kernel_maps(machine);
 
+	map__put(machine->vmlinux_map);
 	machine->vmlinux_map = map__new2(0, kernel);
 	if (machine->vmlinux_map == NULL)
 		return -ENOMEM;
@@ -1513,6 +1525,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 	map->end = start + size;
 
 	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
+	map__put(map);
 	return 0;
 }
 
@@ -1558,16 +1571,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
 static int machine__update_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
-	struct map *map = machine__kernel_map(machine);
+	struct map *orig, *updated;
 	int err;
 
-	map__get(map);
-	maps__remove(machine__kernel_maps(machine), map);
+	orig = machine->vmlinux_map;
+	updated = map__get(orig);
 
+	machine->vmlinux_map = updated;
 	machine__set_kernel_mmap(machine, start, end);
+	maps__remove(machine__kernel_maps(machine), orig);
+	err = maps__insert(machine__kernel_maps(machine), updated);
+	map__put(orig);
 
-	err = maps__insert(machine__kernel_maps(machine), map);
-	map__put(map);
 	return err;
 }
 
@@ -2246,6 +2261,7 @@ static int add_callchain_ip(struct thread *thread,
 	err = callchain_cursor_append(cursor, ip, &ms,
 				      branch, flags, nr_loop_iter,
 				      iter_cycles, branch_from, srcline);
+	map__put(al.map);
 	return err;
 }
 
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 18/22] libperf: Add reference count checking macros.
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (16 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 17/22] perf map: Changes to reference counting Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 19/22] perf cpumap: Add reference count checking Ian Rogers
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

The macros serve as a way to debug use of a reference counted struct.
The macros add a memory allocated pointer that is interposed between
the user and the reference counted original struct at a get, and freed
by a put. The pointer replaces the original struct, so use of the
struct name via APIs remains unchanged.
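
As a rough sketch of how a struct is converted (struct "foo" here is
hypothetical; the pattern mirrors the perf_cpu_map and maps
conversions later in this series):

  #include <stdlib.h>
  #include <linux/refcount.h>
  #include <internal/rc_check.h>

  /* Hypothetical reference counted struct wrapped with the macros. */
  DECLARE_RC_STRUCT(foo) {
          refcount_t refcnt;
          int value;
  };

  struct foo *foo__new(void)
  {
          struct foo *result;
          RC_STRUCT(foo) *f = malloc(sizeof(*f));

          if (ADD_RC_CHK(result, f))
                  refcount_set(&f->refcnt, 1);
          return result;
  }

  struct foo *foo__get(struct foo *f)
  {
          struct foo *result;

          if (RC_CHK_GET(result, f))
                  refcount_inc(&RC_CHK_ACCESS(f)->refcnt);
          return result;
  }

  void foo__put(struct foo *f)
  {
          if (f && refcount_dec_and_test(&RC_CHK_ACCESS(f)->refcnt))
                  RC_CHK_FREE(f);
          else
                  RC_CHK_PUT(f);
  }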

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/lib/perf/include/internal/rc_check.h | 94 ++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 tools/lib/perf/include/internal/rc_check.h

diff --git a/tools/lib/perf/include/internal/rc_check.h b/tools/lib/perf/include/internal/rc_check.h
new file mode 100644
index 000000000000..30d12f9c7b52
--- /dev/null
+++ b/tools/lib/perf/include/internal/rc_check.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LIBPERF_INTERNAL_RC_CHECK_H
+#define __LIBPERF_INTERNAL_RC_CHECK_H
+
+#include <stdlib.h>
+#include <linux/zalloc.h>
+
+/*
+ * Shared reference count checking macros.
+ *
+ * Reference count checking is an approach to sanitizing the use of reference
+ * counted structs. It leverages address and leak sanitizers to make sure gets
+ * are paired with a put. Reference count checking adds a malloc-ed layer of
+ * indirection on a get, and frees it on a put. A missed put will be reported as
+ * a memory leak. A double put will be reported as a double free. Accessing
+ * after a put will cause a use-after-free and/or a segfault.
+ */
+
+#ifndef REFCNT_CHECKING
+/* Replaces "struct foo" so that the pointer may be interposed. */
+#define DECLARE_RC_STRUCT(struct_name)		\
+	struct struct_name
+
+/* Declare a reference counted struct variable. */
+#define RC_STRUCT(struct_name) struct struct_name
+
+/*
+ * Interpose the indirection. Result will hold the indirection and object is the
+ * reference counted struct.
+ */
+#define ADD_RC_CHK(result, object) (result = object, object)
+
+/* Strip the indirection layer. */
+#define RC_CHK_ACCESS(object) object
+
+/* Frees the object and the indirection layer. */
+#define RC_CHK_FREE(object) free(object)
+
+/* A get operation adding the indirection layer. */
+#define RC_CHK_GET(result, object) ADD_RC_CHK(result, object)
+
+/* A put operation removing the indirection layer. */
+#define RC_CHK_PUT(object) {}
+
+#else
+
+/* Replaces "struct foo" so that the pointer may be interposed. */
+#define DECLARE_RC_STRUCT(struct_name)			\
+	struct original_##struct_name;			\
+	struct struct_name {				\
+		struct original_##struct_name *orig;	\
+	};						\
+	struct original_##struct_name
+
+/* Declare a reference counted struct variable. */
+#define RC_STRUCT(struct_name) struct original_##struct_name
+
+/*
+ * Interpose the indirection. Result will hold the indirection and object is the
+ * reference counted struct.
+ */
+#define ADD_RC_CHK(result, object)					\
+	(								\
+		object ? (result = malloc(sizeof(*result)),		\
+			result ? (result->orig = object, result)	\
+			: (result = NULL, NULL))			\
+		: (result = NULL, NULL)					\
+		)
+
+/* Strip the indirection layer. */
+#define RC_CHK_ACCESS(object) object->orig
+
+/* Frees the object and the indirection layer. */
+#define RC_CHK_FREE(object)			\
+	do {					\
+		zfree(&object->orig);		\
+		free(object);			\
+	} while(0)
+
+/* A get operation adding the indirection layer. */
+#define RC_CHK_GET(result, object) ADD_RC_CHK(result, (object ? object->orig : NULL))
+
+/* A put operation removing the indirection layer. */
+#define RC_CHK_PUT(object)			\
+	do {					\
+		if (object) {			\
+			object->orig = NULL;	\
+			free(object);		\
+		}				\
+	} while(0)
+
+#endif
+
+#endif /* __LIBPERF_INTERNAL_RC_CHECK_H */
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 19/22] perf cpumap: Add reference count checking
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (17 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 18/22] libperf: Add reference count checking macros Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 20/22] perf namespaces: " Ian Rogers
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Enabled when REFCNT_CHECKING is defined. The change adds a memory
allocated pointer that is interposed between the user and the
reference counted cpu map at a get, and freed by a put. The pointer
replaces the original perf_cpu_map struct, so use of the perf_cpu_map
via APIs remains unchanged. Any use of the cpu map that bypasses the
API must handle both layouts, which is done via the RC_CHK_ACCESS
macro.

This change is intended to catch:
 - use after put: using a cpumap after you have put it will cause a
   segv.
 - unbalanced puts: two puts for a get will result in a double free
   that can be captured and reported by tools like address sanitizer,
   including with the associated stack traces of allocation and frees.
 - missing puts: if a put is missing then the get turns into a memory
   leak that can be reported by leak sanitizer, including the stack
   trace at the point the get occurs.
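
A small sketch of the two access styles after this change
(illustrative only; perf_cpu_map__new(), perf_cpu_map__nr() and
perf_cpu_map__put() are the existing libperf APIs, and the direct
field write stands in for the internal accesses converted here):

  #include <stdio.h>
  #include <perf/cpumap.h>
  #include <internal/cpumap.h>

  static void example(void)
  {
          struct perf_cpu_map *cpus = perf_cpu_map__new("0-3");

          if (!cpus)
                  return;
          /* API based use is unchanged by the checker: */
          printf("nr cpus: %d\n", perf_cpu_map__nr(cpus));
          /* internal, non-API access must strip the indirection layer: */
          RC_CHK_ACCESS(cpus)->map[0].cpu = 0;
          perf_cpu_map__put(cpus);
  }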

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/lib/perf/cpumap.c                  | 93 +++++++++++++-----------
 tools/lib/perf/include/internal/cpumap.h |  4 +-
 tools/perf/tests/cpumap.c                |  2 +-
 tools/perf/util/cpumap.c                 | 36 +++++----
 tools/perf/util/pmu.c                    |  8 +-
 5 files changed, 75 insertions(+), 68 deletions(-)

diff --git a/tools/lib/perf/cpumap.c b/tools/lib/perf/cpumap.c
index ee66760f1e63..e22cfed7a633 100644
--- a/tools/lib/perf/cpumap.c
+++ b/tools/lib/perf/cpumap.c
@@ -10,16 +10,16 @@
 #include <ctype.h>
 #include <limits.h>
 
-static struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus)
+struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus)
 {
-	struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
-
-	if (cpus != NULL) {
+	struct perf_cpu_map *result;
+	RC_STRUCT(perf_cpu_map) *cpus =
+		malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
+	if (ADD_RC_CHK(result, cpus)) {
 		cpus->nr = nr_cpus;
 		refcount_set(&cpus->refcnt, 1);
-
 	}
-	return cpus;
+	return result;
 }
 
 struct perf_cpu_map *perf_cpu_map__dummy_new(void)
@@ -27,7 +27,7 @@ struct perf_cpu_map *perf_cpu_map__dummy_new(void)
 	struct perf_cpu_map *cpus = perf_cpu_map__alloc(1);
 
 	if (cpus)
-		cpus->map[0].cpu = -1;
+		RC_CHK_ACCESS(cpus)->map[0].cpu = -1;
 
 	return cpus;
 }
@@ -35,23 +35,30 @@ struct perf_cpu_map *perf_cpu_map__dummy_new(void)
 static void cpu_map__delete(struct perf_cpu_map *map)
 {
 	if (map) {
-		WARN_ONCE(refcount_read(&map->refcnt) != 0,
+		WARN_ONCE(refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 0,
 			  "cpu_map refcnt unbalanced\n");
-		free(map);
+		RC_CHK_FREE(map);
 	}
 }
 
 struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map)
 {
-	if (map)
-		refcount_inc(&map->refcnt);
-	return map;
+	struct perf_cpu_map *result;
+
+	if (RC_CHK_GET(result, map))
+		refcount_inc(&RC_CHK_ACCESS(map)->refcnt);
+
+	return result;
 }
 
 void perf_cpu_map__put(struct perf_cpu_map *map)
 {
-	if (map && refcount_dec_and_test(&map->refcnt))
-		cpu_map__delete(map);
+	if (map) {
+		if (refcount_dec_and_test(&RC_CHK_ACCESS(map)->refcnt))
+			cpu_map__delete(map);
+		else
+			RC_CHK_PUT(map);
+	}
 }
 
 static struct perf_cpu_map *cpu_map__default_new(void)
@@ -68,7 +75,7 @@ static struct perf_cpu_map *cpu_map__default_new(void)
 		int i;
 
 		for (i = 0; i < nr_cpus; ++i)
-			cpus->map[i].cpu = i;
+			RC_CHK_ACCESS(cpus)->map[i].cpu = i;
 	}
 
 	return cpus;
@@ -94,15 +101,16 @@ static struct perf_cpu_map *cpu_map__trim_new(int nr_cpus, const struct perf_cpu
 	int i, j;
 
 	if (cpus != NULL) {
-		memcpy(cpus->map, tmp_cpus, payload_size);
-		qsort(cpus->map, nr_cpus, sizeof(struct perf_cpu), cmp_cpu);
+		memcpy(RC_CHK_ACCESS(cpus)->map, tmp_cpus, payload_size);
+		qsort(RC_CHK_ACCESS(cpus)->map, nr_cpus, sizeof(struct perf_cpu), cmp_cpu);
 		/* Remove dups */
 		j = 0;
 		for (i = 0; i < nr_cpus; i++) {
-			if (i == 0 || cpus->map[i].cpu != cpus->map[i - 1].cpu)
-				cpus->map[j++].cpu = cpus->map[i].cpu;
+			if (i == 0 ||
+			    RC_CHK_ACCESS(cpus)->map[i].cpu != RC_CHK_ACCESS(cpus)->map[i - 1].cpu)
+				RC_CHK_ACCESS(cpus)->map[j++].cpu = RC_CHK_ACCESS(cpus)->map[i].cpu;
 		}
-		cpus->nr = j;
+		RC_CHK_ACCESS(cpus)->nr = j;
 		assert(j <= nr_cpus);
 	}
 	return cpus;
@@ -263,20 +271,20 @@ struct perf_cpu perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx)
 		.cpu = -1
 	};
 
-	if (cpus && idx < cpus->nr)
-		return cpus->map[idx];
+	if (cpus && idx < RC_CHK_ACCESS(cpus)->nr)
+		return RC_CHK_ACCESS(cpus)->map[idx];
 
 	return result;
 }
 
 int perf_cpu_map__nr(const struct perf_cpu_map *cpus)
 {
-	return cpus ? cpus->nr : 1;
+	return cpus ? RC_CHK_ACCESS(cpus)->nr : 1;
 }
 
 bool perf_cpu_map__empty(const struct perf_cpu_map *map)
 {
-	return map ? map->map[0].cpu == -1 : true;
+	return map ? RC_CHK_ACCESS(map)->map[0].cpu == -1 : true;
 }
 
 int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu)
@@ -287,10 +295,10 @@ int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu)
 		return -1;
 
 	low = 0;
-	high = cpus->nr;
+	high = RC_CHK_ACCESS(cpus)->nr;
 	while (low < high) {
 		int idx = (low + high) / 2;
-		struct perf_cpu cpu_at_idx = cpus->map[idx];
+		struct perf_cpu cpu_at_idx = RC_CHK_ACCESS(cpus)->map[idx];
 
 		if (cpu_at_idx.cpu == cpu.cpu)
 			return idx;
@@ -316,7 +324,7 @@ struct perf_cpu perf_cpu_map__max(struct perf_cpu_map *map)
 	};
 
 	// cpu_map__trim_new() qsort()s it, cpu_map__default_new() sorts it as well.
-	return map->nr > 0 ? map->map[map->nr - 1] : result;
+	return RC_CHK_ACCESS(map)->nr > 0 ? RC_CHK_ACCESS(map)->map[RC_CHK_ACCESS(map)->nr - 1] : result;
 }
 
 /*
@@ -337,37 +345,36 @@ struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
 
 	if (!orig && !other)
 		return NULL;
-	if (!orig) {
-		perf_cpu_map__get(other);
-		return other;
-	}
+	if (!orig)
+		return perf_cpu_map__get(other);
 	if (!other)
 		return orig;
-	if (orig->nr == other->nr &&
-	    !memcmp(orig->map, other->map, orig->nr * sizeof(struct perf_cpu)))
+	if (RC_CHK_ACCESS(orig)->nr == RC_CHK_ACCESS(other)->nr &&
+	    !memcmp(RC_CHK_ACCESS(orig)->map, RC_CHK_ACCESS(other)->map,
+		    RC_CHK_ACCESS(orig)->nr * sizeof(struct perf_cpu)))
 		return orig;
 
-	tmp_len = orig->nr + other->nr;
+	tmp_len = RC_CHK_ACCESS(orig)->nr + RC_CHK_ACCESS(other)->nr;
 	tmp_cpus = malloc(tmp_len * sizeof(struct perf_cpu));
 	if (!tmp_cpus)
 		return NULL;
 
 	/* Standard merge algorithm from wikipedia */
 	i = j = k = 0;
-	while (i < orig->nr && j < other->nr) {
-		if (orig->map[i].cpu <= other->map[j].cpu) {
-			if (orig->map[i].cpu == other->map[j].cpu)
+	while (i < RC_CHK_ACCESS(orig)->nr && j < RC_CHK_ACCESS(other)->nr) {
+		if (RC_CHK_ACCESS(orig)->map[i].cpu <= RC_CHK_ACCESS(other)->map[j].cpu) {
+			if (RC_CHK_ACCESS(orig)->map[i].cpu == RC_CHK_ACCESS(other)->map[j].cpu)
 				j++;
-			tmp_cpus[k++] = orig->map[i++];
+			tmp_cpus[k++] = RC_CHK_ACCESS(orig)->map[i++];
 		} else
-			tmp_cpus[k++] = other->map[j++];
+			tmp_cpus[k++] = RC_CHK_ACCESS(other)->map[j++];
 	}
 
-	while (i < orig->nr)
-		tmp_cpus[k++] = orig->map[i++];
+	while (i < RC_CHK_ACCESS(orig)->nr)
+		tmp_cpus[k++] = RC_CHK_ACCESS(orig)->map[i++];
 
-	while (j < other->nr)
-		tmp_cpus[k++] = other->map[j++];
+	while (j < RC_CHK_ACCESS(other)->nr)
+		tmp_cpus[k++] = RC_CHK_ACCESS(other)->map[j++];
 	assert(k <= tmp_len);
 
 	merged = cpu_map__trim_new(k, tmp_cpus);
diff --git a/tools/lib/perf/include/internal/cpumap.h b/tools/lib/perf/include/internal/cpumap.h
index 581f9ffb4237..1a584d4f125c 100644
--- a/tools/lib/perf/include/internal/cpumap.h
+++ b/tools/lib/perf/include/internal/cpumap.h
@@ -3,6 +3,7 @@
 #define __LIBPERF_INTERNAL_CPUMAP_H
 
 #include <linux/refcount.h>
+#include <internal/rc_check.h>
 
 /** A wrapper around a CPU to avoid confusion with the perf_cpu_map's map's indices. */
 struct perf_cpu {
@@ -16,7 +17,7 @@ struct perf_cpu {
  * gaps if CPU numbers were used. For events associated with a pid, rather than
  * a CPU, a single dummy map with an entry of -1 is used.
  */
-struct perf_cpu_map {
+DECLARE_RC_STRUCT(perf_cpu_map) {
 	refcount_t	refcnt;
 	/** Length of the map array. */
 	int		nr;
@@ -28,6 +29,7 @@ struct perf_cpu_map {
 #define MAX_NR_CPUS	2048
 #endif
 
+struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus);
 int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu);
 
 #endif /* __LIBPERF_INTERNAL_CPUMAP_H */
diff --git a/tools/perf/tests/cpumap.c b/tools/perf/tests/cpumap.c
index f94929ebb54b..d4a7c289b062 100644
--- a/tools/perf/tests/cpumap.c
+++ b/tools/perf/tests/cpumap.c
@@ -69,7 +69,7 @@ static int process_event_cpus(struct perf_tool *tool __maybe_unused,
 	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 2);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 0).cpu == 1);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 1).cpu == 256);
-	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&map->refcnt) == 1);
+	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(map)->refcnt) == 1);
 	perf_cpu_map__put(map);
 	return 0;
 }
diff --git a/tools/perf/util/cpumap.c b/tools/perf/util/cpumap.c
index 12b2243222b0..d35a849c896a 100644
--- a/tools/perf/util/cpumap.c
+++ b/tools/perf/util/cpumap.c
@@ -37,9 +37,9 @@ static struct perf_cpu_map *cpu_map__from_entries(struct cpu_map_entries *cpus)
 			 * otherwise it would become 65535.
 			 */
 			if (cpus->cpu[i] == (u16) -1)
-				map->map[i].cpu = -1;
+				RC_CHK_ACCESS(map)->map[i].cpu = -1;
 			else
-				map->map[i].cpu = (int) cpus->cpu[i];
+				RC_CHK_ACCESS(map)->map[i].cpu = (int) cpus->cpu[i];
 		}
 	}
 
@@ -58,7 +58,7 @@ static struct perf_cpu_map *cpu_map__from_mask(struct perf_record_record_cpu_map
 		int cpu, i = 0;
 
 		for_each_set_bit(cpu, mask->mask, nbits)
-			map->map[i++].cpu = cpu;
+			RC_CHK_ACCESS(map)->map[i++].cpu = cpu;
 	}
 	return map;
 
@@ -84,16 +84,13 @@ size_t cpu_map__fprintf(struct perf_cpu_map *map, FILE *fp)
 
 struct perf_cpu_map *perf_cpu_map__empty_new(int nr)
 {
-	struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + sizeof(int) * nr);
+	struct perf_cpu_map *cpus = perf_cpu_map__alloc(nr);
 
 	if (cpus != NULL) {
 		int i;
 
-		cpus->nr = nr;
 		for (i = 0; i < nr; i++)
-			cpus->map[i].cpu = -1;
-
-		refcount_set(&cpus->refcnt, 1);
+			RC_CHK_ACCESS(cpus)->map[i].cpu = -1;
 	}
 
 	return cpus;
@@ -163,7 +160,7 @@ struct cpu_aggr_map *cpu_aggr_map__new(const struct perf_cpu_map *cpus,
 {
 	int idx;
 	struct perf_cpu cpu;
-	struct cpu_aggr_map *c = cpu_aggr_map__empty_new(cpus->nr);
+	struct cpu_aggr_map *c = cpu_aggr_map__empty_new(perf_cpu_map__nr(cpus));
 
 	if (!c)
 		return NULL;
@@ -187,7 +184,7 @@ struct cpu_aggr_map *cpu_aggr_map__new(const struct perf_cpu_map *cpus,
 		}
 	}
 	/* Trim. */
-	if (c->nr != cpus->nr) {
+	if (c->nr != perf_cpu_map__nr(cpus)) {
 		struct cpu_aggr_map *trimmed_c =
 			realloc(c,
 				sizeof(struct cpu_aggr_map) + sizeof(struct aggr_cpu_id) * c->nr);
@@ -494,31 +491,32 @@ size_t cpu_map__snprint(struct perf_cpu_map *map, char *buf, size_t size)
 
 #define COMMA first ? "" : ","
 
-	for (i = 0; i < map->nr + 1; i++) {
+	for (i = 0; i < perf_cpu_map__nr(map) + 1; i++) {
 		struct perf_cpu cpu = { .cpu = INT_MAX };
-		bool last = i == map->nr;
+		bool last = i == perf_cpu_map__nr(map);
 
 		if (!last)
-			cpu = map->map[i];
+			cpu = perf_cpu_map__cpu(map, i);
 
 		if (start == -1) {
 			start = i;
 			if (last) {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d", COMMA,
-						map->map[i].cpu);
+						perf_cpu_map__cpu(map, i).cpu);
 			}
-		} else if (((i - start) != (cpu.cpu - map->map[start].cpu)) || last) {
+		} else if (((i - start) != (cpu.cpu - perf_cpu_map__cpu(map, start).cpu)) || last) {
 			int end = i - 1;
 
 			if (start == end) {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d", COMMA,
-						map->map[start].cpu);
+						perf_cpu_map__cpu(map, start).cpu);
 			} else {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d-%d", COMMA,
-						map->map[start].cpu, map->map[end].cpu);
+						perf_cpu_map__cpu(map, start).cpu,
+						perf_cpu_map__cpu(map, end).cpu);
 			}
 			first = false;
 			start = i;
@@ -545,7 +543,7 @@ size_t cpu_map__snprint_mask(struct perf_cpu_map *map, char *buf, size_t size)
 	int i, cpu;
 	char *ptr = buf;
 	unsigned char *bitmap;
-	struct perf_cpu last_cpu = perf_cpu_map__cpu(map, map->nr - 1);
+	struct perf_cpu last_cpu = perf_cpu_map__cpu(map, perf_cpu_map__nr(map) - 1);
 
 	if (buf == NULL)
 		return 0;
@@ -556,7 +554,7 @@ size_t cpu_map__snprint_mask(struct perf_cpu_map *map, char *buf, size_t size)
 		return 0;
 	}
 
-	for (i = 0; i < map->nr; i++) {
+	for (i = 0; i < perf_cpu_map__nr(map); i++) {
 		cpu = perf_cpu_map__cpu(map, i).cpu;
 		bitmap[cpu / 8] |= 1 << (cpu % 8);
 	}
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 9a1c7e63e663..015aa92100ab 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -2013,13 +2013,13 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
 
 	perf_cpu_map__for_each_cpu(cpu, i, cpus) {
 		if (!perf_cpu_map__has(pmu_cpus, cpu))
-			unmatched_cpus->map[unmatched_nr++] = cpu;
+			RC_CHK_ACCESS(unmatched_cpus)->map[unmatched_nr++] = cpu;
 		else
-			matched_cpus->map[matched_nr++] = cpu;
+			RC_CHK_ACCESS(matched_cpus)->map[matched_nr++] = cpu;
 	}
 
-	unmatched_cpus->nr = unmatched_nr;
-	matched_cpus->nr = matched_nr;
+	RC_CHK_ACCESS(unmatched_cpus)->nr = unmatched_nr;
+	RC_CHK_ACCESS(matched_cpus)->nr = matched_nr;
 	*mcpus_ptr = matched_cpus;
 	*ucpus_ptr = unmatched_cpus;
 	return 0;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 20/22] perf namespaces: Add reference count checking
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (18 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 19/22] perf cpumap: Add reference count checking Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 21/22] perf maps: " Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 22/22] perf map: " Ian Rogers
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Add reference count checking controlled by the REFCNT_CHECKING ifdef.
The reference count checking interposes an allocated pointer between
the user and the reference counted struct on a get, and frees the
pointer on a put. Accesses after a put cause use-after-free faults,
missed puts are caught as leaks and double puts become double frees.

This checking helped resolve a memory leak and use after free:
https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
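
An illustrative sequence of the kind of bug this catches (the call
site is hypothetical; nsinfo__new() and nsinfo__put() are the existing
APIs):

  #include <unistd.h>
  #include "util/namespaces.h"

  static void example(void)
  {
          struct nsinfo *nsi = nsinfo__new(getpid());

          nsinfo__put(nsi);
          /* BUG: double put of the same reference. With REFCNT_CHECKING
           * the check layer freed by the first put turns this into a
           * use after free / double free that address sanitizer reports
           * with both stack traces.
           */
          nsinfo__put(nsi);
  }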

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/namespaces.c | 132 ++++++++++++++++++++---------------
 tools/perf/util/namespaces.h |   3 +-
 2 files changed, 78 insertions(+), 57 deletions(-)

diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
index dd536220cdb9..8a3b7bd27b19 100644
--- a/tools/perf/util/namespaces.c
+++ b/tools/perf/util/namespaces.c
@@ -60,7 +60,7 @@ void namespaces__free(struct namespaces *namespaces)
 	free(namespaces);
 }
 
-static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
+static int nsinfo__get_nspid(pid_t *tgid, pid_t *nstgid, bool *in_pidns, const char *path)
 {
 	FILE *f = NULL;
 	char *statln = NULL;
@@ -74,19 +74,18 @@ static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
 	while (getline(&statln, &linesz, f) != -1) {
 		/* Use tgid if CONFIG_PID_NS is not defined. */
 		if (strstr(statln, "Tgid:") != NULL) {
-			nsi->tgid = (pid_t)strtol(strrchr(statln, '\t'),
-						     NULL, 10);
-			nsi->nstgid = nsinfo__tgid(nsi);
+			*tgid = (pid_t)strtol(strrchr(statln, '\t'), NULL, 10);
+			*nstgid = *tgid;
 		}
 
 		if (strstr(statln, "NStgid:") != NULL) {
 			nspid = strrchr(statln, '\t');
-			nsi->nstgid = (pid_t)strtol(nspid, NULL, 10);
+			*nstgid = (pid_t)strtol(nspid, NULL, 10);
 			/*
 			 * If innermost tgid is not the first, process is in a different
 			 * PID namespace.
 			 */
-			nsi->in_pidns = (statln + sizeof("NStgid:") - 1) != nspid;
+			*in_pidns = (statln + sizeof("NStgid:") - 1) != nspid;
 			break;
 		}
 	}
@@ -121,8 +120,8 @@ int nsinfo__init(struct nsinfo *nsi)
 	 * want to switch as part of looking up dso/map data.
 	 */
 	if (old_stat.st_ino != new_stat.st_ino) {
-		nsi->need_setns = true;
-		nsi->mntns_path = newns;
+		RC_CHK_ACCESS(nsi)->need_setns = true;
+		RC_CHK_ACCESS(nsi)->mntns_path = newns;
 		newns = NULL;
 	}
 
@@ -132,13 +131,26 @@ int nsinfo__init(struct nsinfo *nsi)
 	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsinfo__pid(nsi)) >= PATH_MAX)
 		goto out;
 
-	rv = nsinfo__get_nspid(nsi, spath);
+	rv = nsinfo__get_nspid(&RC_CHK_ACCESS(nsi)->tgid, &RC_CHK_ACCESS(nsi)->nstgid,
+			       &RC_CHK_ACCESS(nsi)->in_pidns, spath);
 
 out:
 	free(newns);
 	return rv;
 }
 
+static struct nsinfo *nsinfo__alloc(void)
+{
+	struct nsinfo *res;
+	RC_STRUCT(nsinfo) *nsi;
+
+	nsi = calloc(1, sizeof(*nsi));
+	if (ADD_RC_CHK(res, nsi))
+		refcount_set(&nsi->refcnt, 1);
+
+	return res;
+}
+
 struct nsinfo *nsinfo__new(pid_t pid)
 {
 	struct nsinfo *nsi;
@@ -146,22 +158,21 @@ struct nsinfo *nsinfo__new(pid_t pid)
 	if (pid == 0)
 		return NULL;
 
-	nsi = calloc(1, sizeof(*nsi));
-	if (nsi != NULL) {
-		nsi->pid = pid;
-		nsi->tgid = pid;
-		nsi->nstgid = pid;
-		nsi->need_setns = false;
-		nsi->in_pidns = false;
-		/* Init may fail if the process exits while we're trying to look
-		 * at its proc information.  In that case, save the pid but
-		 * don't try to enter the namespace.
-		 */
-		if (nsinfo__init(nsi) == -1)
-			nsi->need_setns = false;
+	nsi = nsinfo__alloc();
+	if (!nsi)
+		return NULL;
 
-		refcount_set(&nsi->refcnt, 1);
-	}
+	RC_CHK_ACCESS(nsi)->pid = pid;
+	RC_CHK_ACCESS(nsi)->tgid = pid;
+	RC_CHK_ACCESS(nsi)->nstgid = pid;
+	RC_CHK_ACCESS(nsi)->need_setns = false;
+	RC_CHK_ACCESS(nsi)->in_pidns = false;
+	/* Init may fail if the process exits while we're trying to look at its
+	 * proc information. In that case, save the pid but don't try to enter
+	 * the namespace.
+	 */
+	if (nsinfo__init(nsi) == -1)
+		RC_CHK_ACCESS(nsi)->need_setns = false;
 
 	return nsi;
 }
@@ -173,21 +184,21 @@ struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
 	if (nsi == NULL)
 		return NULL;
 
-	nnsi = calloc(1, sizeof(*nnsi));
-	if (nnsi != NULL) {
-		nnsi->pid = nsinfo__pid(nsi);
-		nnsi->tgid = nsinfo__tgid(nsi);
-		nnsi->nstgid = nsinfo__nstgid(nsi);
-		nnsi->need_setns = nsinfo__need_setns(nsi);
-		nnsi->in_pidns = nsinfo__in_pidns(nsi);
-		if (nsi->mntns_path) {
-			nnsi->mntns_path = strdup(nsi->mntns_path);
-			if (!nnsi->mntns_path) {
-				free(nnsi);
-				return NULL;
-			}
+	nnsi = nsinfo__alloc();
+	if (!nnsi)
+		return NULL;
+
+	RC_CHK_ACCESS(nnsi)->pid = nsinfo__pid(nsi);
+	RC_CHK_ACCESS(nnsi)->tgid = nsinfo__tgid(nsi);
+	RC_CHK_ACCESS(nnsi)->nstgid = nsinfo__nstgid(nsi);
+	RC_CHK_ACCESS(nnsi)->need_setns = nsinfo__need_setns(nsi);
+	RC_CHK_ACCESS(nnsi)->in_pidns = nsinfo__in_pidns(nsi);
+	if (RC_CHK_ACCESS(nsi)->mntns_path) {
+		RC_CHK_ACCESS(nnsi)->mntns_path = strdup(RC_CHK_ACCESS(nsi)->mntns_path);
+		if (!RC_CHK_ACCESS(nnsi)->mntns_path) {
+			nsinfo__put(nnsi);
+			return NULL;
 		}
-		refcount_set(&nnsi->refcnt, 1);
 	}
 
 	return nnsi;
@@ -195,51 +206,60 @@ struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
 
 static void nsinfo__delete(struct nsinfo *nsi)
 {
-	zfree(&nsi->mntns_path);
-	free(nsi);
+	if (nsi) {
+		WARN_ONCE(refcount_read(&RC_CHK_ACCESS(nsi)->refcnt) != 0,
+			"nsinfo refcnt unbalanced\n");
+		zfree(&RC_CHK_ACCESS(nsi)->mntns_path);
+		RC_CHK_FREE(nsi);
+	}
 }
 
 struct nsinfo *nsinfo__get(struct nsinfo *nsi)
 {
-	if (nsi)
-		refcount_inc(&nsi->refcnt);
-	return nsi;
+	struct nsinfo *result;
+
+	if (RC_CHK_GET(result, nsi))
+		refcount_inc(&RC_CHK_ACCESS(nsi)->refcnt);
+
+	return result;
 }
 
 void nsinfo__put(struct nsinfo *nsi)
 {
-	if (nsi && refcount_dec_and_test(&nsi->refcnt))
+	if (nsi && refcount_dec_and_test(&RC_CHK_ACCESS(nsi)->refcnt))
 		nsinfo__delete(nsi);
+	else
+		RC_CHK_PUT(nsi);
 }
 
 bool nsinfo__need_setns(const struct nsinfo *nsi)
 {
-        return nsi->need_setns;
+	return RC_CHK_ACCESS(nsi)->need_setns;
 }
 
 void nsinfo__clear_need_setns(struct nsinfo *nsi)
 {
-        nsi->need_setns = false;
+	RC_CHK_ACCESS(nsi)->need_setns = false;
 }
 
 pid_t nsinfo__tgid(const struct nsinfo  *nsi)
 {
-        return nsi->tgid;
+	return RC_CHK_ACCESS(nsi)->tgid;
 }
 
 pid_t nsinfo__nstgid(const struct nsinfo  *nsi)
 {
-        return nsi->nstgid;
+	return RC_CHK_ACCESS(nsi)->nstgid;
 }
 
 pid_t nsinfo__pid(const struct nsinfo  *nsi)
 {
-        return nsi->pid;
+	return RC_CHK_ACCESS(nsi)->pid;
 }
 
 pid_t nsinfo__in_pidns(const struct nsinfo  *nsi)
 {
-        return nsi->in_pidns;
+	return RC_CHK_ACCESS(nsi)->in_pidns;
 }
 
 void nsinfo__mountns_enter(struct nsinfo *nsi,
@@ -256,7 +276,7 @@ void nsinfo__mountns_enter(struct nsinfo *nsi,
 	nc->oldns = -1;
 	nc->newns = -1;
 
-	if (!nsi || !nsi->need_setns)
+	if (!nsi || !RC_CHK_ACCESS(nsi)->need_setns)
 		return;
 
 	if (snprintf(curpath, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX)
@@ -270,7 +290,7 @@ void nsinfo__mountns_enter(struct nsinfo *nsi,
 	if (oldns < 0)
 		goto errout;
 
-	newns = open(nsi->mntns_path, O_RDONLY);
+	newns = open(RC_CHK_ACCESS(nsi)->mntns_path, O_RDONLY);
 	if (newns < 0)
 		goto errout;
 
@@ -339,9 +359,9 @@ int nsinfo__stat(const char *filename, struct stat *st, struct nsinfo *nsi)
 
 bool nsinfo__is_in_root_namespace(void)
 {
-	struct nsinfo nsi;
+	pid_t tgid = 0, nstgid = 0;
+	bool in_pidns = false;
 
-	memset(&nsi, 0x0, sizeof(nsi));
-	nsinfo__get_nspid(&nsi, "/proc/self/status");
-	return !nsi.in_pidns;
+	nsinfo__get_nspid(&tgid, &nstgid, &in_pidns, "/proc/self/status");
+	return !in_pidns;
 }
diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
index 567829262c42..8c0731c6cbb7 100644
--- a/tools/perf/util/namespaces.h
+++ b/tools/perf/util/namespaces.h
@@ -13,6 +13,7 @@
 #include <linux/perf_event.h>
 #include <linux/refcount.h>
 #include <linux/types.h>
+#include <internal/rc_check.h>
 
 #ifndef HAVE_SETNS_SUPPORT
 int setns(int fd, int nstype);
@@ -29,7 +30,7 @@ struct namespaces {
 struct namespaces *namespaces__new(struct perf_record_namespaces *event);
 void namespaces__free(struct namespaces *namespaces);
 
-struct nsinfo {
+DECLARE_RC_STRUCT(nsinfo) {
 	pid_t			pid;
 	pid_t			tgid;
 	pid_t			nstgid;
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 21/22] perf maps: Add reference count checking.
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (19 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 20/22] perf namespaces: " Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  2022-02-11 10:34 ` [PATCH v3 22/22] perf map: " Ian Rogers
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

Add reference count checking to make sure gets and puts are correctly
paired. Add and use accessors to reduce RC_CHK clutter.

The only significant issue was in tests/thread-maps-share.c where
reference counts were released in the reverse order to acquisition,
leading to a use after put. This was fixed by reversing the put order.
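
Illustrative example of the accessor point (not part of the patch):
accessors such as maps__nr_maps() hide the indirection, while fields
without an accessor, e.g. last_search_by_name in util/symbol.c, still
need RC_CHK_ACCESS:

  #include "util/maps.h"

  static void example(struct maps *maps)
  {
          /* accessor hides the indirection layer */
          unsigned int nr = maps__nr_maps(maps);
          /* fields without an accessor go through RC_CHK_ACCESS */
          struct map *last = RC_CHK_ACCESS(maps)->last_search_by_name;

          (void)nr;
          (void)last;
  }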

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/thread-maps-share.c     | 29 ++++++-------
 tools/perf/util/maps.c                   | 53 +++++++++++++-----------
 tools/perf/util/maps.h                   | 17 ++++----
 tools/perf/util/symbol.c                 | 10 ++---
 tools/perf/util/unwind-libunwind-local.c |  2 +-
 tools/perf/util/unwind-libunwind.c       |  2 +-
 6 files changed, 60 insertions(+), 53 deletions(-)

diff --git a/tools/perf/tests/thread-maps-share.c b/tools/perf/tests/thread-maps-share.c
index 84edd82c519e..dfe51b21bd7d 100644
--- a/tools/perf/tests/thread-maps-share.c
+++ b/tools/perf/tests/thread-maps-share.c
@@ -43,12 +43,12 @@ static int test__thread_maps_share(struct test_suite *test __maybe_unused, int s
 			leader && t1 && t2 && t3 && other);
 
 	maps = leader->maps;
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 4);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 4);
 
 	/* test the maps pointer is shared */
-	TEST_ASSERT_VAL("maps don't match", maps == t1->maps);
-	TEST_ASSERT_VAL("maps don't match", maps == t2->maps);
-	TEST_ASSERT_VAL("maps don't match", maps == t3->maps);
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t1->maps));
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t2->maps));
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t3->maps));
 
 	/*
 	 * Verify the other leader was created by previous call.
@@ -71,25 +71,26 @@ static int test__thread_maps_share(struct test_suite *test __maybe_unused, int s
 	machine__remove_thread(machine, other_leader);
 
 	other_maps = other->maps;
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&other_maps->refcnt), 2);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(other_maps)->refcnt), 2);
 
-	TEST_ASSERT_VAL("maps don't match", other_maps == other_leader->maps);
+	TEST_ASSERT_VAL("maps don't match",
+			RC_CHK_ACCESS(other_maps) == RC_CHK_ACCESS(other_leader->maps));
 
 	/* release thread group */
-	thread__put(leader);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 3);
-
-	thread__put(t1);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 2);
+	thread__put(t3);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 3);
 
 	thread__put(t2);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 2);
 
-	thread__put(t3);
+	thread__put(t1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 1);
+
+	thread__put(leader);
 
 	/* release other group  */
 	thread__put(other_leader);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&other_maps->refcnt), 1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(other_maps)->refcnt), 1);
 
 	thread__put(other);
 
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 6efbcb79131c..da59204cb9bb 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -12,13 +12,13 @@
 
 static void maps__init(struct maps *maps, struct machine *machine)
 {
-	maps->entries = RB_ROOT;
+	RC_CHK_ACCESS(maps)->entries = RB_ROOT;
 	init_rwsem(maps__lock(maps));
-	maps->machine = machine;
-	maps->last_search_by_name = NULL;
-	maps->nr_maps = 0;
-	maps->maps_by_name = NULL;
-	refcount_set(&maps->refcnt, 1);
+	RC_CHK_ACCESS(maps)->machine = machine;
+	RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
+	RC_CHK_ACCESS(maps)->nr_maps = 0;
+	RC_CHK_ACCESS(maps)->maps_by_name = NULL;
+	refcount_set(&RC_CHK_ACCESS(maps)->refcnt, 1);
 }
 
 static void __maps__free_maps_by_name(struct maps *maps)
@@ -26,8 +26,8 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	/*
 	 * Free everything to try to do it from the rbtree in the next search
 	 */
-	zfree(&maps->maps_by_name);
-	maps->nr_maps_allocated = 0;
+	zfree(&RC_CHK_ACCESS(maps)->maps_by_name);
+	RC_CHK_ACCESS(maps)->nr_maps_allocated = 0;
 }
 
 static struct map *__maps__insert(struct maps *maps, struct map *map)
@@ -69,7 +69,7 @@ int maps__insert(struct maps *maps, struct map *map)
 		goto out;
 	}
 
-	++maps->nr_maps;
+	++RC_CHK_ACCESS(maps)->nr_maps;
 
 	if (map__dso(map) && map__dso(map)->kernel) {
 		struct kmap *kmap = map__kmap(map);
@@ -86,7 +86,7 @@ int maps__insert(struct maps *maps, struct map *map)
 	 * inserted map and resort.
 	 */
 	if (maps__maps_by_name(maps)) {
-		if (maps__nr_maps(maps) > maps->nr_maps_allocated) {
+		if (maps__nr_maps(maps) > RC_CHK_ACCESS(maps)->nr_maps_allocated) {
 			int nr_allocate = maps__nr_maps(maps) * 2;
 			struct map **maps_by_name = realloc(maps__maps_by_name(maps),
 							    nr_allocate * sizeof(map));
@@ -97,8 +97,8 @@ int maps__insert(struct maps *maps, struct map *map)
 				goto out;
 			}
 
-			maps->maps_by_name = maps_by_name;
-			maps->nr_maps_allocated = nr_allocate;
+			RC_CHK_ACCESS(maps)->maps_by_name = maps_by_name;
+			RC_CHK_ACCESS(maps)->nr_maps_allocated = nr_allocate;
 		}
 		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map;
 		__maps__sort_by_name(maps);
@@ -120,13 +120,13 @@ void maps__remove(struct maps *maps, struct map *map)
 	struct map_rb_node *rb_node;
 
 	down_write(maps__lock(maps));
-	if (maps->last_search_by_name == map)
-		maps->last_search_by_name = NULL;
+	if (RC_CHK_ACCESS(maps)->last_search_by_name == map)
+		RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
 
 	rb_node = maps__find_node(maps, map);
 	assert(rb_node->map == map);
 	__maps__remove(maps, rb_node);
-	--maps->nr_maps;
+	--RC_CHK_ACCESS(maps)->nr_maps;
 	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
 	up_write(maps__lock(maps));
@@ -157,33 +157,38 @@ bool maps__empty(struct maps *maps)
 
 struct maps *maps__new(struct machine *machine)
 {
-	struct maps *maps = zalloc(sizeof(*maps));
+	struct maps *res;
+	RC_STRUCT(maps) *maps = zalloc(sizeof(*maps));
 
-	if (maps != NULL)
-		maps__init(maps, machine);
+	if (ADD_RC_CHK(res, maps))
+		maps__init(res, machine);
 
-	return maps;
+	return res;
 }
 
 void maps__delete(struct maps *maps)
 {
 	maps__exit(maps);
 	unwind__finish_access(maps);
-	free(maps);
+	RC_CHK_FREE(maps);
 }
 
 struct maps *maps__get(struct maps *maps)
 {
-	if (maps)
-		refcount_inc(&maps->refcnt);
+	struct maps *result;
 
-	return maps;
+	if (RC_CHK_GET(result, maps))
+		refcount_inc(&RC_CHK_ACCESS(maps)->refcnt);
+
+	return result;
 }
 
 void maps__put(struct maps *maps)
 {
-	if (maps && refcount_dec_and_test(&maps->refcnt))
+	if (maps && refcount_dec_and_test(&RC_CHK_ACCESS(maps)->refcnt))
 		maps__delete(maps);
+	else
+		RC_CHK_PUT(maps);
 }
 
 struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index bde3390c7096..0af4b7e42fca 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -8,6 +8,7 @@
 #include <stdbool.h>
 #include <linux/types.h>
 #include "rwsem.h"
+#include <internal/rc_check.h>
 
 struct ref_reloc_sym;
 struct machine;
@@ -32,7 +33,7 @@ struct map *maps__find(struct maps *maps, u64 addr);
 	for (map = maps__first(maps), next = map_rb_node__next(map); map; \
 	     map = next, next = map_rb_node__next(map))
 
-struct maps {
+DECLARE_RC_STRUCT(maps) {
 	struct rb_root      entries;
 	struct rw_semaphore lock;
 	struct machine	 *machine;
@@ -65,38 +66,38 @@ void maps__put(struct maps *maps);
 
 static inline struct rb_root *maps__entries(struct maps *maps)
 {
-	return &maps->entries;
+	return &RC_CHK_ACCESS(maps)->entries;
 }
 
 static inline struct machine *maps__machine(struct maps *maps)
 {
-	return maps->machine;
+	return RC_CHK_ACCESS(maps)->machine;
 }
 
 static inline struct rw_semaphore *maps__lock(struct maps *maps)
 {
-	return &maps->lock;
+	return &RC_CHK_ACCESS(maps)->lock;
 }
 
 static inline struct map **maps__maps_by_name(struct maps *maps)
 {
-	return maps->maps_by_name;
+	return RC_CHK_ACCESS(maps)->maps_by_name;
 }
 
 static inline unsigned int maps__nr_maps(const struct maps *maps)
 {
-	return maps->nr_maps;
+	return RC_CHK_ACCESS(maps)->nr_maps;
 }
 
 #ifdef HAVE_LIBUNWIND_SUPPORT
 static inline void *maps__addr_space(struct maps *maps)
 {
-	return maps->addr_space;
+	return RC_CHK_ACCESS(maps)->addr_space;
 }
 
 static inline const struct unwind_libunwind_ops *maps__unwind_libunwind_ops(const struct maps *maps)
 {
-	return maps->unwind_libunwind_ops;
+	return RC_CHK_ACCESS(maps)->unwind_libunwind_ops;
 }
 #endif
 
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 6289b3028b91..fdaeeebd6050 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -2025,8 +2025,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 	if (maps_by_name == NULL)
 		return -1;
 
-	maps->maps_by_name = maps_by_name;
-	maps->nr_maps_allocated = maps__nr_maps(maps);
+	RC_CHK_ACCESS(maps)->maps_by_name = maps_by_name;
+	RC_CHK_ACCESS(maps)->nr_maps_allocated = maps__nr_maps(maps);
 
 	maps__for_each_entry(maps, rb_node)
 		maps_by_name[i++] = rb_node->map;
@@ -2057,9 +2057,9 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 
 	down_read(maps__lock(maps));
 
-	if (maps->last_search_by_name &&
+	if (RC_CHK_ACCESS(maps)->last_search_by_name &&
 	    strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
-		map = maps->last_search_by_name;
+		map = RC_CHK_ACCESS(maps)->last_search_by_name;
 		goto out_unlock;
 	}
 	/*
@@ -2075,7 +2075,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	maps__for_each_entry(maps, rb_node) {
 		map = rb_node->map;
 		if (strcmp(map__dso(map)->short_name, name) == 0) {
-			maps->last_search_by_name = map;
+			RC_CHK_ACCESS(maps)->last_search_by_name = map;
 			goto out_unlock;
 		}
 	}
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 841ac84a93ab..e86a6e594017 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -620,7 +620,7 @@ static int _unwind__prepare_access(struct maps *maps)
 {
 	void *addr_space = unw_create_addr_space(&accessors, 0);
 
-	maps->addr_space = addr_space;
+	RC_CHK_ACCESS(maps)->addr_space = addr_space;
 	if (!addr_space) {
 		pr_err("unwind: Can't create unwind address space.\n");
 		return -ENOMEM;
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index cece1ee89031..973eaa18ec75 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -14,7 +14,7 @@ struct unwind_libunwind_ops __weak *arm64_unwind_libunwind_ops;
 
 static void unwind__register_ops(struct maps *maps, struct unwind_libunwind_ops *ops)
 {
-	maps->unwind_libunwind_ops = ops;
+	RC_CHK_ACCESS(maps)->unwind_libunwind_ops = ops;
 }
 
 int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized)
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* [PATCH v3 22/22] perf map: Add reference count checking
  2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
                   ` (20 preceding siblings ...)
  2022-02-11 10:34 ` [PATCH v3 21/22] perf maps: " Ian Rogers
@ 2022-02-11 10:34 ` Ian Rogers
  21 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 10:34 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo
  Cc: eranian, Ian Rogers

There's no strict get/put policy for struct map, which leads to leaks
and use-after-free errors. Reference count checking identifies whether
gets and puts are correctly paired.
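
A hypothetical missed put of the kind the checker reports (map__get(),
maps__find(), map__start() and pr_debug() are existing APIs; the
lookup function is made up for illustration):

  #include <inttypes.h>
  #include <linux/types.h>
  #include "util/debug.h"
  #include "util/map.h"
  #include "util/maps.h"

  static void lookup(struct maps *maps, u64 addr)
  {
          struct map *map = map__get(maps__find(maps, addr));

          if (map)
                  pr_debug("map starts at %" PRIx64 "\n", map__start(map));
          /* BUG: map__put(map) is missing, so with REFCNT_CHECKING the
           * check layer allocated by map__get() leaks and leak sanitizer
           * reports the stack trace of the get.
           */
  }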

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/s390/annotate/instructions.c |  2 +-
 tools/perf/builtin-top.c                     |  4 +-
 tools/perf/tests/hists_link.c                |  2 +-
 tools/perf/tests/maps.c                      | 20 +++---
 tools/perf/tests/vmlinux-kallsyms.c          |  4 +-
 tools/perf/util/annotate.c                   |  4 +-
 tools/perf/util/machine.c                    | 26 ++++----
 tools/perf/util/map.c                        | 65 +++++++++++---------
 tools/perf/util/map.h                        | 34 +++++-----
 tools/perf/util/maps.c                       | 11 ++--
 tools/perf/util/symbol-elf.c                 | 27 ++++----
 tools/perf/util/symbol.c                     | 42 +++++++------
 12 files changed, 127 insertions(+), 114 deletions(-)

diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
index 740f1a63bc04..9953d510f7c1 100644
--- a/tools/perf/arch/s390/annotate/instructions.c
+++ b/tools/perf/arch/s390/annotate/instructions.c
@@ -40,7 +40,7 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
 	    map__rip_2objdump(target.ms.map,
-			      map->map_ip(target.ms.map, target.addr)
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
 			     ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 8db1df7bdabe..269d2dc3647c 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -190,7 +190,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 	if (use_browser <= 0)
 		sleep(5);
 
-	map->erange_warned = true;
+	RC_CHK_ACCESS(map)->erange_warned = true;
 }
 
 static void perf_top__record_precise_ip(struct perf_top *top,
@@ -223,7 +223,7 @@ static void perf_top__record_precise_ip(struct perf_top *top,
 		 */
 		pthread_mutex_unlock(&he->hists->lock);
 
-		if (err == -ERANGE && !he->ms.map->erange_warned)
+		if (err == -ERANGE && !RC_CHK_ACCESS(he->ms.map)->erange_warned)
 			ui__warn_map_erange(he->ms.map, sym, ip);
 		else if (err == -ENOMEM) {
 			pr_err("Not enough memory for annotating '%s' symbol!\n",
diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
index 060e8731feff..6aae97c5c5a8 100644
--- a/tools/perf/tests/hists_link.c
+++ b/tools/perf/tests/hists_link.c
@@ -145,7 +145,7 @@ static int find_sample(struct sample *samples, size_t nr_samples,
 {
 	while (nr_samples--) {
 		if (samples->thread == t &&
-		    samples->map == m &&
+		    RC_CHK_ACCESS(samples->map) == RC_CHK_ACCESS(m) &&
 		    samples->sym == s)
 			return 1;
 		samples++;
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index 38c1ec0074d1..9ef13e3316cd 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -30,7 +30,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 			if (map__start(map) != merged[i].start ||
 			    map__end(map) != merged[i].end ||
 			    strcmp(map__dso(map)->name, merged[i].name) ||
-			    refcount_read(&map->refcnt) != 1) {
+			    refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 1) {
 				failed = true;
 			}
 			i++;
@@ -50,7 +50,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 				map__start(map),
 				map__end(map),
 				map__dso(map)->name,
-				refcount_read(&map->refcnt));
+				refcount_read(&RC_CHK_ACCESS(map)->refcnt));
 		}
 	}
 	return failed ? TEST_FAIL : TEST_OK;
@@ -95,8 +95,8 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 		map = dso__new_map(bpf_progs[i].name);
 		TEST_ASSERT_VAL("failed to create map", map);
 
-		map->start = bpf_progs[i].start;
-		map->end   = bpf_progs[i].end;
+		RC_CHK_ACCESS(map)->start = bpf_progs[i].start;
+		RC_CHK_ACCESS(map)->end   = bpf_progs[i].end;
 		TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
 		map__put(map);
 	}
@@ -111,16 +111,16 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 	TEST_ASSERT_VAL("failed to create map", map_kcore3);
 
 	/* kcore1 map overlaps over all bpf maps */
-	map_kcore1->start = 100;
-	map_kcore1->end   = 1000;
+	RC_CHK_ACCESS(map_kcore1)->start = 100;
+	RC_CHK_ACCESS(map_kcore1)->end   = 1000;
 
 	/* kcore2 map hides behind bpf_prog_2 */
-	map_kcore2->start = 550;
-	map_kcore2->end   = 570;
+	RC_CHK_ACCESS(map_kcore2)->start = 550;
+	RC_CHK_ACCESS(map_kcore2)->end   = 570;
 
 	/* kcore3 map hides behind bpf_prog_3, kcore1 and adds new map */
-	map_kcore3->start = 880;
-	map_kcore3->end   = 1100;
+	RC_CHK_ACCESS(map_kcore3)->start = 880;
+	RC_CHK_ACCESS(map_kcore3)->end   = 1100;
 
 	ret = maps__merge_in(maps, map_kcore1);
 	TEST_ASSERT_VAL("failed to merge map", !ret);
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 5afab21455f1..be22822f341e 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -299,7 +299,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 						? map__dso(map)->short_name
 						: map__dso(map)->name);
 		if (pair) {
-			pair->priv = 1;
+			RC_CHK_ACCESS(pair)->priv = 1;
 		} else {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in vmlinux:\n");
@@ -335,7 +335,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 					map__start(pair), map__end(pair),
 					map__pgoff(pair));
 			pr_info(" %s\n", map__dso(pair)->name);
-			pair->priv = 1;
+			RC_CHK_ACCESS(pair)->priv = 1;
 		}
 	}
 
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 3a7433d3e48a..6afe2aa3321c 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -281,7 +281,7 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
 	    map__rip_2objdump(target.ms.map,
-			      map->map_ip(target.ms.map, target.addr)
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
 			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
@@ -411,7 +411,7 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	 */
 	if (maps__find_ams(ms->maps, &target) == 0 &&
 	    map__rip_2objdump(target.ms.map,
-			      map->map_ip(target.ms.map, target.addr)
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
 			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 49e4891e92b7..d948d365c5a8 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -812,8 +812,8 @@ static int machine__process_ksymbol_register(struct machine *machine,
 			dso__set_loaded(map__dso(map));
 		}
 
-		map->start = event->ksymbol.addr;
-		map->end = map__start(map) + event->ksymbol.len;
+		RC_CHK_ACCESS(map)->start = event->ksymbol.addr;
+		RC_CHK_ACCESS(map)->end = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
 		if (err) {
 			err = -ENOMEM;
@@ -853,7 +853,7 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	if (!map)
 		return 0;
 
-	if (map != machine->vmlinux_map)
+	if (RC_CHK_ACCESS(map) != RC_CHK_ACCESS(machine->vmlinux_map))
 		maps__remove(machine__kernel_maps(machine), map);
 	else {
 		sym = dso__find_symbol(map__dso(map),
@@ -1120,8 +1120,8 @@ int machine__create_extra_kernel_map(struct machine *machine,
 	if (!map)
 		return -ENOMEM;
 
-	map->end   = xm->end;
-	map->pgoff = xm->pgoff;
+	RC_CHK_ACCESS(map)->end   = xm->end;
+	RC_CHK_ACCESS(map)->pgoff = xm->pgoff;
 
 	kmap = map__kmap(map);
 
@@ -1193,7 +1193,7 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 
 		dest_map = maps__find(kmaps, map__pgoff(map));
 		if (dest_map != map)
-			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
+			RC_CHK_ACCESS(map)->pgoff = map__map_ip(dest_map, map__pgoff(map));
 		found = true;
 	}
 	if (found || machine->trampolines_mapped)
@@ -1244,8 +1244,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 	if (machine->vmlinux_map == NULL)
 		return -ENOMEM;
 
-	machine->vmlinux_map->map_ip = map__identity_ip;
-	machine->vmlinux_map->unmap_ip = map__identity_ip;
+	RC_CHK_ACCESS(machine->vmlinux_map)->map_ip = map__identity_ip;
+	RC_CHK_ACCESS(machine->vmlinux_map)->unmap_ip = map__identity_ip;
 	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 }
 
@@ -1522,7 +1522,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 	map = machine__addnew_module_map(machine, start, name);
 	if (map == NULL)
 		return -1;
-	map->end = start + size;
+	RC_CHK_ACCESS(map)->end = start + size;
 
 	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
 	map__put(map);
@@ -1558,14 +1558,14 @@ static int machine__create_modules(struct machine *machine)
 static void machine__set_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
-	machine->vmlinux_map->start = start;
-	machine->vmlinux_map->end   = end;
+	RC_CHK_ACCESS(machine->vmlinux_map)->start = start;
+	RC_CHK_ACCESS(machine->vmlinux_map)->end   = end;
 	/*
 	 * Be a bit paranoid here, some perf.data file came with
 	 * a zero sized synthesized MMAP event for the kernel.
 	 */
 	if (start == 0 && end == 0)
-		machine->vmlinux_map->end = ~0ULL;
+		RC_CHK_ACCESS(machine->vmlinux_map)->end = ~0ULL;
 }
 
 static int machine__update_kernel_mmap(struct machine *machine,
@@ -1700,7 +1700,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (map == NULL)
 			goto out_problem;
 
-		map->end = map__start(map) + xm->end - xm->start;
+		RC_CHK_ACCESS(map)->end = map__start(map) + xm->end - xm->start;
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(map__dso(map), bid);
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 47d81e361e29..ad52c763596d 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -104,15 +104,15 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
 
 void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 {
-	map->start    = start;
-	map->end      = end;
-	map->pgoff    = pgoff;
-	map->reloc    = 0;
-	map->dso      = dso__get(dso);
-	map->map_ip   = map__dso_map_ip;
-	map->unmap_ip = map__dso_unmap_ip;
-	map->erange_warned = false;
-	refcount_set(&map->refcnt, 1);
+	RC_CHK_ACCESS(map)->start    = start;
+	RC_CHK_ACCESS(map)->end      = end;
+	RC_CHK_ACCESS(map)->pgoff    = pgoff;
+	RC_CHK_ACCESS(map)->reloc    = 0;
+	RC_CHK_ACCESS(map)->dso      = dso__get(dso);
+	RC_CHK_ACCESS(map)->map_ip   = map__dso_map_ip;
+	RC_CHK_ACCESS(map)->unmap_ip = map__dso_unmap_ip;
+	RC_CHK_ACCESS(map)->erange_warned = false;
+	refcount_set(&RC_CHK_ACCESS(map)->refcnt, 1);
 }
 
 struct map *map__new(struct machine *machine, u64 start, u64 len,
@@ -120,12 +120,13 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		     u32 prot, u32 flags, struct build_id *bid,
 		     char *filename, struct thread *thread)
 {
-	struct map *map;
+	struct map *res;
+	RC_STRUCT(map) *map;
 	struct nsinfo *nsi = NULL;
 	struct nsinfo *nnsi;
 
 	map = malloc(sizeof(*map));
-	if (map != NULL) {
+	if (ADD_RC_CHK(res, map)) {
 		char newfilename[PATH_MAX];
 		struct dso *dso;
 		int anon, no_dso, vdso, android;
@@ -168,7 +169,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		if (dso == NULL)
 			goto out_delete;
 
-		map__init(map, start, start + len, pgoff, dso);
+		map__init(res, start, start + len, pgoff, dso);
 
 		if (anon || no_dso) {
 			map->map_ip = map->unmap_ip = map__identity_ip;
@@ -191,10 +192,10 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 
 		dso__put(dso);
 	}
-	return map;
+	return res;
 out_delete:
 	nsinfo__put(nsi);
-	free(map);
+	RC_CHK_FREE(res);
 	return NULL;
 }
 
@@ -205,17 +206,18 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
  */
 struct map *map__new2(u64 start, struct dso *dso)
 {
-	struct map *map;
+	struct map *res;
+	RC_STRUCT(map) *map;
 
 	map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
-	if (map != NULL) {
+	if (ADD_RC_CHK(res, map)) {
 		/*
 		 * ->end will be filled after we load all the symbols
 		 */
-		map__init(map, start, 0, 0, dso);
+		map__init(res, start, 0, 0, dso);
 	}
 
-	return map;
+	return res;
 }
 
 bool __map__is_kernel(const struct map *map)
@@ -277,20 +279,22 @@ bool map__has_symbols(const struct map *map)
 
 static void map__exit(struct map *map)
 {
-	BUG_ON(refcount_read(&map->refcnt) != 0);
-	dso__zput(map->dso);
+	BUG_ON(refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 0);
+	dso__zput(RC_CHK_ACCESS(map)->dso);
 }
 
 void map__delete(struct map *map)
 {
 	map__exit(map);
-	free(map);
+	RC_CHK_FREE(map);
 }
 
 void map__put(struct map *map)
 {
-	if (map && refcount_dec_and_test(&map->refcnt))
+	if (map && refcount_dec_and_test(&RC_CHK_ACCESS(map)->refcnt))
 		map__delete(map);
+	else
+		RC_CHK_PUT(map);
 }
 
 void map__fixup_start(struct map *map)
@@ -299,7 +303,7 @@ void map__fixup_start(struct map *map)
 	struct rb_node *nd = rb_first_cached(symbols);
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
-		map->start = sym->start;
+		RC_CHK_ACCESS(map)->start = sym->start;
 	}
 }
 
@@ -309,7 +313,7 @@ void map__fixup_end(struct map *map)
 	struct rb_node *nd = rb_last(&symbols->rb_root);
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
-		map->end = sym->end;
+		RC_CHK_ACCESS(map)->end = sym->end;
 	}
 }
 
@@ -376,19 +380,20 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
 
 struct map *map__clone(struct map *from)
 {
-	struct map *map;
-	size_t size = sizeof(struct map);
+	struct map *res;
+	RC_STRUCT(map) *map;
+	size_t size = sizeof(RC_STRUCT(map));
 
 	if (map__dso(from) && map__dso(from)->kernel)
 		size += sizeof(struct kmap);
 
-	map = memdup(from, size);
-	if (map != NULL) {
+	map = memdup(RC_CHK_ACCESS(from), size);
+	if (ADD_RC_CHK(res, map)) {
 		refcount_set(&map->refcnt, 1);
 		map->dso = dso__get(map->dso);
 	}
 
-	return map;
+	return res;
 }
 
 size_t map__fprintf(struct map *map, FILE *fp)
@@ -534,7 +539,7 @@ struct kmap *__map__kmap(struct map *map)
 {
 	if (!map__dso(map) || !map__dso(map)->kernel)
 		return NULL;
-	return (struct kmap *)(&map[1]);
+	return (struct kmap *)(&RC_CHK_ACCESS(map)[1]);
 }
 
 struct kmap *map__kmap(struct map *map)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 99ef0464a357..6a6bc7605e75 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -10,12 +10,13 @@
 #include <string.h>
 #include <stdbool.h>
 #include <linux/types.h>
+#include <internal/rc_check.h>
 
 struct dso;
 struct maps;
 struct machine;
 
-struct map {
+DECLARE_RC_STRUCT(map) {
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
@@ -49,52 +50,52 @@ u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
 
 static inline struct dso *map__dso(const struct map *map)
 {
-	return map->dso;
+	return RC_CHK_ACCESS(map)->dso;
 }
 
 static inline u64 map__map_ip(const struct map *map, u64 ip)
 {
-	return map->map_ip(map, ip);
+	return RC_CHK_ACCESS(map)->map_ip(map, ip);
 }
 
 static inline u64 map__unmap_ip(const struct map *map, u64 ip)
 {
-	return map->unmap_ip(map, ip);
+	return RC_CHK_ACCESS(map)->unmap_ip(map, ip);
 }
 
 static inline u64 map__start(const struct map *map)
 {
-	return map->start;
+	return RC_CHK_ACCESS(map)->start;
 }
 
 static inline u64 map__end(const struct map *map)
 {
-	return map->end;
+	return RC_CHK_ACCESS(map)->end;
 }
 
 static inline u64 map__pgoff(const struct map *map)
 {
-	return map->pgoff;
+	return RC_CHK_ACCESS(map)->pgoff;
 }
 
 static inline u64 map__reloc(const struct map *map)
 {
-	return map->reloc;
+	return RC_CHK_ACCESS(map)->reloc;
 }
 
 static inline u32 map__flags(const struct map *map)
 {
-	return map->flags;
+	return RC_CHK_ACCESS(map)->flags;
 }
 
 static inline u32 map__prot(const struct map *map)
 {
-	return map->prot;
+	return RC_CHK_ACCESS(map)->prot;
 }
 
 static inline bool map__priv(const struct map *map)
 {
-	return map->priv;
+	return RC_CHK_ACCESS(map)->priv;
 }
 
 static inline size_t map__size(const struct map *map)
@@ -119,7 +120,7 @@ struct thread;
  * Note: caller must ensure map->dso is not NULL (map is loaded).
  */
 #define map__for_each_symbol(map, pos, n)	\
-	dso__for_each_symbol(map->dso, pos, n)
+	dso__for_each_symbol(map__dso(map), pos, n)
 
 /* map__for_each_symbol_with_name - iterate over the symbols in the given map
  *                                  that have the given name
@@ -153,9 +154,12 @@ struct map *map__clone(struct map *map);
 
 static inline struct map *map__get(struct map *map)
 {
-	if (map)
-		refcount_inc(&map->refcnt);
-	return map;
+	struct map *result;
+
+	if (RC_CHK_GET(result, map))
+		refcount_inc(&RC_CHK_ACCESS(map)->refcnt);
+
+	return result;
 }
 
 void map__put(struct map *map);
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index da59204cb9bb..c579161c12c8 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -124,7 +124,7 @@ void maps__remove(struct maps *maps, struct map *map)
 		RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
 
 	rb_node = maps__find_node(maps, map);
-	assert(rb_node->map == map);
+	assert(rb_node->RC_CHK_ACCESS(map) == RC_CHK_ACCESS(map));
 	__maps__remove(maps, rb_node);
 	--RC_CHK_ACCESS(maps)->nr_maps;
 	if (maps__maps_by_name(maps))
@@ -335,7 +335,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			before->end = map__start(map);
+			RC_CHK_ACCESS(before)->end = map__start(map);
 			if (!__maps__insert(maps, before)) {
 				map__put(before);
 				err = -ENOMEM;
@@ -355,8 +355,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			after->start = map__end(map);
-			after->pgoff += map__end(map) - map__start(pos->map);
+			RC_CHK_ACCESS(after)->start = map__end(map);
+			RC_CHK_ACCESS(after)->pgoff +=
+				map__end(map) - map__start(pos->map);
 			assert(map__map_ip(pos->map, map__end(map)) ==
 				map__map_ip(after, map__end(map)));
 			if (!__maps__insert(maps, after)) {
@@ -418,7 +419,7 @@ struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
 	struct map_rb_node *rb_node;
 
 	maps__for_each_entry(maps, rb_node) {
-		if (rb_node->map == map)
+		if (rb_node->RC_CHK_ACCESS(map) == RC_CHK_ACCESS(map))
 			return rb_node;
 	}
 	return NULL;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 056405d3d655..555ac6f5bd75 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -993,11 +993,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 */
 		if (*remap_kernel && dso->kernel && !kmodule) {
 			*remap_kernel = false;
-			map->start = shdr->sh_addr + ref_reloc(kmap);
-			map->end = map__start(map) + shdr->sh_size;
-			map->pgoff = shdr->sh_offset;
-			map->map_ip = map__dso_map_ip;
-			map->unmap_ip = map__dso_unmap_ip;
+			RC_CHK_ACCESS(map)->start = shdr->sh_addr + ref_reloc(kmap);
+			RC_CHK_ACCESS(map)->end = map__start(map) + shdr->sh_size;
+			RC_CHK_ACCESS(map)->pgoff = shdr->sh_offset;
+			RC_CHK_ACCESS(map)->map_ip = map__dso_map_ip;
+			RC_CHK_ACCESS(map)->unmap_ip = map__dso_unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
 				int err;
@@ -1018,7 +1018,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 */
 		if (*remap_kernel && kmodule) {
 			*remap_kernel = false;
-			map->pgoff = shdr->sh_offset;
+			RC_CHK_ACCESS(map)->pgoff = shdr->sh_offset;
 		}
 
 		*curr_dsop = dso;
@@ -1052,12 +1052,13 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map__kmap(curr_map)->kmaps = kmaps;
 
 		if (adjust_kernel_syms) {
-			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
-			curr_map->end	= map__start(curr_map) + shdr->sh_size;
-			curr_map->pgoff	= shdr->sh_offset;
+			RC_CHK_ACCESS(curr_map)->start  = shdr->sh_addr + ref_reloc(kmap);
+			RC_CHK_ACCESS(curr_map)->end	= map__start(curr_map) +
+							  shdr->sh_size;
+			RC_CHK_ACCESS(curr_map)->pgoff	= shdr->sh_offset;
 		} else {
-			curr_map->map_ip = map__identity_ip;
-			curr_map->unmap_ip = map__identity_ip;
+			RC_CHK_ACCESS(curr_map)->map_ip = map__identity_ip;
+			RC_CHK_ACCESS(curr_map)->unmap_ip = map__identity_ip;
 		}
 		curr_dso->symtab_type = dso->symtab_type;
 		if (maps__insert(kmaps, curr_map))
@@ -1161,7 +1162,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 			if (strcmp(elf_name, kmap->ref_reloc_sym->name))
 				continue;
 			kmap->ref_reloc_sym->unrelocated_addr = sym.st_value;
-			map->reloc = kmap->ref_reloc_sym->addr -
+			RC_CHK_ACCESS(map)->reloc = kmap->ref_reloc_sym->addr -
 				     kmap->ref_reloc_sym->unrelocated_addr;
 			break;
 		}
@@ -1172,7 +1173,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 	 * attempted to prelink vdso to its virtual address.
 	 */
 	if (dso__is_vdso(dso))
-		map->reloc = map__start(map) - dso->text_offset;
+		RC_CHK_ACCESS(map)->reloc = map__start(map) - dso->text_offset;
 
 	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
 	/*
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index fdaeeebd6050..39a650322300 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -253,7 +253,7 @@ void maps__fixup_end(struct maps *maps)
 
 	maps__for_each_entry(maps, curr) {
 		if (prev != NULL && !map__end(prev->map))
-			prev->map->end = map__start(curr->map);
+			RC_CHK_ACCESS(prev->map)->end = map__start(curr->map);
 
 		prev = curr;
 	}
@@ -263,7 +263,7 @@ void maps__fixup_end(struct maps *maps)
 	 * last map final address.
 	 */
 	if (curr && !map__end(curr->map))
-		curr->map->end = ~0ULL;
+		RC_CHK_ACCESS(curr->map)->end = ~0ULL;
 
 	up_write(maps__lock(maps));
 }
@@ -831,7 +831,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			*module++ = '\0';
 
 			if (strcmp(map__dso(curr_map)->short_name, module)) {
-				if (curr_map != initial_map &&
+				if (RC_CHK_ACCESS(curr_map) != RC_CHK_ACCESS(initial_map) &&
 				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 				    machine__is_default_guest(machine)) {
 					/*
@@ -910,8 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 				return -1;
 			}
 
-			curr_map->map_ip = map__identity_ip;
-			curr_map->unmap_ip = map__identity_ip;
+			RC_CHK_ACCESS(curr_map)->map_ip = map__identity_ip;
+			RC_CHK_ACCESS(curr_map)->unmap_ip = map__identity_ip;
 			if (maps__insert(kmaps, curr_map)) {
 				dso__put(ndso);
 				return -1;
@@ -1215,8 +1215,8 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 		return -ENOMEM;
 	}
 
-	list_node->map->end = map__start(list_node->map) + len;
-	list_node->map->pgoff = pgoff;
+	list_node->RC_CHK_ACCESS(map)->end = map__start(list_node->map) + len;
+	list_node->RC_CHK_ACCESS(map)->pgoff = pgoff;
 
 	list_add(&list_node->node, &md->maps);
 
@@ -1251,7 +1251,7 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 * |new......|     -> |new..|
 				 *       |old....| ->       |old....|
 				 */
-				new_map->end = map__start(old_map);
+				RC_CHK_ACCESS(new_map)->end = map__start(old_map);
 			} else {
 				/*
 				 * |new.............| -> |new..|       |new..|
@@ -1272,11 +1272,12 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 					goto out;
 				}
 
-				m->map->end = map__start(old_map);
+
+				RC_CHK_ACCESS(m->map)->end = map__start(old_map);
 				list_add_tail(&m->node, &merged);
-				new_map->pgoff +=
+				RC_CHK_ACCESS(new_map)->pgoff +=
 					map__end(old_map) - map__start(new_map);
-				new_map->start = map__end(old_map);
+				RC_CHK_ACCESS(new_map)->start = map__end(old_map);
 			}
 		} else {
 			/*
@@ -1296,9 +1297,10 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 *      |new......| ->         |new...|
 				 * |old....|        -> |old....|
 				 */
-				new_map->pgoff +=
+
+				RC_CHK_ACCESS(new_map)->pgoff +=
 					map__end(old_map) - map__start(new_map);
-				new_map->start = map__end(old_map);
+				RC_CHK_ACCESS(new_map)->start = map__end(old_map);
 			}
 		}
 	}
@@ -1411,14 +1413,14 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 		new_node = list_entry(md.maps.next, struct map_list_node, node);
 		list_del_init(&new_node->node);
-		if (new_node->map == replacement_map) {
+		if (RC_CHK_ACCESS(new_node->map) == RC_CHK_ACCESS(replacement_map)) {
 			struct  map *updated;
 
-			map->start = map__start(new_node->map);
-			map->end   = map__end(new_node->map);
-			map->pgoff = map__pgoff(new_node->map);
-			map->map_ip = new_node->map->map_ip;
-			map->unmap_ip = new_node->map->unmap_ip;
+			RC_CHK_ACCESS(map)->start = map__start(new_node->map);
+			RC_CHK_ACCESS(map)->end   = map__end(new_node->map);
+			RC_CHK_ACCESS(map)->pgoff = map__pgoff(new_node->map);
+			RC_CHK_ACCESS(map)->map_ip = RC_CHK_ACCESS(new_node->map)->map_ip;
+			RC_CHK_ACCESS(map)->unmap_ip = RC_CHK_ACCESS(new_node->map)->unmap_ip;
 			/* Ensure maps are correctly ordered */
 			updated = map__get(map);
 			maps__remove(kmaps, map);
@@ -2058,7 +2060,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	down_read(maps__lock(maps));
 
 	if (RC_CHK_ACCESS(maps)->last_search_by_name &&
-	    strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
+	    strcmp(map__dso(RC_CHK_ACCESS(maps)->last_search_by_name)->short_name, name) == 0) {
 		map = RC_CHK_ACCESS(maps)->last_search_by_name;
 		goto out_unlock;
 	}
-- 
2.35.1.265.g69c8d7142f-goog


^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api
  2022-02-11 10:33 ` [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api Ian Rogers
@ 2022-02-11 17:02   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:02 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 02:33:54AM -0800, Ian Rogers escreveu:
> Switch from directly accessing the perf_cpu_map to using the appropriate
> libperf API when possible. Using the API simplifies the job of
> refactoring use of perf_cpu_map.

Thanks, applied.

- Arnaldo

 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/tests/cpumap.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/perf/tests/cpumap.c b/tools/perf/tests/cpumap.c
> index 84e87e31f119..f94929ebb54b 100644
> --- a/tools/perf/tests/cpumap.c
> +++ b/tools/perf/tests/cpumap.c
> @@ -35,10 +35,10 @@ static int process_event_mask(struct perf_tool *tool __maybe_unused,
>  	}
>  
>  	map = cpu_map__new_data(data);
> -	TEST_ASSERT_VAL("wrong nr",  map->nr == 20);
> +	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 20);
>  
>  	for (i = 0; i < 20; i++) {
> -		TEST_ASSERT_VAL("wrong cpu", map->map[i].cpu == i);
> +		TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, i).cpu == i);
>  	}
>  
>  	perf_cpu_map__put(map);
> @@ -66,9 +66,9 @@ static int process_event_cpus(struct perf_tool *tool __maybe_unused,
>  	TEST_ASSERT_VAL("wrong cpu",  cpus->cpu[1] == 256);
>  
>  	map = cpu_map__new_data(data);
> -	TEST_ASSERT_VAL("wrong nr",  map->nr == 2);
> -	TEST_ASSERT_VAL("wrong cpu", map->map[0].cpu == 1);
> -	TEST_ASSERT_VAL("wrong cpu", map->map[1].cpu == 256);
> +	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 2);
> +	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 0).cpu == 1);
> +	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 1).cpu == 256);
>  	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&map->refcnt) == 1);
>  	perf_cpu_map__put(map);
>  	return 0;
> @@ -130,7 +130,7 @@ static int test__cpu_map_merge(struct test_suite *test __maybe_unused, int subte
>  	struct perf_cpu_map *c = perf_cpu_map__merge(a, b);
>  	char buf[100];
>  
> -	TEST_ASSERT_VAL("failed to merge map: bad nr", c->nr == 5);
> +	TEST_ASSERT_VAL("failed to merge map: bad nr", perf_cpu_map__nr(c) == 5);
>  	cpu_map__snprint(c, buf, sizeof(buf));
>  	TEST_ASSERT_VAL("failed to merge map: bad result", !strcmp(buf, "1-2,4-5,7"));
>  	perf_cpu_map__put(b);
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 02/22] perf cpumap: Use for each loop
  2022-02-11 10:33 ` [PATCH v3 02/22] perf cpumap: Use for each loop Ian Rogers
@ 2022-02-11 17:04   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:04 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 02:33:55AM -0800, Ian Rogers escreveu:
> Improve readability in perf_pmu__cpus_match by using
> perf_cpu_map__for_each_cpu.
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/pmu.c | 14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
> index 8dfbba15aeb8..9a1c7e63e663 100644
> --- a/tools/perf/util/pmu.c
> +++ b/tools/perf/util/pmu.c
> @@ -1998,7 +1998,8 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
>  {
>  	struct perf_cpu_map *pmu_cpus = pmu->cpus;
>  	struct perf_cpu_map *matched_cpus, *unmatched_cpus;
> -	int matched_nr = 0, unmatched_nr = 0;
> +	struct perf_cpu cpu;
> +	int i, matched_nr = 0, unmatched_nr = 0;
>  
>  	matched_cpus = perf_cpu_map__default_new();
>  	if (!matched_cpus)
> @@ -2010,14 +2011,11 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
>  		return -1;
>  	}
>  
> -	for (int i = 0; i < cpus->nr; i++) {
> -		int cpu;
> -
> -		cpu = perf_cpu_map__idx(pmu_cpus, cpus->map[i]);
> -		if (cpu == -1)
> -			unmatched_cpus->map[unmatched_nr++] = cpus->map[i];
> +	perf_cpu_map__for_each_cpu(cpu, i, cpus) {

I'm applying this patch, but I wonder if we couldn't remove the need for
pre-declaring the integer iterator, so that the previous patch hunk
wouldn't be needed; see the sketch below.
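
Maybe something along these lines would do it (untested sketch, and the
macro name here is just an invention):

#define perf_cpu_map__for_each_cpu_no_idx(cpu, cpus)			\
	for (int __idx = 0;						\
	     __idx < perf_cpu_map__nr(cpus) &&				\
	     ((cpu) = perf_cpu_map__cpu(cpus, __idx), true);		\
	     __idx++)

so callers would only need the struct perf_cpu variable, not the int
iterator.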

> +		if (!perf_cpu_map__has(pmu_cpus, cpu))
> +			unmatched_cpus->map[unmatched_nr++] = cpu;
>  		else
> -			matched_cpus->map[matched_nr++] = cpus->map[i];
> +			matched_cpus->map[matched_nr++] = cpu;
>  	}
>  
>  	unmatched_cpus->nr = unmatched_nr;
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 10:33 ` [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs Ian Rogers
@ 2022-02-11 17:13   ` Arnaldo Carvalho de Melo
  2022-02-11 17:43     ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:13 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers escreveu:
> Make the pthread mutex on dso use the error-check type. This allows
> deadlock checking via the return value. Assert that the value returned
> from mutex lock is always 0.

I think this is too blunt/pervasive source-code-wise. Perhaps we should
wrap this like it's done with rwsem in tools/perf/util/rwsem.h, to get
away from pthreads primitives and make the source code look more like
kernel code, and then, taking advantage of that (so far needless)
indirection, add this BUG_ON only when building with "DEBUG=1" or
something, wdyt?
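
Roughly what I have in mind, modeled on rwsem.h (just a sketch, the
struct/function names and the DEBUG knob are guesses):

#include <pthread.h>
#include <linux/kernel.h>	/* for BUG_ON(), from tools/include */

struct mutex {
	pthread_mutex_t lock;
};

static inline void mutex_lock(struct mutex *mutex)
{
#ifdef DEBUG
	BUG_ON(pthread_mutex_lock(&mutex->lock) != 0);
#else
	pthread_mutex_lock(&mutex->lock);
#endif
}

static inline void mutex_unlock(struct mutex *mutex)
{
	pthread_mutex_unlock(&mutex->lock);
}

That way callers keep a kernel-ish mutex_lock()/mutex_unlock() and the
error checking cost only shows up in debug builds.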

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/dso.c    | 12 +++++++++---
>  tools/perf/util/symbol.c |  2 +-
>  2 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> index 9cc8a1772b4b..6beccffeef7b 100644
> --- a/tools/perf/util/dso.c
> +++ b/tools/perf/util/dso.c
> @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
>  	struct rb_root *root = &dso->data.cache;
>  	struct rb_node *next = rb_first(root);
>  
> -	pthread_mutex_lock(&dso->lock);
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	while (next) {
>  		struct dso_cache *cache;
>  
> @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
>  	struct dso_cache *cache;
>  	u64 offset = new->offset;
>  
> -	pthread_mutex_lock(&dso->lock);
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	while (*p != NULL) {
>  		u64 end;
>  
> @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
>  	struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
>  
>  	if (dso != NULL) {
> +		pthread_mutexattr_t lock_attr;
> +
>  		strcpy(dso->name, name);
>  		if (id)
>  			dso->id = *id;
> @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
>  		dso->root = NULL;
>  		INIT_LIST_HEAD(&dso->node);
>  		INIT_LIST_HEAD(&dso->data.open_entry);
> -		pthread_mutex_init(&dso->lock, NULL);
> +		pthread_mutexattr_init(&lock_attr);
> +		pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> +		pthread_mutex_init(&dso->lock, &lock_attr);
> +		pthread_mutexattr_destroy(&lock_attr);
>  		refcount_set(&dso->refcnt, 1);
> +
>  	}
>  
>  	return dso;
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index b2ed3140a1fa..43f47532696f 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
>  	}
>  
>  	nsinfo__mountns_enter(dso->nsinfo, &nsc);
> -	pthread_mutex_lock(&dso->lock);
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  
>  	/* check again under the dso->lock */
>  	if (dso__loaded(dso)) {
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo
  2022-02-11 10:33 ` [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo Ian Rogers
@ 2022-02-11 17:14   ` Arnaldo Carvalho de Melo
  2022-02-12 11:30   ` Jiri Olsa
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:14 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 02:33:57AM -0800, Ian Rogers escreveu:
> There may be threads racing to update dso->nsinfo:
> https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
> Holding the dso->lock avoids use-after-free, memory leaks and other
> such bugs. Apply the fix from:
> https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
> for the missing nsinfo__put, now that the accesses are free of data
> races.

I think this pollutes the source code too much; see the previous
comment, which I think would cover this case as well.

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/builtin-inject.c   | 4 ++++
>  tools/perf/util/dso.c         | 5 ++++-
>  tools/perf/util/map.c         | 3 +++
>  tools/perf/util/probe-event.c | 2 ++
>  tools/perf/util/symbol.c      | 2 +-
>  5 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> index fbf43a454cba..bede332bf0e2 100644
> --- a/tools/perf/builtin-inject.c
> +++ b/tools/perf/builtin-inject.c
> @@ -363,8 +363,10 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
>  	}
>  
>  	if (dso) {
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  		nsinfo__put(dso->nsinfo);
>  		dso->nsinfo = nsi;
> +		pthread_mutex_unlock(&dso->lock);
>  	} else
>  		nsinfo__put(nsi);
>  
> @@ -547,7 +549,9 @@ static int dso__read_build_id(struct dso *dso)
>  	if (dso->has_build_id)
>  		return 0;
>  
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	nsinfo__mountns_enter(dso->nsinfo, &nsc);
> +	pthread_mutex_unlock(&dso->lock);
>  	if (filename__read_build_id(dso->long_name, &dso->bid) > 0)
>  		dso->has_build_id = true;
>  	nsinfo__mountns_exit(&nsc);
> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> index 6beccffeef7b..b2f570adba35 100644
> --- a/tools/perf/util/dso.c
> +++ b/tools/perf/util/dso.c
> @@ -548,8 +548,11 @@ static int open_dso(struct dso *dso, struct machine *machine)
>  	int fd;
>  	struct nscookie nsc;
>  
> -	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
> +	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE) {
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  		nsinfo__mountns_enter(dso->nsinfo, &nsc);
> +		pthread_mutex_unlock(&dso->lock);
> +	}
>  	fd = __open_dso(dso, machine);
>  	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
>  		nsinfo__mountns_exit(&nsc);
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 8af693d9678c..ae99b52502d5 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -192,7 +192,10 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  			if (!(prot & PROT_EXEC))
>  				dso__set_loaded(dso);
>  		}
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> +		nsinfo__put(dso->nsinfo);
>  		dso->nsinfo = nsi;
> +		pthread_mutex_unlock(&dso->lock);
>  
>  		if (build_id__is_defined(bid))
>  			dso__set_build_id(dso, bid);
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index a834918a0a0d..7444e689ece7 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -180,8 +180,10 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
>  
>  		map = dso__new_map(target);
>  		if (map && map->dso) {
> +			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
>  			nsinfo__put(map->dso->nsinfo);
>  			map->dso->nsinfo = nsinfo__get(nsi);
> +			pthread_mutex_unlock(&map->dso->lock);
>  		}
>  		return map;
>  	} else {
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 43f47532696f..a504346feb05 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1774,6 +1774,7 @@ int dso__load(struct dso *dso, struct map *map)
>  	char newmapname[PATH_MAX];
>  	const char *map_path = dso->long_name;
>  
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	perfmap = strncmp(dso->name, "/tmp/perf-", 10) == 0;
>  	if (perfmap) {
>  		if (dso->nsinfo && (dso__find_perf_map(newmapname,
> @@ -1783,7 +1784,6 @@ int dso__load(struct dso *dso, struct map *map)
>  	}
>  
>  	nsinfo__mountns_enter(dso->nsinfo, &nsc);
> -	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  
>  	/* check again under the dso->lock */
>  	if (dso__loaded(dso)) {
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 05/22] perf maps: Use a pointer for kmaps
  2022-02-11 10:33 ` [PATCH v3 05/22] perf maps: Use a pointer for kmaps Ian Rogers
@ 2022-02-11 17:23   ` Arnaldo Carvalho de Melo
  2022-02-14 19:45     ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:23 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 02:33:58AM -0800, Ian Rogers escreveu:
> struct maps is reference counted; using a pointer is more idiomatic.

So, I tried to apply this after adding the following to the cset commit
log, to make sure reviewers know that this is just a clarifying comment,
no code change:

Committer notes:

Definition of machine__kernel_maps(machine), the replacement of &machine->kmaps

static inline
struct maps *machine__kernel_maps(struct machine *machine)
{
        return machine->kmaps;
}

but then, when building on an f34 system, I got:

  CC      /tmp/build/perf/bench/inject-buildid.o
In file included from /var/home/acme/git/perf/tools/perf/util/build-id.h:10,
                 from /var/home/acme/git/perf/tools/perf/util/dso.h:13,
                 from tests/vmlinux-kallsyms.c:8:
In function ‘machine__kernel_maps’,
    inlined from ‘test__vmlinux_matches_kallsyms’ at tests/vmlinux-kallsyms.c:122:22:
/var/home/acme/git/perf/tools/perf/util/machine.h:86:23: error: ‘vmlinux.kmaps’ is used uninitialized [-Werror=uninitialized]
   86 |         return machine->kmaps;
      |                ~~~~~~~^~~~~~~
tests/vmlinux-kallsyms.c: In function ‘test__vmlinux_matches_kallsyms’:
tests/vmlinux-kallsyms.c:121:34: note: ‘vmlinux’ declared here
  121 |         struct machine kallsyms, vmlinux;
      |                                  ^~~~~~~
cc1: all warnings being treated as errors
make[4]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:96: /tmp/build/perf/tests/vmlinux-kallsyms.o] Error 1
make[4]: *** Waiting for unfinished jobs....
  CC      /tmp/build/perf/util/config.o
  CC      /tmp/build/perf/arch/x86/util/archinsn.o
  CC      /tmp/build/perf/arch/x86/util/intel-pt.o
  CC      /tmp/build/perf/arch/x86/util/intel-bts.o
  CC      /tmp/build/perf/util/db-export.o
  CC      /tmp/build/perf/util/event.o
make[3]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:139: tests] Error 2
make[3]: *** Waiting for unfinished jobs....

Can you please take a look at that?
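
Guessing from the warning, line 122 is probably reading vmlinux.kmaps
(now a pointer) before machine__init() has run on that on-stack struct.
If that's it, something like this should cure it (untested, the exact
surrounding code and the vmlinux_maps name are assumptions on my part):

	struct machine kallsyms, vmlinux;
	struct maps *vmlinux_maps;

	/* ... */

	machine__init(&vmlinux, "", HOST_KERNEL_ID);
	vmlinux_maps = machine__kernel_maps(&vmlinux); /* read kmaps only after init */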

- Arnaldo

 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/arch/x86/util/event.c    |  2 +-
>  tools/perf/tests/vmlinux-kallsyms.c |  4 +--
>  tools/perf/util/bpf-event.c         |  2 +-
>  tools/perf/util/callchain.c         |  2 +-
>  tools/perf/util/event.c             |  6 ++---
>  tools/perf/util/machine.c           | 38 ++++++++++++++++-------------
>  tools/perf/util/machine.h           |  8 +++---
>  tools/perf/util/probe-event.c       |  2 +-
>  8 files changed, 34 insertions(+), 30 deletions(-)
> 
> diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> index 9b31734ee968..e670f3547581 100644
> --- a/tools/perf/arch/x86/util/event.c
> +++ b/tools/perf/arch/x86/util/event.c
> @@ -18,7 +18,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  {
>  	int rc = 0;
>  	struct map *pos;
> -	struct maps *kmaps = &machine->kmaps;
> +	struct maps *kmaps = machine__kernel_maps(machine);
>  	union perf_event *event = zalloc(sizeof(event->mmap) +
>  					 machine->id_hdr_size);
>  
> diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> index e80df13c0420..84bf5f640065 100644
> --- a/tools/perf/tests/vmlinux-kallsyms.c
> +++ b/tools/perf/tests/vmlinux-kallsyms.c
> @@ -293,7 +293,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		 * so use the short name, less descriptive but the same ("[kernel]" in
>  		 * both cases.
>  		 */
> -		pair = maps__find_by_name(&kallsyms.kmaps, (map->dso->kernel ?
> +		pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
>  								map->dso->short_name :
>  								map->dso->name));
>  		if (pair) {
> @@ -315,7 +315,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
>  		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
>  
> -		pair = maps__find(&kallsyms.kmaps, mem_start);
> +		pair = maps__find(kallsyms.kmaps, mem_start);
>  		if (pair == NULL || pair->priv)
>  			continue;
>  
> diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
> index a517eaa51eb3..33257b594a71 100644
> --- a/tools/perf/util/bpf-event.c
> +++ b/tools/perf/util/bpf-event.c
> @@ -92,7 +92,7 @@ static int machine__process_bpf_event_load(struct machine *machine,
>  	for (i = 0; i < info_linear->info.nr_jited_ksyms; i++) {
>  		u64 *addrs = (u64 *)(uintptr_t)(info_linear->info.jited_ksyms);
>  		u64 addr = addrs[i];
> -		struct map *map = maps__find(&machine->kmaps, addr);
> +		struct map *map = maps__find(machine__kernel_maps(machine), addr);
>  
>  		if (map) {
>  			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> index 131207b91d15..5c27a4b2e7a7 100644
> --- a/tools/perf/util/callchain.c
> +++ b/tools/perf/util/callchain.c
> @@ -1119,7 +1119,7 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
>  			goto out;
>  	}
>  
> -	if (al->maps == &al->maps->machine->kmaps) {
> +	if (al->maps == machine__kernel_maps(al->maps->machine)) {
>  		if (machine__is_host(al->maps->machine)) {
>  			al->cpumode = PERF_RECORD_MISC_KERNEL;
>  			al->level = 'k';
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index fe24801f8e9f..6439c888ae38 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -484,7 +484,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
>  	if (machine) {
>  		struct addr_location al;
>  
> -		al.map = maps__find(&machine->kmaps, tp->addr);
> +		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
>  		if (al.map && map__load(al.map) >= 0) {
>  			al.addr = al.map->map_ip(al.map, tp->addr);
>  			al.sym = map__find_symbol(al.map, al.addr);
> @@ -587,13 +587,13 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  
>  	if (cpumode == PERF_RECORD_MISC_KERNEL && perf_host) {
>  		al->level = 'k';
> -		al->maps = maps = &machine->kmaps;
> +		al->maps = maps = machine__kernel_maps(machine);
>  		load_map = true;
>  	} else if (cpumode == PERF_RECORD_MISC_USER && perf_host) {
>  		al->level = '.';
>  	} else if (cpumode == PERF_RECORD_MISC_GUEST_KERNEL && perf_guest) {
>  		al->level = 'g';
> -		al->maps = maps = &machine->kmaps;
> +		al->maps = maps = machine__kernel_maps(machine);
>  		load_map = true;
>  	} else if (cpumode == PERF_RECORD_MISC_GUEST_USER && perf_guest) {
>  		al->level = 'u';
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index f70ba56912d4..57fbdba66425 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -89,7 +89,10 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
>  	int err = -ENOMEM;
>  
>  	memset(machine, 0, sizeof(*machine));
> -	maps__init(&machine->kmaps, machine);
> +	machine->kmaps = maps__new(machine);
> +	if (machine->kmaps == NULL)
> +		return -ENOMEM;
> +
>  	RB_CLEAR_NODE(&machine->rb_node);
>  	dsos__init(&machine->dsos);
>  
> @@ -108,7 +111,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
>  
>  	machine->root_dir = strdup(root_dir);
>  	if (machine->root_dir == NULL)
> -		return -ENOMEM;
> +		goto out;
>  
>  	if (machine__set_mmap_name(machine))
>  		goto out;
> @@ -131,6 +134,7 @@ int machine__init(struct machine *machine, const char *root_dir, pid_t pid)
>  
>  out:
>  	if (err) {
> +		zfree(&machine->kmaps);
>  		zfree(&machine->root_dir);
>  		zfree(&machine->mmap_name);
>  	}
> @@ -220,7 +224,7 @@ void machine__exit(struct machine *machine)
>  		return;
>  
>  	machine__destroy_kernel_maps(machine);
> -	maps__exit(&machine->kmaps);
> +	maps__delete(machine->kmaps);
>  	dsos__exit(&machine->dsos);
>  	machine__exit_vdso(machine);
>  	zfree(&machine->root_dir);
> @@ -778,7 +782,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  					     struct perf_sample *sample __maybe_unused)
>  {
>  	struct symbol *sym;
> -	struct map *map = maps__find(&machine->kmaps, event->ksymbol.addr);
> +	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
>  
>  	if (!map) {
>  		struct dso *dso = dso__new(event->ksymbol.name);
> @@ -801,7 +805,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  
>  		map->start = event->ksymbol.addr;
>  		map->end = map->start + event->ksymbol.len;
> -		maps__insert(&machine->kmaps, map);
> +		maps__insert(machine__kernel_maps(machine), map);
>  		map__put(map);
>  		dso__set_loaded(dso);
>  
> @@ -827,12 +831,12 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
>  	struct symbol *sym;
>  	struct map *map;
>  
> -	map = maps__find(&machine->kmaps, event->ksymbol.addr);
> +	map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
>  	if (!map)
>  		return 0;
>  
>  	if (map != machine->vmlinux_map)
> -		maps__remove(&machine->kmaps, map);
> +		maps__remove(machine__kernel_maps(machine), map);
>  	else {
>  		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
>  		if (sym)
> @@ -858,7 +862,7 @@ int machine__process_ksymbol(struct machine *machine __maybe_unused,
>  int machine__process_text_poke(struct machine *machine, union perf_event *event,
>  			       struct perf_sample *sample __maybe_unused)
>  {
> -	struct map *map = maps__find(&machine->kmaps, event->text_poke.addr);
> +	struct map *map = maps__find(machine__kernel_maps(machine), event->text_poke.addr);
>  	u8 cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
>  
>  	if (dump_trace)
> @@ -914,7 +918,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  	if (map == NULL)
>  		goto out;
>  
> -	maps__insert(&machine->kmaps, map);
> +	maps__insert(machine__kernel_maps(machine), map);
>  
>  	/* Put the map here because maps__insert already got it */
>  	map__put(map);
> @@ -1100,7 +1104,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
>  
>  	strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
>  
> -	maps__insert(&machine->kmaps, map);
> +	maps__insert(machine__kernel_maps(machine), map);
>  
>  	pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
>  		  kmap->name, map->start, map->end);
> @@ -1145,7 +1149,7 @@ static u64 find_entry_trampoline(struct dso *dso)
>  int machine__map_x86_64_entry_trampolines(struct machine *machine,
>  					  struct dso *kernel)
>  {
> -	struct maps *kmaps = &machine->kmaps;
> +	struct maps *kmaps = machine__kernel_maps(machine);
>  	int nr_cpus_avail, cpu;
>  	bool found = false;
>  	struct map *map;
> @@ -1215,7 +1219,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  		return -1;
>  
>  	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> -	maps__insert(&machine->kmaps, machine->vmlinux_map);
> +	maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
>  	return 0;
>  }
>  
> @@ -1228,7 +1232,7 @@ void machine__destroy_kernel_maps(struct machine *machine)
>  		return;
>  
>  	kmap = map__kmap(map);
> -	maps__remove(&machine->kmaps, map);
> +	maps__remove(machine__kernel_maps(machine), map);
>  	if (kmap && kmap->ref_reloc_sym) {
>  		zfree((char **)&kmap->ref_reloc_sym->name);
>  		zfree(&kmap->ref_reloc_sym);
> @@ -1323,7 +1327,7 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
>  		 * kernel, with modules between them, fixup the end of all
>  		 * sections.
>  		 */
> -		maps__fixup_end(&machine->kmaps);
> +		maps__fixup_end(machine__kernel_maps(machine));
>  	}
>  
>  	return ret;
> @@ -1471,7 +1475,7 @@ static int machine__set_modules_path(struct machine *machine)
>  		 machine->root_dir, version);
>  	free(version);
>  
> -	return maps__set_modules_path_dir(&machine->kmaps, modules_path, 0);
> +	return maps__set_modules_path_dir(machine__kernel_maps(machine), modules_path, 0);
>  }
>  int __weak arch__fix_module_text_start(u64 *start __maybe_unused,
>  				u64 *size __maybe_unused,
> @@ -1544,11 +1548,11 @@ static void machine__update_kernel_mmap(struct machine *machine,
>  	struct map *map = machine__kernel_map(machine);
>  
>  	map__get(map);
> -	maps__remove(&machine->kmaps, map);
> +	maps__remove(machine__kernel_maps(machine), map);
>  
>  	machine__set_kernel_mmap(machine, start, end);
>  
> -	maps__insert(&machine->kmaps, map);
> +	maps__insert(machine__kernel_maps(machine), map);
>  	map__put(map);
>  }
>  
> diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> index c5a45dc8df4c..0023165422aa 100644
> --- a/tools/perf/util/machine.h
> +++ b/tools/perf/util/machine.h
> @@ -51,7 +51,7 @@ struct machine {
>  	struct vdso_info  *vdso_info;
>  	struct perf_env   *env;
>  	struct dsos	  dsos;
> -	struct maps	  kmaps;
> +	struct maps	  *kmaps;
>  	struct map	  *vmlinux_map;
>  	u64		  kernel_start;
>  	pid_t		  *current_tid;
> @@ -83,7 +83,7 @@ struct map *machine__kernel_map(struct machine *machine)
>  static inline
>  struct maps *machine__kernel_maps(struct machine *machine)
>  {
> -	return &machine->kmaps;
> +	return machine->kmaps;
>  }
>  
>  int machine__get_kernel_start(struct machine *machine);
> @@ -223,7 +223,7 @@ static inline
>  struct symbol *machine__find_kernel_symbol(struct machine *machine, u64 addr,
>  					   struct map **mapp)
>  {
> -	return maps__find_symbol(&machine->kmaps, addr, mapp);
> +	return maps__find_symbol(machine->kmaps, addr, mapp);
>  }
>  
>  static inline
> @@ -231,7 +231,7 @@ struct symbol *machine__find_kernel_symbol_by_name(struct machine *machine,
>  						   const char *name,
>  						   struct map **mapp)
>  {
> -	return maps__find_symbol_by_name(&machine->kmaps, name, mapp);
> +	return maps__find_symbol_by_name(machine->kmaps, name, mapp);
>  }
>  
>  int arch__fix_module_text_start(u64 *start, u64 *size, const char *name);
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index 7444e689ece7..bc5ab782ace5 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -334,7 +334,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
>  		char module_name[128];
>  
>  		snprintf(module_name, sizeof(module_name), "[%s]", module);
> -		map = maps__find_by_name(&host_machine->kmaps, module_name);
> +		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
>  		if (map) {
>  			dso = map->dso;
>  			goto found;
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 06/22] perf test: Use pointer for maps
  2022-02-11 10:33 ` [PATCH v3 06/22] perf test: Use pointer for maps Ian Rogers
@ 2022-02-11 17:24   ` Arnaldo Carvalho de Melo
  2022-02-14 19:48   ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:24 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:33:59AM -0800, Ian Rogers wrote:
> struct maps is reference counted, using a pointer is more idiomatic.
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/tests/maps.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
> index e308a3296cef..6f53f17f788e 100644
> --- a/tools/perf/tests/maps.c
> +++ b/tools/perf/tests/maps.c
> @@ -35,7 +35,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
>  
>  static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
>  {
> -	struct maps maps;
> +	struct maps *maps;
>  	unsigned int i;
>  	struct map_def bpf_progs[] = {
>  		{ "bpf_prog_1", 200, 300 },
> @@ -64,7 +64,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  	struct map *map_kcore1, *map_kcore2, *map_kcore3;
>  	int ret;
>  
> -	maps__init(&maps, NULL);
> +	maps = maps__new(NULL);


Any __new() method can fail, so we should check for that and bail out.
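
Something like this (just an untested sketch on top of this patch, reusing the
TEST_ASSERT_VAL macro already used in this test) should do:

	maps = maps__new(NULL);
	TEST_ASSERT_VAL("failed to create maps", maps != NULL);

so the later maps__insert() and maps__merge_in() calls never see a NULL maps.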
  
>  	for (i = 0; i < ARRAY_SIZE(bpf_progs); i++) {
>  		struct map *map;
> @@ -74,7 +74,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  
>  		map->start = bpf_progs[i].start;
>  		map->end   = bpf_progs[i].end;
> -		maps__insert(&maps, map);
> +		maps__insert(maps, map);
>  		map__put(map);
>  	}
>  
> @@ -99,25 +99,25 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  	map_kcore3->start = 880;
>  	map_kcore3->end   = 1100;
>  
> -	ret = maps__merge_in(&maps, map_kcore1);
> +	ret = maps__merge_in(maps, map_kcore1);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
> +	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	ret = maps__merge_in(&maps, map_kcore2);
> +	ret = maps__merge_in(maps, map_kcore2);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
> +	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	ret = maps__merge_in(&maps, map_kcore3);
> +	ret = maps__merge_in(maps, map_kcore3);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged3, ARRAY_SIZE(merged3), &maps);
> +	ret = check_maps(merged3, ARRAY_SIZE(merged3), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	maps__exit(&maps);
> +	maps__delete(maps);
>  	return TEST_OK;
>  }
>  
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 07/22] perf maps: Reduce scope of init and exit
  2022-02-11 10:34 ` [PATCH v3 07/22] perf maps: Reduce scope of init and exit Ian Rogers
@ 2022-02-11 17:26   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:26 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:00AM -0800, Ian Rogers wrote:
> maps__init() and maps__exit() are now purely accessed through maps__new()
> and maps__delete(), so reduce them to file scope.

Seems to depend on previously dropped patch proposals.

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/map.c  | 4 ++--
>  tools/perf/util/maps.h | 2 --
>  2 files changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index ae99b52502d5..4d1de363c19a 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -527,7 +527,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
>  	return ip + map->reloc;
>  }
>  
> -void maps__init(struct maps *maps, struct machine *machine)
> +static void maps__init(struct maps *maps, struct machine *machine)
>  {
>  	maps->entries = RB_ROOT;
>  	init_rwsem(&maps->lock);
> @@ -616,7 +616,7 @@ static void __maps__purge(struct maps *maps)
>  	}
>  }
>  
> -void maps__exit(struct maps *maps)
> +static void maps__exit(struct maps *maps)
>  {
>  	down_write(&maps->lock);
>  	__maps__purge(maps);
> diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
> index 3dd000ddf925..7e729ff42749 100644
> --- a/tools/perf/util/maps.h
> +++ b/tools/perf/util/maps.h
> @@ -60,8 +60,6 @@ static inline struct maps *maps__get(struct maps *maps)
>  }
>  
>  void maps__put(struct maps *maps);
> -void maps__init(struct maps *maps, struct machine *machine);
> -void maps__exit(struct maps *maps);
>  int maps__clone(struct thread *thread, struct maps *parent);
>  size_t maps__fprintf(struct maps *maps, FILE *fp);
>  
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 08/22] perf maps: Move maps code to own C file
  2022-02-11 10:34 ` [PATCH v3 08/22] perf maps: Move maps code to own C file Ian Rogers
@ 2022-02-11 17:27   ` Arnaldo Carvalho de Melo
  2022-02-14 19:58   ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:27 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:01AM -0800, Ian Rogers wrote:
> The maps code has its own header; move the corresponding C function
> definitions to their own C file. In the process, tidy and minimize the
> includes.

Depends on patches that haven't been processed yet.
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/Build  |   1 +
>  tools/perf/util/map.c  | 417 +----------------------------------------
>  tools/perf/util/map.h  |   2 +
>  tools/perf/util/maps.c | 403 +++++++++++++++++++++++++++++++++++++++
>  4 files changed, 414 insertions(+), 409 deletions(-)
>  create mode 100644 tools/perf/util/maps.c
> 
> diff --git a/tools/perf/util/Build b/tools/perf/util/Build
> index 2a403cefcaf2..9a7209a99e16 100644
> --- a/tools/perf/util/Build
> +++ b/tools/perf/util/Build
> @@ -56,6 +56,7 @@ perf-y += debug.o
>  perf-y += fncache.o
>  perf-y += machine.o
>  perf-y += map.o
> +perf-y += maps.o
>  perf-y += pstack.o
>  perf-y += session.o
>  perf-y += sample-raw.o
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 4d1de363c19a..2cfe5744b86c 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -1,31 +1,20 @@
>  // SPDX-License-Identifier: GPL-2.0
> -#include "symbol.h"
> -#include <assert.h>
> -#include <errno.h>
>  #include <inttypes.h>
>  #include <limits.h>
> +#include <stdio.h>
>  #include <stdlib.h>
>  #include <string.h>
> -#include <stdio.h>
> -#include <unistd.h>
> +#include <linux/string.h>
> +#include <linux/zalloc.h>
>  #include <uapi/linux/mman.h> /* To get things like MAP_HUGETLB even on older libc headers */
> +#include "debug.h"
>  #include "dso.h"
>  #include "map.h"
> -#include "map_symbol.h"
> +#include "namespaces.h"
> +#include "srcline.h"
> +#include "symbol.h"
>  #include "thread.h"
>  #include "vdso.h"
> -#include "build-id.h"
> -#include "debug.h"
> -#include "machine.h"
> -#include <linux/string.h>
> -#include <linux/zalloc.h>
> -#include "srcline.h"
> -#include "namespaces.h"
> -#include "unwind.h"
> -#include "srccode.h"
> -#include "ui/ui.h"
> -
> -static void __maps__insert(struct maps *maps, struct map *map);
>  
>  static inline int is_android_lib(const char *filename)
>  {
> @@ -527,403 +516,13 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
>  	return ip + map->reloc;
>  }
>  
> -static void maps__init(struct maps *maps, struct machine *machine)
> -{
> -	maps->entries = RB_ROOT;
> -	init_rwsem(&maps->lock);
> -	maps->machine = machine;
> -	maps->last_search_by_name = NULL;
> -	maps->nr_maps = 0;
> -	maps->maps_by_name = NULL;
> -	refcount_set(&maps->refcnt, 1);
> -}
> -
> -static void __maps__free_maps_by_name(struct maps *maps)
> -{
> -	/*
> -	 * Free everything to try to do it from the rbtree in the next search
> -	 */
> -	zfree(&maps->maps_by_name);
> -	maps->nr_maps_allocated = 0;
> -}
> -
> -void maps__insert(struct maps *maps, struct map *map)
> -{
> -	down_write(&maps->lock);
> -	__maps__insert(maps, map);
> -	++maps->nr_maps;
> -
> -	if (map->dso && map->dso->kernel) {
> -		struct kmap *kmap = map__kmap(map);
> -
> -		if (kmap)
> -			kmap->kmaps = maps;
> -		else
> -			pr_err("Internal error: kernel dso with non kernel map\n");
> -	}
> -
> -
> -	/*
> -	 * If we already performed some search by name, then we need to add the just
> -	 * inserted map and resort.
> -	 */
> -	if (maps->maps_by_name) {
> -		if (maps->nr_maps > maps->nr_maps_allocated) {
> -			int nr_allocate = maps->nr_maps * 2;
> -			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
> -
> -			if (maps_by_name == NULL) {
> -				__maps__free_maps_by_name(maps);
> -				up_write(&maps->lock);
> -				return;
> -			}
> -
> -			maps->maps_by_name = maps_by_name;
> -			maps->nr_maps_allocated = nr_allocate;
> -		}
> -		maps->maps_by_name[maps->nr_maps - 1] = map;
> -		__maps__sort_by_name(maps);
> -	}
> -	up_write(&maps->lock);
> -}
> -
> -static void __maps__remove(struct maps *maps, struct map *map)
> -{
> -	rb_erase_init(&map->rb_node, &maps->entries);
> -	map__put(map);
> -}
> -
> -void maps__remove(struct maps *maps, struct map *map)
> -{
> -	down_write(&maps->lock);
> -	if (maps->last_search_by_name == map)
> -		maps->last_search_by_name = NULL;
> -
> -	__maps__remove(maps, map);
> -	--maps->nr_maps;
> -	if (maps->maps_by_name)
> -		__maps__free_maps_by_name(maps);
> -	up_write(&maps->lock);
> -}
> -
> -static void __maps__purge(struct maps *maps)
> -{
> -	struct map *pos, *next;
> -
> -	maps__for_each_entry_safe(maps, pos, next) {
> -		rb_erase_init(&pos->rb_node,  &maps->entries);
> -		map__put(pos);
> -	}
> -}
> -
> -static void maps__exit(struct maps *maps)
> -{
> -	down_write(&maps->lock);
> -	__maps__purge(maps);
> -	up_write(&maps->lock);
> -}
> -
> -bool maps__empty(struct maps *maps)
> -{
> -	return !maps__first(maps);
> -}
> -
> -struct maps *maps__new(struct machine *machine)
> -{
> -	struct maps *maps = zalloc(sizeof(*maps));
> -
> -	if (maps != NULL)
> -		maps__init(maps, machine);
> -
> -	return maps;
> -}
> -
> -void maps__delete(struct maps *maps)
> -{
> -	maps__exit(maps);
> -	unwind__finish_access(maps);
> -	free(maps);
> -}
> -
> -void maps__put(struct maps *maps)
> -{
> -	if (maps && refcount_dec_and_test(&maps->refcnt))
> -		maps__delete(maps);
> -}
> -
> -struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> -{
> -	struct map *map = maps__find(maps, addr);
> -
> -	/* Ensure map is loaded before using map->map_ip */
> -	if (map != NULL && map__load(map) >= 0) {
> -		if (mapp != NULL)
> -			*mapp = map;
> -		return map__find_symbol(map, map->map_ip(map, addr));
> -	}
> -
> -	return NULL;
> -}
> -
> -static bool map__contains_symbol(struct map *map, struct symbol *sym)
> +bool map__contains_symbol(struct map *map, struct symbol *sym)
>  {
>  	u64 ip = map->unmap_ip(map, sym->start);
>  
>  	return ip >= map->start && ip < map->end;
>  }
>  
> -struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
> -{
> -	struct symbol *sym;
> -	struct map *pos;
> -
> -	down_read(&maps->lock);
> -
> -	maps__for_each_entry(maps, pos) {
> -		sym = map__find_symbol_by_name(pos, name);
> -
> -		if (sym == NULL)
> -			continue;
> -		if (!map__contains_symbol(pos, sym)) {
> -			sym = NULL;
> -			continue;
> -		}
> -		if (mapp != NULL)
> -			*mapp = pos;
> -		goto out;
> -	}
> -
> -	sym = NULL;
> -out:
> -	up_read(&maps->lock);
> -	return sym;
> -}
> -
> -int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> -{
> -	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> -		if (maps == NULL)
> -			return -1;
> -		ams->ms.map = maps__find(maps, ams->addr);
> -		if (ams->ms.map == NULL)
> -			return -1;
> -	}
> -
> -	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> -	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> -
> -	return ams->ms.sym ? 0 : -1;
> -}
> -
> -size_t maps__fprintf(struct maps *maps, FILE *fp)
> -{
> -	size_t printed = 0;
> -	struct map *pos;
> -
> -	down_read(&maps->lock);
> -
> -	maps__for_each_entry(maps, pos) {
> -		printed += fprintf(fp, "Map:");
> -		printed += map__fprintf(pos, fp);
> -		if (verbose > 2) {
> -			printed += dso__fprintf(pos->dso, fp);
> -			printed += fprintf(fp, "--\n");
> -		}
> -	}
> -
> -	up_read(&maps->lock);
> -
> -	return printed;
> -}
> -
> -int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> -{
> -	struct rb_root *root;
> -	struct rb_node *next, *first;
> -	int err = 0;
> -
> -	down_write(&maps->lock);
> -
> -	root = &maps->entries;
> -
> -	/*
> -	 * Find first map where end > map->start.
> -	 * Same as find_vma() in kernel.
> -	 */
> -	next = root->rb_node;
> -	first = NULL;
> -	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> -
> -		if (pos->end > map->start) {
> -			first = next;
> -			if (pos->start <= map->start)
> -				break;
> -			next = next->rb_left;
> -		} else
> -			next = next->rb_right;
> -	}
> -
> -	next = first;
> -	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> -		next = rb_next(&pos->rb_node);
> -
> -		/*
> -		 * Stop if current map starts after map->end.
> -		 * Maps are ordered by start: next will not overlap for sure.
> -		 */
> -		if (pos->start >= map->end)
> -			break;
> -
> -		if (verbose >= 2) {
> -
> -			if (use_browser) {
> -				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> -					   map->dso->name);
> -			} else {
> -				fputs("overlapping maps:\n", fp);
> -				map__fprintf(map, fp);
> -				map__fprintf(pos, fp);
> -			}
> -		}
> -
> -		rb_erase_init(&pos->rb_node, root);
> -		/*
> -		 * Now check if we need to create new maps for areas not
> -		 * overlapped by the new map:
> -		 */
> -		if (map->start > pos->start) {
> -			struct map *before = map__clone(pos);
> -
> -			if (before == NULL) {
> -				err = -ENOMEM;
> -				goto put_map;
> -			}
> -
> -			before->end = map->start;
> -			__maps__insert(maps, before);
> -			if (verbose >= 2 && !use_browser)
> -				map__fprintf(before, fp);
> -			map__put(before);
> -		}
> -
> -		if (map->end < pos->end) {
> -			struct map *after = map__clone(pos);
> -
> -			if (after == NULL) {
> -				err = -ENOMEM;
> -				goto put_map;
> -			}
> -
> -			after->start = map->end;
> -			after->pgoff += map->end - pos->start;
> -			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> -			__maps__insert(maps, after);
> -			if (verbose >= 2 && !use_browser)
> -				map__fprintf(after, fp);
> -			map__put(after);
> -		}
> -put_map:
> -		map__put(pos);
> -
> -		if (err)
> -			goto out;
> -	}
> -
> -	err = 0;
> -out:
> -	up_write(&maps->lock);
> -	return err;
> -}
> -
> -/*
> - * XXX This should not really _copy_ te maps, but refcount them.
> - */
> -int maps__clone(struct thread *thread, struct maps *parent)
> -{
> -	struct maps *maps = thread->maps;
> -	int err;
> -	struct map *map;
> -
> -	down_read(&parent->lock);
> -
> -	maps__for_each_entry(parent, map) {
> -		struct map *new = map__clone(map);
> -
> -		if (new == NULL) {
> -			err = -ENOMEM;
> -			goto out_unlock;
> -		}
> -
> -		err = unwind__prepare_access(maps, new, NULL);
> -		if (err)
> -			goto out_unlock;
> -
> -		maps__insert(maps, new);
> -		map__put(new);
> -	}
> -
> -	err = 0;
> -out_unlock:
> -	up_read(&parent->lock);
> -	return err;
> -}
> -
> -static void __maps__insert(struct maps *maps, struct map *map)
> -{
> -	struct rb_node **p = &maps->entries.rb_node;
> -	struct rb_node *parent = NULL;
> -	const u64 ip = map->start;
> -	struct map *m;
> -
> -	while (*p != NULL) {
> -		parent = *p;
> -		m = rb_entry(parent, struct map, rb_node);
> -		if (ip < m->start)
> -			p = &(*p)->rb_left;
> -		else
> -			p = &(*p)->rb_right;
> -	}
> -
> -	rb_link_node(&map->rb_node, parent, p);
> -	rb_insert_color(&map->rb_node, &maps->entries);
> -	map__get(map);
> -}
> -
> -struct map *maps__find(struct maps *maps, u64 ip)
> -{
> -	struct rb_node *p;
> -	struct map *m;
> -
> -	down_read(&maps->lock);
> -
> -	p = maps->entries.rb_node;
> -	while (p != NULL) {
> -		m = rb_entry(p, struct map, rb_node);
> -		if (ip < m->start)
> -			p = p->rb_left;
> -		else if (ip >= m->end)
> -			p = p->rb_right;
> -		else
> -			goto out;
> -	}
> -
> -	m = NULL;
> -out:
> -	up_read(&maps->lock);
> -	return m;
> -}
> -
> -struct map *maps__first(struct maps *maps)
> -{
> -	struct rb_node *first = rb_first(&maps->entries);
> -
> -	if (first)
> -		return rb_entry(first, struct map, rb_node);
> -	return NULL;
> -}
> -
>  static struct map *__map__next(struct map *map)
>  {
>  	struct rb_node *next = rb_next(&map->rb_node);
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index d32f5b28c1fb..973dce27b253 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -160,6 +160,8 @@ static inline bool __map__is_kmodule(const struct map *map)
>  
>  bool map__has_symbols(const struct map *map);
>  
> +bool map__contains_symbol(struct map *map, struct symbol *sym);
> +
>  #define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
>  
>  static inline bool is_entry_trampoline(const char *name)
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> new file mode 100644
> index 000000000000..ededabf0a230
> --- /dev/null
> +++ b/tools/perf/util/maps.c
> @@ -0,0 +1,403 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <errno.h>
> +#include <stdlib.h>
> +#include <linux/zalloc.h>
> +#include "debug.h"
> +#include "dso.h"
> +#include "map.h"
> +#include "maps.h"
> +#include "thread.h"
> +#include "ui/ui.h"
> +#include "unwind.h"
> +
> +static void __maps__insert(struct maps *maps, struct map *map);
> +
> +void maps__init(struct maps *maps, struct machine *machine)
> +{
> +	maps->entries = RB_ROOT;
> +	init_rwsem(&maps->lock);
> +	maps->machine = machine;
> +	maps->last_search_by_name = NULL;
> +	maps->nr_maps = 0;
> +	maps->maps_by_name = NULL;
> +	refcount_set(&maps->refcnt, 1);
> +}
> +
> +static void __maps__free_maps_by_name(struct maps *maps)
> +{
> +	/*
> +	 * Free everything to try to do it from the rbtree in the next search
> +	 */
> +	zfree(&maps->maps_by_name);
> +	maps->nr_maps_allocated = 0;
> +}
> +
> +void maps__insert(struct maps *maps, struct map *map)
> +{
> +	down_write(&maps->lock);
> +	__maps__insert(maps, map);
> +	++maps->nr_maps;
> +
> +	if (map->dso && map->dso->kernel) {
> +		struct kmap *kmap = map__kmap(map);
> +
> +		if (kmap)
> +			kmap->kmaps = maps;
> +		else
> +			pr_err("Internal error: kernel dso with non kernel map\n");
> +	}
> +
> +
> +	/*
> +	 * If we already performed some search by name, then we need to add the just
> +	 * inserted map and resort.
> +	 */
> +	if (maps->maps_by_name) {
> +		if (maps->nr_maps > maps->nr_maps_allocated) {
> +			int nr_allocate = maps->nr_maps * 2;
> +			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
> +
> +			if (maps_by_name == NULL) {
> +				__maps__free_maps_by_name(maps);
> +				up_write(&maps->lock);
> +				return;
> +			}
> +
> +			maps->maps_by_name = maps_by_name;
> +			maps->nr_maps_allocated = nr_allocate;
> +		}
> +		maps->maps_by_name[maps->nr_maps - 1] = map;
> +		__maps__sort_by_name(maps);
> +	}
> +	up_write(&maps->lock);
> +}
> +
> +static void __maps__remove(struct maps *maps, struct map *map)
> +{
> +	rb_erase_init(&map->rb_node, &maps->entries);
> +	map__put(map);
> +}
> +
> +void maps__remove(struct maps *maps, struct map *map)
> +{
> +	down_write(&maps->lock);
> +	if (maps->last_search_by_name == map)
> +		maps->last_search_by_name = NULL;
> +
> +	__maps__remove(maps, map);
> +	--maps->nr_maps;
> +	if (maps->maps_by_name)
> +		__maps__free_maps_by_name(maps);
> +	up_write(&maps->lock);
> +}
> +
> +static void __maps__purge(struct maps *maps)
> +{
> +	struct map *pos, *next;
> +
> +	maps__for_each_entry_safe(maps, pos, next) {
> +		rb_erase_init(&pos->rb_node,  &maps->entries);
> +		map__put(pos);
> +	}
> +}
> +
> +void maps__exit(struct maps *maps)
> +{
> +	down_write(&maps->lock);
> +	__maps__purge(maps);
> +	up_write(&maps->lock);
> +}
> +
> +bool maps__empty(struct maps *maps)
> +{
> +	return !maps__first(maps);
> +}
> +
> +struct maps *maps__new(struct machine *machine)
> +{
> +	struct maps *maps = zalloc(sizeof(*maps));
> +
> +	if (maps != NULL)
> +		maps__init(maps, machine);
> +
> +	return maps;
> +}
> +
> +void maps__delete(struct maps *maps)
> +{
> +	maps__exit(maps);
> +	unwind__finish_access(maps);
> +	free(maps);
> +}
> +
> +void maps__put(struct maps *maps)
> +{
> +	if (maps && refcount_dec_and_test(&maps->refcnt))
> +		maps__delete(maps);
> +}
> +
> +struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> +{
> +	struct map *map = maps__find(maps, addr);
> +
> +	/* Ensure map is loaded before using map->map_ip */
> +	if (map != NULL && map__load(map) >= 0) {
> +		if (mapp != NULL)
> +			*mapp = map;
> +		return map__find_symbol(map, map->map_ip(map, addr));
> +	}
> +
> +	return NULL;
> +}
> +
> +struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
> +{
> +	struct symbol *sym;
> +	struct map *pos;
> +
> +	down_read(&maps->lock);
> +
> +	maps__for_each_entry(maps, pos) {
> +		sym = map__find_symbol_by_name(pos, name);
> +
> +		if (sym == NULL)
> +			continue;
> +		if (!map__contains_symbol(pos, sym)) {
> +			sym = NULL;
> +			continue;
> +		}
> +		if (mapp != NULL)
> +			*mapp = pos;
> +		goto out;
> +	}
> +
> +	sym = NULL;
> +out:
> +	up_read(&maps->lock);
> +	return sym;
> +}
> +
> +int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> +{
> +	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> +		if (maps == NULL)
> +			return -1;
> +		ams->ms.map = maps__find(maps, ams->addr);
> +		if (ams->ms.map == NULL)
> +			return -1;
> +	}
> +
> +	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> +	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> +
> +	return ams->ms.sym ? 0 : -1;
> +}
> +
> +size_t maps__fprintf(struct maps *maps, FILE *fp)
> +{
> +	size_t printed = 0;
> +	struct map *pos;
> +
> +	down_read(&maps->lock);
> +
> +	maps__for_each_entry(maps, pos) {
> +		printed += fprintf(fp, "Map:");
> +		printed += map__fprintf(pos, fp);
> +		if (verbose > 2) {
> +			printed += dso__fprintf(pos->dso, fp);
> +			printed += fprintf(fp, "--\n");
> +		}
> +	}
> +
> +	up_read(&maps->lock);
> +
> +	return printed;
> +}
> +
> +int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> +{
> +	struct rb_root *root;
> +	struct rb_node *next, *first;
> +	int err = 0;
> +
> +	down_write(&maps->lock);
> +
> +	root = &maps->entries;
> +
> +	/*
> +	 * Find first map where end > map->start.
> +	 * Same as find_vma() in kernel.
> +	 */
> +	next = root->rb_node;
> +	first = NULL;
> +	while (next) {
> +		struct map *pos = rb_entry(next, struct map, rb_node);
> +
> +		if (pos->end > map->start) {
> +			first = next;
> +			if (pos->start <= map->start)
> +				break;
> +			next = next->rb_left;
> +		} else
> +			next = next->rb_right;
> +	}
> +
> +	next = first;
> +	while (next) {
> +		struct map *pos = rb_entry(next, struct map, rb_node);
> +		next = rb_next(&pos->rb_node);
> +
> +		/*
> +		 * Stop if current map starts after map->end.
> +		 * Maps are ordered by start: next will not overlap for sure.
> +		 */
> +		if (pos->start >= map->end)
> +			break;
> +
> +		if (verbose >= 2) {
> +
> +			if (use_browser) {
> +				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> +					   map->dso->name);
> +			} else {
> +				fputs("overlapping maps:\n", fp);
> +				map__fprintf(map, fp);
> +				map__fprintf(pos, fp);
> +			}
> +		}
> +
> +		rb_erase_init(&pos->rb_node, root);
> +		/*
> +		 * Now check if we need to create new maps for areas not
> +		 * overlapped by the new map:
> +		 */
> +		if (map->start > pos->start) {
> +			struct map *before = map__clone(pos);
> +
> +			if (before == NULL) {
> +				err = -ENOMEM;
> +				goto put_map;
> +			}
> +
> +			before->end = map->start;
> +			__maps__insert(maps, before);
> +			if (verbose >= 2 && !use_browser)
> +				map__fprintf(before, fp);
> +			map__put(before);
> +		}
> +
> +		if (map->end < pos->end) {
> +			struct map *after = map__clone(pos);
> +
> +			if (after == NULL) {
> +				err = -ENOMEM;
> +				goto put_map;
> +			}
> +
> +			after->start = map->end;
> +			after->pgoff += map->end - pos->start;
> +			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> +			__maps__insert(maps, after);
> +			if (verbose >= 2 && !use_browser)
> +				map__fprintf(after, fp);
> +			map__put(after);
> +		}
> +put_map:
> +		map__put(pos);
> +
> +		if (err)
> +			goto out;
> +	}
> +
> +	err = 0;
> +out:
> +	up_write(&maps->lock);
> +	return err;
> +}
> +
> +/*
> + * XXX This should not really _copy_ te maps, but refcount them.
> + */
> +int maps__clone(struct thread *thread, struct maps *parent)
> +{
> +	struct maps *maps = thread->maps;
> +	int err;
> +	struct map *map;
> +
> +	down_read(&parent->lock);
> +
> +	maps__for_each_entry(parent, map) {
> +		struct map *new = map__clone(map);
> +
> +		if (new == NULL) {
> +			err = -ENOMEM;
> +			goto out_unlock;
> +		}
> +
> +		err = unwind__prepare_access(maps, new, NULL);
> +		if (err)
> +			goto out_unlock;
> +
> +		maps__insert(maps, new);
> +		map__put(new);
> +	}
> +
> +	err = 0;
> +out_unlock:
> +	up_read(&parent->lock);
> +	return err;
> +}
> +
> +static void __maps__insert(struct maps *maps, struct map *map)
> +{
> +	struct rb_node **p = &maps->entries.rb_node;
> +	struct rb_node *parent = NULL;
> +	const u64 ip = map->start;
> +	struct map *m;
> +
> +	while (*p != NULL) {
> +		parent = *p;
> +		m = rb_entry(parent, struct map, rb_node);
> +		if (ip < m->start)
> +			p = &(*p)->rb_left;
> +		else
> +			p = &(*p)->rb_right;
> +	}
> +
> +	rb_link_node(&map->rb_node, parent, p);
> +	rb_insert_color(&map->rb_node, &maps->entries);
> +	map__get(map);
> +}
> +
> +struct map *maps__find(struct maps *maps, u64 ip)
> +{
> +	struct rb_node *p;
> +	struct map *m;
> +
> +	down_read(&maps->lock);
> +
> +	p = maps->entries.rb_node;
> +	while (p != NULL) {
> +		m = rb_entry(p, struct map, rb_node);
> +		if (ip < m->start)
> +			p = p->rb_left;
> +		else if (ip >= m->end)
> +			p = p->rb_right;
> +		else
> +			goto out;
> +	}
> +
> +	m = NULL;
> +out:
> +	up_read(&maps->lock);
> +	return m;
> +}
> +
> +struct map *maps__first(struct maps *maps)
> +{
> +	struct rb_node *first = rb_first(&maps->entries);
> +
> +	if (first)
> +		return rb_entry(first, struct map, rb_node);
> +	return NULL;
> +}
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip
  2022-02-11 10:34 ` [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip Ian Rogers
@ 2022-02-11 17:28   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:28 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:02AM -0800, Ian Rogers wrote:
> The functions purely determine a value from the map and don't need to
> modify it. Move them to the C file as they are most commonly used via a
> function pointer.

Builds, applied.

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/map.c | 15 +++++++++++++++
>  tools/perf/util/map.h | 24 ++++++++----------------
>  2 files changed, 23 insertions(+), 16 deletions(-)
> 
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 2cfe5744b86c..b98fb000eb5c 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -563,3 +563,18 @@ struct maps *map__kmaps(struct map *map)
>  	}
>  	return kmap->kmaps;
>  }
> +
> +u64 map__map_ip(const struct map *map, u64 ip)
> +{
> +	return ip - map->start + map->pgoff;
> +}
> +
> +u64 map__unmap_ip(const struct map *map, u64 ip)
> +{
> +	return ip + map->start - map->pgoff;
> +}
> +
> +u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
> +{
> +	return ip;
> +}
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index 973dce27b253..212a9468d5e1 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -29,9 +29,9 @@ struct map {
>  	u64			reloc;
>  
>  	/* ip -> dso rip */
> -	u64			(*map_ip)(struct map *, u64);
> +	u64			(*map_ip)(const struct map *, u64);
>  	/* dso rip -> ip */
> -	u64			(*unmap_ip)(struct map *, u64);
> +	u64			(*unmap_ip)(const struct map *, u64);
>  
>  	struct dso		*dso;
>  	refcount_t		refcnt;
> @@ -44,20 +44,12 @@ struct kmap *__map__kmap(struct map *map);
>  struct kmap *map__kmap(struct map *map);
>  struct maps *map__kmaps(struct map *map);
>  
> -static inline u64 map__map_ip(struct map *map, u64 ip)
> -{
> -	return ip - map->start + map->pgoff;
> -}
> -
> -static inline u64 map__unmap_ip(struct map *map, u64 ip)
> -{
> -	return ip + map->start - map->pgoff;
> -}
> -
> -static inline u64 identity__map_ip(struct map *map __maybe_unused, u64 ip)
> -{
> -	return ip;
> -}
> +/* ip -> dso rip */
> +u64 map__map_ip(const struct map *map, u64 ip);
> +/* dso rip -> ip */
> +u64 map__unmap_ip(const struct map *map, u64 ip);
> +/* Returns ip */
> +u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
>  
>  static inline size_t map__size(const struct map *map)
>  {
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 10/22] perf map: Make map__contains_symbol args const
  2022-02-11 10:34 ` [PATCH v3 10/22] perf map: Make map__contains_symbol args const Ian Rogers
@ 2022-02-11 17:28   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:28 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:03AM -0800, Ian Rogers wrote:
> Now that unmap_ip is const, make the map__contains_symbol() args const too.

Not applying, waiting for a refresh of this patch set after the subset
that has been applied.

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/map.c | 2 +-
>  tools/perf/util/map.h | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index b98fb000eb5c..8bbf9246a3cf 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -516,7 +516,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
>  	return ip + map->reloc;
>  }
>  
> -bool map__contains_symbol(struct map *map, struct symbol *sym)
> +bool map__contains_symbol(const struct map *map, const struct symbol *sym)
>  {
>  	u64 ip = map->unmap_ip(map, sym->start);
>  
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index 212a9468d5e1..3dcfe06db6b3 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -152,7 +152,7 @@ static inline bool __map__is_kmodule(const struct map *map)
>  
>  bool map__has_symbols(const struct map *map);
>  
> -bool map__contains_symbol(struct map *map, struct symbol *sym);
> +bool map__contains_symbol(const struct map *map, const struct symbol *sym);
>  
>  #define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
>  
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo
  2022-02-11 10:34 ` [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo Ian Rogers
@ 2022-02-11 17:31   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:31 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:06AM -0800, Ian Rogers wrote:
> Having functions to access nsinfo reduces the number of places where
> reference count checking needs to be added.

Looks sensible, applied.

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/builtin-inject.c  |  2 +-
>  tools/perf/builtin-probe.c   |  2 +-
>  tools/perf/util/build-id.c   |  4 +--
>  tools/perf/util/jitdump.c    | 10 ++++----
>  tools/perf/util/map.c        |  4 +--
>  tools/perf/util/namespaces.c | 50 ++++++++++++++++++++++++++++--------
>  tools/perf/util/namespaces.h | 10 ++++++--
>  tools/perf/util/symbol.c     |  8 +++---
>  8 files changed, 63 insertions(+), 27 deletions(-)
> 
> diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> index bede332bf0e2..f7917c390e96 100644
> --- a/tools/perf/builtin-inject.c
> +++ b/tools/perf/builtin-inject.c
> @@ -354,7 +354,7 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
>  		nnsi = nsinfo__copy(nsi);
>  		if (nnsi) {
>  			nsinfo__put(nsi);
> -			nnsi->need_setns = false;
> +			nsinfo__clear_need_setns(nnsi);
>  			nsi = nnsi;
>  		}
>  		dso = machine__findnew_vdso(machine, thread);
> diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
> index c31627af75d4..f62298f5db3b 100644
> --- a/tools/perf/builtin-probe.c
> +++ b/tools/perf/builtin-probe.c
> @@ -217,7 +217,7 @@ static int opt_set_target_ns(const struct option *opt __maybe_unused,
>  			return ret;
>  		}
>  		nsip = nsinfo__new(ns_pid);
> -		if (nsip && nsip->need_setns)
> +		if (nsip && nsinfo__need_setns(nsip))
>  			params.nsi = nsinfo__get(nsip);
>  		nsinfo__put(nsip);
>  
> diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
> index e32e8f2ff3bd..7a5821c87f94 100644
> --- a/tools/perf/util/build-id.c
> +++ b/tools/perf/util/build-id.c
> @@ -706,7 +706,7 @@ build_id_cache__add(const char *sbuild_id, const char *name, const char *realnam
>  		if (is_kallsyms) {
>  			if (copyfile("/proc/kallsyms", filename))
>  				goto out_free;
> -		} else if (nsi && nsi->need_setns) {
> +		} else if (nsi && nsinfo__need_setns(nsi)) {
>  			if (copyfile_ns(name, filename, nsi))
>  				goto out_free;
>  		} else if (link(realname, filename) && errno != EEXIST &&
> @@ -730,7 +730,7 @@ build_id_cache__add(const char *sbuild_id, const char *name, const char *realnam
>  				goto out_free;
>  			}
>  			if (access(filename, F_OK)) {
> -				if (nsi && nsi->need_setns) {
> +				if (nsi && nsinfo__need_setns(nsi)) {
>  					if (copyfile_ns(debugfile, filename,
>  							nsi))
>  						goto out_free;
> diff --git a/tools/perf/util/jitdump.c b/tools/perf/util/jitdump.c
> index 917a9c707371..a23255773c60 100644
> --- a/tools/perf/util/jitdump.c
> +++ b/tools/perf/util/jitdump.c
> @@ -382,15 +382,15 @@ jit_inject_event(struct jit_buf_desc *jd, union perf_event *event)
>  
>  static pid_t jr_entry_pid(struct jit_buf_desc *jd, union jr_entry *jr)
>  {
> -	if (jd->nsi && jd->nsi->in_pidns)
> -		return jd->nsi->tgid;
> +	if (jd->nsi && nsinfo__in_pidns(jd->nsi))
> +		return nsinfo__tgid(jd->nsi);
>  	return jr->load.pid;
>  }
>  
>  static pid_t jr_entry_tid(struct jit_buf_desc *jd, union jr_entry *jr)
>  {
> -	if (jd->nsi && jd->nsi->in_pidns)
> -		return jd->nsi->pid;
> +	if (jd->nsi && nsinfo__in_pidns(jd->nsi))
> +		return nsinfo__pid(jd->nsi);
>  	return jr->load.tid;
>  }
>  
> @@ -779,7 +779,7 @@ jit_detect(char *mmap_name, pid_t pid, struct nsinfo *nsi)
>  	 * pid does not match mmap pid
>  	 * pid==0 in system-wide mode (synthesized)
>  	 */
> -	if (pid && pid2 != nsi->nstgid)
> +	if (pid && pid2 != nsinfo__nstgid(nsi))
>  		return -1;
>  	/*
>  	 * validate suffix
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index dfa5f6b7381f..166c84c829f6 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -139,7 +139,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  
>  		if ((anon || no_dso) && nsi && (prot & PROT_EXEC)) {
>  			snprintf(newfilename, sizeof(newfilename),
> -				 "/tmp/perf-%d.map", nsi->pid);
> +				 "/tmp/perf-%d.map", nsinfo__pid(nsi));
>  			filename = newfilename;
>  		}
>  
> @@ -156,7 +156,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  			nnsi = nsinfo__copy(nsi);
>  			if (nnsi) {
>  				nsinfo__put(nsi);
> -				nnsi->need_setns = false;
> +				nsinfo__clear_need_setns(nnsi);
>  				nsi = nnsi;
>  			}
>  			pgoff = 0;
> diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
> index 48aa3217300b..dd536220cdb9 100644
> --- a/tools/perf/util/namespaces.c
> +++ b/tools/perf/util/namespaces.c
> @@ -76,7 +76,7 @@ static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
>  		if (strstr(statln, "Tgid:") != NULL) {
>  			nsi->tgid = (pid_t)strtol(strrchr(statln, '\t'),
>  						     NULL, 10);
> -			nsi->nstgid = nsi->tgid;
> +			nsi->nstgid = nsinfo__tgid(nsi);
>  		}
>  
>  		if (strstr(statln, "NStgid:") != NULL) {
> @@ -108,7 +108,7 @@ int nsinfo__init(struct nsinfo *nsi)
>  	if (snprintf(oldns, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX)
>  		return rv;
>  
> -	if (asprintf(&newns, "/proc/%d/ns/mnt", nsi->pid) == -1)
> +	if (asprintf(&newns, "/proc/%d/ns/mnt", nsinfo__pid(nsi)) == -1)
>  		return rv;
>  
>  	if (stat(oldns, &old_stat) < 0)
> @@ -129,7 +129,7 @@ int nsinfo__init(struct nsinfo *nsi)
>  	/* If we're dealing with a process that is in a different PID namespace,
>  	 * attempt to work out the innermost tgid for the process.
>  	 */
> -	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsi->pid) >= PATH_MAX)
> +	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsinfo__pid(nsi)) >= PATH_MAX)
>  		goto out;
>  
>  	rv = nsinfo__get_nspid(nsi, spath);
> @@ -166,7 +166,7 @@ struct nsinfo *nsinfo__new(pid_t pid)
>  	return nsi;
>  }
>  
> -struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
> +struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
>  {
>  	struct nsinfo *nnsi;
>  
> @@ -175,11 +175,11 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
>  
>  	nnsi = calloc(1, sizeof(*nnsi));
>  	if (nnsi != NULL) {
> -		nnsi->pid = nsi->pid;
> -		nnsi->tgid = nsi->tgid;
> -		nnsi->nstgid = nsi->nstgid;
> -		nnsi->need_setns = nsi->need_setns;
> -		nnsi->in_pidns = nsi->in_pidns;
> +		nnsi->pid = nsinfo__pid(nsi);
> +		nnsi->tgid = nsinfo__tgid(nsi);
> +		nnsi->nstgid = nsinfo__nstgid(nsi);
> +		nnsi->need_setns = nsinfo__need_setns(nsi);
> +		nnsi->in_pidns = nsinfo__in_pidns(nsi);
>  		if (nsi->mntns_path) {
>  			nnsi->mntns_path = strdup(nsi->mntns_path);
>  			if (!nnsi->mntns_path) {
> @@ -193,7 +193,7 @@ struct nsinfo *nsinfo__copy(struct nsinfo *nsi)
>  	return nnsi;
>  }
>  
> -void nsinfo__delete(struct nsinfo *nsi)
> +static void nsinfo__delete(struct nsinfo *nsi)
>  {
>  	zfree(&nsi->mntns_path);
>  	free(nsi);
> @@ -212,6 +212,36 @@ void nsinfo__put(struct nsinfo *nsi)
>  		nsinfo__delete(nsi);
>  }
>  
> +bool nsinfo__need_setns(const struct nsinfo *nsi)
> +{
> +        return nsi->need_setns;
> +}
> +
> +void nsinfo__clear_need_setns(struct nsinfo *nsi)
> +{
> +        nsi->need_setns = false;
> +}
> +
> +pid_t nsinfo__tgid(const struct nsinfo  *nsi)
> +{
> +        return nsi->tgid;
> +}
> +
> +pid_t nsinfo__nstgid(const struct nsinfo  *nsi)
> +{
> +        return nsi->nstgid;
> +}
> +
> +pid_t nsinfo__pid(const struct nsinfo  *nsi)
> +{
> +        return nsi->pid;
> +}
> +
> +pid_t nsinfo__in_pidns(const struct nsinfo  *nsi)
> +{
> +        return nsi->in_pidns;
> +}
> +
>  void nsinfo__mountns_enter(struct nsinfo *nsi,
>  				  struct nscookie *nc)
>  {
> diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
> index 9ceea9643507..567829262c42 100644
> --- a/tools/perf/util/namespaces.h
> +++ b/tools/perf/util/namespaces.h
> @@ -47,12 +47,18 @@ struct nscookie {
>  
>  int nsinfo__init(struct nsinfo *nsi);
>  struct nsinfo *nsinfo__new(pid_t pid);
> -struct nsinfo *nsinfo__copy(struct nsinfo *nsi);
> -void nsinfo__delete(struct nsinfo *nsi);
> +struct nsinfo *nsinfo__copy(const struct nsinfo *nsi);
>  
>  struct nsinfo *nsinfo__get(struct nsinfo *nsi);
>  void nsinfo__put(struct nsinfo *nsi);
>  
> +bool nsinfo__need_setns(const struct nsinfo *nsi);
> +void nsinfo__clear_need_setns(struct nsinfo *nsi);
> +pid_t nsinfo__tgid(const struct nsinfo  *nsi);
> +pid_t nsinfo__nstgid(const struct nsinfo  *nsi);
> +pid_t nsinfo__pid(const struct nsinfo  *nsi);
> +pid_t nsinfo__in_pidns(const struct nsinfo  *nsi);
> +
>  void nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc);
>  void nsinfo__mountns_exit(struct nscookie *nc);
>  
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 266c65bb8bbb..e8045b1c8700 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1784,8 +1784,8 @@ static int dso__find_perf_map(char *filebuf, size_t bufsz,
>  
>  	nsi = *nsip;
>  
> -	if (nsi->need_setns) {
> -		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsi->nstgid);
> +	if (nsinfo__need_setns(nsi)) {
> +		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__nstgid(nsi));
>  		nsinfo__mountns_enter(nsi, &nsc);
>  		rc = access(filebuf, R_OK);
>  		nsinfo__mountns_exit(&nsc);
> @@ -1797,8 +1797,8 @@ static int dso__find_perf_map(char *filebuf, size_t bufsz,
>  	if (nnsi) {
>  		nsinfo__put(nsi);
>  
> -		nnsi->need_setns = false;
> -		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nnsi->tgid);
> +		nsinfo__clear_need_setns(nnsi);
> +		snprintf(filebuf, bufsz, "/tmp/perf-%d.map", nsinfo__tgid(nnsi));
>  		*nsip = nnsi;
>  		rc = 0;
>  	}
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 14/22] perf maps: Add functions to access maps
  2022-02-11 10:34 ` [PATCH v3 14/22] perf maps: Add functions to access maps Ian Rogers
@ 2022-02-11 17:33   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:33 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:07AM -0800, Ian Rogers wrote:
> Introduce functions to access struct maps. These functions reduce the
> number of places where reference counting is necessary. While tidying the
> APIs, do some small const-ification, in particular to unwind_libunwind_ops.
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  .../scripts/python/Perf-Trace-Util/Context.c  |  7 +-
>  tools/perf/tests/code-reading.c               |  2 +-
>  tools/perf/ui/browsers/hists.c                |  3 +-
>  tools/perf/util/callchain.c                   |  9 +--
>  tools/perf/util/db-export.c                   | 12 ++--
>  tools/perf/util/dlfilter.c                    |  8 ++-
>  tools/perf/util/event.c                       |  4 +-
>  tools/perf/util/hist.c                        |  2 +-
>  tools/perf/util/machine.c                     |  2 +-
>  tools/perf/util/map.c                         | 14 ++--
>  tools/perf/util/maps.c                        | 69 +++++++++++--------
>  tools/perf/util/maps.h                        | 47 ++++++++++---
>  .../scripting-engines/trace-event-python.c    |  2 +-
>  tools/perf/util/sort.c                        |  2 +-
>  tools/perf/util/symbol-elf.c                  |  2 +-
>  tools/perf/util/symbol.c                      | 36 +++++-----
>  tools/perf/util/thread-stack.c                |  4 +-
>  tools/perf/util/thread.c                      |  4 +-
>  tools/perf/util/unwind-libunwind-local.c      | 16 +++--
>  tools/perf/util/unwind-libunwind.c            | 30 +++++---
>  20 files changed, 170 insertions(+), 105 deletions(-)
> 
> diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> index 895f5fc23965..b64013a87c54 100644
> --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> @@ -98,10 +98,11 @@ static PyObject *perf_sample_insn(PyObject *obj, PyObject *args)
>  	if (!c)
>  		return NULL;
>  
> -	if (c->sample->ip && !c->sample->insn_len &&
> -	    c->al->thread->maps && c->al->thread->maps->machine)
> -		script_fetch_insn(c->sample, c->al->thread, c->al->thread->maps->machine);
> +	if (c->sample->ip && !c->sample->insn_len && c->al->thread->maps) {
> +		struct machine *machine =  maps__machine(c->al->thread->maps);
>  
> +		script_fetch_insn(c->sample, c->al->thread, machine);
> +	}

Please reflow this to reduce the number of patch lines; my first impression
is that this is possible and it would help in reviewing.
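
E.g., something along these lines (untested) should keep the same behaviour
as your version while making the hunk smaller:

	if (c->sample->ip && !c->sample->insn_len && c->al->thread->maps)
		script_fetch_insn(c->sample, c->al->thread,
				  maps__machine(c->al->thread->maps));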

>  	if (!c->sample->insn_len)
>  		Py_RETURN_NONE; /* N.B. This is a return statement */
>  
> diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
> index 5610767b407f..6eafe36a8704 100644
> --- a/tools/perf/tests/code-reading.c
> +++ b/tools/perf/tests/code-reading.c
> @@ -268,7 +268,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		len = al.map->end - addr;
>  
>  	/* Read the object code using perf */
> -	ret_len = dso__data_read_offset(al.map->dso, thread->maps->machine,
> +	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
>  					al.addr, buf1, len);
>  	if (ret_len != len) {
>  		pr_debug("dso__data_read_offset failed\n");
> diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> index b72ee6822222..572ff38ceb0f 100644
> --- a/tools/perf/ui/browsers/hists.c
> +++ b/tools/perf/ui/browsers/hists.c
> @@ -3139,7 +3139,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
>  			continue;
>  		case 'k':
>  			if (browser->selection != NULL)
> -				hists_browser__zoom_map(browser, browser->selection->maps->machine->vmlinux_map);
> +				hists_browser__zoom_map(browser,
> +					      maps__machine(browser->selection->maps)->vmlinux_map);
>  			continue;
>  		case 'V':
>  			verbose = (verbose + 1) % 4;
> diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> index 5c27a4b2e7a7..61bb3fb2107a 100644
> --- a/tools/perf/util/callchain.c
> +++ b/tools/perf/util/callchain.c
> @@ -1106,6 +1106,8 @@ int hist_entry__append_callchain(struct hist_entry *he, struct perf_sample *samp
>  int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *node,
>  			bool hide_unresolved)
>  {
> +	struct machine *machine = maps__machine(node->ms.maps);
> +
>  	al->maps = node->ms.maps;
>  	al->map = node->ms.map;
>  	al->sym = node->ms.sym;
> @@ -1118,9 +1120,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
>  		if (al->map == NULL)
>  			goto out;
>  	}
> -
> -	if (al->maps == machine__kernel_maps(al->maps->machine)) {
> -		if (machine__is_host(al->maps->machine)) {
> +	if (al->maps == machine__kernel_maps(machine)) {
> +		if (machine__is_host(machine)) {
>  			al->cpumode = PERF_RECORD_MISC_KERNEL;
>  			al->level = 'k';
>  		} else {
> @@ -1128,7 +1129,7 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
>  			al->level = 'g';
>  		}
>  	} else {
> -		if (machine__is_host(al->maps->machine)) {
> +		if (machine__is_host(machine)) {
>  			al->cpumode = PERF_RECORD_MISC_USER;
>  			al->level = '.';
>  		} else if (perf_guest) {
> diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
> index e0d4f08839fb..1cfcfdd3cf52 100644
> --- a/tools/perf/util/db-export.c
> +++ b/tools/perf/util/db-export.c
> @@ -181,7 +181,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
>  	if (al->map) {
>  		struct dso *dso = al->map->dso;
>  
> -		err = db_export__dso(dbe, dso, al->maps->machine);
> +		err = db_export__dso(dbe, dso, maps__machine(al->maps));
>  		if (err)
>  			return err;
>  		*dso_db_id = dso->db_id;
> @@ -354,19 +354,21 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
>  	};
>  	struct thread *main_thread;
>  	struct comm *comm = NULL;
> +	struct machine *machine;
>  	int err;
>  
>  	err = db_export__evsel(dbe, evsel);
>  	if (err)
>  		return err;
>  
> -	err = db_export__machine(dbe, al->maps->machine);
> +	machine = maps__machine(al->maps);
> +	err = db_export__machine(dbe, machine);
>  	if (err)
>  		return err;
>  
> -	main_thread = thread__main_thread(al->maps->machine, thread);
> +	main_thread = thread__main_thread(machine, thread);
>  
> -	err = db_export__threads(dbe, thread, main_thread, al->maps->machine, &comm);
> +	err = db_export__threads(dbe, thread, main_thread, machine, &comm);
>  	if (err)
>  		goto out_put;
>  
> @@ -380,7 +382,7 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
>  		goto out_put;
>  
>  	if (dbe->cpr) {
> -		struct call_path *cp = call_path_from_sample(dbe, al->maps->machine,
> +		struct call_path *cp = call_path_from_sample(dbe, machine,
>  							     thread, sample,
>  							     evsel);
>  		if (cp) {
> diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
> index db964d5a52af..d59462af15f1 100644
> --- a/tools/perf/util/dlfilter.c
> +++ b/tools/perf/util/dlfilter.c
> @@ -197,8 +197,12 @@ static const __u8 *dlfilter__insn(void *ctx, __u32 *len)
>  		if (!al->thread && machine__resolve(d->machine, al, d->sample) < 0)
>  			return NULL;
>  
> -		if (al->thread->maps && al->thread->maps->machine)
> -			script_fetch_insn(d->sample, al->thread, al->thread->maps->machine);
> +		if (al->thread->maps) {
> +			struct machine *machine = maps__machine(al->thread->maps);
> +
> +			if (machine)
> +				script_fetch_insn(d->sample, al->thread, machine);
> +		}
>  	}
>  
>  	if (!d->sample->insn_len)
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index 6439c888ae38..40a3b1a35613 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -571,7 +571,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  			     struct addr_location *al)
>  {
>  	struct maps *maps = thread->maps;
> -	struct machine *machine = maps->machine;
> +	struct machine *machine = maps__machine(maps);
>  	bool load_map = false;
>  
>  	al->maps = maps;
> @@ -636,7 +636,7 @@ struct map *thread__find_map_fb(struct thread *thread, u8 cpumode, u64 addr,
>  				struct addr_location *al)
>  {
>  	struct map *map = thread__find_map(thread, cpumode, addr, al);
> -	struct machine *machine = thread->maps->machine;
> +	struct machine *machine = maps__machine(thread->maps);
>  	u8 addr_cpumode = machine__addr_cpumode(machine, cpumode, addr);
>  
>  	if (map || addr_cpumode == cpumode)
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index 0a8033b09e28..78f9fbb925a7 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -237,7 +237,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  
>  	if (h->cgroup) {
>  		const char *cgrp_name = "unknown";
> -		struct cgroup *cgrp = cgroup__find(h->ms.maps->machine->env,
> +		struct cgroup *cgrp = cgroup__find(maps__machine(h->ms.maps)->env,
>  						   h->cgroup);
>  		if (cgrp != NULL)
>  			cgrp_name = cgrp->name;
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index fa25174cabf7..88279008e761 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -2739,7 +2739,7 @@ static int find_prev_cpumode(struct ip_callchain *chain, struct thread *thread,
>  static u64 get_leaf_frame_caller(struct perf_sample *sample,
>  		struct thread *thread, int usr_idx)
>  {
> -	if (machine__normalized_is(thread->maps->machine, "arm64"))
> +	if (machine__normalized_is(maps__machine(thread->maps), "arm64"))
>  		return get_leaf_frame_caller_aarch64(sample, thread, usr_idx);
>  	else
>  		return 0;
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 166c84c829f6..57e926ce115f 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -220,7 +220,7 @@ bool __map__is_kernel(const struct map *map)
>  {
>  	if (!map->dso->kernel)
>  		return false;
> -	return machine__kernel_map(map__kmaps((struct map *)map)->machine) == map;
> +	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
>  }
>  
>  bool __map__is_extra_kernel_map(const struct map *map)
> @@ -461,11 +461,15 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
>  	 * kcore may not either. However the trampoline object code is on the
>  	 * main kernel map, so just use that instead.
>  	 */
> -	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps && kmap->kmaps->machine) {
> -		struct map *kernel_map = machine__kernel_map(kmap->kmaps->machine);
> +	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps) {
> +		struct machine *machine = maps__machine(kmap->kmaps);
>  
> -		if (kernel_map)
> -			map = kernel_map;
> +		if (machine) {
> +			struct map *kernel_map = machine__kernel_map(machine);
> +
> +			if (kernel_map)
> +				map = kernel_map;
> +		}
>  	}
>  
>  	if (!map->dso->adjust_symbols)
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> index beb09b9a122c..9fc3e7186b8e 100644
> --- a/tools/perf/util/maps.c
> +++ b/tools/perf/util/maps.c
> @@ -13,7 +13,7 @@
>  static void maps__init(struct maps *maps, struct machine *machine)
>  {
>  	maps->entries = RB_ROOT;
> -	init_rwsem(&maps->lock);
> +	init_rwsem(maps__lock(maps));
>  	maps->machine = machine;
>  	maps->last_search_by_name = NULL;
>  	maps->nr_maps = 0;
> @@ -32,7 +32,7 @@ static void __maps__free_maps_by_name(struct maps *maps)
>  
>  static int __maps__insert(struct maps *maps, struct map *map)
>  {
> -	struct rb_node **p = &maps->entries.rb_node;
> +	struct rb_node **p = &maps__entries(maps)->rb_node;
>  	struct rb_node *parent = NULL;
>  	const u64 ip = map->start;
>  	struct map_rb_node *m, *new_rb_node;
> @@ -54,7 +54,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
>  	}
>  
>  	rb_link_node(&new_rb_node->rb_node, parent, p);
> -	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
> +	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
>  	map__get(map);
>  	return 0;
>  }
> @@ -63,7 +63,7 @@ int maps__insert(struct maps *maps, struct map *map)
>  {
>  	int err;
>  
> -	down_write(&maps->lock);
> +	down_write(maps__lock(maps));
>  	err = __maps__insert(maps, map);
>  	if (err)
>  		goto out;
> @@ -84,10 +84,11 @@ int maps__insert(struct maps *maps, struct map *map)
>  	 * If we already performed some search by name, then we need to add the just
>  	 * inserted map and resort.
>  	 */
> -	if (maps->maps_by_name) {
> -		if (maps->nr_maps > maps->nr_maps_allocated) {
> -			int nr_allocate = maps->nr_maps * 2;
> -			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
> +	if (maps__maps_by_name(maps)) {
> +		if (maps__nr_maps(maps) > maps->nr_maps_allocated) {
> +			int nr_allocate = maps__nr_maps(maps) * 2;
> +			struct map **maps_by_name = realloc(maps__maps_by_name(maps),
> +							    nr_allocate * sizeof(map));
>  
>  			if (maps_by_name == NULL) {
>  				__maps__free_maps_by_name(maps);
> @@ -98,17 +99,17 @@ int maps__insert(struct maps *maps, struct map *map)
>  			maps->maps_by_name = maps_by_name;
>  			maps->nr_maps_allocated = nr_allocate;
>  		}
> -		maps->maps_by_name[maps->nr_maps - 1] = map;
> +		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map;
>  		__maps__sort_by_name(maps);
>  	}
>  out:
> -	up_write(&maps->lock);
> +	up_write(maps__lock(maps));
>  	return err;
>  }
>  
>  static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
>  {
> -	rb_erase_init(&rb_node->rb_node, &maps->entries);
> +	rb_erase_init(&rb_node->rb_node, maps__entries(maps));
>  	map__put(rb_node->map);
>  	free(rb_node);
>  }
> @@ -117,7 +118,7 @@ void maps__remove(struct maps *maps, struct map *map)
>  {
>  	struct map_rb_node *rb_node;
>  
> -	down_write(&maps->lock);
> +	down_write(maps__lock(maps));
>  	if (maps->last_search_by_name == map)
>  		maps->last_search_by_name = NULL;
>  
> @@ -125,9 +126,9 @@ void maps__remove(struct maps *maps, struct map *map)
>  	assert(rb_node->map == map);
>  	__maps__remove(maps, rb_node);
>  	--maps->nr_maps;
> -	if (maps->maps_by_name)
> +	if (maps__maps_by_name(maps))
>  		__maps__free_maps_by_name(maps);
> -	up_write(&maps->lock);
> +	up_write(maps__lock(maps));
>  }
>  
>  static void __maps__purge(struct maps *maps)
> @@ -135,7 +136,7 @@ static void __maps__purge(struct maps *maps)
>  	struct map_rb_node *pos, *next;
>  
>  	maps__for_each_entry_safe(maps, pos, next) {
> -		rb_erase_init(&pos->rb_node,  &maps->entries);
> +		rb_erase_init(&pos->rb_node,  maps__entries(maps));
>  		map__put(pos->map);
>  		free(pos);
>  	}
> @@ -143,9 +144,9 @@ static void __maps__purge(struct maps *maps)
>  
>  static void maps__exit(struct maps *maps)
>  {
> -	down_write(&maps->lock);
> +	down_write(maps__lock(maps));
>  	__maps__purge(maps);
> -	up_write(&maps->lock);
> +	up_write(maps__lock(maps));
>  }
>  
>  bool maps__empty(struct maps *maps)
> @@ -170,6 +171,14 @@ void maps__delete(struct maps *maps)
>  	free(maps);
>  }
>  
> +struct maps *maps__get(struct maps *maps)
> +{
> +	if (maps)
> +		refcount_inc(&maps->refcnt);
> +
> +	return maps;
> +}
> +
>  void maps__put(struct maps *maps)
>  {
>  	if (maps && refcount_dec_and_test(&maps->refcnt))
> @@ -195,7 +204,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
>  	struct symbol *sym;
>  	struct map_rb_node *pos;
>  
> -	down_read(&maps->lock);
> +	down_read(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, pos) {
>  		sym = map__find_symbol_by_name(pos->map, name);
> @@ -213,7 +222,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
>  
>  	sym = NULL;
>  out:
> -	up_read(&maps->lock);
> +	up_read(maps__lock(maps));
>  	return sym;
>  }
>  
> @@ -238,7 +247,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
>  	size_t printed = 0;
>  	struct map_rb_node *pos;
>  
> -	down_read(&maps->lock);
> +	down_read(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, pos) {
>  		printed += fprintf(fp, "Map:");
> @@ -249,7 +258,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
>  		}
>  	}
>  
> -	up_read(&maps->lock);
> +	up_read(maps__lock(maps));
>  
>  	return printed;
>  }
> @@ -260,9 +269,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  	struct rb_node *next, *first;
>  	int err = 0;
>  
> -	down_write(&maps->lock);
> +	down_write(maps__lock(maps));
>  
> -	root = &maps->entries;
> +	root = maps__entries(maps);
>  
>  	/*
>  	 * Find first map where end > map->start.
> @@ -358,7 +367,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  
>  	err = 0;
>  out:
> -	up_write(&maps->lock);
> +	up_write(maps__lock(maps));
>  	return err;
>  }
>  
> @@ -371,7 +380,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  	int err;
>  	struct map_rb_node *rb_node;
>  
> -	down_read(&parent->lock);
> +	down_read(maps__lock(parent));
>  
>  	maps__for_each_entry(parent, rb_node) {
>  		struct map *new = map__clone(rb_node->map);
> @@ -394,7 +403,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  
>  	err = 0;
>  out_unlock:
> -	up_read(&parent->lock);
> +	up_read(maps__lock(parent));
>  	return err;
>  }
>  
> @@ -414,9 +423,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
>  	struct rb_node *p;
>  	struct map_rb_node *m;
>  
> -	down_read(&maps->lock);
> +	down_read(maps__lock(maps));
>  
> -	p = maps->entries.rb_node;
> +	p = maps__entries(maps)->rb_node;
>  	while (p != NULL) {
>  		m = rb_entry(p, struct map_rb_node, rb_node);
>  		if (ip < m->map->start)
> @@ -429,14 +438,14 @@ struct map *maps__find(struct maps *maps, u64 ip)
>  
>  	m = NULL;
>  out:
> -	up_read(&maps->lock);
> +	up_read(maps__lock(maps));
>  
>  	return m ? m->map : NULL;
>  }
>  
>  struct map_rb_node *maps__first(struct maps *maps)
>  {
> -	struct rb_node *first = rb_first(&maps->entries);
> +	struct rb_node *first = rb_first(maps__entries(maps));
>  
>  	if (first)
>  		return rb_entry(first, struct map_rb_node, rb_node);
> diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
> index 512746ec0f9a..bde3390c7096 100644
> --- a/tools/perf/util/maps.h
> +++ b/tools/perf/util/maps.h
> @@ -43,7 +43,7 @@ struct maps {
>  	unsigned int	 nr_maps_allocated;
>  #ifdef HAVE_LIBUNWIND_SUPPORT
>  	void				*addr_space;
> -	struct unwind_libunwind_ops	*unwind_libunwind_ops;
> +	const struct unwind_libunwind_ops *unwind_libunwind_ops;
>  #endif
>  };
>  
> @@ -58,20 +58,51 @@ struct kmap {
>  struct maps *maps__new(struct machine *machine);
>  void maps__delete(struct maps *maps);
>  bool maps__empty(struct maps *maps);
> +int maps__clone(struct thread *thread, struct maps *parent);
> +
> +struct maps *maps__get(struct maps *maps);
> +void maps__put(struct maps *maps);
>  
> -static inline struct maps *maps__get(struct maps *maps)
> +static inline struct rb_root *maps__entries(struct maps *maps)
>  {
> -	if (maps)
> -		refcount_inc(&maps->refcnt);
> -	return maps;
> +	return &maps->entries;
>  }
>  
> -void maps__put(struct maps *maps);
> -int maps__clone(struct thread *thread, struct maps *parent);
> +static inline struct machine *maps__machine(struct maps *maps)
> +{
> +	return maps->machine;
> +}
> +
> +static inline struct rw_semaphore *maps__lock(struct maps *maps)
> +{
> +	return &maps->lock;
> +}
> +
> +static inline struct map **maps__maps_by_name(struct maps *maps)
> +{
> +	return maps->maps_by_name;
> +}
> +
> +static inline unsigned int maps__nr_maps(const struct maps *maps)
> +{
> +	return maps->nr_maps;
> +}
> +
> +#ifdef HAVE_LIBUNWIND_SUPPORT
> +static inline void *maps__addr_space(struct maps *maps)
> +{
> +	return maps->addr_space;
> +}
> +
> +static inline const struct unwind_libunwind_ops *maps__unwind_libunwind_ops(const struct maps *maps)
> +{
> +	return maps->unwind_libunwind_ops;
> +}
> +#endif
> +
>  size_t maps__fprintf(struct maps *maps, FILE *fp);
>  
>  int maps__insert(struct maps *maps, struct map *map);
> -
>  void maps__remove(struct maps *maps, struct map *map);
>  
>  struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp);
> diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> index e752e1f4a5f0..0290dc3a6258 100644
> --- a/tools/perf/util/scripting-engines/trace-event-python.c
> +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> @@ -1220,7 +1220,7 @@ static void python_export_sample_table(struct db_export *dbe,
>  
>  	tuple_set_d64(t, 0, es->db_id);
>  	tuple_set_d64(t, 1, es->evsel->db_id);
> -	tuple_set_d64(t, 2, es->al->maps->machine->db_id);
> +	tuple_set_d64(t, 2, maps__machine(es->al->maps)->db_id);
>  	tuple_set_d64(t, 3, es->al->thread->db_id);
>  	tuple_set_d64(t, 4, es->comm_db_id);
>  	tuple_set_d64(t, 5, es->dso_db_id);
> diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> index cfba8c337783..25686d67ee6f 100644
> --- a/tools/perf/util/sort.c
> +++ b/tools/perf/util/sort.c
> @@ -661,7 +661,7 @@ static int hist_entry__cgroup_snprintf(struct hist_entry *he,
>  	const char *cgrp_name = "N/A";
>  
>  	if (he->cgroup) {
> -		struct cgroup *cgrp = cgroup__find(he->ms.maps->machine->env,
> +		struct cgroup *cgrp = cgroup__find(maps__machine(he->ms.maps)->env,
>  						   he->cgroup);
>  		if (cgrp != NULL)
>  			cgrp_name = cgrp->name;
> diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> index 4607c9438866..3ca9a0968345 100644
> --- a/tools/perf/util/symbol-elf.c
> +++ b/tools/perf/util/symbol-elf.c
> @@ -1067,7 +1067,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		 * we still are sure to have a reference to this DSO via
>  		 * *curr_map->dso.
>  		 */
> -		dsos__add(&kmaps->machine->dsos, curr_dso);
> +		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
>  		/* kmaps already got it */
>  		map__put(curr_map);
>  		dso__set_loaded(curr_dso);
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index e8045b1c8700..9b51e669a722 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -249,7 +249,7 @@ void maps__fixup_end(struct maps *maps)
>  {
>  	struct map_rb_node *prev = NULL, *curr;
>  
> -	down_write(&maps->lock);
> +	down_write(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, curr) {
>  		if (prev != NULL && !prev->map->end)
> @@ -265,7 +265,7 @@ void maps__fixup_end(struct maps *maps)
>  	if (curr && !curr->map->end)
>  		curr->map->end = ~0ULL;
>  
> -	up_write(&maps->lock);
> +	up_write(maps__lock(maps));
>  }
>  
>  struct symbol *symbol__new(u64 start, u64 len, u8 binding, u8 type, const char *name)
> @@ -813,7 +813,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  	if (!kmaps)
>  		return -1;
>  
> -	machine = kmaps->machine;
> +	machine = maps__machine(kmaps);
>  
>  	x86_64 = machine__is(machine, "x86_64");
>  
> @@ -937,7 +937,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  
>  	if (curr_map != initial_map &&
>  	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
> -	    machine__is_default_guest(kmaps->machine)) {
> +	    machine__is_default_guest(maps__machine(kmaps))) {
>  		dso__set_loaded(curr_map->dso);
>  	}
>  
> @@ -1336,7 +1336,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  	if (!kmaps)
>  		return -EINVAL;
>  
> -	machine = kmaps->machine;
> +	machine = maps__machine(kmaps);
>  
>  	/* This function requires that the map is the kernel map */
>  	if (!__map__is_kernel(map))
> @@ -1851,7 +1851,7 @@ int dso__load(struct dso *dso, struct map *map)
>  		else if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
>  			ret = dso__load_guest_kernel_sym(dso, map);
>  
> -		machine = map__kmaps(map)->machine;
> +		machine = maps__machine(map__kmaps(map));
>  		if (machine__is(machine, "x86_64"))
>  			machine__map_x86_64_entry_trampolines(machine, dso);
>  		goto out;
> @@ -2006,21 +2006,21 @@ static int map__strcmp_name(const void *name, const void *b)
>  
>  void __maps__sort_by_name(struct maps *maps)
>  {
> -	qsort(maps->maps_by_name, maps->nr_maps, sizeof(struct map *), map__strcmp);
> +	qsort(maps__maps_by_name(maps), maps__nr_maps(maps), sizeof(struct map *), map__strcmp);
>  }
>  
>  static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
>  {
>  	struct map_rb_node *rb_node;
> -	struct map **maps_by_name = realloc(maps->maps_by_name,
> -					    maps->nr_maps * sizeof(struct map *));
> +	struct map **maps_by_name = realloc(maps__maps_by_name(maps),
> +					    maps__nr_maps(maps) * sizeof(struct map *));
>  	int i = 0;
>  
>  	if (maps_by_name == NULL)
>  		return -1;
>  
>  	maps->maps_by_name = maps_by_name;
> -	maps->nr_maps_allocated = maps->nr_maps;
> +	maps->nr_maps_allocated = maps__nr_maps(maps);
>  
>  	maps__for_each_entry(maps, rb_node)
>  		maps_by_name[i++] = rb_node->map;
> @@ -2033,11 +2033,12 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
>  {
>  	struct map **mapp;
>  
> -	if (maps->maps_by_name == NULL &&
> +	if (maps__maps_by_name(maps) == NULL &&
>  	    map__groups__sort_by_name_from_rbtree(maps))
>  		return NULL;
>  
> -	mapp = bsearch(name, maps->maps_by_name, maps->nr_maps, sizeof(*mapp), map__strcmp_name);
> +	mapp = bsearch(name, maps__maps_by_name(maps), maps__nr_maps(maps),
> +		       sizeof(*mapp), map__strcmp_name);
>  	if (mapp)
>  		return *mapp;
>  	return NULL;
> @@ -2048,9 +2049,10 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	struct map_rb_node *rb_node;
>  	struct map *map;
>  
> -	down_read(&maps->lock);
> +	down_read(maps__lock(maps));
>  
> -	if (maps->last_search_by_name && strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
> +	if (maps->last_search_by_name &&
> +	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
>  		map = maps->last_search_by_name;
>  		goto out_unlock;
>  	}
> @@ -2060,7 +2062,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	 * made.
>  	 */
>  	map = __maps__find_by_name(maps, name);
> -	if (map || maps->maps_by_name != NULL)
> +	if (map || maps__maps_by_name(maps) != NULL)
>  		goto out_unlock;
>  
>  	/* Fallback to traversing the rbtree... */
> @@ -2074,7 +2076,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	map = NULL;
>  
>  out_unlock:
> -	up_read(&maps->lock);
> +	up_read(maps__lock(maps));
>  	return map;
>  }
>  
> @@ -2326,7 +2328,7 @@ static int dso__load_guest_kernel_sym(struct dso *dso, struct map *map)
>  {
>  	int err;
>  	const char *kallsyms_filename = NULL;
> -	struct machine *machine = map__kmaps(map)->machine;
> +	struct machine *machine = maps__machine(map__kmaps(map));
>  	char path[PATH_MAX];
>  
>  	if (machine__is_default_guest(machine)) {
> diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
> index 1b992bbba4e8..4b85c1728012 100644
> --- a/tools/perf/util/thread-stack.c
> +++ b/tools/perf/util/thread-stack.c
> @@ -155,8 +155,8 @@ static int thread_stack__init(struct thread_stack *ts, struct thread *thread,
>  		ts->br_stack_sz = br_stack_sz;
>  	}
>  
> -	if (thread->maps && thread->maps->machine) {
> -		struct machine *machine = thread->maps->machine;
> +	if (thread->maps && maps__machine(thread->maps)) {
> +		struct machine *machine = maps__machine(thread->maps);
>  		const char *arch = perf_env__arch(machine->env);
>  
>  		ts->kernel_start = machine__kernel_start(machine);
> diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> index 4baf4db8af65..c2256777b813 100644
> --- a/tools/perf/util/thread.c
> +++ b/tools/perf/util/thread.c
> @@ -348,7 +348,7 @@ static int __thread__prepare_access(struct thread *thread)
>  	struct maps *maps = thread->maps;
>  	struct map_rb_node *rb_node;
>  
> -	down_read(&maps->lock);
> +	down_read(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, rb_node) {
>  		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
> @@ -356,7 +356,7 @@ static int __thread__prepare_access(struct thread *thread)
>  			break;
>  	}
>  
> -	up_read(&maps->lock);
> +	up_read(maps__lock(maps));
>  
>  	return err;
>  }
> diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
> index 71a353349181..7e6c59811292 100644
> --- a/tools/perf/util/unwind-libunwind-local.c
> +++ b/tools/perf/util/unwind-libunwind-local.c
> @@ -618,24 +618,26 @@ static unw_accessors_t accessors = {
>  
>  static int _unwind__prepare_access(struct maps *maps)
>  {
> -	maps->addr_space = unw_create_addr_space(&accessors, 0);
> -	if (!maps->addr_space) {
> +	void *addr_space = unw_create_addr_space(&accessors, 0);
> +
> +	maps->addr_space = addr_space;
> +	if (!addr_space) {
>  		pr_err("unwind: Can't create unwind address space.\n");
>  		return -ENOMEM;
>  	}
>  
> -	unw_set_caching_policy(maps->addr_space, UNW_CACHE_GLOBAL);
> +	unw_set_caching_policy(addr_space, UNW_CACHE_GLOBAL);
>  	return 0;
>  }
>  
>  static void _unwind__flush_access(struct maps *maps)
>  {
> -	unw_flush_cache(maps->addr_space, 0, 0);
> +	unw_flush_cache(maps__addr_space(maps), 0, 0);
>  }
>  
>  static void _unwind__finish_access(struct maps *maps)
>  {
> -	unw_destroy_addr_space(maps->addr_space);
> +	unw_destroy_addr_space(maps__addr_space(maps));
>  }
>  
>  static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
> @@ -660,7 +662,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
>  	 */
>  	if (max_stack - 1 > 0) {
>  		WARN_ONCE(!ui->thread, "WARNING: ui->thread is NULL");
> -		addr_space = ui->thread->maps->addr_space;
> +		addr_space = maps__addr_space(ui->thread->maps);
>  
>  		if (addr_space == NULL)
>  			return -1;
> @@ -709,7 +711,7 @@ static int _unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>  	struct unwind_info ui = {
>  		.sample       = data,
>  		.thread       = thread,
> -		.machine      = thread->maps->machine,
> +		.machine      = maps__machine(thread->maps),
>  	};
>  
>  	if (!data->user_regs.regs)
> diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
> index e89a5479b361..7b797ffadd19 100644
> --- a/tools/perf/util/unwind-libunwind.c
> +++ b/tools/perf/util/unwind-libunwind.c
> @@ -22,12 +22,13 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  	const char *arch;
>  	enum dso_type dso_type;
>  	struct unwind_libunwind_ops *ops = local_unwind_libunwind_ops;
> +	struct machine *machine;
>  	int err;
>  
>  	if (!dwarf_callchain_users)
>  		return 0;
>  
> -	if (maps->addr_space) {
> +	if (maps__addr_space(maps)) {
>  		pr_debug("unwind: thread map already set, dso=%s\n",
>  			 map->dso->name);
>  		if (initialized)
> @@ -35,15 +36,16 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  		return 0;
>  	}
>  
> +	machine = maps__machine(maps);
>  	/* env->arch is NULL for live-mode (i.e. perf top) */
> -	if (!maps->machine->env || !maps->machine->env->arch)
> +	if (!machine->env || !machine->env->arch)
>  		goto out_register;
>  
> -	dso_type = dso__type(map->dso, maps->machine);
> +	dso_type = dso__type(map->dso, machine);
>  	if (dso_type == DSO__TYPE_UNKNOWN)
>  		return 0;
>  
> -	arch = perf_env__arch(maps->machine->env);
> +	arch = perf_env__arch(machine->env);
>  
>  	if (!strcmp(arch, "x86")) {
>  		if (dso_type != DSO__TYPE_64BIT)
> @@ -60,7 +62,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  out_register:
>  	unwind__register_ops(maps, ops);
>  
> -	err = maps->unwind_libunwind_ops->prepare_access(maps);
> +	err = maps__unwind_libunwind_ops(maps)->prepare_access(maps);
>  	if (initialized)
>  		*initialized = err ? false : true;
>  	return err;
> @@ -68,21 +70,27 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  
>  void unwind__flush_access(struct maps *maps)
>  {
> -	if (maps->unwind_libunwind_ops)
> -		maps->unwind_libunwind_ops->flush_access(maps);
> +	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
> +
> +	if (ops)
> +		ops->flush_access(maps);
>  }
>  
>  void unwind__finish_access(struct maps *maps)
>  {
> -	if (maps->unwind_libunwind_ops)
> -		maps->unwind_libunwind_ops->finish_access(maps);
> +	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
> +
> +	if (ops)
> +		ops->finish_access(maps);
>  }
>  
>  int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
>  			 struct thread *thread,
>  			 struct perf_sample *data, int max_stack)
>  {
> -	if (thread->maps->unwind_libunwind_ops)
> -		return thread->maps->unwind_libunwind_ops->get_entries(cb, arg, thread, data, max_stack);
> +	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(thread->maps);
> +
> +	if (ops)
> +		return ops->get_entries(cb, arg, thread, data, max_stack);
>  	return 0;
>  }
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 15/22] perf map: Use functions to access the variables in map
  2022-02-11 10:34 ` [PATCH v3 15/22] perf map: Use functions to access the variables in map Ian Rogers
@ 2022-02-11 17:35   ` Arnaldo Carvalho de Melo
  2022-02-11 17:36   ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:35 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:08AM -0800, Ian Rogers wrote:
> The use of functions enables easier reference count
> checking. Some minor changes to map_ip and unmap_ip make the
> naming a little clearer. __maps__insert is modified to return the
> inserted map, which simplifies the reference checking
> wrapping. maps__fixup_overlappings has some minor tweaks so that
> puts occur on error paths. dso__process_kernel_symbol has the
> unused curr_mapp argument removed.

This one should be at the forefront of this patchset to reduce the
possibility that it would clash with patches coming after it, let's see..

- Arnaldo
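
[Aside for readers of the archive: the accessor pattern the commit message
describes can be pictured with the minimal, standalone sketch below. The
struct layouts are simplified stand-ins and main() is only for illustration;
the accessor names (map__dso, map__start, map__size) do appear in the patch,
but the real definitions live in tools/perf/util/map.h.]

#include <inttypes.h>
#include <stdio.h>

/* Stand-in for the real struct dso from tools/perf/util/dso.h. */
struct dso {
	const char *name;
};

/* Simplified struct map; the real one carries many more fields. */
struct map {
	struct dso *dso;
	uint64_t start;
	uint64_t end;
};

/*
 * Callers use accessors instead of dereferencing map->dso directly.
 * Funnelling every access through one function is what lets a checking
 * build later hide the field behind a reference-count-verifying wrapper
 * without touching the call sites again.
 */
static inline struct dso *map__dso(struct map *map)
{
	return map->dso;
}

static inline uint64_t map__start(struct map *map)
{
	return map->start;
}

static inline uint64_t map__size(struct map *map)
{
	return map->end - map->start;
}

int main(void)
{
	struct dso vmlinux = { .name = "vmlinux" };
	struct map kmap = { .dso = &vmlinux, .start = 0xffff0000, .end = 0xffff8000 };

	printf("%s: %#" PRIx64 " size %" PRIu64 "\n",
	       map__dso(&kmap)->name, map__start(&kmap), map__size(&kmap));
	return 0;
}

[With every read funnelled through one function, a reference-count-checking
build only has to wrap the accessor, rather than touching each call site.]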
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/arch/s390/annotate/instructions.c  |   4 +-
>  tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
>  tools/perf/arch/x86/util/event.c              |   6 +-
>  tools/perf/builtin-annotate.c                 |   8 +-
>  tools/perf/builtin-inject.c                   |   8 +-
>  tools/perf/builtin-kallsyms.c                 |   6 +-
>  tools/perf/builtin-kmem.c                     |   4 +-
>  tools/perf/builtin-mem.c                      |   4 +-
>  tools/perf/builtin-report.c                   |  20 +--
>  tools/perf/builtin-script.c                   |  26 ++--
>  tools/perf/builtin-top.c                      |  12 +-
>  tools/perf/builtin-trace.c                    |   2 +-
>  .../scripts/python/Perf-Trace-Util/Context.c  |   7 +-
>  tools/perf/tests/code-reading.c               |  32 ++---
>  tools/perf/tests/hists_common.c               |   4 +-
>  tools/perf/tests/vmlinux-kallsyms.c           |  35 +++---
>  tools/perf/ui/browsers/annotate.c             |   7 +-
>  tools/perf/ui/browsers/hists.c                |  18 +--
>  tools/perf/ui/browsers/map.c                  |   4 +-
>  tools/perf/util/annotate.c                    |  38 +++---
>  tools/perf/util/auxtrace.c                    |   2 +-
>  tools/perf/util/block-info.c                  |   4 +-
>  tools/perf/util/bpf-event.c                   |   8 +-
>  tools/perf/util/build-id.c                    |   2 +-
>  tools/perf/util/callchain.c                   |  10 +-
>  tools/perf/util/data-convert-json.c           |   4 +-
>  tools/perf/util/db-export.c                   |   4 +-
>  tools/perf/util/dlfilter.c                    |  21 ++--
>  tools/perf/util/dso.c                         |   4 +-
>  tools/perf/util/event.c                       |  14 +--
>  tools/perf/util/evsel_fprintf.c               |   4 +-
>  tools/perf/util/hist.c                        |  10 +-
>  tools/perf/util/intel-pt.c                    |  48 +++----
>  tools/perf/util/machine.c                     |  84 +++++++------
>  tools/perf/util/map.c                         | 117 +++++++++---------
>  tools/perf/util/map.h                         |  58 ++++++++-
>  tools/perf/util/maps.c                        |  83 +++++++------
>  tools/perf/util/probe-event.c                 |  44 +++----
>  .../util/scripting-engines/trace-event-perl.c |   9 +-
>  .../scripting-engines/trace-event-python.c    |  12 +-
>  tools/perf/util/sort.c                        |  46 +++----
>  tools/perf/util/symbol-elf.c                  |  39 +++---
>  tools/perf/util/symbol.c                      |  96 +++++++-------
>  tools/perf/util/symbol_fprintf.c              |   2 +-
>  tools/perf/util/synthetic-events.c            |  28 ++---
>  tools/perf/util/thread.c                      |  26 ++--
>  tools/perf/util/unwind-libunwind-local.c      |  34 ++---
>  tools/perf/util/unwind-libunwind.c            |   4 +-
>  tools/perf/util/vdso.c                        |   2 +-
>  49 files changed, 577 insertions(+), 489 deletions(-)
> 
> diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
> index 0e136630659e..740f1a63bc04 100644
> --- a/tools/perf/arch/s390/annotate/instructions.c
> +++ b/tools/perf/arch/s390/annotate/instructions.c
> @@ -39,7 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
>  
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			     ) == ops->target.addr)
>  		ops->target.sym = target.ms.sym;
>  
>  	return 0;
> diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
> index a54dea7c112f..497593be80f2 100644
> --- a/tools/perf/arch/x86/tests/dwarf-unwind.c
> +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
> @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
>  		return -1;
>  	}
>  
> -	stack_size = map->end - sp;
> +	stack_size = map__end(map) - sp;
>  	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
>  
>  	memcpy(buf, (void *) sp, stack_size);
> diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> index 7b6b0c98fb36..c790c682b76e 100644
> --- a/tools/perf/arch/x86/util/event.c
> +++ b/tools/perf/arch/x86/util/event.c
> @@ -57,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  
>  		event->mmap.header.size = size;
>  
> -		event->mmap.start = map->start;
> -		event->mmap.len   = map->end - map->start;
> -		event->mmap.pgoff = map->pgoff;
> +		event->mmap.start = map__start(map);
> +		event->mmap.len   = map__size(map);
> +		event->mmap.pgoff = map__pgoff(map);
>  		event->mmap.pid   = machine->pid;
>  
>  		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
> index 490bb9b8cf17..49d3ae36fd89 100644
> --- a/tools/perf/builtin-annotate.c
> +++ b/tools/perf/builtin-annotate.c
> @@ -199,7 +199,7 @@ static int process_branch_callback(struct evsel *evsel,
>  		return 0;
>  
>  	if (a.map != NULL)
> -		a.map->dso->hit = 1;
> +		map__dso(a.map)->hit = 1;
>  
>  	hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
>  
> @@ -231,9 +231,9 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
>  		 */
>  		if (al->sym != NULL) {
>  			rb_erase_cached(&al->sym->rb_node,
> -				 &al->map->dso->symbols);
> +					&map__dso(al->map)->symbols);
>  			symbol__delete(al->sym);
> -			dso__reset_find_symbol_cache(al->map->dso);
> +			dso__reset_find_symbol_cache(map__dso(al->map));
>  		}
>  		return 0;
>  	}
> @@ -315,7 +315,7 @@ static void hists__find_annotations(struct hists *hists,
>  		struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
>  		struct annotation *notes;
>  
> -		if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
> +		if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
>  			goto find_next;
>  
>  		if (ann->sym_hist_filter &&
> diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> index f7917c390e96..92a9dbc3d4cd 100644
> --- a/tools/perf/builtin-inject.c
> +++ b/tools/perf/builtin-inject.c
> @@ -600,10 +600,10 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
>  	}
>  
>  	if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
> -		if (!al.map->dso->hit) {
> -			al.map->dso->hit = 1;
> -			dso__inject_build_id(al.map->dso, tool, machine,
> -					     sample->cpumode, al.map->flags);
> +		if (!map__dso(al.map)->hit) {
> +			map__dso(al.map)->hit = 1;
> +			dso__inject_build_id(map__dso(al.map), tool, machine,
> +					     sample->cpumode, map__flags(al.map));
>  		}
>  	}
>  
> diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
> index c08ee81529e8..d940b60ce812 100644
> --- a/tools/perf/builtin-kallsyms.c
> +++ b/tools/perf/builtin-kallsyms.c
> @@ -36,8 +36,10 @@ static int __cmd_kallsyms(int argc, const char **argv)
>  		}
>  
>  		printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
> -			symbol->name, map->dso->short_name, map->dso->long_name,
> -			map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
> +			symbol->name, map__dso(map)->short_name,
> +			map__dso(map)->long_name,
> +			map__unmap_ip(map, symbol->start),
> +			map__unmap_ip(map, symbol->end),
>  			symbol->start, symbol->end);
>  	}
>  
> diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
> index 99d7ff9a8eff..d87d9c341a20 100644
> --- a/tools/perf/builtin-kmem.c
> +++ b/tools/perf/builtin-kmem.c
> @@ -410,7 +410,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
>  		if (!caller) {
>  			/* found */
>  			if (node->ms.map)
> -				addr = map__unmap_ip(node->ms.map, node->ip);
> +				addr = map__dso_unmap_ip(node->ms.map, node->ip);
>  			else
>  				addr = node->ip;
>  
> @@ -1012,7 +1012,7 @@ static void __print_slab_result(struct rb_root *root,
>  
>  		if (sym != NULL)
>  			snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
> -				 addr - map->unmap_ip(map, sym->start));
> +				 addr - map__unmap_ip(map, sym->start));
>  		else
>  			snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
>  		printf(" %-34s |", buf);
> diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
> index fcf65a59bea2..d18083f57303 100644
> --- a/tools/perf/builtin-mem.c
> +++ b/tools/perf/builtin-mem.c
> @@ -200,7 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
>  		goto out_put;
>  
>  	if (al.map != NULL)
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	field_sep = symbol_conf.field_sep;
>  	if (field_sep) {
> @@ -241,7 +241,7 @@ dump_raw_samples(struct perf_tool *tool,
>  		symbol_conf.field_sep,
>  		sample->data_src,
>  		symbol_conf.field_sep,
> -		al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
> +		al.map && map__dso(al.map) ? map__dso(al.map)->long_name : "???",
>  		al.sym ? al.sym->name : "???");
>  out_put:
>  	addr_location__put(&al);
> diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> index 57611ef725c3..9b92b2bbd7de 100644
> --- a/tools/perf/builtin-report.c
> +++ b/tools/perf/builtin-report.c
> @@ -304,7 +304,7 @@ static int process_sample_event(struct perf_tool *tool,
>  	}
>  
>  	if (al.map != NULL)
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
>  		hist__account_cycles(sample->branch_stack, &al, sample,
> @@ -579,7 +579,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
>  		return;
>  
>  	if (kernel_map == NULL ||
> -	    (kernel_map->dso->hit &&
> +	    (map__dso(kernel_map)->hit &&
>  	     (kernel_kmap->ref_reloc_sym == NULL ||
>  	      kernel_kmap->ref_reloc_sym->addr == 0))) {
>  		const char *desc =
> @@ -805,13 +805,15 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
>  		struct map *map = rb_node->map;
>  
>  		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
> -				   indent, "", map->start, map->end,
> -				   map->prot & PROT_READ ? 'r' : '-',
> -				   map->prot & PROT_WRITE ? 'w' : '-',
> -				   map->prot & PROT_EXEC ? 'x' : '-',
> -				   map->flags & MAP_SHARED ? 's' : 'p',
> -				   map->pgoff,
> -				   map->dso->id.ino, map->dso->name);
> +				   indent, "",
> +				   map__start(map), map__end(map),
> +				   map__prot(map) & PROT_READ ? 'r' : '-',
> +				   map__prot(map) & PROT_WRITE ? 'w' : '-',
> +				   map__prot(map) & PROT_EXEC ? 'x' : '-',
> +				   map__flags(map) & MAP_SHARED ? 's' : 'p',
> +				   map__pgoff(map),
> +				   map__dso(map)->id.ino,
> +				   map__dso(map)->name);
>  	}
>  
>  	return printed;
> diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
> index abae8184e171..4edfce95e137 100644
> --- a/tools/perf/builtin-script.c
> +++ b/tools/perf/builtin-script.c
> @@ -972,12 +972,12 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
>  		to   = entries[i].to;
>  
>  		if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
> -		    !alf.map->dso->adjust_symbols)
> -			from = map__map_ip(alf.map, from);
> +		    !map__dso(alf.map)->adjust_symbols)
> +			from = map__dso_map_ip(alf.map, from);
>  
>  		if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
> -		    !alt.map->dso->adjust_symbols)
> -			to = map__map_ip(alt.map, to);
> +		    !map__dso(alt.map)->adjust_symbols)
> +			to = map__dso_map_ip(alt.map, to);
>  
>  		printed += fprintf(fp, " 0x%"PRIx64, from);
>  		if (PRINT_FIELD(DSO)) {
> @@ -1039,11 +1039,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
>  		return 0;
>  	}
>  
> -	if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
> +	if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
>  		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
>  		return 0;
>  	}
> -	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
> +	if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR) {
>  		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
>  		return 0;
>  	}
> @@ -1051,11 +1051,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
>  	/* Load maps to ensure dso->is_64_bit has been updated */
>  	map__load(al.map);
>  
> -	offset = al.map->map_ip(al.map, start);
> -	len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
> -				    end - start + MAXINSN);
> +	offset = map__map_ip(al.map, start);
> +	len = dso__data_read_offset(map__dso(al.map), machine, offset,
> +				    (u8 *)buffer, end - start + MAXINSN);
>  
> -	*is64bit = al.map->dso->is_64_bit;
> +	*is64bit = map__dso(al.map)->is_64_bit;
>  	if (len <= 0)
>  		pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
>  			start, end);
> @@ -1070,9 +1070,9 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
>  	int len;
>  	char *srccode;
>  
> -	if (!map || !map->dso)
> +	if (!map || !map__dso(map))
>  		return 0;
> -	srcfile = get_srcline_split(map->dso,
> +	srcfile = get_srcline_split(map__dso(map),
>  				    map__rip_2objdump(map, addr),
>  				    &line);
>  	if (!srcfile)
> @@ -1164,7 +1164,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
>  	if (al.addr < al.sym->end)
>  		off = al.addr - al.sym->start;
>  	else
> -		off = al.addr - al.map->start - al.sym->start;
> +		off = al.addr - map__start(al.map) - al.sym->start;
>  	printed += fprintf(fp, "\t%s", al.sym->name);
>  	if (off)
>  		printed += fprintf(fp, "%+d", off);
> diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
> index 1fc390f136dd..8db1df7bdabe 100644
> --- a/tools/perf/builtin-top.c
> +++ b/tools/perf/builtin-top.c
> @@ -127,8 +127,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
>  	/*
>  	 * We can't annotate with just /proc/kallsyms
>  	 */
> -	if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> -	    !dso__is_kcore(map->dso)) {
> +	if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> +	    !dso__is_kcore(map__dso(map))) {
>  		pr_err("Can't annotate %s: No vmlinux file was found in the "
>  		       "path\n", sym->name);
>  		sleep(1);
> @@ -180,8 +180,9 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
>  		    "Tools:  %s\n\n"
>  		    "Not all samples will be on the annotation output.\n\n"
>  		    "Please report to linux-kernel@vger.kernel.org\n",
> -		    ip, map->dso->long_name, dso__symtab_origin(map->dso),
> -		    map->start, map->end, sym->start, sym->end,
> +		    ip, map__dso(map)->long_name,
> +		    dso__symtab_origin(map__dso(map)),
> +		    map__start(map), map__end(map), sym->start, sym->end,
>  		    sym->binding == STB_GLOBAL ? 'g' :
>  		    sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
>  		    err ? "[unknown]" : uts.machine,
> @@ -810,7 +811,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
>  		    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
>  			if (symbol_conf.vmlinux_name) {
>  				char serr[256];
> -				dso__strerror_load(al.map->dso, serr, sizeof(serr));
> +				dso__strerror_load(map__dso(al.map),
> +						   serr, sizeof(serr));
>  				ui__warning("The %s file can't be used: %s\n%s",
>  					    symbol_conf.vmlinux_name, serr, msg);
>  			} else {
> diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
> index 32844d8a0ea5..0134f24da3e3 100644
> --- a/tools/perf/builtin-trace.c
> +++ b/tools/perf/builtin-trace.c
> @@ -2862,7 +2862,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
>  {
>  
>  	if ((verbose > 0 || print_dso) && al->map)
> -		fprintf(f, "%s@", al->map->dso->long_name);
> +		fprintf(f, "%s@", map__dso(al->map)->long_name);
>  
>  	if ((verbose > 0 || print_sym) && al->sym)
>  		fprintf(f, "%s+0x%" PRIx64, al->sym->name,
> diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> index b64013a87c54..b83b62d33945 100644
> --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> @@ -152,9 +152,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
>  	map = c->al->map;
>  	addr = c->al->addr;
>  
> -	if (map && map->dso)
> -		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> -
> +	if (map && map__dso(map)) {
> +		srcfile = get_srcline_split(map__dso(map),
> +					    map__rip_2objdump(map, addr), &line);
> +	}
>  	if (get_srccode) {
>  		if (srcfile)
>  			srccode = find_sourceline(srcfile, line, &len);
> diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
> index 6eafe36a8704..9cb7d3f577d7 100644
> --- a/tools/perf/tests/code-reading.c
> +++ b/tools/perf/tests/code-reading.c
> @@ -240,7 +240,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  
>  	pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
>  
> -	if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
> +	if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
>  		if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
>  			pr_debug("Hypervisor address can not be resolved - skipping\n");
>  			return 0;
> @@ -250,10 +250,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		return -1;
>  	}
>  
> -	pr_debug("File is: %s\n", al.map->dso->long_name);
> +	pr_debug("File is: %s\n", map__dso(al.map)->long_name);
>  
> -	if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> -	    !dso__is_kcore(al.map->dso)) {
> +	if (map__dso(al.map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> +	    !dso__is_kcore(map__dso(al.map))) {
>  		pr_debug("Unexpected kernel address - skipping\n");
>  		return 0;
>  	}
> @@ -264,11 +264,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		len = BUFSZ;
>  
>  	/* Do not go off the map */
> -	if (addr + len > al.map->end)
> -		len = al.map->end - addr;
> +	if (addr + len > map__end(al.map))
> +		len = map__end(al.map) - addr;
>  
>  	/* Read the object code using perf */
> -	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
> +	ret_len = dso__data_read_offset(map__dso(al.map), maps__machine(thread->maps),
>  					al.addr, buf1, len);
>  	if (ret_len != len) {
>  		pr_debug("dso__data_read_offset failed\n");
> @@ -283,11 +283,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		return -1;
>  
>  	/* objdump struggles with kcore - try each map only once */
> -	if (dso__is_kcore(al.map->dso)) {
> +	if (dso__is_kcore(map__dso(al.map))) {
>  		size_t d;
>  
>  		for (d = 0; d < state->done_cnt; d++) {
> -			if (state->done[d] == al.map->start) {
> +			if (state->done[d] == map__start(al.map)) {
>  				pr_debug("kcore map tested already");
>  				pr_debug(" - skipping\n");
>  				return 0;
> @@ -297,12 +297,12 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  			pr_debug("Too many kcore maps - skipping\n");
>  			return 0;
>  		}
> -		state->done[state->done_cnt++] = al.map->start;
> +		state->done[state->done_cnt++] = map__start(al.map);
>  	}
>  
> -	objdump_name = al.map->dso->long_name;
> -	if (dso__needs_decompress(al.map->dso)) {
> -		if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
> +	objdump_name = map__dso(al.map)->long_name;
> +	if (dso__needs_decompress(map__dso(al.map))) {
> +		if (dso__decompress_kmodule_path(map__dso(al.map), objdump_name,
>  						 decomp_name,
>  						 sizeof(decomp_name)) < 0) {
>  			pr_debug("decompression failed\n");
> @@ -330,7 +330,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  			len -= ret;
>  			if (len) {
>  				pr_debug("Reducing len to %zu\n", len);
> -			} else if (dso__is_kcore(al.map->dso)) {
> +			} else if (dso__is_kcore(map__dso(al.map))) {
>  				/*
>  				 * objdump cannot handle very large segments
>  				 * that may be found in kcore.
> @@ -588,8 +588,8 @@ static int do_test_code_reading(bool try_kcore)
>  		pr_debug("map__load failed\n");
>  		goto out_err;
>  	}
> -	have_vmlinux = dso__is_vmlinux(map->dso);
> -	have_kcore = dso__is_kcore(map->dso);
> +	have_vmlinux = dso__is_vmlinux(map__dso(map));
> +	have_kcore = dso__is_kcore(map__dso(map));
>  
>  	/* 2nd time through we just try kcore */
>  	if (try_kcore && !have_kcore)
> diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
> index 6f34d08b84e5..40eccc659767 100644
> --- a/tools/perf/tests/hists_common.c
> +++ b/tools/perf/tests/hists_common.c
> @@ -181,7 +181,7 @@ void print_hists_in(struct hists *hists)
>  		if (!he->filtered) {
>  			pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
>  				i, thread__comm_str(he->thread),
> -				he->ms.map->dso->short_name,
> +				map__dso(he->ms.map)->short_name,
>  				he->ms.sym->name, he->stat.period);
>  		}
>  
> @@ -208,7 +208,7 @@ void print_hists_out(struct hists *hists)
>  		if (!he->filtered) {
>  			pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
>  				i, thread__comm_str(he->thread), he->thread->tid,
> -				he->ms.map->dso->short_name,
> +				map__dso(he->ms.map)->short_name,
>  				he->ms.sym->name, he->stat.period,
>  				he->stat_acc ? he->stat_acc->period : 0);
>  		}
> diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> index 11a230ee5894..5afab21455f1 100644
> --- a/tools/perf/tests/vmlinux-kallsyms.c
> +++ b/tools/perf/tests/vmlinux-kallsyms.c
> @@ -13,7 +13,7 @@
>  #include "debug.h"
>  #include "machine.h"
>  
> -#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
> +#define UM(x) map__unmap_ip(kallsyms_map, (x))
>  
>  static bool is_ignored_symbol(const char *name, char type)
>  {
> @@ -216,8 +216,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		if (sym->start == sym->end)
>  			continue;
>  
> -		mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
> -		mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
> +		mem_start = map__unmap_ip(vmlinux_map, sym->start);
> +		mem_end = map__unmap_ip(vmlinux_map, sym->end);
>  
>  		first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
>  		pair = first_pair;
> @@ -262,7 +262,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  
>  				continue;
>  			}
> -		} else if (mem_start == kallsyms.vmlinux_map->end) {
> +		} else if (mem_start == map__end(kallsyms.vmlinux_map)) {
>  			/*
>  			 * Ignore aliases to _etext, i.e. to the end of the kernel text area,
>  			 * such as __indirect_thunk_end.
> @@ -294,9 +294,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		 * so use the short name, less descriptive but the same ("[kernel]" in
>  		 * both cases.
>  		 */
> -		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> -								map->dso->short_name :
> -								map->dso->name));
> +		struct map *pair = maps__find_by_name(kallsyms.kmaps,
> +						map__dso(map)->kernel
> +						? map__dso(map)->short_name
> +						: map__dso(map)->name);
>  		if (pair) {
>  			pair->priv = 1;
>  		} else {
> @@ -313,25 +314,27 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  	maps__for_each_entry(maps, rb_node) {
>  		struct map *pair, *map = rb_node->map;
>  
> -		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
> -		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> +		mem_start = map__unmap_ip(vmlinux_map, map__start(map));
> +		mem_end = map__unmap_ip(vmlinux_map, map__end(map));
>  
>  		pair = maps__find(kallsyms.kmaps, mem_start);
> -		if (pair == NULL || pair->priv)
> +		if (pair == NULL || map__priv(pair))
>  			continue;
>  
> -		if (pair->start == mem_start) {
> +		if (map__start(pair) == mem_start) {
>  			if (!header_printed) {
>  				pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
>  				header_printed = true;
>  			}
>  
>  			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
> -				map->start, map->end, map->pgoff, map->dso->name);
> -			if (mem_end != pair->end)
> +				map__start(map), map__end(map),
> +				map__pgoff(map), map__dso(map)->name);
> +			if (mem_end != map__end(pair))
>  				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
> -					pair->start, pair->end, pair->pgoff);
> -			pr_info(" %s\n", pair->dso->name);
> +					map__start(pair), map__end(pair),
> +					map__pgoff(pair));
> +			pr_info(" %s\n", map__dso(pair)->name);
>  			pair->priv = 1;
>  		}
>  	}
> @@ -343,7 +346,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  	maps__for_each_entry(maps, rb_node) {
>  		struct map *map = rb_node->map;
>  
> -		if (!map->priv) {
> +		if (!map__priv(map)) {
>  			if (!header_printed) {
>  				pr_info("WARN: Maps only in kallsyms:\n");
>  				header_printed = true;
> diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
> index 44ba900828f6..7d51d92302dc 100644
> --- a/tools/perf/ui/browsers/annotate.c
> +++ b/tools/perf/ui/browsers/annotate.c
> @@ -446,7 +446,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
>  static int sym_title(struct symbol *sym, struct map *map, char *title,
>  		     size_t sz, int percent_type)
>  {
> -	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
> +	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
> +			map__dso(map)->long_name,
>  			percent_type_str(percent_type));
>  }
>  
> @@ -971,14 +972,14 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
>  	if (sym == NULL)
>  		return -1;
>  
> -	if (ms->map->dso->annotate_warned)
> +	if (map__dso(ms->map)->annotate_warned)
>  		return -1;
>  
>  	if (not_annotated) {
>  		err = symbol__annotate2(ms, evsel, opts, &browser.arch);
>  		if (err) {
>  			char msg[BUFSIZ];
> -			ms->map->dso->annotate_warned = true;
> +			map__dso(ms->map)->annotate_warned = true;
>  			symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
>  			ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
>  			goto out_free_offsets;
> diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> index 572ff38ceb0f..2241447e9bfb 100644
> --- a/tools/perf/ui/browsers/hists.c
> +++ b/tools/perf/ui/browsers/hists.c
> @@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
>  			return NULL;
>  		}
>  
> -		dso__insert_symbol(map->dso, sym);
> +		dso__insert_symbol(map__dso(map), sym);
>  	}
>  
>  	return sym;
> @@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
>  		 struct map_symbol *ms,
>  		 u64 addr)
>  {
> -	if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
> +	if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
>  		return 0;
>  
>  	if (!ms->sym)
> @@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
>  		ui_helpline__pop();
>  	} else {
>  		ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
> -				   __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
> -		browser->hists->dso_filter = map->dso;
> +				   __map__is_kernel(map)
> +				   ? "the Kernel"
> +				   : map__dso(map)->short_name);
> +		browser->hists->dso_filter = map__dso(map);
>  		perf_hpp__set_elide(HISTC_DSO, true);
>  		pstack__push(browser->pstack, &browser->hists->dso_filter);
>  	}
> @@ -2616,7 +2618,9 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
>  
>  	if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
>  		     browser->hists->dso_filter ? "out of" : "into",
> -		     __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
> +		     __map__is_kernel(map)
> +		     ? "the Kernel"
> +		     : map__dso(map)->short_name) < 0)
>  		return 0;
>  
>  	act->ms.map = map;
> @@ -3091,8 +3095,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
>  
>  			if (!browser->selection ||
>  			    !browser->selection->map ||
> -			    !browser->selection->map->dso ||
> -			    browser->selection->map->dso->annotate_warned) {
> +			    !map__dso(browser->selection->map) ||
> +			    map__dso(browser->selection->map)->annotate_warned) {
>  				continue;
>  			}
>  
> diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
> index 3d49b916c9e4..3d1b958d8832 100644
> --- a/tools/perf/ui/browsers/map.c
> +++ b/tools/perf/ui/browsers/map.c
> @@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
>  {
>  	int key;
>  
> -	if (ui_browser__show(&browser->b, browser->map->dso->long_name,
> +	if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
>  			     "Press ESC to exit, %s / to search",
>  			     verbose > 0 ? "" : "restart with -v to use") < 0)
>  		return -1;
> @@ -106,7 +106,7 @@ int map__browse(struct map *map)
>  {
>  	struct map_browser mb = {
>  		.b = {
> -			.entries = &map->dso->symbols,
> +			.entries = &map__dso(map)->symbols,
>  			.refresh = ui_browser__rb_tree_refresh,
>  			.seek	 = ui_browser__rb_tree_seek,
>  			.write	 = map_browser__write,
> diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
> index 01900689dc00..3a7433d3e48a 100644
> --- a/tools/perf/util/annotate.c
> +++ b/tools/perf/util/annotate.c
> @@ -280,7 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
>  
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			      ) == ops->target.addr)
>  		ops->target.sym = target.ms.sym;
>  
>  	return 0;
> @@ -384,8 +386,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	}
>  
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
> -	start = map->unmap_ip(map, sym->start),
> -	end = map->unmap_ip(map, sym->end);
> +	start = map__unmap_ip(map, sym->start),
> +	end = map__unmap_ip(map, sym->end);
>  
>  	ops->target.outside = target.addr < start || target.addr > end;
>  
> @@ -408,7 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	 * the symbol searching and disassembly should be done.
>  	 */
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			      ) == ops->target.addr)
>  		ops->target.sym = target.ms.sym;
>  
>  	if (!ops->target.outside) {
> @@ -889,7 +893,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
>  	unsigned offset;
>  	struct sym_hist *h;
>  
> -	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
> +	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
>  
>  	if ((addr < sym->start || addr >= sym->end) &&
>  	    (addr != sym->end || sym->start != sym->end)) {
> @@ -1016,13 +1020,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
>  	if (start &&
>  		(start->ms.sym == ams->ms.sym ||
>  		 (ams->ms.sym &&
> -		   start->addr == ams->ms.sym->start + ams->ms.map->start)))
> +		  start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
>  		saddr = start->al_addr;
>  	if (saddr == 0)
>  		pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
>  			ams->addr,
>  			start ? start->addr : 0,
> -			ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
> +			ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
>  			saddr);
>  	err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
>  	if (err)
> @@ -1593,7 +1597,7 @@ static void delete_last_nop(struct symbol *sym)
>  
>  int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  
>  	BUG_ON(buflen == 0);
>  
> @@ -1723,7 +1727,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
>  	struct map *map = args->ms.map;
>  	struct perf_bpil *info_linear;
>  	struct disassemble_info info;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	int pc = 0, count, sub_id;
>  	struct btf *btf = NULL;
>  	char tpath[PATH_MAX];
> @@ -1946,7 +1950,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
>  {
>  	struct annotation_options *opts = args->options;
>  	struct map *map = args->ms.map;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	char *command;
>  	FILE *file;
>  	char symfs_filename[PATH_MAX];
> @@ -1973,8 +1977,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
>  		return err;
>  
>  	pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
> -		 symfs_filename, sym->name, map->unmap_ip(map, sym->start),
> -		 map->unmap_ip(map, sym->end));
> +		 symfs_filename, sym->name, map__unmap_ip(map, sym->start),
> +		 map__unmap_ip(map, sym->end));
>  
>  	pr_debug("annotating [%p] %30s : [%p] %30s\n",
>  		 dso, dso->long_name, sym, sym->name);
> @@ -2386,7 +2390,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
>  {
>  	struct map *map = ms->map;
>  	struct symbol *sym = ms->sym;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	char *filename;
>  	const char *d_filename;
>  	const char *evsel_name = evsel__name(evsel);
> @@ -2569,7 +2573,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
>  	}
>  
>  	fprintf(fp, "%s() %s\nEvent: %s\n\n",
> -		ms->sym->name, ms->map->dso->long_name, ev_name);
> +		ms->sym->name, map__dso(ms->map)->long_name, ev_name);
>  	symbol__annotate_fprintf2(ms->sym, fp, opts);
>  
>  	fclose(fp);
> @@ -2781,7 +2785,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
>  		if (percent_max <= 0.5)
>  			continue;
>  
> -		al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
> +		al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
>  				       false, true, notes->start + al->offset);
>  		insert_source_line(&tmp_root, al, opts);
>  	}
> @@ -2800,7 +2804,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
>  int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
>  			  struct annotation_options *opts)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  	struct symbol *sym = ms->sym;
>  	struct rb_root source_line = RB_ROOT;
>  	struct hists *hists = evsel__hists(evsel);
> @@ -2836,7 +2840,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
>  int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
>  			 struct annotation_options *opts)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  	struct symbol *sym = ms->sym;
>  	struct rb_root source_line = RB_ROOT;
>  	int err;
> diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
> index 825336304a37..2e864c9bdef3 100644
> --- a/tools/perf/util/auxtrace.c
> +++ b/tools/perf/util/auxtrace.c
> @@ -2478,7 +2478,7 @@ static struct dso *load_dso(const char *name)
>  	if (map__load(map) < 0)
>  		pr_err("File '%s' not found or has no symbols.\n", name);
>  
> -	dso = dso__get(map->dso);
> +	dso = dso__get(map__dso(map));
>  
>  	map__put(map);
>  
> diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
> index 5ecd4f401f32..16a7b4adcf18 100644
> --- a/tools/perf/util/block-info.c
> +++ b/tools/perf/util/block-info.c
> @@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
>  	struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
>  	struct map *map = he->ms.map;
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> -				 map->dso->short_name);
> +				 map__dso(map)->short_name);
>  	}
>  
>  	return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
> index 33257b594a71..5717933be116 100644
> --- a/tools/perf/util/bpf-event.c
> +++ b/tools/perf/util/bpf-event.c
> @@ -95,10 +95,10 @@ static int machine__process_bpf_event_load(struct machine *machine,
>  		struct map *map = maps__find(machine__kernel_maps(machine), addr);
>  
>  		if (map) {
> -			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> -			map->dso->bpf_prog.id = id;
> -			map->dso->bpf_prog.sub_id = i;
> -			map->dso->bpf_prog.env = env;
> +			map__dso(map)->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> +			map__dso(map)->bpf_prog.id = id;
> +			map__dso(map)->bpf_prog.sub_id = i;
> +			map__dso(map)->bpf_prog.env = env;
>  		}
>  	}
>  	return 0;
> diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
> index 7a5821c87f94..274b705dd941 100644
> --- a/tools/perf/util/build-id.c
> +++ b/tools/perf/util/build-id.c
> @@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
>  	}
>  
>  	if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	thread__put(thread);
>  	return 0;
> diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> index 61bb3fb2107a..a8cfd31a3ff0 100644
> --- a/tools/perf/util/callchain.c
> +++ b/tools/perf/util/callchain.c
> @@ -695,8 +695,8 @@ static enum match_result match_chain_strings(const char *left,
>  static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
>  						   struct map *right_map, u64 right_ip)
>  {
> -	struct dso *left_dso = left_map ? left_map->dso : NULL;
> -	struct dso *right_dso = right_map ? right_map->dso : NULL;
> +	struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
> +	struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
>  
>  	if (left_dso != right_dso)
>  		return left_dso < right_dso ? MATCH_LT : MATCH_GT;
> @@ -1167,9 +1167,9 @@ char *callchain_list__sym_name(struct callchain_list *cl,
>  
>  	if (show_dso)
>  		scnprintf(bf + printed, bfsize - printed, " %s",
> -			  cl->ms.map ?
> -			  cl->ms.map->dso->short_name :
> -			  "unknown");
> +			  cl->ms.map
> +			  ? map__dso(cl->ms.map)->short_name
> +			  : "unknown");
>  
>  	return bf;
>  }
> diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
> index f1ab6edba446..9c83228bb9f1 100644
> --- a/tools/perf/util/data-convert-json.c
> +++ b/tools/perf/util/data-convert-json.c
> @@ -127,8 +127,8 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
>  		fputc(',', out);
>  		output_json_key_string(out, false, 5, "symbol", al->sym->name);
>  
> -		if (al->map && al->map->dso) {
> -			const char *dso = al->map->dso->short_name;
> +		if (al->map && map__dso(al->map)) {
> +			const char *dso = map__dso(al->map)->short_name;
>  
>  			if (dso && strlen(dso) > 0) {
>  				fputc(',', out);
> diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
> index 1cfcfdd3cf52..84c970c11794 100644
> --- a/tools/perf/util/db-export.c
> +++ b/tools/perf/util/db-export.c
> @@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
>  	int err;
>  
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		err = db_export__dso(dbe, dso, maps__machine(al->maps));
>  		if (err)
> @@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
>  		al.addr = node->ip;
>  
>  		if (al.map && !al.sym)
> -			al.sym = dso__find_symbol(al.map->dso, al.addr);
> +			al.sym = dso__find_symbol(map__dso(al.map), al.addr);
>  
>  		db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
>  
> diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
> index d59462af15f1..f1d9dd7065e6 100644
> --- a/tools/perf/util/dlfilter.c
> +++ b/tools/perf/util/dlfilter.c
> @@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
>  
>  	d_al->size = sizeof(*d_al);
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		if (symbol_conf.show_kernel_path && dso->long_name)
>  			d_al->dso = dso->long_name;
> @@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
>  		if (al->addr < sym->end)
>  			d_al->symoff = al->addr - sym->start;
>  		else
> -			d_al->symoff = al->addr - al->map->start - sym->start;
> +			d_al->symoff = al->addr - map__start(al->map) - sym->start;
>  		d_al->sym_binding = sym->binding;
>  	} else {
>  		d_al->sym = NULL;
> @@ -232,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
>  	map = al->map;
>  	addr = al->addr;
>  
> -	if (map && map->dso)
> -		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> -
> +	if (map && map__dso(map)) {
> +		srcfile = get_srcline_split(map__dso(map),
> +					    map__rip_2objdump(map, addr), &line);
> +	}
>  	*line_no = line;
>  	return srcfile;
>  }
> @@ -266,7 +267,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
>  
>  	map = al->map;
>  
> -	if (map && ip >= map->start && ip < map->end &&
> +	if (map && ip >= map__start(map) && ip < map__end(map) &&
>  	    machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
>  		goto have_map;
>  
> @@ -276,10 +277,10 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
>  
>  	map = a.map;
>  have_map:
> -	offset = map->map_ip(map, ip);
> -	if (ip + len >= map->end)
> -		len = map->end - ip;
> -	return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
> +	offset = map__map_ip(map, ip);
> +	if (ip + len >= map__end(map))
> +		len = map__end(map) - ip;
> +	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
>  }
>  
>  static const struct perf_dlfilter_fns perf_dlfilter_fns = {
> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> index b2f570adba35..1115bc51a261 100644
> --- a/tools/perf/util/dso.c
> +++ b/tools/perf/util/dso.c
> @@ -1109,7 +1109,7 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
>  			    struct machine *machine, u64 addr,
>  			    u8 *data, ssize_t size)
>  {
> -	u64 offset = map->map_ip(map, addr);
> +	u64 offset = map__map_ip(map, addr);
>  	return dso__data_read_offset(dso, machine, offset, data, size);
>  }
>  
> @@ -1149,7 +1149,7 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
>  				   struct machine *machine, u64 addr,
>  				   const u8 *data, ssize_t size)
>  {
> -	u64 offset = map->map_ip(map, addr);
> +	u64 offset = map__map_ip(map, addr);
>  	return dso__data_write_cache_offs(dso, machine, offset, data, size);
>  }
>  
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index 40a3b1a35613..54a1d4df5f70 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -486,7 +486,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
>  
>  		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
>  		if (al.map && map__load(al.map) >= 0) {
> -			al.addr = al.map->map_ip(al.map, tp->addr);
> +			al.addr = map__map_ip(al.map, tp->addr);
>  			al.sym = map__find_symbol(al.map, al.addr);
>  			if (al.sym)
>  				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> @@ -621,7 +621,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  		 */
>  		if (load_map)
>  			map__load(al->map);
> -		al->addr = al->map->map_ip(al->map, al->addr);
> +		al->addr = map__map_ip(al->map, al->addr);
>  	}
>  
>  	return al->map;
> @@ -692,8 +692,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  	dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
>  	thread__find_map(thread, sample->cpumode, sample->ip, al);
>  	dump_printf(" ...... dso: %s\n",
> -		    al->map ? al->map->dso->long_name :
> -			al->level == 'H' ? "[hypervisor]" : "<not found>");
> +		    al->map ? map__dso(al->map)->long_name
> +			    : al->level == 'H' ? "[hypervisor]" : "<not found>");
>  
>  	if (thread__is_filtered(thread))
>  		al->filtered |= (1 << HIST_FILTER__THREAD);
> @@ -711,7 +711,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  	}
>  
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		if (symbol_conf.dso_list &&
>  		    (!dso || !(strlist__has_entry(symbol_conf.dso_list,
> @@ -738,12 +738,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  		}
>  		if (!ret && al->sym) {
>  			snprintf(al_addr_str, sz, "0x%"PRIx64,
> -				al->map->unmap_ip(al->map, al->sym->start));
> +				 map__unmap_ip(al->map, al->sym->start));
>  			ret = strlist__has_entry(symbol_conf.sym_list,
>  						al_addr_str);
>  		}
>  		if (!ret && symbol_conf.addr_list && al->map) {
> -			unsigned long addr = al->map->unmap_ip(al->map, al->addr);
> +			unsigned long addr = map__unmap_ip(al->map, al->addr);
>  
>  			ret = intlist__has_entry(symbol_conf.addr_list, addr);
>  			if (!ret && symbol_conf.addr_range) {
> diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
> index 8c2ea8001329..ac6fef9d8906 100644
> --- a/tools/perf/util/evsel_fprintf.c
> +++ b/tools/perf/util/evsel_fprintf.c
> @@ -146,11 +146,11 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
>  				printed += fprintf(fp, " <-");
>  
>  			if (map)
> -				addr = map->map_ip(map, node->ip);
> +				addr = map__map_ip(map, node->ip);
>  
>  			if (print_ip) {
>  				/* Show binary offset for userspace addr */
> -				if (map && !map->dso->kernel)
> +				if (map && !map__dso(map)->kernel)
>  					printed += fprintf(fp, "%c%16" PRIx64, s, addr);
>  				else
>  					printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index 78f9fbb925a7..f19ac6eb4775 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -105,7 +105,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  		hists__set_col_len(hists, HISTC_THREAD, len + 8);
>  
>  	if (h->ms.map) {
> -		len = dso__name_len(h->ms.map->dso);
> +		len = dso__name_len(map__dso(h->ms.map));
>  		hists__new_col_len(hists, HISTC_DSO, len);
>  	}
>  
> @@ -119,7 +119,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  				symlen += BITS_PER_LONG / 4 + 2 + 3;
>  			hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
>  
> -			symlen = dso__name_len(h->branch_info->from.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
>  			hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
>  		} else {
>  			symlen = unresolved_col_width + 4 + 2;
> @@ -133,7 +133,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  				symlen += BITS_PER_LONG / 4 + 2 + 3;
>  			hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
>  
> -			symlen = dso__name_len(h->branch_info->to.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
>  			hists__new_col_len(hists, HISTC_DSO_TO, symlen);
>  		} else {
>  			symlen = unresolved_col_width + 4 + 2;
> @@ -177,7 +177,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  		}
>  
>  		if (h->mem_info->daddr.ms.map) {
> -			symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
>  			hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
>  					   symlen);
>  		} else {
> @@ -2096,7 +2096,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
>  				       struct hist_entry *he)
>  {
>  	if (hists->dso_filter != NULL &&
> -	    (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
> +	    (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
>  		he->filtered |= (1 << HIST_FILTER__DSO);
>  		return true;
>  	}
> diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
> index e8613cbda331..c88f112c0a06 100644
> --- a/tools/perf/util/intel-pt.c
> +++ b/tools/perf/util/intel-pt.c
> @@ -731,20 +731,20 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  	}
>  
>  	while (1) {
> -		if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso)
> +		if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map))
>  			return -EINVAL;
>  
> -		if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
> -		    dso__data_status_seen(al.map->dso,
> +		if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR &&
> +		    dso__data_status_seen(map__dso(al.map),
>  					  DSO_DATA_STATUS_SEEN_ITRACE))
>  			return -ENOENT;
>  
> -		offset = al.map->map_ip(al.map, *ip);
> +		offset = map__map_ip(al.map, *ip);
>  
>  		if (!to_ip && one_map) {
>  			struct intel_pt_cache_entry *e;
>  
> -			e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> +			e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
>  			if (e &&
>  			    (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
>  				*insn_cnt_ptr = e->insn_cnt;
> @@ -766,10 +766,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  		/* Load maps to ensure dso->is_64_bit has been updated */
>  		map__load(al.map);
>  
> -		x86_64 = al.map->dso->is_64_bit;
> +		x86_64 = map__dso(al.map)->is_64_bit;
>  
>  		while (1) {
> -			len = dso__data_read_offset(al.map->dso, machine,
> +			len = dso__data_read_offset(map__dso(al.map), machine,
>  						    offset, buf,
>  						    INTEL_PT_INSN_BUF_SZ);
>  			if (len <= 0)
> @@ -795,7 +795,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  				goto out_no_cache;
>  			}
>  
> -			if (*ip >= al.map->end)
> +			if (*ip >= map__end(al.map))
>  				break;
>  
>  			offset += intel_pt_insn->length;
> @@ -815,13 +815,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  	if (to_ip) {
>  		struct intel_pt_cache_entry *e;
>  
> -		e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
> +		e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
>  		if (e)
>  			return 0;
>  	}
>  
>  	/* Ignore cache errors */
> -	intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
> +	intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
>  			   *ip - start_ip, intel_pt_insn);
>  
>  	return 0;
> @@ -892,13 +892,13 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
>  	if (!thread)
>  		return -EINVAL;
>  
> -	if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
> +	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
>  		return -EINVAL;
>  
> -	offset = al.map->map_ip(al.map, ip);
> +	offset = map__map_ip(al.map, ip);
>  
>  	return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
> -				     al.map->dso->long_name);
> +				     map__dso(al.map)->long_name);
>  }
>  
>  static bool intel_pt_pgd_ip(uint64_t ip, void *data)
> @@ -2406,13 +2406,13 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
>  	if (map__load(map))
>  		return 0;
>  
> -	start = dso__first_symbol(map->dso);
> +	start = dso__first_symbol(map__dso(map));
>  
>  	for (sym = start; sym; sym = dso__next_symbol(sym)) {
>  		if (sym->binding == STB_GLOBAL &&
>  		    !strcmp(sym->name, "__switch_to")) {
> -			ip = map->unmap_ip(map, sym->start);
> -			if (ip >= map->start && ip < map->end) {
> +			ip = map__unmap_ip(map, sym->start);
> +			if (ip >= map__start(map) && ip < map__end(map)) {
>  				switch_ip = ip;
>  				break;
>  			}
> @@ -2429,8 +2429,8 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
>  
>  	for (sym = start; sym; sym = dso__next_symbol(sym)) {
>  		if (!strcmp(sym->name, ptss)) {
> -			ip = map->unmap_ip(map, sym->start);
> -			if (ip >= map->start && ip < map->end) {
> +			ip = map__unmap_ip(map, sym->start);
> +			if (ip >= map__start(map) && ip < map__end(map)) {
>  				*ptss_ip = ip;
>  				break;
>  			}
> @@ -2965,7 +2965,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
>  static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
>  			     struct addr_location *al)
>  {
> -	if (!al->map || addr < al->map->start || addr >= al->map->end) {
> +	if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
>  		if (!thread__find_map(thread, cpumode, addr, al))
>  			return -1;
>  	}
> @@ -2996,12 +2996,12 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
>  			continue;
>  		}
>  
> -		if (!al.map->dso || !al.map->dso->auxtrace_cache)
> +		if (!map__dso(al.map) || !map__dso(al.map)->auxtrace_cache)
>  			continue;
>  
> -		offset = al.map->map_ip(al.map, addr);
> +		offset = map__map_ip(al.map, addr);
>  
> -		e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> +		e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
>  		if (!e)
>  			continue;
>  
> @@ -3014,9 +3014,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
>  			if (e->branch != INTEL_PT_BR_NO_BRANCH)
>  				return 0;
>  		} else {
> -			intel_pt_cache_invalidate(al.map->dso, machine, offset);
> +			intel_pt_cache_invalidate(map__dso(al.map), machine, offset);
>  			intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
> -				     al.map->dso->long_name, addr);
> +				     map__dso(al.map)->long_name, addr);
>  		}
>  	}
>  
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 88279008e761..940fb2a50dfd 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
>  
>  static struct dso *machine__kernel_dso(struct machine *machine)
>  {
> -	return machine->vmlinux_map->dso;
> +	return map__dso(machine->vmlinux_map);
>  }
>  
>  static void dsos__init(struct dsos *dsos)
> @@ -842,9 +842,10 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
>  	if (map != machine->vmlinux_map)
>  		maps__remove(machine__kernel_maps(machine), map);
>  	else {
> -		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
> +		sym = dso__find_symbol(map__dso(map),
> +				map__map_ip(map, map__start(map)));
>  		if (sym)
> -			dso__delete_symbol(map->dso, sym);
> +			dso__delete_symbol(map__dso(map), sym);
>  	}
>  
>  	return 0;
> @@ -880,7 +881,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
>  		return 0;
>  	}
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
>  		int ret;
>  
> @@ -889,7 +890,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
>  		 * must be done prior to using kernel maps.
>  		 */
>  		map__load(map);
> -		ret = dso__data_write_cache_addr(map->dso, map, machine,
> +		ret = dso__data_write_cache_addr(map__dso(map), map, machine,
>  						 event->text_poke.addr,
>  						 new_bytes,
>  						 event->text_poke.new_len);
> @@ -931,6 +932,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  	/* If maps__insert failed, return NULL. */
>  	if (err)
>  		map = NULL;
> +
>  out:
>  	/* put the dso here, corresponding to  machine__findnew_module_dso */
>  	dso__put(dso);
> @@ -1118,7 +1120,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
>  
>  	if (!err) {
>  		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> -			kmap->name, map->start, map->end);
> +			kmap->name, map__start(map), map__end(map));
>  	}
>  
>  	map__put(map);
> @@ -1178,9 +1180,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
>  		if (!kmap || !is_entry_trampoline(kmap->name))
>  			continue;
>  
> -		dest_map = maps__find(kmaps, map->pgoff);
> +		dest_map = maps__find(kmaps, map__pgoff(map));
>  		if (dest_map != map)
> -			map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
> +			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
>  		found = true;
>  	}
>  	if (found || machine->trampolines_mapped)
> @@ -1230,7 +1232,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  	if (machine->vmlinux_map == NULL)
>  		return -ENOMEM;
>  
> -	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> +	machine->vmlinux_map->map_ip = map__identity_ip;
> +	machine->vmlinux_map->unmap_ip = map__identity_ip;
>  	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
>  }
>  
> @@ -1329,10 +1332,10 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
>  int machine__load_kallsyms(struct machine *machine, const char *filename)
>  {
>  	struct map *map = machine__kernel_map(machine);
> -	int ret = __dso__load_kallsyms(map->dso, filename, map, true);
> +	int ret = __dso__load_kallsyms(map__dso(map), filename, map, true);
>  
>  	if (ret > 0) {
> -		dso__set_loaded(map->dso);
> +		dso__set_loaded(map__dso(map));
>  		/*
>  		 * Since /proc/kallsyms will have multiple sessions for the
>  		 * kernel, with modules between them, fixup the end of all
> @@ -1347,10 +1350,10 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
>  int machine__load_vmlinux_path(struct machine *machine)
>  {
>  	struct map *map = machine__kernel_map(machine);
> -	int ret = dso__load_vmlinux_path(map->dso, map);
> +	int ret = dso__load_vmlinux_path(map__dso(map), map);
>  
>  	if (ret > 0)
> -		dso__set_loaded(map->dso);
> +		dso__set_loaded(map__dso(map));
>  
>  	return ret;
>  }
> @@ -1401,16 +1404,16 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
>  	if (long_name == NULL)
>  		return -ENOMEM;
>  
> -	dso__set_long_name(map->dso, long_name, true);
> -	dso__kernel_module_get_build_id(map->dso, "");
> +	dso__set_long_name(map__dso(map), long_name, true);
> +	dso__kernel_module_get_build_id(map__dso(map), "");
>  
>  	/*
>  	 * Full name could reveal us kmod compression, so
>  	 * we need to update the symtab_type if needed.
>  	 */
> -	if (m->comp && is_kmod_dso(map->dso)) {
> -		map->dso->symtab_type++;
> -		map->dso->comp = m->comp;
> +	if (m->comp && is_kmod_dso(map__dso(map))) {
> +		map__dso(map)->symtab_type++;
> +		map__dso(map)->comp = m->comp;
>  	}
>  
>  	return 0;
> @@ -1509,8 +1512,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
>  		return -1;
>  	map->end = start + size;
>  
> -	dso__kernel_module_get_build_id(map->dso, machine->root_dir);
> -
> +	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
>  	return 0;
>  }
>  
> @@ -1619,7 +1621,7 @@ int machine__create_kernel_maps(struct machine *machine)
>  		struct map_rb_node *next = map_rb_node__next(rb_node);
>  
>  		if (next)
> -			machine__set_kernel_mmap(machine, start, next->map->start);
> +			machine__set_kernel_mmap(machine, start, map__start(next->map));
>  	}
>  
>  out_put:
> @@ -1683,10 +1685,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
>  		if (map == NULL)
>  			goto out_problem;
>  
> -		map->end = map->start + xm->end - xm->start;
> +		map->end = map__start(map) + xm->end - xm->start;
>  
>  		if (build_id__is_defined(bid))
> -			dso__set_build_id(map->dso, bid);
> +			dso__set_build_id(map__dso(map), bid);
>  
>  	} else if (is_kernel_mmap) {
>  		const char *symbol_name = (xm->name + strlen(machine->mmap_name));
> @@ -2148,14 +2150,14 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
>  	if (!map || callchain_param.key == CCKEY_FUNCTION)
>  		return srcline;
>  
> -	srcline = srcline__tree_find(&map->dso->srclines, ip);
> +	srcline = srcline__tree_find(&map__dso(map)->srclines, ip);
>  	if (!srcline) {
>  		bool show_sym = false;
>  		bool show_addr = callchain_param.key == CCKEY_ADDRESS;
>  
> -		srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
> +		srcline = get_srcline(map__dso(map), map__rip_2objdump(map, ip),
>  				      ms->sym, show_sym, show_addr, ip);
> -		srcline__tree_insert(&map->dso->srclines, ip, srcline);
> +		srcline__tree_insert(&map__dso(map)->srclines, ip, srcline);
>  	}
>  
>  	return srcline;
> @@ -2179,7 +2181,7 @@ static int add_callchain_ip(struct thread *thread,
>  {
>  	struct map_symbol ms;
>  	struct addr_location al;
> -	int nr_loop_iter = 0;
> +	int nr_loop_iter = 0, err;
>  	u64 iter_cycles = 0;
>  	const char *srcline = NULL;
>  
> @@ -2228,9 +2230,10 @@ static int add_callchain_ip(struct thread *thread,
>  		}
>  	}
>  
> -	if (symbol_conf.hide_unresolved && al.sym == NULL)
> +	if (symbol_conf.hide_unresolved && al.sym == NULL) {
> +		addr_location__put(&al);
>  		return 0;
> -
> +	}
>  	if (iter) {
>  		nr_loop_iter = iter->nr_loop_iter;
>  		iter_cycles = iter->cycles;
> @@ -2240,9 +2243,10 @@ static int add_callchain_ip(struct thread *thread,
>  	ms.map = al.map;
>  	ms.sym = al.sym;
>  	srcline = callchain_srcline(&ms, al.addr);
> -	return callchain_cursor_append(cursor, ip, &ms,
> -				       branch, flags, nr_loop_iter,
> -				       iter_cycles, branch_from, srcline);
> +	err = callchain_cursor_append(cursor, ip, &ms,
> +				      branch, flags, nr_loop_iter,
> +				      iter_cycles, branch_from, srcline);
> +	return err;
>  }
>  
>  struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
> @@ -2937,15 +2941,15 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
>  	if (!symbol_conf.inline_name || !map || !sym)
>  		return ret;
>  
> -	addr = map__map_ip(map, ip);
> +	addr = map__dso_map_ip(map, ip);
>  	addr = map__rip_2objdump(map, addr);
>  
> -	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
> +	inline_node = inlines__tree_find(&map__dso(map)->inlined_nodes, addr);
>  	if (!inline_node) {
> -		inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
> +		inline_node = dso__parse_addr_inlines(map__dso(map), addr, sym);
>  		if (!inline_node)
>  			return ret;
> -		inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
> +		inlines__tree_insert(&map__dso(map)->inlined_nodes, inline_node);
>  	}
>  
>  	list_for_each_entry(ilist, &inline_node->val, list) {
> @@ -2981,7 +2985,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
>  	 * its corresponding binary.
>  	 */
>  	if (entry->ms.map)
> -		addr = map__map_ip(entry->ms.map, entry->ip);
> +		addr = map__dso_map_ip(entry->ms.map, entry->ip);
>  
>  	srcline = callchain_srcline(&entry->ms, addr);
>  	return callchain_cursor_append(cursor, entry->ip, &entry->ms,
> @@ -3183,7 +3187,7 @@ int machine__get_kernel_start(struct machine *machine)
>  		 * kernel_start = 1ULL << 63 for x86_64.
>  		 */
>  		if (!err && !machine__is(machine, "x86_64"))
> -			machine->kernel_start = map->start;
> +			machine->kernel_start = map__start(map);
>  	}
>  	return err;
>  }
> @@ -3234,8 +3238,8 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
>  	if (sym == NULL)
>  		return NULL;
>  
> -	*modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
> -	*addrp = map->unmap_ip(map, sym->start);
> +	*modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
> +	*addrp = map__unmap_ip(map, sym->start);
>  	return sym->name;
>  }
>  
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 57e926ce115f..47d81e361e29 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
>  	map->pgoff    = pgoff;
>  	map->reloc    = 0;
>  	map->dso      = dso__get(dso);
> -	map->map_ip   = map__map_ip;
> -	map->unmap_ip = map__unmap_ip;
> +	map->map_ip   = map__dso_map_ip;
> +	map->unmap_ip = map__dso_unmap_ip;
>  	map->erange_warned = false;
>  	refcount_set(&map->refcnt, 1);
>  }
> @@ -120,10 +120,11 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  		     u32 prot, u32 flags, struct build_id *bid,
>  		     char *filename, struct thread *thread)
>  {
> -	struct map *map = malloc(sizeof(*map));
> +	struct map *map;
>  	struct nsinfo *nsi = NULL;
>  	struct nsinfo *nnsi;
>  
> +	map = malloc(sizeof(*map));
>  	if (map != NULL) {
>  		char newfilename[PATH_MAX];
>  		struct dso *dso;
> @@ -170,7 +171,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  		map__init(map, start, start + len, pgoff, dso);
>  
>  		if (anon || no_dso) {
> -			map->map_ip = map->unmap_ip = identity__map_ip;
> +			map->map_ip = map->unmap_ip = map__identity_ip;
>  
>  			/*
>  			 * Set memory without DSO as loaded. All map__find_*
> @@ -204,8 +205,9 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>   */
>  struct map *map__new2(u64 start, struct dso *dso)
>  {
> -	struct map *map = calloc(1, (sizeof(*map) +
> -				     (dso->kernel ? sizeof(struct kmap) : 0)));
> +	struct map *map;
> +
> +	map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
>  	if (map != NULL) {
>  		/*
>  		 * ->end will be filled after we load all the symbols
> @@ -218,7 +220,7 @@ struct map *map__new2(u64 start, struct dso *dso)
>  
>  bool __map__is_kernel(const struct map *map)
>  {
> -	if (!map->dso->kernel)
> +	if (!map__dso(map)->kernel)
>  		return false;
>  	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
>  }
> @@ -234,7 +236,7 @@ bool __map__is_bpf_prog(const struct map *map)
>  {
>  	const char *name;
>  
> -	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> +	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
>  		return true;
>  
>  	/*
> @@ -242,7 +244,7 @@ bool __map__is_bpf_prog(const struct map *map)
>  	 * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
>  	 * guess the type based on name.
>  	 */
> -	name = map->dso->short_name;
> +	name = map__dso(map)->short_name;
>  	return name && (strstr(name, "bpf_prog_") == name);
>  }
>  
> @@ -250,7 +252,7 @@ bool __map__is_bpf_image(const struct map *map)
>  {
>  	const char *name;
>  
> -	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> +	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
>  		return true;
>  
>  	/*
> @@ -258,18 +260,19 @@ bool __map__is_bpf_image(const struct map *map)
>  	 * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
>  	 * guess the type based on name.
>  	 */
> -	name = map->dso->short_name;
> +	name = map__dso(map)->short_name;
>  	return name && is_bpf_image(name);
>  }
>  
>  bool __map__is_ool(const struct map *map)
>  {
> -	return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
> +	return map__dso(map) &&
> +	       map__dso(map)->binary_type == DSO_BINARY_TYPE__OOL;
>  }
>  
>  bool map__has_symbols(const struct map *map)
>  {
> -	return dso__has_symbols(map->dso);
> +	return dso__has_symbols(map__dso(map));
>  }
>  
>  static void map__exit(struct map *map)
> @@ -292,7 +295,7 @@ void map__put(struct map *map)
>  
>  void map__fixup_start(struct map *map)
>  {
> -	struct rb_root_cached *symbols = &map->dso->symbols;
> +	struct rb_root_cached *symbols = &map__dso(map)->symbols;
>  	struct rb_node *nd = rb_first_cached(symbols);
>  	if (nd != NULL) {
>  		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> @@ -302,7 +305,7 @@ void map__fixup_start(struct map *map)
>  
>  void map__fixup_end(struct map *map)
>  {
> -	struct rb_root_cached *symbols = &map->dso->symbols;
> +	struct rb_root_cached *symbols = &map__dso(map)->symbols;
>  	struct rb_node *nd = rb_last(&symbols->rb_root);
>  	if (nd != NULL) {
>  		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> @@ -314,18 +317,18 @@ void map__fixup_end(struct map *map)
>  
>  int map__load(struct map *map)
>  {
> -	const char *name = map->dso->long_name;
> +	const char *name = map__dso(map)->long_name;
>  	int nr;
>  
> -	if (dso__loaded(map->dso))
> +	if (dso__loaded(map__dso(map)))
>  		return 0;
>  
> -	nr = dso__load(map->dso, map);
> +	nr = dso__load(map__dso(map), map);
>  	if (nr < 0) {
> -		if (map->dso->has_build_id) {
> +		if (map__dso(map)->has_build_id) {
>  			char sbuild_id[SBUILD_ID_SIZE];
>  
> -			build_id__sprintf(&map->dso->bid, sbuild_id);
> +			build_id__sprintf(&map__dso(map)->bid, sbuild_id);
>  			pr_debug("%s with build id %s not found", name, sbuild_id);
>  		} else
>  			pr_debug("Failed to open %s", name);
> @@ -357,7 +360,7 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
>  	if (map__load(map) < 0)
>  		return NULL;
>  
> -	return dso__find_symbol(map->dso, addr);
> +	return dso__find_symbol(map__dso(map), addr);
>  }
>  
>  struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> @@ -365,24 +368,24 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
>  	if (map__load(map) < 0)
>  		return NULL;
>  
> -	if (!dso__sorted_by_name(map->dso))
> -		dso__sort_by_name(map->dso);
> +	if (!dso__sorted_by_name(map__dso(map)))
> +		dso__sort_by_name(map__dso(map));
>  
> -	return dso__find_symbol_by_name(map->dso, name);
> +	return dso__find_symbol_by_name(map__dso(map), name);
>  }
>  
>  struct map *map__clone(struct map *from)
>  {
> -	size_t size = sizeof(struct map);
>  	struct map *map;
> +	size_t size = sizeof(struct map);
>  
> -	if (from->dso && from->dso->kernel)
> +	if (map__dso(from) && map__dso(from)->kernel)
>  		size += sizeof(struct kmap);
>  
>  	map = memdup(from, size);
>  	if (map != NULL) {
>  		refcount_set(&map->refcnt, 1);
> -		dso__get(map->dso);
> +		map->dso = dso__get(map->dso);
>  	}
>  
>  	return map;
> @@ -391,7 +394,8 @@ struct map *map__clone(struct map *from)
>  size_t map__fprintf(struct map *map, FILE *fp)
>  {
>  	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
> -		       map->start, map->end, map->pgoff, map->dso->name);
> +		       map__start(map), map__end(map),
> +		       map__pgoff(map), map__dso(map)->name);
>  }
>  
>  size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> @@ -399,11 +403,11 @@ size_t map__fprintf_dsoname(struct map *map, FILE *fp)
>  	char buf[symbol_conf.pad_output_len_dso + 1];
>  	const char *dsoname = "[unknown]";
>  
> -	if (map && map->dso) {
> -		if (symbol_conf.show_kernel_path && map->dso->long_name)
> -			dsoname = map->dso->long_name;
> +	if (map && map__dso(map)) {
> +		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> +			dsoname = map__dso(map)->long_name;
>  		else
> -			dsoname = map->dso->name;
> +			dsoname = map__dso(map)->name;
>  	}
>  
>  	if (symbol_conf.pad_output_len_dso) {
> @@ -418,7 +422,8 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
>  {
>  	if (map == NULL)
>  		return SRCLINE_UNKNOWN;
> -	return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
> +	return get_srcline(map__dso(map), map__rip_2objdump(map, addr),
> +			   sym, true, true, addr);
>  }
>  
>  int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> @@ -426,7 +431,7 @@ int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
>  {
>  	int ret = 0;
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		char *srcline = map__srcline(map, addr, NULL);
>  		if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
>  			ret = fprintf(fp, "%s%s", prefix, srcline);
> @@ -472,20 +477,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
>  		}
>  	}
>  
> -	if (!map->dso->adjust_symbols)
> +	if (!map__dso(map)->adjust_symbols)
>  		return rip;
>  
> -	if (map->dso->rel)
> -		return rip - map->pgoff;
> +	if (map__dso(map)->rel)
> +		return rip - map__pgoff(map);
>  
>  	/*
>  	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
>  	 * but all kernel modules are ET_REL, so won't get here.
>  	 */
> -	if (map->dso->kernel == DSO_SPACE__USER)
> -		return rip + map->dso->text_offset;
> +	if (map__dso(map)->kernel == DSO_SPACE__USER)
> +		return rip + map__dso(map)->text_offset;
>  
> -	return map->unmap_ip(map, rip) - map->reloc;
> +	return map__unmap_ip(map, rip) - map__reloc(map);
>  }
>  
>  /**
> @@ -502,34 +507,34 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
>   */
>  u64 map__objdump_2mem(struct map *map, u64 ip)
>  {
> -	if (!map->dso->adjust_symbols)
> -		return map->unmap_ip(map, ip);
> +	if (!map__dso(map)->adjust_symbols)
> +		return map__unmap_ip(map, ip);
>  
> -	if (map->dso->rel)
> -		return map->unmap_ip(map, ip + map->pgoff);
> +	if (map__dso(map)->rel)
> +		return map__unmap_ip(map, ip + map__pgoff(map));
>  
>  	/*
>  	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
>  	 * but all kernel modules are ET_REL, so won't get here.
>  	 */
> -	if (map->dso->kernel == DSO_SPACE__USER)
> -		return map->unmap_ip(map, ip - map->dso->text_offset);
> +	if (map__dso(map)->kernel == DSO_SPACE__USER)
> +		return map__unmap_ip(map, ip - map__dso(map)->text_offset);
>  
> -	return ip + map->reloc;
> +	return ip + map__reloc(map);
>  }
>  
>  bool map__contains_symbol(const struct map *map, const struct symbol *sym)
>  {
> -	u64 ip = map->unmap_ip(map, sym->start);
> +	u64 ip = map__unmap_ip(map, sym->start);
>  
> -	return ip >= map->start && ip < map->end;
> +	return ip >= map__start(map) && ip < map__end(map);
>  }
>  
>  struct kmap *__map__kmap(struct map *map)
>  {
> -	if (!map->dso || !map->dso->kernel)
> +	if (!map__dso(map) || !map__dso(map)->kernel)
>  		return NULL;
> -	return (struct kmap *)(map + 1);
> +	return (struct kmap *)(&map[1]);
>  }
>  
>  struct kmap *map__kmap(struct map *map)
> @@ -552,17 +557,17 @@ struct maps *map__kmaps(struct map *map)
>  	return kmap->kmaps;
>  }
>  
> -u64 map__map_ip(const struct map *map, u64 ip)
> +u64 map__dso_map_ip(const struct map *map, u64 ip)
>  {
> -	return ip - map->start + map->pgoff;
> +	return ip - map__start(map) + map__pgoff(map);
>  }
>  
> -u64 map__unmap_ip(const struct map *map, u64 ip)
> +u64 map__dso_unmap_ip(const struct map *map, u64 ip)
>  {
> -	return ip + map->start - map->pgoff;
> +	return ip + map__start(map) - map__pgoff(map);
>  }
>  
> -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
> +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip)
>  {
>  	return ip;
>  }
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index d1a6f85fd31d..99ef0464a357 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -41,15 +41,65 @@ struct kmap *map__kmap(struct map *map);
>  struct maps *map__kmaps(struct map *map);
>  
>  /* ip -> dso rip */
> -u64 map__map_ip(const struct map *map, u64 ip);
> +u64 map__dso_map_ip(const struct map *map, u64 ip);
>  /* dso rip -> ip */
> -u64 map__unmap_ip(const struct map *map, u64 ip);
> +u64 map__dso_unmap_ip(const struct map *map, u64 ip);
>  /* Returns ip */
> -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
> +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
> +
> +static inline struct dso *map__dso(const struct map *map)
> +{
> +	return map->dso;
> +}
> +
> +static inline u64 map__map_ip(const struct map *map, u64 ip)
> +{
> +	return map->map_ip(map, ip);
> +}
> +
> +static inline u64 map__unmap_ip(const struct map *map, u64 ip)
> +{
> +	return map->unmap_ip(map, ip);
> +}
> +
> +static inline u64 map__start(const struct map *map)
> +{
> +	return map->start;
> +}
> +
> +static inline u64 map__end(const struct map *map)
> +{
> +	return map->end;
> +}
> +
> +static inline u64 map__pgoff(const struct map *map)
> +{
> +	return map->pgoff;
> +}
> +
> +static inline u64 map__reloc(const struct map *map)
> +{
> +	return map->reloc;
> +}
> +
> +static inline u32 map__flags(const struct map *map)
> +{
> +	return map->flags;
> +}
> +
> +static inline u32 map__prot(const struct map *map)
> +{
> +	return map->prot;
> +}
> +
> +static inline bool map__priv(const struct map *map)
> +{
> +	return map->priv;
> +}
>  
>  static inline size_t map__size(const struct map *map)
>  {
> -	return map->end - map->start;
> +	return map__end(map) - map__start(map);
>  }
>  
>  /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> index 9fc3e7186b8e..6efbcb79131c 100644
> --- a/tools/perf/util/maps.c
> +++ b/tools/perf/util/maps.c
> @@ -30,24 +30,24 @@ static void __maps__free_maps_by_name(struct maps *maps)
>  	maps->nr_maps_allocated = 0;
>  }
>  
> -static int __maps__insert(struct maps *maps, struct map *map)
> +static struct map *__maps__insert(struct maps *maps, struct map *map)
>  {
>  	struct rb_node **p = &maps__entries(maps)->rb_node;
>  	struct rb_node *parent = NULL;
> -	const u64 ip = map->start;
> +	const u64 ip = map__start(map);
>  	struct map_rb_node *m, *new_rb_node;
>  
>  	new_rb_node = malloc(sizeof(*new_rb_node));
>  	if (!new_rb_node)
> -		return -ENOMEM;
> +		return NULL;
>  
>  	RB_CLEAR_NODE(&new_rb_node->rb_node);
> -	new_rb_node->map = map;
> +	new_rb_node->map = map__get(map);
>  
>  	while (*p != NULL) {
>  		parent = *p;
>  		m = rb_entry(parent, struct map_rb_node, rb_node);
> -		if (ip < m->map->start)
> +		if (ip < map__start(m->map))
>  			p = &(*p)->rb_left;
>  		else
>  			p = &(*p)->rb_right;
> @@ -55,22 +55,23 @@ static int __maps__insert(struct maps *maps, struct map *map)
>  
>  	rb_link_node(&new_rb_node->rb_node, parent, p);
>  	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
> -	map__get(map);
> -	return 0;
> +	return new_rb_node->map;
>  }
>  
>  int maps__insert(struct maps *maps, struct map *map)
>  {
> -	int err;
> +	int err = 0;
>  
>  	down_write(maps__lock(maps));
> -	err = __maps__insert(maps, map);
> -	if (err)
> +	map = __maps__insert(maps, map);
> +	if (!map) {
> +		err = -ENOMEM;
>  		goto out;
> +	}
>  
>  	++maps->nr_maps;
>  
> -	if (map->dso && map->dso->kernel) {
> +	if (map__dso(map) && map__dso(map)->kernel) {
>  		struct kmap *kmap = map__kmap(map);
>  
>  		if (kmap)
> @@ -193,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
>  	if (map != NULL && map__load(map) >= 0) {
>  		if (mapp != NULL)
>  			*mapp = map;
> -		return map__find_symbol(map, map->map_ip(map, addr));
> +		return map__find_symbol(map, map__map_ip(map, addr));
>  	}
>  
>  	return NULL;
> @@ -228,7 +229,8 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
>  
>  int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
>  {
> -	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> +	if (ams->addr < map__start(ams->ms.map) ||
> +	    ams->addr >= map__end(ams->ms.map)) {
>  		if (maps == NULL)
>  			return -1;
>  		ams->ms.map = maps__find(maps, ams->addr);
> @@ -236,7 +238,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
>  			return -1;
>  	}
>  
> -	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> +	ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
>  	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
>  
>  	return ams->ms.sym ? 0 : -1;
> @@ -253,7 +255,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
>  		printed += fprintf(fp, "Map:");
>  		printed += map__fprintf(pos->map, fp);
>  		if (verbose > 2) {
> -			printed += dso__fprintf(pos->map->dso, fp);
> +			printed += dso__fprintf(map__dso(pos->map), fp);
>  			printed += fprintf(fp, "--\n");
>  		}
>  	}
> @@ -282,9 +284,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  	while (next) {
>  		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
>  
> -		if (pos->map->end > map->start) {
> +		if (map__end(pos->map) > map__start(map)) {
>  			first = next;
> -			if (pos->map->start <= map->start)
> +			if (map__start(pos->map) <= map__start(map))
>  				break;
>  			next = next->rb_left;
>  		} else
> @@ -300,14 +302,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  		 * Stop if current map starts after map->end.
>  		 * Maps are ordered by start: next will not overlap for sure.
>  		 */
> -		if (pos->map->start >= map->end)
> +		if (map__start(pos->map) >= map__end(map))
>  			break;
>  
>  		if (verbose >= 2) {
>  
>  			if (use_browser) {
>  				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> -					   map->dso->name);
> +					   map__dso(map)->name);
>  			} else {
>  				fputs("overlapping maps:\n", fp);
>  				map__fprintf(map, fp);
> @@ -320,7 +322,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  		 * Now check if we need to create new maps for areas not
>  		 * overlapped by the new map:
>  		 */
> -		if (map->start > pos->map->start) {
> +		if (map__start(map) > map__start(pos->map)) {
>  			struct map *before = map__clone(pos->map);
>  
>  			if (before == NULL) {
> @@ -328,17 +330,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  				goto put_map;
>  			}
>  
> -			before->end = map->start;
> -			err = __maps__insert(maps, before);
> -			if (err)
> +			before->end = map__start(map);
> +			if (!__maps__insert(maps, before)) {
> +				map__put(before);
> +				err = -ENOMEM;
>  				goto put_map;
> +			}
>  
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(before, fp);
>  			map__put(before);
>  		}
>  
> -		if (map->end < pos->map->end) {
> +		if (map__end(map) < map__end(pos->map)) {
>  			struct map *after = map__clone(pos->map);
>  
>  			if (after == NULL) {
> @@ -346,14 +350,15 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  				goto put_map;
>  			}
>  
> -			after->start = map->end;
> -			after->pgoff += map->end - pos->map->start;
> -			assert(pos->map->map_ip(pos->map, map->end) ==
> -				after->map_ip(after, map->end));
> -			err = __maps__insert(maps, after);
> -			if (err)
> +			after->start = map__end(map);
> +			after->pgoff += map__end(map) - map__start(pos->map);
> +			assert(map__map_ip(pos->map, map__end(map)) ==
> +				map__map_ip(after, map__end(map)));
> +			if (!__maps__insert(maps, after)) {
> +				map__put(after);
> +				err = -ENOMEM;
>  				goto put_map;
> -
> +			}
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(after, fp);
>  			map__put(after);
> @@ -377,7 +382,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  int maps__clone(struct thread *thread, struct maps *parent)
>  {
>  	struct maps *maps = thread->maps;
> -	int err;
> +	int err = 0;
>  	struct map_rb_node *rb_node;
>  
>  	down_read(maps__lock(parent));
> @@ -391,17 +396,13 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  		}
>  
>  		err = unwind__prepare_access(maps, new, NULL);
> -		if (err)
> -			goto out_unlock;
> +		if (!err)
> +			err = maps__insert(maps, new);
>  
> -		err = maps__insert(maps, new);
> +		map__put(new);
>  		if (err)
>  			goto out_unlock;
> -
> -		map__put(new);
>  	}
> -
> -	err = 0;
>  out_unlock:
>  	up_read(maps__lock(parent));
>  	return err;
> @@ -428,9 +429,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
>  	p = maps__entries(maps)->rb_node;
>  	while (p != NULL) {
>  		m = rb_entry(p, struct map_rb_node, rb_node);
> -		if (ip < m->map->start)
> +		if (ip < map__start(m->map))
>  			p = p->rb_left;
> -		else if (ip >= m->map->end)
> +		else if (ip >= map__end(m->map))
>  			p = p->rb_right;
>  		else
>  			goto out;
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index f9fbf611f2bf..1a93dca50a4c 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -134,15 +134,15 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
>  	/* ref_reloc_sym is just a label. Need a special fix*/
>  	reloc_sym = kernel_get_ref_reloc_sym(&map);
>  	if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
> -		*addr = (!map->reloc || reloc) ? reloc_sym->addr :
> +		*addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
>  			reloc_sym->unrelocated_addr;
>  	else {
>  		sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
>  		if (!sym)
>  			return -ENOENT;
> -		*addr = map->unmap_ip(map, sym->start) -
> -			((reloc) ? 0 : map->reloc) -
> -			((reladdr) ? map->start : 0);
> +		*addr = map__unmap_ip(map, sym->start) -
> +			((reloc) ? 0 : map__reloc(map)) -
> +			((reladdr) ? map__start(map) : 0);
>  	}
>  	return 0;
>  }
> @@ -164,8 +164,8 @@ static struct map *kernel_get_module_map(const char *module)
>  
>  	maps__for_each_entry(maps, pos) {
>  		/* short_name is "[module]" */
> -		const char *short_name = pos->map->dso->short_name;
> -		u16 short_name_len =  pos->map->dso->short_name_len;
> +		const char *short_name = map__dso(pos->map)->short_name;
> +		u16 short_name_len =  map__dso(pos->map)->short_name_len;
>  
>  		if (strncmp(short_name + 1, module,
>  			    short_name_len - 2) == 0 &&
> @@ -183,11 +183,11 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
>  		struct map *map;
>  
>  		map = dso__new_map(target);
> -		if (map && map->dso) {
> -			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
> -			nsinfo__put(map->dso->nsinfo);
> -			map->dso->nsinfo = nsinfo__get(nsi);
> -			pthread_mutex_unlock(&map->dso->lock);
> +		if (map && map__dso(map)) {
> +			BUG_ON(pthread_mutex_lock(&map__dso(map)->lock) != 0);
> +			nsinfo__put(map__dso(map)->nsinfo);
> +			map__dso(map)->nsinfo = nsinfo__get(nsi);
> +			pthread_mutex_unlock(&map__dso(map)->lock);
>  		}
>  		return map;
>  	} else {
> @@ -253,7 +253,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
>  
>  	map = kernel_get_module_map(NULL);
>  	if (map) {
> -		ret = address <= map->start || map->end < address;
> +		ret = address <= map__start(map) || map__end(map) < address;
>  		if (ret)
>  			pr_warning("%s is out of .text, skip it.\n", symbol);
>  		map__put(map);
> @@ -340,7 +340,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
>  		snprintf(module_name, sizeof(module_name), "[%s]", module);
>  		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
>  		if (map) {
> -			dso = map->dso;
> +			dso = map__dso(map);
>  			goto found;
>  		}
>  		pr_debug("Failed to find module %s.\n", module);
> @@ -348,7 +348,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
>  	}
>  
>  	map = machine__kernel_map(host_machine);
> -	dso = map->dso;
> +	dso = map__dso(map);
>  	if (!dso->has_build_id)
>  		dso__read_running_kernel_build_id(dso, host_machine);
>  
> @@ -396,7 +396,8 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
>  					   "Consider identifying the final function used at run time and set the probe directly on that.\n",
>  					   pp->function);
>  		} else
> -			address = map->unmap_ip(map, sym->start) - map->reloc;
> +			address = map__unmap_ip(map, sym->start) -
> +				  map__reloc(map);
>  		break;
>  	}
>  	if (!address) {
> @@ -862,8 +863,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
>  			free(tevs[i].point.symbol);
>  		tevs[i].point.symbol = tmp;
>  		tevs[i].point.offset = tevs[i].point.address -
> -			(map->reloc ? reloc_sym->unrelocated_addr :
> -				      reloc_sym->addr);
> +			(map__reloc(map) ? reloc_sym->unrelocated_addr : reloc_sym->addr);
>  	}
>  	return skipped;
>  }
> @@ -2243,7 +2243,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
>  		goto out;
>  
>  	pp->retprobe = tp->retprobe;
> -	pp->offset = addr - map->unmap_ip(map, sym->start);
> +	pp->offset = addr - map__unmap_ip(map, sym->start);
>  	pp->function = strdup(sym->name);
>  	ret = pp->function ? 0 : -ENOMEM;
>  
> @@ -3117,7 +3117,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
>  			goto err_out;
>  		}
>  		/* Add one probe point */
> -		tp->address = map->unmap_ip(map, sym->start) + pp->offset;
> +		tp->address = map__unmap_ip(map, sym->start) + pp->offset;
>  
>  		/* Check the kprobe (not in module) is within .text  */
>  		if (!pev->uprobes && !pev->target &&
> @@ -3759,13 +3759,13 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
>  			       (target) ? : "kernel");
>  		goto end;
>  	}
> -	if (!dso__sorted_by_name(map->dso))
> -		dso__sort_by_name(map->dso);
> +	if (!dso__sorted_by_name(map__dso(map)))
> +		dso__sort_by_name(map__dso(map));
>  
>  	/* Show all (filtered) symbols */
>  	setup_pager();
>  
> -	for (nd = rb_first_cached(&map->dso->symbol_names); nd;
> +	for (nd = rb_first_cached(&map__dso(map)->symbol_names); nd;
>  	     nd = rb_next(nd)) {
>  		struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
>  
> diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
> index a5d945415bbc..1282fb9b45e1 100644
> --- a/tools/perf/util/scripting-engines/trace-event-perl.c
> +++ b/tools/perf/util/scripting-engines/trace-event-perl.c
> @@ -315,11 +315,12 @@ static SV *perl_process_callchain(struct perf_sample *sample,
>  		if (node->ms.map) {
>  			struct map *map = node->ms.map;
>  			const char *dsoname = "[unknown]";
> -			if (map && map->dso) {
> -				if (symbol_conf.show_kernel_path && map->dso->long_name)
> -					dsoname = map->dso->long_name;
> +			if (map && map__dso(map)) {
> +				if (symbol_conf.show_kernel_path &&
> +				    map__dso(map)->long_name)
> +					dsoname = map__dso(map)->long_name;
>  				else
> -					dsoname = map->dso->name;
> +					dsoname = map__dso(map)->name;
>  			}
>  			if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
>  				hv_undef(elem);
> diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> index 0290dc3a6258..559b2ac5cac3 100644
> --- a/tools/perf/util/scripting-engines/trace-event-python.c
> +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> @@ -382,11 +382,11 @@ static const char *get_dsoname(struct map *map)
>  {
>  	const char *dsoname = "[unknown]";
>  
> -	if (map && map->dso) {
> -		if (symbol_conf.show_kernel_path && map->dso->long_name)
> -			dsoname = map->dso->long_name;
> +	if (map && map__dso(map)) {
> +		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> +			dsoname = map__dso(map)->long_name;
>  		else
> -			dsoname = map->dso->name;
> +			dsoname = map__dso(map)->name;
>  	}
>  
>  	return dsoname;
> @@ -527,7 +527,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
>  	if (al->addr < sym->end)
>  		offset = al->addr - sym->start;
>  	else
> -		offset = al->addr - al->map->start - sym->start;
> +		offset = al->addr - map__start(al->map) - sym->start;
>  
>  	return offset;
>  }
> @@ -741,7 +741,7 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
>  {
>  	if (al->map) {
>  		pydict_set_item_string_decref(dict, dso_field,
> -			_PyUnicode_FromString(al->map->dso->name));
> +			_PyUnicode_FromString(map__dso(al->map)->name));
>  	}
>  	if (al->sym) {
>  		pydict_set_item_string_decref(dict, sym_field,
> diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> index 25686d67ee6f..6d19bbcd30df 100644
> --- a/tools/perf/util/sort.c
> +++ b/tools/perf/util/sort.c
> @@ -173,8 +173,8 @@ struct sort_entry sort_comm = {
>  
>  static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
>  {
> -	struct dso *dso_l = map_l ? map_l->dso : NULL;
> -	struct dso *dso_r = map_r ? map_r->dso : NULL;
> +	struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
> +	struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
>  	const char *dso_name_l, *dso_name_r;
>  
>  	if (!dso_l || !dso_r)
> @@ -200,9 +200,9 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
>  static int _hist_entry__dso_snprintf(struct map *map, char *bf,
>  				     size_t size, unsigned int width)
>  {
> -	if (map && map->dso) {
> -		const char *dso_name = verbose > 0 ? map->dso->long_name :
> -			map->dso->short_name;
> +	if (map && map__dso(map)) {
> +		const char *dso_name = verbose > 0 ? map__dso(map)->long_name :
> +			map__dso(map)->short_name;
>  		return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
>  	}
>  
> @@ -222,7 +222,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
>  	if (type != HIST_FILTER__DSO)
>  		return -1;
>  
> -	return dso && (!he->ms.map || he->ms.map->dso != dso);
> +	return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
>  }
>  
>  struct sort_entry sort_dso = {
> @@ -302,12 +302,12 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
>  	size_t ret = 0;
>  
>  	if (verbose > 0) {
> -		char o = map ? dso__symtab_origin(map->dso) : '!';
> +		char o = map ? dso__symtab_origin(map__dso(map)) : '!';
>  		u64 rip = ip;
>  
> -		if (map && map->dso && map->dso->kernel
> -		    && map->dso->adjust_symbols)
> -			rip = map->unmap_ip(map, ip);
> +		if (map && map__dso(map) && map__dso(map)->kernel
> +		    && map__dso(map)->adjust_symbols)
> +			rip = map__unmap_ip(map, ip);
>  
>  		ret += repsep_snprintf(bf, size, "%-#*llx %c ",
>  				       BITS_PER_LONG / 4 + 2, rip, o);
> @@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
>  		if (sym->type == STT_OBJECT) {
>  			ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
>  			ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
> -					ip - map->unmap_ip(map, sym->start));
> +					ip - map__unmap_ip(map, sym->start));
>  		} else {
>  			ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
>  					       width - ret,
> @@ -517,7 +517,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
>  	if (!map)
>  		return no_srcfile;
>  
> -	sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
> +	sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
>  			 e->ms.sym, false, true, true, e->ip);
>  	if (!strcmp(sf, SRCLINE_UNKNOWN))
>  		return no_srcfile;
> @@ -838,7 +838,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
>  		return -1;
>  
>  	return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
> -		       he->branch_info->from.ms.map->dso != dso);
> +		map__dso(he->branch_info->from.ms.map) != dso);
>  }
>  
>  static int64_t
> @@ -870,7 +870,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
>  		return -1;
>  
>  	return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
> -		       he->branch_info->to.ms.map->dso != dso);
> +		map__dso(he->branch_info->to.ms.map) != dso);
>  }
>  
>  static int64_t
> @@ -1259,7 +1259,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
>  	if (!l_map) return -1;
>  	if (!r_map) return 1;
>  
> -	rc = dso__cmp_id(l_map->dso, r_map->dso);
> +	rc = dso__cmp_id(map__dso(l_map), map__dso(r_map));
>  	if (rc)
>  		return rc;
>  	/*
> @@ -1271,9 +1271,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
>  	 */
>  
>  	if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
> -	    (!(l_map->flags & MAP_SHARED)) &&
> -	    !l_map->dso->id.maj && !l_map->dso->id.min &&
> -	    !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
> +	    (!(map__flags(l_map) & MAP_SHARED)) &&
> +	    !map__dso(l_map)->id.maj && !map__dso(l_map)->id.min &&
> +	    !map__dso(l_map)->id.ino && !map__dso(l_map)->id.ino_generation) {
>  		/* userspace anonymous */
>  
>  		if (left->thread->pid_ > right->thread->pid_) return -1;
> @@ -1307,10 +1307,10 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
>  
>  		/* print [s] for shared data mmaps */
>  		if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
> -		     map && !(map->prot & PROT_EXEC) &&
> -		    (map->flags & MAP_SHARED) &&
> -		    (map->dso->id.maj || map->dso->id.min ||
> -		     map->dso->id.ino || map->dso->id.ino_generation))
> +		    map && !(map__prot(map) & PROT_EXEC) &&
> +		    (map__flags(map) & MAP_SHARED) &&
> +		    (map__dso(map)->id.maj || map__dso(map)->id.min ||
> +		     map__dso(map)->id.ino || map__dso(map)->id.ino_generation))
>  			level = 's';
>  		else if (!map)
>  			level = 'X';
> @@ -1806,7 +1806,7 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
>  static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
>  					  size_t bf_size, unsigned int width)
>  {
> -	if (map && map->dso)
> +	if (map && map__dso(map))
>  		return repsep_snprintf(bf, bf_size, "%*d", width,
>  				       map__size(map));
>  
> diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> index 3ca9a0968345..056405d3d655 100644
> --- a/tools/perf/util/symbol-elf.c
> +++ b/tools/perf/util/symbol-elf.c
> @@ -970,7 +970,7 @@ void __weak arch__sym_update(struct symbol *s __maybe_unused,
>  static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  				      GElf_Sym *sym, GElf_Shdr *shdr,
>  				      struct maps *kmaps, struct kmap *kmap,
> -				      struct dso **curr_dsop, struct map **curr_mapp,
> +				      struct dso **curr_dsop,
>  				      const char *section_name,
>  				      bool adjust_kernel_syms, bool kmodule, bool *remap_kernel)
>  {
> @@ -994,18 +994,18 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		if (*remap_kernel && dso->kernel && !kmodule) {
>  			*remap_kernel = false;
>  			map->start = shdr->sh_addr + ref_reloc(kmap);
> -			map->end = map->start + shdr->sh_size;
> +			map->end = map__start(map) + shdr->sh_size;
>  			map->pgoff = shdr->sh_offset;
> -			map->map_ip = map__map_ip;
> -			map->unmap_ip = map__unmap_ip;
> +			map->map_ip = map__dso_map_ip;
> +			map->unmap_ip = map__dso_unmap_ip;
>  			/* Ensure maps are correctly ordered */
>  			if (kmaps) {
>  				int err;
> +				struct map *updated = map__get(map);
>  
> -				map__get(map);
>  				maps__remove(kmaps, map);
> -				err = maps__insert(kmaps, map);
> -				map__put(map);
> +				err = maps__insert(kmaps, updated);
> +				map__put(updated);
>  				if (err)
>  					return err;
>  			}
> @@ -1021,7 +1021,6 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  			map->pgoff = shdr->sh_offset;
>  		}
>  
> -		*curr_mapp = map;
>  		*curr_dsop = dso;
>  		return 0;
>  	}
> @@ -1036,7 +1035,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		u64 start = sym->st_value;
>  
>  		if (kmodule)
> -			start += map->start + shdr->sh_offset;
> +			start += map__start(map) + shdr->sh_offset;
>  
>  		curr_dso = dso__new(dso_name);
>  		if (curr_dso == NULL)
> @@ -1054,10 +1053,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  
>  		if (adjust_kernel_syms) {
>  			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
> -			curr_map->end	 = curr_map->start + shdr->sh_size;
> -			curr_map->pgoff	 = shdr->sh_offset;
> +			curr_map->end	= map__start(curr_map) + shdr->sh_size;
> +			curr_map->pgoff	= shdr->sh_offset;
>  		} else {
> -			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> +			curr_map->map_ip = map__identity_ip;
> +			curr_map->unmap_ip = map__identity_ip;
>  		}
>  		curr_dso->symtab_type = dso->symtab_type;
>  		if (maps__insert(kmaps, curr_map))
> @@ -1068,13 +1068,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		 * *curr_map->dso.
>  		 */
>  		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
> -		/* kmaps already got it */
> -		map__put(curr_map);
>  		dso__set_loaded(curr_dso);
> -		*curr_mapp = curr_map;
>  		*curr_dsop = curr_dso;
> +		map__put(curr_map);
>  	} else
> -		*curr_dsop = curr_map->dso;
> +		*curr_dsop = map__dso(curr_map);
>  
>  	return 0;
>  }
> @@ -1085,7 +1083,6 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  {
>  	struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
>  	struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
> -	struct map *curr_map = map;
>  	struct dso *curr_dso = dso;
>  	Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
>  	uint32_t nr_syms;
> @@ -1175,7 +1172,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  	 * attempted to prelink vdso to its virtual address.
>  	 */
>  	if (dso__is_vdso(dso))
> -		map->reloc = map->start - dso->text_offset;
> +		map->reloc = map__start(map) - dso->text_offset;
>  
>  	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
>  	/*
> @@ -1262,8 +1259,10 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  			--sym.st_value;
>  
>  		if (dso->kernel) {
> -			if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
> -						       section_name, adjust_kernel_syms, kmodule, &remap_kernel))
> +			if (dso__process_kernel_symbol(dso, map, &sym, &shdr,
> +						       kmaps, kmap, &curr_dso,
> +						       section_name, adjust_kernel_syms,
> +						       kmodule, &remap_kernel))
>  				goto out_elf_end;
>  		} else if ((used_opd && runtime_ss->adjust_symbols) ||
>  			   (!used_opd && syms_ss->adjust_symbols)) {
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 9b51e669a722..6289b3028b91 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -252,8 +252,8 @@ void maps__fixup_end(struct maps *maps)
>  	down_write(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, curr) {
> -		if (prev != NULL && !prev->map->end)
> -			prev->map->end = curr->map->start;
> +		if (prev != NULL && !map__end(prev->map))
> +			prev->map->end = map__start(curr->map);
>  
>  		prev = curr;
>  	}
> @@ -262,7 +262,7 @@ void maps__fixup_end(struct maps *maps)
>  	 * We still haven't the actual symbols, so guess the
>  	 * last map final address.
>  	 */
> -	if (curr && !curr->map->end)
> +	if (curr && !map__end(curr->map))
>  		curr->map->end = ~0ULL;
>  
>  	up_write(maps__lock(maps));
> @@ -778,12 +778,12 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
>  			continue;
>  		}
>  
> -		pos->start -= curr_map->start - curr_map->pgoff;
> -		if (pos->end > curr_map->end)
> -			pos->end = curr_map->end;
> +		pos->start -= map__start(curr_map) - map__pgoff(curr_map);
> +		if (pos->end > map__end(curr_map))
> +			pos->end = map__end(curr_map);
>  		if (pos->end)
> -			pos->end -= curr_map->start - curr_map->pgoff;
> -		symbols__insert(&curr_map->dso->symbols, pos);
> +			pos->end -= map__start(curr_map) - map__pgoff(curr_map);
> +		symbols__insert(&map__dso(curr_map)->symbols, pos);
>  		++count;
>  	}
>  
> @@ -830,7 +830,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  
>  			*module++ = '\0';
>  
> -			if (strcmp(curr_map->dso->short_name, module)) {
> +			if (strcmp(map__dso(curr_map)->short_name, module)) {
>  				if (curr_map != initial_map &&
>  				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
>  				    machine__is_default_guest(machine)) {
> @@ -841,7 +841,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  					 * symbols are in its kmap. Mark it as
>  					 * loaded.
>  					 */
> -					dso__set_loaded(curr_map->dso);
> +					dso__set_loaded(map__dso(curr_map));
>  				}
>  
>  				curr_map = maps__find_by_name(kmaps, module);
> @@ -854,7 +854,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  					goto discard_symbol;
>  				}
>  
> -				if (curr_map->dso->loaded &&
> +				if (map__dso(curr_map)->loaded &&
>  				    !machine__is_default_guest(machine))
>  					goto discard_symbol;
>  			}
> @@ -862,8 +862,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  			 * So that we look just like we get from .ko files,
>  			 * i.e. not prelinked, relative to initial_map->start.
>  			 */
> -			pos->start = curr_map->map_ip(curr_map, pos->start);
> -			pos->end   = curr_map->map_ip(curr_map, pos->end);
> +			pos->start = map__map_ip(curr_map, pos->start);
> +			pos->end   = map__map_ip(curr_map, pos->end);
>  		} else if (x86_64 && is_entry_trampoline(pos->name)) {
>  			/*
>  			 * These symbols are not needed anymore since the
> @@ -910,7 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  				return -1;
>  			}
>  
> -			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> +			curr_map->map_ip = map__identity_ip;
> +			curr_map->unmap_ip = map__identity_ip;
>  			if (maps__insert(kmaps, curr_map)) {
>  				dso__put(ndso);
>  				return -1;
> @@ -924,7 +925,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  add_symbol:
>  		if (curr_map != initial_map) {
>  			rb_erase_cached(&pos->rb_node, root);
> -			symbols__insert(&curr_map->dso->symbols, pos);
> +			symbols__insert(&map__dso(curr_map)->symbols, pos);
>  			++moved;
>  		} else
>  			++count;
> @@ -938,7 +939,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  	if (curr_map != initial_map &&
>  	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
>  	    machine__is_default_guest(maps__machine(kmaps))) {
> -		dso__set_loaded(curr_map->dso);
> +		dso__set_loaded(map__dso(curr_map));
>  	}
>  
>  	return count + moved;
> @@ -1118,8 +1119,8 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
>  		}
>  
>  		/* Module must be in memory at the same address */
> -		mi = find_module(old_map->dso->short_name, &modules);
> -		if (!mi || mi->start != old_map->start) {
> +		mi = find_module(map__dso(old_map)->short_name, &modules);
> +		if (!mi || mi->start != map__start(old_map)) {
>  			err = -EINVAL;
>  			goto out;
>  		}
> @@ -1214,7 +1215,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
>  		return -ENOMEM;
>  	}
>  
> -	list_node->map->end = list_node->map->start + len;
> +	list_node->map->end = map__start(list_node->map) + len;
>  	list_node->map->pgoff = pgoff;
>  
>  	list_add(&list_node->node, &md->maps);
> @@ -1236,21 +1237,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  		struct map *old_map = rb_node->map;
>  
>  		/* no overload with this one */
> -		if (new_map->end < old_map->start ||
> -		    new_map->start >= old_map->end)
> +		if (map__end(new_map) < map__start(old_map) ||
> +		    map__start(new_map) >= map__end(old_map))
>  			continue;
>  
> -		if (new_map->start < old_map->start) {
> +		if (map__start(new_map) < map__start(old_map)) {
>  			/*
>  			 * |new......
>  			 *       |old....
>  			 */
> -			if (new_map->end < old_map->end) {
> +			if (map__end(new_map) < map__end(old_map)) {
>  				/*
>  				 * |new......|     -> |new..|
>  				 *       |old....| ->       |old....|
>  				 */
> -				new_map->end = old_map->start;
> +				new_map->end = map__start(old_map);
>  			} else {
>  				/*
>  				 * |new.............| -> |new..|       |new..|
> @@ -1271,17 +1272,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  					goto out;
>  				}
>  
> -				m->map->end = old_map->start;
> +				m->map->end = map__start(old_map);
>  				list_add_tail(&m->node, &merged);
> -				new_map->pgoff += old_map->end - new_map->start;
> -				new_map->start = old_map->end;
> +				new_map->pgoff +=
> +					map__end(old_map) - map__start(new_map);
> +				new_map->start = map__end(old_map);
>  			}
>  		} else {
>  			/*
>  			 *      |new......
>  			 * |old....
>  			 */
> -			if (new_map->end < old_map->end) {
> +			if (map__end(new_map) < map__end(old_map)) {
>  				/*
>  				 *      |new..|   -> x
>  				 * |old.........| -> |old.........|
> @@ -1294,8 +1296,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  				 *      |new......| ->         |new...|
>  				 * |old....|        -> |old....|
>  				 */
> -				new_map->pgoff += old_map->end - new_map->start;
> -				new_map->start = old_map->end;
> +				new_map->pgoff +=
> +					map__end(old_map) - map__start(new_map);
> +				new_map->start = map__end(old_map);
>  			}
>  		}
>  	}
> @@ -1361,7 +1364,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  	}
>  
>  	/* Read new maps into temporary lists */
> -	err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
> +	err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
>  			      &is_64_bit);
>  	if (err)
>  		goto out_err;
> @@ -1391,7 +1394,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  		struct map_list_node *new_node;
>  
>  		list_for_each_entry(new_node, &md.maps, node) {
> -			if (stext >= new_node->map->start && stext < new_node->map->end) {
> +			if (stext >= map__start(new_node->map) &&
> +			    stext < map__end(new_node->map)) {
>  				replacement_map = new_node->map;
>  				break;
>  			}
> @@ -1408,16 +1412,18 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  		new_node = list_entry(md.maps.next, struct map_list_node, node);
>  		list_del_init(&new_node->node);
>  		if (new_node->map == replacement_map) {
> -			map->start	= new_node->map->start;
> -			map->end	= new_node->map->end;
> -			map->pgoff	= new_node->map->pgoff;
> -			map->map_ip	= new_node->map->map_ip;
> -			map->unmap_ip	= new_node->map->unmap_ip;
> +			struct  map *updated;
> +
> +			map->start = map__start(new_node->map);
> +			map->end   = map__end(new_node->map);
> +			map->pgoff = map__pgoff(new_node->map);
> +			map->map_ip = new_node->map->map_ip;
> +			map->unmap_ip = new_node->map->unmap_ip;
>  			/* Ensure maps are correctly ordered */
> -			map__get(map);
> +			updated = map__get(map);
>  			maps__remove(kmaps, map);
> -			err = maps__insert(kmaps, map);
> -			map__put(map);
> +			err = maps__insert(kmaps, updated);
> +			map__put(updated);
>  			map__put(new_node->map);
>  			if (err)
>  				goto out_err;
> @@ -1460,7 +1466,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  
>  	close(fd);
>  
> -	if (map->prot & PROT_EXEC)
> +	if (map__prot(map) & PROT_EXEC)
>  		pr_debug("Using %s for kernel object code\n", kcore_filename);
>  	else
>  		pr_debug("Using %s for kernel data\n", kcore_filename);
> @@ -1995,13 +2001,13 @@ int dso__load(struct dso *dso, struct map *map)
>  static int map__strcmp(const void *a, const void *b)
>  {
>  	const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
> -	return strcmp(ma->dso->short_name, mb->dso->short_name);
> +	return strcmp(map__dso(ma)->short_name, map__dso(mb)->short_name);
>  }
>  
>  static int map__strcmp_name(const void *name, const void *b)
>  {
>  	const struct map *map = *(const struct map **)b;
> -	return strcmp(name, map->dso->short_name);
> +	return strcmp(name, map__dso(map)->short_name);
>  }
>  
>  void __maps__sort_by_name(struct maps *maps)
> @@ -2052,7 +2058,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	down_read(maps__lock(maps));
>  
>  	if (maps->last_search_by_name &&
> -	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
> +	    strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
>  		map = maps->last_search_by_name;
>  		goto out_unlock;
>  	}
> @@ -2068,7 +2074,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	/* Fallback to traversing the rbtree... */
>  	maps__for_each_entry(maps, rb_node) {
>  		map = rb_node->map;
> -		if (strcmp(map->dso->short_name, name) == 0) {
> +		if (strcmp(map__dso(map)->short_name, name) == 0) {
>  			maps->last_search_by_name = map;
>  			goto out_unlock;
>  		}
> diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
> index 2664fb65e47a..d9e5ad040b6a 100644
> --- a/tools/perf/util/symbol_fprintf.c
> +++ b/tools/perf/util/symbol_fprintf.c
> @@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
>  			if (al->addr < sym->end)
>  				offset = al->addr - sym->start;
>  			else
> -				offset = al->addr - al->map->start - sym->start;
> +				offset = al->addr - map__start(al->map) - sym->start;
>  			length += fprintf(fp, "+0x%lx", offset);
>  		}
>  		return length;
> diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> index ed2d55d224aa..437fd57c2084 100644
> --- a/tools/perf/util/synthetic-events.c
> +++ b/tools/perf/util/synthetic-events.c
> @@ -668,33 +668,33 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
>  			continue;
>  
>  		if (symbol_conf.buildid_mmap2) {
> -			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
>  			event->mmap2.header.type = PERF_RECORD_MMAP2;
>  			event->mmap2.header.size = (sizeof(event->mmap2) -
>  						(sizeof(event->mmap2.filename) - size));
>  			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
>  			event->mmap2.header.size += machine->id_hdr_size;
> -			event->mmap2.start = map->start;
> -			event->mmap2.len   = map->end - map->start;
> +			event->mmap2.start = map__start(map);
> +			event->mmap2.len   = map__end(map) - map__start(map);
>  			event->mmap2.pid   = machine->pid;
>  
> -			memcpy(event->mmap2.filename, map->dso->long_name,
> -			       map->dso->long_name_len + 1);
> +			memcpy(event->mmap2.filename, map__dso(map)->long_name,
> +			       map__dso(map)->long_name_len + 1);
>  
>  			perf_record_mmap2__read_build_id(&event->mmap2, false);
>  		} else {
> -			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
>  			event->mmap.header.type = PERF_RECORD_MMAP;
>  			event->mmap.header.size = (sizeof(event->mmap) -
>  						(sizeof(event->mmap.filename) - size));
>  			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
>  			event->mmap.header.size += machine->id_hdr_size;
> -			event->mmap.start = map->start;
> -			event->mmap.len   = map->end - map->start;
> +			event->mmap.start = map__start(map);
> +			event->mmap.len   = map__end(map) - map__start(map);
>  			event->mmap.pid   = machine->pid;
>  
> -			memcpy(event->mmap.filename, map->dso->long_name,
> -			       map->dso->long_name_len + 1);
> +			memcpy(event->mmap.filename, map__dso(map)->long_name,
> +			       map__dso(map)->long_name_len + 1);
>  		}
>  
>  		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> @@ -1112,8 +1112,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
>  		event->mmap2.header.size = (sizeof(event->mmap2) -
>  				(sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
>  		event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
> -		event->mmap2.start = map->start;
> -		event->mmap2.len   = map->end - event->mmap.start;
> +		event->mmap2.start = map__start(map);
> +		event->mmap2.len   = map__end(map) - event->mmap.start;
>  		event->mmap2.pid   = machine->pid;
>  
>  		perf_record_mmap2__read_build_id(&event->mmap2, true);
> @@ -1125,8 +1125,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
>  		event->mmap.header.size = (sizeof(event->mmap) -
>  				(sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
>  		event->mmap.pgoff = kmap->ref_reloc_sym->addr;
> -		event->mmap.start = map->start;
> -		event->mmap.len   = map->end - event->mmap.start;
> +		event->mmap.start = map__start(map);
> +		event->mmap.len   = map__end(map) - event->mmap.start;
>  		event->mmap.pid   = machine->pid;
>  	}
>  
> diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> index c2256777b813..6fbcc115cc6d 100644
> --- a/tools/perf/util/thread.c
> +++ b/tools/perf/util/thread.c
> @@ -434,23 +434,23 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
>  int thread__memcpy(struct thread *thread, struct machine *machine,
>  		   void *buf, u64 ip, int len, bool *is64bit)
>  {
> -       u8 cpumode = PERF_RECORD_MISC_USER;
> -       struct addr_location al;
> -       long offset;
> +	u8 cpumode = PERF_RECORD_MISC_USER;
> +	struct addr_location al;
> +	long offset;
>  
> -       if (machine__kernel_ip(machine, ip))
> -               cpumode = PERF_RECORD_MISC_KERNEL;
> +	if (machine__kernel_ip(machine, ip))
> +		cpumode = PERF_RECORD_MISC_KERNEL;
>  
> -       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
> -	   al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
> -	   map__load(al.map) < 0)
> -               return -1;
> +	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
> +		map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR ||
> +		map__load(al.map) < 0)
> +		return -1;
>  
> -       offset = al.map->map_ip(al.map, ip);
> -       if (is64bit)
> -               *is64bit = al.map->dso->is_64_bit;
> +	offset = map__map_ip(al.map, ip);
> +	if (is64bit)
> +		*is64bit = map__dso(al.map)->is_64_bit;
>  
> -       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
> +	return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
>  }
>  
>  void thread__free_stitch_list(struct thread *thread)
> diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
> index 7e6c59811292..841ac84a93ab 100644
> --- a/tools/perf/util/unwind-libunwind-local.c
> +++ b/tools/perf/util/unwind-libunwind-local.c
> @@ -381,20 +381,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
>  	int ret = -EINVAL;
>  
>  	map = find_map(ip, ui);
> -	if (!map || !map->dso)
> +	if (!map || !map__dso(map))
>  		return -EINVAL;
>  
> -	pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
> +	pr_debug("unwind: %s dso %s\n", __func__, map__dso(map)->name);
>  
>  	/* Check the .eh_frame section for unwinding info */
> -	if (!read_unwind_spec_eh_frame(map->dso, ui->machine,
> +	if (!read_unwind_spec_eh_frame(map__dso(map), ui->machine,
>  				       &table_data, &segbase, &fde_count)) {
>  		memset(&di, 0, sizeof(di));
>  		di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
> -		di.start_ip = map->start;
> -		di.end_ip   = map->end;
> -		di.u.rti.segbase    = map->start + segbase - map->pgoff;
> -		di.u.rti.table_data = map->start + table_data - map->pgoff;
> +		di.start_ip = map__start(map);
> +		di.end_ip   = map__end(map);
> +		di.u.rti.segbase    = map__start(map) + segbase - map__pgoff(map);
> +		di.u.rti.table_data = map__start(map) + table_data - map__pgoff(map);
>  		di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
>  				      / sizeof(unw_word_t);
>  		ret = dwarf_search_unwind_table(as, ip, &di, pi,
> @@ -404,20 +404,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
>  #ifndef NO_LIBUNWIND_DEBUG_FRAME
>  	/* Check the .debug_frame section for unwinding info */
>  	if (ret < 0 &&
> -	    !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
> -		int fd = dso__data_get_fd(map->dso, ui->machine);
> -		int is_exec = elf_is_exec(fd, map->dso->name);
> -		unw_word_t base = is_exec ? 0 : map->start;
> +	    !read_unwind_spec_debug_frame(map__dso(map), ui->machine, &segbase)) {
> +		int fd = dso__data_get_fd(map__dso(map), ui->machine);
> +		int is_exec = elf_is_exec(fd, map__dso(map)->name);
> +		unw_word_t base = is_exec ? 0 : map__start(map);
>  		const char *symfile;
>  
>  		if (fd >= 0)
> -			dso__data_put_fd(map->dso);
> +			dso__data_put_fd(map__dso(map));
>  
> -		symfile = map->dso->symsrc_filename ?: map->dso->name;
> +		symfile = map__dso(map)->symsrc_filename ?: map__dso(map)->name;
>  
>  		memset(&di, 0, sizeof(di));
>  		if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
> -					   map->start, map->end))
> +					   map__start(map), map__end(map)))
>  			return dwarf_search_unwind_table(as, ip, &di, pi,
>  							 need_unwind_info, arg);
>  	}
> @@ -473,10 +473,10 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
>  		return -1;
>  	}
>  
> -	if (!map->dso)
> +	if (!map__dso(map))
>  		return -1;
>  
> -	size = dso__data_read_addr(map->dso, map, ui->machine,
> +	size = dso__data_read_addr(map__dso(map), map, ui->machine,
>  				   addr, (u8 *) data, sizeof(*data));
>  
>  	return !(size == sizeof(*data));
> @@ -583,7 +583,7 @@ static int entry(u64 ip, struct thread *thread,
>  	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
>  		 al.sym ? al.sym->name : "''",
>  		 ip,
> -		 al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
> +		 al.map ? map__map_ip(al.map, ip) : (u64) 0);
>  
>  	return cb(&e, arg);
>  }
> diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
> index 7b797ffadd19..cece1ee89031 100644
> --- a/tools/perf/util/unwind-libunwind.c
> +++ b/tools/perf/util/unwind-libunwind.c
> @@ -30,7 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  
>  	if (maps__addr_space(maps)) {
>  		pr_debug("unwind: thread map already set, dso=%s\n",
> -			 map->dso->name);
> +			 map__dso(map)->name);
>  		if (initialized)
>  			*initialized = true;
>  		return 0;
> @@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  	if (!machine->env || !machine->env->arch)
>  		goto out_register;
>  
> -	dso_type = dso__type(map->dso, machine);
> +	dso_type = dso__type(map__dso(map), machine);
>  	if (dso_type == DSO__TYPE_UNKNOWN)
>  		return 0;
>  
> diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> index 835c39efb80d..ec777ee11493 100644
> --- a/tools/perf/util/vdso.c
> +++ b/tools/perf/util/vdso.c
> @@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
>  	struct map_rb_node *rb_node;
>  
>  	maps__for_each_entry(thread->maps, rb_node) {
> -		struct dso *dso = rb_node->map->dso;
> +		struct dso *dso = map__dso(rb_node->map);
>  
>  		if (!dso || dso->long_name[0] != '/')
>  			continue;
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 15/22] perf map: Use functions to access the variables in map
  2022-02-11 10:34 ` [PATCH v3 15/22] perf map: Use functions to access the variables in map Ian Rogers
  2022-02-11 17:35   ` Arnaldo Carvalho de Melo
@ 2022-02-11 17:36   ` Arnaldo Carvalho de Melo
  2022-02-11 17:54     ` Ian Rogers
  1 sibling, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 17:36 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:08AM -0800, Ian Rogers wrote:
> The use of functions enables easier reference count
> checking. There are some minor changes to map_ip and unmap_ip to make
> the naming a little clearer. __maps_insert is modified to return the
> inserted map, which simplifies the reference checking
> wrapping. maps__fixup_overlappings has some minor tweaks so that
> puts occur on error paths. dso__process_kernel_symbol has the
> unused curr_mapp argument removed.
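
To make the idea concrete for readers following the thread: the new
accessors amount to thin inline wrappers over the struct members, so
there is a single place to later hook reference-count checking. Below
is a minimal, illustrative sketch only; struct map and struct dso are
reduced to just the fields used here, and the real definitions in
tools/perf/util/ carry many more fields:

#include <stdint.h>

typedef uint64_t u64;

struct dso {
	const char *short_name;
};

struct map {
	u64 start, end, pgoff;
	struct dso *dso;
	/* address translation callbacks, set per map kind */
	u64 (*map_ip)(const struct map *map, u64 addr);
	u64 (*unmap_ip)(const struct map *map, u64 addr);
};

/* Accessors: each one is the single choke point for its member. */
static inline struct dso *map__dso(const struct map *map)
{
	return map->dso;
}

static inline u64 map__start(const struct map *map)
{
	return map->start;
}

static inline u64 map__end(const struct map *map)
{
	return map->end;
}

static inline u64 map__pgoff(const struct map *map)
{
	return map->pgoff;
}

static inline u64 map__size(const struct map *map)
{
	return map__end(map) - map__start(map);
}

static inline u64 map__map_ip(const struct map *map, u64 addr)
{
	return map->map_ip(map, addr);
}

static inline u64 map__unmap_ip(const struct map *map, u64 addr)
{
	return map->unmap_ip(map, addr);
}

A checker can later redefine these helpers (together with map__get()
and map__put()) without touching the call sites again, which is what
makes the mechanical conversions in the diff below worthwhile.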
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/arch/s390/annotate/instructions.c  |   4 +-
>  tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
>  tools/perf/arch/x86/util/event.c              |   6 +-
>  tools/perf/builtin-annotate.c                 |   8 +-
>  tools/perf/builtin-inject.c                   |   8 +-
>  tools/perf/builtin-kallsyms.c                 |   6 +-
>  tools/perf/builtin-kmem.c                     |   4 +-
>  tools/perf/builtin-mem.c                      |   4 +-
>  tools/perf/builtin-report.c                   |  20 +--
>  tools/perf/builtin-script.c                   |  26 ++--
>  tools/perf/builtin-top.c                      |  12 +-
>  tools/perf/builtin-trace.c                    |   2 +-
>  .../scripts/python/Perf-Trace-Util/Context.c  |   7 +-
>  tools/perf/tests/code-reading.c               |  32 ++---
>  tools/perf/tests/hists_common.c               |   4 +-
>  tools/perf/tests/vmlinux-kallsyms.c           |  35 +++---
>  tools/perf/ui/browsers/annotate.c             |   7 +-
>  tools/perf/ui/browsers/hists.c                |  18 +--
>  tools/perf/ui/browsers/map.c                  |   4 +-
>  tools/perf/util/annotate.c                    |  38 +++---
>  tools/perf/util/auxtrace.c                    |   2 +-
>  tools/perf/util/block-info.c                  |   4 +-
>  tools/perf/util/bpf-event.c                   |   8 +-
>  tools/perf/util/build-id.c                    |   2 +-
>  tools/perf/util/callchain.c                   |  10 +-
>  tools/perf/util/data-convert-json.c           |   4 +-
>  tools/perf/util/db-export.c                   |   4 +-
>  tools/perf/util/dlfilter.c                    |  21 ++--
>  tools/perf/util/dso.c                         |   4 +-
>  tools/perf/util/event.c                       |  14 +--
>  tools/perf/util/evsel_fprintf.c               |   4 +-
>  tools/perf/util/hist.c                        |  10 +-
>  tools/perf/util/intel-pt.c                    |  48 +++----
>  tools/perf/util/machine.c                     |  84 +++++++------
>  tools/perf/util/map.c                         | 117 +++++++++---------
>  tools/perf/util/map.h                         |  58 ++++++++-
>  tools/perf/util/maps.c                        |  83 +++++++------
>  tools/perf/util/probe-event.c                 |  44 +++----
>  .../util/scripting-engines/trace-event-perl.c |   9 +-
>  .../scripting-engines/trace-event-python.c    |  12 +-
>  tools/perf/util/sort.c                        |  46 +++----
>  tools/perf/util/symbol-elf.c                  |  39 +++---
>  tools/perf/util/symbol.c                      |  96 +++++++-------
>  tools/perf/util/symbol_fprintf.c              |   2 +-
>  tools/perf/util/synthetic-events.c            |  28 ++---
>  tools/perf/util/thread.c                      |  26 ++--
>  tools/perf/util/unwind-libunwind-local.c      |  34 ++---
>  tools/perf/util/unwind-libunwind.c            |   4 +-
>  tools/perf/util/vdso.c                        |   2 +-
>  49 files changed, 577 insertions(+), 489 deletions(-)
> 
> diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
> index 0e136630659e..740f1a63bc04 100644
> --- a/tools/perf/arch/s390/annotate/instructions.c
> +++ b/tools/perf/arch/s390/annotate/instructions.c
> @@ -39,7 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
>  
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			     ) == ops->target.addr)


This changes nothing, right? Please try not to do this in the v2 of
this patch.

- Arnaldo

>  		ops->target.sym = target.ms.sym;
>  
>  	return 0;
> diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
> index a54dea7c112f..497593be80f2 100644
> --- a/tools/perf/arch/x86/tests/dwarf-unwind.c
> +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
> @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
>  		return -1;
>  	}
>  
> -	stack_size = map->end - sp;
> +	stack_size = map__end(map) - sp;
>  	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
>  
>  	memcpy(buf, (void *) sp, stack_size);
> diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> index 7b6b0c98fb36..c790c682b76e 100644
> --- a/tools/perf/arch/x86/util/event.c
> +++ b/tools/perf/arch/x86/util/event.c
> @@ -57,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  
>  		event->mmap.header.size = size;
>  
> -		event->mmap.start = map->start;
> -		event->mmap.len   = map->end - map->start;
> -		event->mmap.pgoff = map->pgoff;
> +		event->mmap.start = map__start(map);
> +		event->mmap.len   = map__size(map);
> +		event->mmap.pgoff = map__pgoff(map);
>  		event->mmap.pid   = machine->pid;
>  
>  		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
> index 490bb9b8cf17..49d3ae36fd89 100644
> --- a/tools/perf/builtin-annotate.c
> +++ b/tools/perf/builtin-annotate.c
> @@ -199,7 +199,7 @@ static int process_branch_callback(struct evsel *evsel,
>  		return 0;
>  
>  	if (a.map != NULL)
> -		a.map->dso->hit = 1;
> +		map__dso(a.map)->hit = 1;
>  
>  	hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
>  
> @@ -231,9 +231,9 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
>  		 */
>  		if (al->sym != NULL) {
>  			rb_erase_cached(&al->sym->rb_node,
> -				 &al->map->dso->symbols);
> +					&map__dso(al->map)->symbols);
>  			symbol__delete(al->sym);
> -			dso__reset_find_symbol_cache(al->map->dso);
> +			dso__reset_find_symbol_cache(map__dso(al->map));
>  		}
>  		return 0;
>  	}
> @@ -315,7 +315,7 @@ static void hists__find_annotations(struct hists *hists,
>  		struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
>  		struct annotation *notes;
>  
> -		if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
> +		if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
>  			goto find_next;
>  
>  		if (ann->sym_hist_filter &&
> diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> index f7917c390e96..92a9dbc3d4cd 100644
> --- a/tools/perf/builtin-inject.c
> +++ b/tools/perf/builtin-inject.c
> @@ -600,10 +600,10 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
>  	}
>  
>  	if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
> -		if (!al.map->dso->hit) {
> -			al.map->dso->hit = 1;
> -			dso__inject_build_id(al.map->dso, tool, machine,
> -					     sample->cpumode, al.map->flags);
> +		if (!map__dso(al.map)->hit) {
> +			map__dso(al.map)->hit = 1;
> +			dso__inject_build_id(map__dso(al.map), tool, machine,
> +					     sample->cpumode, map__flags(al.map));
>  		}
>  	}
>  
> diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
> index c08ee81529e8..d940b60ce812 100644
> --- a/tools/perf/builtin-kallsyms.c
> +++ b/tools/perf/builtin-kallsyms.c
> @@ -36,8 +36,10 @@ static int __cmd_kallsyms(int argc, const char **argv)
>  		}
>  
>  		printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
> -			symbol->name, map->dso->short_name, map->dso->long_name,
> -			map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
> +			symbol->name, map__dso(map)->short_name,
> +			map__dso(map)->long_name,
> +			map__unmap_ip(map, symbol->start),
> +			map__unmap_ip(map, symbol->end),
>  			symbol->start, symbol->end);
>  	}
>  
> diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
> index 99d7ff9a8eff..d87d9c341a20 100644
> --- a/tools/perf/builtin-kmem.c
> +++ b/tools/perf/builtin-kmem.c
> @@ -410,7 +410,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
>  		if (!caller) {
>  			/* found */
>  			if (node->ms.map)
> -				addr = map__unmap_ip(node->ms.map, node->ip);
> +				addr = map__dso_unmap_ip(node->ms.map, node->ip);
>  			else
>  				addr = node->ip;
>  
> @@ -1012,7 +1012,7 @@ static void __print_slab_result(struct rb_root *root,
>  
>  		if (sym != NULL)
>  			snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
> -				 addr - map->unmap_ip(map, sym->start));
> +				 addr - map__unmap_ip(map, sym->start));
>  		else
>  			snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
>  		printf(" %-34s |", buf);
> diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
> index fcf65a59bea2..d18083f57303 100644
> --- a/tools/perf/builtin-mem.c
> +++ b/tools/perf/builtin-mem.c
> @@ -200,7 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
>  		goto out_put;
>  
>  	if (al.map != NULL)
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	field_sep = symbol_conf.field_sep;
>  	if (field_sep) {
> @@ -241,7 +241,7 @@ dump_raw_samples(struct perf_tool *tool,
>  		symbol_conf.field_sep,
>  		sample->data_src,
>  		symbol_conf.field_sep,
> -		al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
> +		al.map && map__dso(al.map) ? map__dso(al.map)->long_name : "???",
>  		al.sym ? al.sym->name : "???");
>  out_put:
>  	addr_location__put(&al);
> diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> index 57611ef725c3..9b92b2bbd7de 100644
> --- a/tools/perf/builtin-report.c
> +++ b/tools/perf/builtin-report.c
> @@ -304,7 +304,7 @@ static int process_sample_event(struct perf_tool *tool,
>  	}
>  
>  	if (al.map != NULL)
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
>  		hist__account_cycles(sample->branch_stack, &al, sample,
> @@ -579,7 +579,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
>  		return;
>  
>  	if (kernel_map == NULL ||
> -	    (kernel_map->dso->hit &&
> +	    (map__dso(kernel_map)->hit &&
>  	     (kernel_kmap->ref_reloc_sym == NULL ||
>  	      kernel_kmap->ref_reloc_sym->addr == 0))) {
>  		const char *desc =
> @@ -805,13 +805,15 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
>  		struct map *map = rb_node->map;
>  
>  		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
> -				   indent, "", map->start, map->end,
> -				   map->prot & PROT_READ ? 'r' : '-',
> -				   map->prot & PROT_WRITE ? 'w' : '-',
> -				   map->prot & PROT_EXEC ? 'x' : '-',
> -				   map->flags & MAP_SHARED ? 's' : 'p',
> -				   map->pgoff,
> -				   map->dso->id.ino, map->dso->name);
> +				   indent, "",
> +				   map__start(map), map__end(map),
> +				   map__prot(map) & PROT_READ ? 'r' : '-',
> +				   map__prot(map) & PROT_WRITE ? 'w' : '-',
> +				   map__prot(map) & PROT_EXEC ? 'x' : '-',
> +				   map__flags(map) & MAP_SHARED ? 's' : 'p',
> +				   map__pgoff(map),
> +				   map__dso(map)->id.ino,
> +				   map__dso(map)->name);
>  	}
>  
>  	return printed;
> diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
> index abae8184e171..4edfce95e137 100644
> --- a/tools/perf/builtin-script.c
> +++ b/tools/perf/builtin-script.c
> @@ -972,12 +972,12 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
>  		to   = entries[i].to;
>  
>  		if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
> -		    !alf.map->dso->adjust_symbols)
> -			from = map__map_ip(alf.map, from);
> +		    !map__dso(alf.map)->adjust_symbols)
> +			from = map__dso_map_ip(alf.map, from);
>  
>  		if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
> -		    !alt.map->dso->adjust_symbols)
> -			to = map__map_ip(alt.map, to);
> +		    !map__dso(alt.map)->adjust_symbols)
> +			to = map__dso_map_ip(alt.map, to);
>  
>  		printed += fprintf(fp, " 0x%"PRIx64, from);
>  		if (PRINT_FIELD(DSO)) {
> @@ -1039,11 +1039,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
>  		return 0;
>  	}
>  
> -	if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
> +	if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
>  		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
>  		return 0;
>  	}
> -	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
> +	if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR) {
>  		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
>  		return 0;
>  	}
> @@ -1051,11 +1051,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
>  	/* Load maps to ensure dso->is_64_bit has been updated */
>  	map__load(al.map);
>  
> -	offset = al.map->map_ip(al.map, start);
> -	len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
> -				    end - start + MAXINSN);
> +	offset = map__map_ip(al.map, start);
> +	len = dso__data_read_offset(map__dso(al.map), machine, offset,
> +				    (u8 *)buffer, end - start + MAXINSN);
>  
> -	*is64bit = al.map->dso->is_64_bit;
> +	*is64bit = map__dso(al.map)->is_64_bit;
>  	if (len <= 0)
>  		pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
>  			start, end);
> @@ -1070,9 +1070,9 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
>  	int len;
>  	char *srccode;
>  
> -	if (!map || !map->dso)
> +	if (!map || !map__dso(map))
>  		return 0;
> -	srcfile = get_srcline_split(map->dso,
> +	srcfile = get_srcline_split(map__dso(map),
>  				    map__rip_2objdump(map, addr),
>  				    &line);
>  	if (!srcfile)
> @@ -1164,7 +1164,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
>  	if (al.addr < al.sym->end)
>  		off = al.addr - al.sym->start;
>  	else
> -		off = al.addr - al.map->start - al.sym->start;
> +		off = al.addr - map__start(al.map) - al.sym->start;
>  	printed += fprintf(fp, "\t%s", al.sym->name);
>  	if (off)
>  		printed += fprintf(fp, "%+d", off);
> diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
> index 1fc390f136dd..8db1df7bdabe 100644
> --- a/tools/perf/builtin-top.c
> +++ b/tools/perf/builtin-top.c
> @@ -127,8 +127,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
>  	/*
>  	 * We can't annotate with just /proc/kallsyms
>  	 */
> -	if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> -	    !dso__is_kcore(map->dso)) {
> +	if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> +	    !dso__is_kcore(map__dso(map))) {
>  		pr_err("Can't annotate %s: No vmlinux file was found in the "
>  		       "path\n", sym->name);
>  		sleep(1);
> @@ -180,8 +180,9 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
>  		    "Tools:  %s\n\n"
>  		    "Not all samples will be on the annotation output.\n\n"
>  		    "Please report to linux-kernel@vger.kernel.org\n",
> -		    ip, map->dso->long_name, dso__symtab_origin(map->dso),
> -		    map->start, map->end, sym->start, sym->end,
> +		    ip, map__dso(map)->long_name,
> +		    dso__symtab_origin(map__dso(map)),
> +		    map__start(map), map__end(map), sym->start, sym->end,
>  		    sym->binding == STB_GLOBAL ? 'g' :
>  		    sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
>  		    err ? "[unknown]" : uts.machine,
> @@ -810,7 +811,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
>  		    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
>  			if (symbol_conf.vmlinux_name) {
>  				char serr[256];
> -				dso__strerror_load(al.map->dso, serr, sizeof(serr));
> +				dso__strerror_load(map__dso(al.map),
> +						   serr, sizeof(serr));
>  				ui__warning("The %s file can't be used: %s\n%s",
>  					    symbol_conf.vmlinux_name, serr, msg);
>  			} else {
> diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
> index 32844d8a0ea5..0134f24da3e3 100644
> --- a/tools/perf/builtin-trace.c
> +++ b/tools/perf/builtin-trace.c
> @@ -2862,7 +2862,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
>  {
>  
>  	if ((verbose > 0 || print_dso) && al->map)
> -		fprintf(f, "%s@", al->map->dso->long_name);
> +		fprintf(f, "%s@", map__dso(al->map)->long_name);
>  
>  	if ((verbose > 0 || print_sym) && al->sym)
>  		fprintf(f, "%s+0x%" PRIx64, al->sym->name,
> diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> index b64013a87c54..b83b62d33945 100644
> --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> @@ -152,9 +152,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
>  	map = c->al->map;
>  	addr = c->al->addr;
>  
> -	if (map && map->dso)
> -		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> -
> +	if (map && map__dso(map)) {
> +		srcfile = get_srcline_split(map__dso(map),
> +					    map__rip_2objdump(map, addr), &line);
> +	}
>  	if (get_srccode) {
>  		if (srcfile)
>  			srccode = find_sourceline(srcfile, line, &len);
> diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
> index 6eafe36a8704..9cb7d3f577d7 100644
> --- a/tools/perf/tests/code-reading.c
> +++ b/tools/perf/tests/code-reading.c
> @@ -240,7 +240,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  
>  	pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
>  
> -	if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
> +	if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
>  		if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
>  			pr_debug("Hypervisor address can not be resolved - skipping\n");
>  			return 0;
> @@ -250,10 +250,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		return -1;
>  	}
>  
> -	pr_debug("File is: %s\n", al.map->dso->long_name);
> +	pr_debug("File is: %s\n", map__dso(al.map)->long_name);
>  
> -	if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> -	    !dso__is_kcore(al.map->dso)) {
> +	if (map__dso(al.map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> +	    !dso__is_kcore(map__dso(al.map))) {
>  		pr_debug("Unexpected kernel address - skipping\n");
>  		return 0;
>  	}
> @@ -264,11 +264,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		len = BUFSZ;
>  
>  	/* Do not go off the map */
> -	if (addr + len > al.map->end)
> -		len = al.map->end - addr;
> +	if (addr + len > map__end(al.map))
> +		len = map__end(al.map) - addr;
>  
>  	/* Read the object code using perf */
> -	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
> +	ret_len = dso__data_read_offset(map__dso(al.map), maps__machine(thread->maps),
>  					al.addr, buf1, len);
>  	if (ret_len != len) {
>  		pr_debug("dso__data_read_offset failed\n");
> @@ -283,11 +283,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  		return -1;
>  
>  	/* objdump struggles with kcore - try each map only once */
> -	if (dso__is_kcore(al.map->dso)) {
> +	if (dso__is_kcore(map__dso(al.map))) {
>  		size_t d;
>  
>  		for (d = 0; d < state->done_cnt; d++) {
> -			if (state->done[d] == al.map->start) {
> +			if (state->done[d] == map__start(al.map)) {
>  				pr_debug("kcore map tested already");
>  				pr_debug(" - skipping\n");
>  				return 0;
> @@ -297,12 +297,12 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  			pr_debug("Too many kcore maps - skipping\n");
>  			return 0;
>  		}
> -		state->done[state->done_cnt++] = al.map->start;
> +		state->done[state->done_cnt++] = map__start(al.map);
>  	}
>  
> -	objdump_name = al.map->dso->long_name;
> -	if (dso__needs_decompress(al.map->dso)) {
> -		if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
> +	objdump_name = map__dso(al.map)->long_name;
> +	if (dso__needs_decompress(map__dso(al.map))) {
> +		if (dso__decompress_kmodule_path(map__dso(al.map), objdump_name,
>  						 decomp_name,
>  						 sizeof(decomp_name)) < 0) {
>  			pr_debug("decompression failed\n");
> @@ -330,7 +330,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
>  			len -= ret;
>  			if (len) {
>  				pr_debug("Reducing len to %zu\n", len);
> -			} else if (dso__is_kcore(al.map->dso)) {
> +			} else if (dso__is_kcore(map__dso(al.map))) {
>  				/*
>  				 * objdump cannot handle very large segments
>  				 * that may be found in kcore.
> @@ -588,8 +588,8 @@ static int do_test_code_reading(bool try_kcore)
>  		pr_debug("map__load failed\n");
>  		goto out_err;
>  	}
> -	have_vmlinux = dso__is_vmlinux(map->dso);
> -	have_kcore = dso__is_kcore(map->dso);
> +	have_vmlinux = dso__is_vmlinux(map__dso(map));
> +	have_kcore = dso__is_kcore(map__dso(map));
>  
>  	/* 2nd time through we just try kcore */
>  	if (try_kcore && !have_kcore)
> diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
> index 6f34d08b84e5..40eccc659767 100644
> --- a/tools/perf/tests/hists_common.c
> +++ b/tools/perf/tests/hists_common.c
> @@ -181,7 +181,7 @@ void print_hists_in(struct hists *hists)
>  		if (!he->filtered) {
>  			pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
>  				i, thread__comm_str(he->thread),
> -				he->ms.map->dso->short_name,
> +				map__dso(he->ms.map)->short_name,
>  				he->ms.sym->name, he->stat.period);
>  		}
>  
> @@ -208,7 +208,7 @@ void print_hists_out(struct hists *hists)
>  		if (!he->filtered) {
>  			pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
>  				i, thread__comm_str(he->thread), he->thread->tid,
> -				he->ms.map->dso->short_name,
> +				map__dso(he->ms.map)->short_name,
>  				he->ms.sym->name, he->stat.period,
>  				he->stat_acc ? he->stat_acc->period : 0);
>  		}
> diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> index 11a230ee5894..5afab21455f1 100644
> --- a/tools/perf/tests/vmlinux-kallsyms.c
> +++ b/tools/perf/tests/vmlinux-kallsyms.c
> @@ -13,7 +13,7 @@
>  #include "debug.h"
>  #include "machine.h"
>  
> -#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
> +#define UM(x) map__unmap_ip(kallsyms_map, (x))
>  
>  static bool is_ignored_symbol(const char *name, char type)
>  {
> @@ -216,8 +216,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		if (sym->start == sym->end)
>  			continue;
>  
> -		mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
> -		mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
> +		mem_start = map__unmap_ip(vmlinux_map, sym->start);
> +		mem_end = map__unmap_ip(vmlinux_map, sym->end);
>  
>  		first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
>  		pair = first_pair;
> @@ -262,7 +262,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  
>  				continue;
>  			}
> -		} else if (mem_start == kallsyms.vmlinux_map->end) {
> +		} else if (mem_start == map__end(kallsyms.vmlinux_map)) {
>  			/*
>  			 * Ignore aliases to _etext, i.e. to the end of the kernel text area,
>  			 * such as __indirect_thunk_end.
> @@ -294,9 +294,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  		 * so use the short name, less descriptive but the same ("[kernel]" in
>  		 * both cases.
>  		 */
> -		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> -								map->dso->short_name :
> -								map->dso->name));
> +		struct map *pair = maps__find_by_name(kallsyms.kmaps,
> +						map__dso(map)->kernel
> +						? map__dso(map)->short_name
> +						: map__dso(map)->name);
>  		if (pair) {
>  			pair->priv = 1;
>  		} else {
> @@ -313,25 +314,27 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  	maps__for_each_entry(maps, rb_node) {
>  		struct map *pair, *map = rb_node->map;
>  
> -		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
> -		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> +		mem_start = map__unmap_ip(vmlinux_map, map__start(map));
> +		mem_end = map__unmap_ip(vmlinux_map, map__end(map));
>  
>  		pair = maps__find(kallsyms.kmaps, mem_start);
> -		if (pair == NULL || pair->priv)
> +		if (pair == NULL || map__priv(pair))
>  			continue;
>  
> -		if (pair->start == mem_start) {
> +		if (map__start(pair) == mem_start) {
>  			if (!header_printed) {
>  				pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
>  				header_printed = true;
>  			}
>  
>  			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
> -				map->start, map->end, map->pgoff, map->dso->name);
> -			if (mem_end != pair->end)
> +				map__start(map), map__end(map),
> +				map__pgoff(map), map__dso(map)->name);
> +			if (mem_end != map__end(pair))
>  				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
> -					pair->start, pair->end, pair->pgoff);
> -			pr_info(" %s\n", pair->dso->name);
> +					map__start(pair), map__end(pair),
> +					map__pgoff(pair));
> +			pr_info(" %s\n", map__dso(pair)->name);
>  			pair->priv = 1;
>  		}
>  	}
> @@ -343,7 +346,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  	maps__for_each_entry(maps, rb_node) {
>  		struct map *map = rb_node->map;
>  
> -		if (!map->priv) {
> +		if (!map__priv(map)) {
>  			if (!header_printed) {
>  				pr_info("WARN: Maps only in kallsyms:\n");
>  				header_printed = true;
> diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
> index 44ba900828f6..7d51d92302dc 100644
> --- a/tools/perf/ui/browsers/annotate.c
> +++ b/tools/perf/ui/browsers/annotate.c
> @@ -446,7 +446,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
>  static int sym_title(struct symbol *sym, struct map *map, char *title,
>  		     size_t sz, int percent_type)
>  {
> -	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
> +	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
> +			map__dso(map)->long_name,
>  			percent_type_str(percent_type));
>  }
>  
> @@ -971,14 +972,14 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
>  	if (sym == NULL)
>  		return -1;
>  
> -	if (ms->map->dso->annotate_warned)
> +	if (map__dso(ms->map)->annotate_warned)
>  		return -1;
>  
>  	if (not_annotated) {
>  		err = symbol__annotate2(ms, evsel, opts, &browser.arch);
>  		if (err) {
>  			char msg[BUFSIZ];
> -			ms->map->dso->annotate_warned = true;
> +			map__dso(ms->map)->annotate_warned = true;
>  			symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
>  			ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
>  			goto out_free_offsets;
> diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> index 572ff38ceb0f..2241447e9bfb 100644
> --- a/tools/perf/ui/browsers/hists.c
> +++ b/tools/perf/ui/browsers/hists.c
> @@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
>  			return NULL;
>  		}
>  
> -		dso__insert_symbol(map->dso, sym);
> +		dso__insert_symbol(map__dso(map), sym);
>  	}
>  
>  	return sym;
> @@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
>  		 struct map_symbol *ms,
>  		 u64 addr)
>  {
> -	if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
> +	if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
>  		return 0;
>  
>  	if (!ms->sym)
> @@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
>  		ui_helpline__pop();
>  	} else {
>  		ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
> -				   __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
> -		browser->hists->dso_filter = map->dso;
> +				   __map__is_kernel(map)
> +				   ? "the Kernel"
> +				   : map__dso(map)->short_name);
> +		browser->hists->dso_filter = map__dso(map);
>  		perf_hpp__set_elide(HISTC_DSO, true);
>  		pstack__push(browser->pstack, &browser->hists->dso_filter);
>  	}
> @@ -2616,7 +2618,9 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
>  
>  	if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
>  		     browser->hists->dso_filter ? "out of" : "into",
> -		     __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
> +		     __map__is_kernel(map)
> +		     ? "the Kernel"
> +		     : map__dso(map)->short_name) < 0)
>  		return 0;
>  
>  	act->ms.map = map;
> @@ -3091,8 +3095,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
>  
>  			if (!browser->selection ||
>  			    !browser->selection->map ||
> -			    !browser->selection->map->dso ||
> -			    browser->selection->map->dso->annotate_warned) {
> +			    !map__dso(browser->selection->map) ||
> +			    map__dso(browser->selection->map)->annotate_warned) {
>  				continue;
>  			}
>  
> diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
> index 3d49b916c9e4..3d1b958d8832 100644
> --- a/tools/perf/ui/browsers/map.c
> +++ b/tools/perf/ui/browsers/map.c
> @@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
>  {
>  	int key;
>  
> -	if (ui_browser__show(&browser->b, browser->map->dso->long_name,
> +	if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
>  			     "Press ESC to exit, %s / to search",
>  			     verbose > 0 ? "" : "restart with -v to use") < 0)
>  		return -1;
> @@ -106,7 +106,7 @@ int map__browse(struct map *map)
>  {
>  	struct map_browser mb = {
>  		.b = {
> -			.entries = &map->dso->symbols,
> +			.entries = &map__dso(map)->symbols,
>  			.refresh = ui_browser__rb_tree_refresh,
>  			.seek	 = ui_browser__rb_tree_seek,
>  			.write	 = map_browser__write,
> diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
> index 01900689dc00..3a7433d3e48a 100644
> --- a/tools/perf/util/annotate.c
> +++ b/tools/perf/util/annotate.c
> @@ -280,7 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
>  
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			      ) == ops->target.addr)
>  		ops->target.sym = target.ms.sym;
>  
>  	return 0;
> @@ -384,8 +386,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	}
>  
>  	target.addr = map__objdump_2mem(map, ops->target.addr);
> -	start = map->unmap_ip(map, sym->start),
> -	end = map->unmap_ip(map, sym->end);
> +	start = map__unmap_ip(map, sym->start),
> +	end = map__unmap_ip(map, sym->end);
>  
>  	ops->target.outside = target.addr < start || target.addr > end;
>  
> @@ -408,7 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
>  	 * the symbol searching and disassembly should be done.
>  	 */
>  	if (maps__find_ams(ms->maps, &target) == 0 &&
> -	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> +	    map__rip_2objdump(target.ms.map,
> +			      map->map_ip(target.ms.map, target.addr)
> +			      ) == ops->target.addr)
>  		ops->target.sym = target.ms.sym;
>  
>  	if (!ops->target.outside) {
> @@ -889,7 +893,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
>  	unsigned offset;
>  	struct sym_hist *h;
>  
> -	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
> +	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
>  
>  	if ((addr < sym->start || addr >= sym->end) &&
>  	    (addr != sym->end || sym->start != sym->end)) {
> @@ -1016,13 +1020,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
>  	if (start &&
>  		(start->ms.sym == ams->ms.sym ||
>  		 (ams->ms.sym &&
> -		   start->addr == ams->ms.sym->start + ams->ms.map->start)))
> +		  start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
>  		saddr = start->al_addr;
>  	if (saddr == 0)
>  		pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
>  			ams->addr,
>  			start ? start->addr : 0,
> -			ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
> +			ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
>  			saddr);
>  	err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
>  	if (err)
> @@ -1593,7 +1597,7 @@ static void delete_last_nop(struct symbol *sym)
>  
>  int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  
>  	BUG_ON(buflen == 0);
>  
> @@ -1723,7 +1727,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
>  	struct map *map = args->ms.map;
>  	struct perf_bpil *info_linear;
>  	struct disassemble_info info;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	int pc = 0, count, sub_id;
>  	struct btf *btf = NULL;
>  	char tpath[PATH_MAX];
> @@ -1946,7 +1950,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
>  {
>  	struct annotation_options *opts = args->options;
>  	struct map *map = args->ms.map;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	char *command;
>  	FILE *file;
>  	char symfs_filename[PATH_MAX];
> @@ -1973,8 +1977,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
>  		return err;
>  
>  	pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
> -		 symfs_filename, sym->name, map->unmap_ip(map, sym->start),
> -		 map->unmap_ip(map, sym->end));
> +		 symfs_filename, sym->name, map__unmap_ip(map, sym->start),
> +		 map__unmap_ip(map, sym->end));
>  
>  	pr_debug("annotating [%p] %30s : [%p] %30s\n",
>  		 dso, dso->long_name, sym, sym->name);
> @@ -2386,7 +2390,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
>  {
>  	struct map *map = ms->map;
>  	struct symbol *sym = ms->sym;
> -	struct dso *dso = map->dso;
> +	struct dso *dso = map__dso(map);
>  	char *filename;
>  	const char *d_filename;
>  	const char *evsel_name = evsel__name(evsel);
> @@ -2569,7 +2573,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
>  	}
>  
>  	fprintf(fp, "%s() %s\nEvent: %s\n\n",
> -		ms->sym->name, ms->map->dso->long_name, ev_name);
> +		ms->sym->name, map__dso(ms->map)->long_name, ev_name);
>  	symbol__annotate_fprintf2(ms->sym, fp, opts);
>  
>  	fclose(fp);
> @@ -2781,7 +2785,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
>  		if (percent_max <= 0.5)
>  			continue;
>  
> -		al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
> +		al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
>  				       false, true, notes->start + al->offset);
>  		insert_source_line(&tmp_root, al, opts);
>  	}
> @@ -2800,7 +2804,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
>  int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
>  			  struct annotation_options *opts)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  	struct symbol *sym = ms->sym;
>  	struct rb_root source_line = RB_ROOT;
>  	struct hists *hists = evsel__hists(evsel);
> @@ -2836,7 +2840,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
>  int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
>  			 struct annotation_options *opts)
>  {
> -	struct dso *dso = ms->map->dso;
> +	struct dso *dso = map__dso(ms->map);
>  	struct symbol *sym = ms->sym;
>  	struct rb_root source_line = RB_ROOT;
>  	int err;
> diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
> index 825336304a37..2e864c9bdef3 100644
> --- a/tools/perf/util/auxtrace.c
> +++ b/tools/perf/util/auxtrace.c
> @@ -2478,7 +2478,7 @@ static struct dso *load_dso(const char *name)
>  	if (map__load(map) < 0)
>  		pr_err("File '%s' not found or has no symbols.\n", name);
>  
> -	dso = dso__get(map->dso);
> +	dso = dso__get(map__dso(map));
>  
>  	map__put(map);
>  
> diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
> index 5ecd4f401f32..16a7b4adcf18 100644
> --- a/tools/perf/util/block-info.c
> +++ b/tools/perf/util/block-info.c
> @@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
>  	struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
>  	struct map *map = he->ms.map;
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> -				 map->dso->short_name);
> +				 map__dso(map)->short_name);
>  	}
>  
>  	return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
> index 33257b594a71..5717933be116 100644
> --- a/tools/perf/util/bpf-event.c
> +++ b/tools/perf/util/bpf-event.c
> @@ -95,10 +95,10 @@ static int machine__process_bpf_event_load(struct machine *machine,
>  		struct map *map = maps__find(machine__kernel_maps(machine), addr);
>  
>  		if (map) {
> -			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> -			map->dso->bpf_prog.id = id;
> -			map->dso->bpf_prog.sub_id = i;
> -			map->dso->bpf_prog.env = env;
> +			map__dso(map)->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> +			map__dso(map)->bpf_prog.id = id;
> +			map__dso(map)->bpf_prog.sub_id = i;
> +			map__dso(map)->bpf_prog.env = env;
>  		}
>  	}
>  	return 0;
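
Not a problem with the conversion itself, just a readability note:
machine__process_bpf_event_load() now calls map__dso(map) four times in a
row. If a later revision wants to tidy that up, caching the dso in a local
keeps the accessor while reading much like the old code. A minimal sketch of
the same block, assuming map__dso() stays a trivial accessor:

	if (map) {
		struct dso *dso = map__dso(map);

		dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
		dso->bpf_prog.id = id;
		dso->bpf_prog.sub_id = i;
		dso->bpf_prog.env = env;
	}

The same pattern shows up at a few of the other converted call sites (e.g.
maps__set_module_path() and the intel-pt ones), so whichever style is picked
here could be applied consistently.
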
> diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
> index 7a5821c87f94..274b705dd941 100644
> --- a/tools/perf/util/build-id.c
> +++ b/tools/perf/util/build-id.c
> @@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
>  	}
>  
>  	if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
> -		al.map->dso->hit = 1;
> +		map__dso(al.map)->hit = 1;
>  
>  	thread__put(thread);
>  	return 0;
> diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> index 61bb3fb2107a..a8cfd31a3ff0 100644
> --- a/tools/perf/util/callchain.c
> +++ b/tools/perf/util/callchain.c
> @@ -695,8 +695,8 @@ static enum match_result match_chain_strings(const char *left,
>  static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
>  						   struct map *right_map, u64 right_ip)
>  {
> -	struct dso *left_dso = left_map ? left_map->dso : NULL;
> -	struct dso *right_dso = right_map ? right_map->dso : NULL;
> +	struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
> +	struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
>  
>  	if (left_dso != right_dso)
>  		return left_dso < right_dso ? MATCH_LT : MATCH_GT;
> @@ -1167,9 +1167,9 @@ char *callchain_list__sym_name(struct callchain_list *cl,
>  
>  	if (show_dso)
>  		scnprintf(bf + printed, bfsize - printed, " %s",
> -			  cl->ms.map ?
> -			  cl->ms.map->dso->short_name :
> -			  "unknown");
> +			  cl->ms.map
> +			  ? map__dso(cl->ms.map)->short_name
> +			  : "unknown");
>  
>  	return bf;
>  }
> diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
> index f1ab6edba446..9c83228bb9f1 100644
> --- a/tools/perf/util/data-convert-json.c
> +++ b/tools/perf/util/data-convert-json.c
> @@ -127,8 +127,8 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
>  		fputc(',', out);
>  		output_json_key_string(out, false, 5, "symbol", al->sym->name);
>  
> -		if (al->map && al->map->dso) {
> -			const char *dso = al->map->dso->short_name;
> +		if (al->map && map__dso(al->map)) {
> +			const char *dso = map__dso(al->map)->short_name;
>  
>  			if (dso && strlen(dso) > 0) {
>  				fputc(',', out);
> diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
> index 1cfcfdd3cf52..84c970c11794 100644
> --- a/tools/perf/util/db-export.c
> +++ b/tools/perf/util/db-export.c
> @@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
>  	int err;
>  
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		err = db_export__dso(dbe, dso, maps__machine(al->maps));
>  		if (err)
> @@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
>  		al.addr = node->ip;
>  
>  		if (al.map && !al.sym)
> -			al.sym = dso__find_symbol(al.map->dso, al.addr);
> +			al.sym = dso__find_symbol(map__dso(al.map), al.addr);
>  
>  		db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
>  
> diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
> index d59462af15f1..f1d9dd7065e6 100644
> --- a/tools/perf/util/dlfilter.c
> +++ b/tools/perf/util/dlfilter.c
> @@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
>  
>  	d_al->size = sizeof(*d_al);
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		if (symbol_conf.show_kernel_path && dso->long_name)
>  			d_al->dso = dso->long_name;
> @@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
>  		if (al->addr < sym->end)
>  			d_al->symoff = al->addr - sym->start;
>  		else
> -			d_al->symoff = al->addr - al->map->start - sym->start;
> +			d_al->symoff = al->addr - map__start(al->map) - sym->start;
>  		d_al->sym_binding = sym->binding;
>  	} else {
>  		d_al->sym = NULL;
> @@ -232,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
>  	map = al->map;
>  	addr = al->addr;
>  
> -	if (map && map->dso)
> -		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> -
> +	if (map && map__dso(map)) {
> +		srcfile = get_srcline_split(map__dso(map),
> +					    map__rip_2objdump(map, addr), &line);
> +	}
>  	*line_no = line;
>  	return srcfile;
>  }
> @@ -266,7 +267,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
>  
>  	map = al->map;
>  
> -	if (map && ip >= map->start && ip < map->end &&
> +	if (map && ip >= map__start(map) && ip < map__end(map) &&
>  	    machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
>  		goto have_map;
>  
> @@ -276,10 +277,10 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
>  
>  	map = a.map;
>  have_map:
> -	offset = map->map_ip(map, ip);
> -	if (ip + len >= map->end)
> -		len = map->end - ip;
> -	return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
> +	offset = map__map_ip(map, ip);
> +	if (ip + len >= map__end(map))
> +		len = map__end(map) - ip;
> +	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
>  }
>  
>  static const struct perf_dlfilter_fns perf_dlfilter_fns = {
> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> index b2f570adba35..1115bc51a261 100644
> --- a/tools/perf/util/dso.c
> +++ b/tools/perf/util/dso.c
> @@ -1109,7 +1109,7 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
>  			    struct machine *machine, u64 addr,
>  			    u8 *data, ssize_t size)
>  {
> -	u64 offset = map->map_ip(map, addr);
> +	u64 offset = map__map_ip(map, addr);
>  	return dso__data_read_offset(dso, machine, offset, data, size);
>  }
>  
> @@ -1149,7 +1149,7 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
>  				   struct machine *machine, u64 addr,
>  				   const u8 *data, ssize_t size)
>  {
> -	u64 offset = map->map_ip(map, addr);
> +	u64 offset = map__map_ip(map, addr);
>  	return dso__data_write_cache_offs(dso, machine, offset, data, size);
>  }
>  
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index 40a3b1a35613..54a1d4df5f70 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -486,7 +486,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
>  
>  		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
>  		if (al.map && map__load(al.map) >= 0) {
> -			al.addr = al.map->map_ip(al.map, tp->addr);
> +			al.addr = map__map_ip(al.map, tp->addr);
>  			al.sym = map__find_symbol(al.map, al.addr);
>  			if (al.sym)
>  				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> @@ -621,7 +621,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  		 */
>  		if (load_map)
>  			map__load(al->map);
> -		al->addr = al->map->map_ip(al->map, al->addr);
> +		al->addr = map__map_ip(al->map, al->addr);
>  	}
>  
>  	return al->map;
> @@ -692,8 +692,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  	dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
>  	thread__find_map(thread, sample->cpumode, sample->ip, al);
>  	dump_printf(" ...... dso: %s\n",
> -		    al->map ? al->map->dso->long_name :
> -			al->level == 'H' ? "[hypervisor]" : "<not found>");
> +		    al->map ? map__dso(al->map)->long_name
> +			    : al->level == 'H' ? "[hypervisor]" : "<not found>");
>  
>  	if (thread__is_filtered(thread))
>  		al->filtered |= (1 << HIST_FILTER__THREAD);
> @@ -711,7 +711,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  	}
>  
>  	if (al->map) {
> -		struct dso *dso = al->map->dso;
> +		struct dso *dso = map__dso(al->map);
>  
>  		if (symbol_conf.dso_list &&
>  		    (!dso || !(strlist__has_entry(symbol_conf.dso_list,
> @@ -738,12 +738,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>  		}
>  		if (!ret && al->sym) {
>  			snprintf(al_addr_str, sz, "0x%"PRIx64,
> -				al->map->unmap_ip(al->map, al->sym->start));
> +				 map__unmap_ip(al->map, al->sym->start));
>  			ret = strlist__has_entry(symbol_conf.sym_list,
>  						al_addr_str);
>  		}
>  		if (!ret && symbol_conf.addr_list && al->map) {
> -			unsigned long addr = al->map->unmap_ip(al->map, al->addr);
> +			unsigned long addr = map__unmap_ip(al->map, al->addr);
>  
>  			ret = intlist__has_entry(symbol_conf.addr_list, addr);
>  			if (!ret && symbol_conf.addr_range) {
> diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
> index 8c2ea8001329..ac6fef9d8906 100644
> --- a/tools/perf/util/evsel_fprintf.c
> +++ b/tools/perf/util/evsel_fprintf.c
> @@ -146,11 +146,11 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
>  				printed += fprintf(fp, " <-");
>  
>  			if (map)
> -				addr = map->map_ip(map, node->ip);
> +				addr = map__map_ip(map, node->ip);
>  
>  			if (print_ip) {
>  				/* Show binary offset for userspace addr */
> -				if (map && !map->dso->kernel)
> +				if (map && !map__dso(map)->kernel)
>  					printed += fprintf(fp, "%c%16" PRIx64, s, addr);
>  				else
>  					printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index 78f9fbb925a7..f19ac6eb4775 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -105,7 +105,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  		hists__set_col_len(hists, HISTC_THREAD, len + 8);
>  
>  	if (h->ms.map) {
> -		len = dso__name_len(h->ms.map->dso);
> +		len = dso__name_len(map__dso(h->ms.map));
>  		hists__new_col_len(hists, HISTC_DSO, len);
>  	}
>  
> @@ -119,7 +119,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  				symlen += BITS_PER_LONG / 4 + 2 + 3;
>  			hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
>  
> -			symlen = dso__name_len(h->branch_info->from.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
>  			hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
>  		} else {
>  			symlen = unresolved_col_width + 4 + 2;
> @@ -133,7 +133,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  				symlen += BITS_PER_LONG / 4 + 2 + 3;
>  			hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
>  
> -			symlen = dso__name_len(h->branch_info->to.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
>  			hists__new_col_len(hists, HISTC_DSO_TO, symlen);
>  		} else {
>  			symlen = unresolved_col_width + 4 + 2;
> @@ -177,7 +177,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
>  		}
>  
>  		if (h->mem_info->daddr.ms.map) {
> -			symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
> +			symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
>  			hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
>  					   symlen);
>  		} else {
> @@ -2096,7 +2096,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
>  				       struct hist_entry *he)
>  {
>  	if (hists->dso_filter != NULL &&
> -	    (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
> +	    (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
>  		he->filtered |= (1 << HIST_FILTER__DSO);
>  		return true;
>  	}
> diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
> index e8613cbda331..c88f112c0a06 100644
> --- a/tools/perf/util/intel-pt.c
> +++ b/tools/perf/util/intel-pt.c
> @@ -731,20 +731,20 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  	}
>  
>  	while (1) {
> -		if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso)
> +		if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map))
>  			return -EINVAL;
>  
> -		if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
> -		    dso__data_status_seen(al.map->dso,
> +		if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR &&
> +		    dso__data_status_seen(map__dso(al.map),
>  					  DSO_DATA_STATUS_SEEN_ITRACE))
>  			return -ENOENT;
>  
> -		offset = al.map->map_ip(al.map, *ip);
> +		offset = map__map_ip(al.map, *ip);
>  
>  		if (!to_ip && one_map) {
>  			struct intel_pt_cache_entry *e;
>  
> -			e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> +			e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
>  			if (e &&
>  			    (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
>  				*insn_cnt_ptr = e->insn_cnt;
> @@ -766,10 +766,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  		/* Load maps to ensure dso->is_64_bit has been updated */
>  		map__load(al.map);
>  
> -		x86_64 = al.map->dso->is_64_bit;
> +		x86_64 = map__dso(al.map)->is_64_bit;
>  
>  		while (1) {
> -			len = dso__data_read_offset(al.map->dso, machine,
> +			len = dso__data_read_offset(map__dso(al.map), machine,
>  						    offset, buf,
>  						    INTEL_PT_INSN_BUF_SZ);
>  			if (len <= 0)
> @@ -795,7 +795,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  				goto out_no_cache;
>  			}
>  
> -			if (*ip >= al.map->end)
> +			if (*ip >= map__end(al.map))
>  				break;
>  
>  			offset += intel_pt_insn->length;
> @@ -815,13 +815,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
>  	if (to_ip) {
>  		struct intel_pt_cache_entry *e;
>  
> -		e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
> +		e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
>  		if (e)
>  			return 0;
>  	}
>  
>  	/* Ignore cache errors */
> -	intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
> +	intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
>  			   *ip - start_ip, intel_pt_insn);
>  
>  	return 0;
> @@ -892,13 +892,13 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
>  	if (!thread)
>  		return -EINVAL;
>  
> -	if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
> +	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
>  		return -EINVAL;
>  
> -	offset = al.map->map_ip(al.map, ip);
> +	offset = map__map_ip(al.map, ip);
>  
>  	return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
> -				     al.map->dso->long_name);
> +				     map__dso(al.map)->long_name);
>  }
>  
>  static bool intel_pt_pgd_ip(uint64_t ip, void *data)
> @@ -2406,13 +2406,13 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
>  	if (map__load(map))
>  		return 0;
>  
> -	start = dso__first_symbol(map->dso);
> +	start = dso__first_symbol(map__dso(map));
>  
>  	for (sym = start; sym; sym = dso__next_symbol(sym)) {
>  		if (sym->binding == STB_GLOBAL &&
>  		    !strcmp(sym->name, "__switch_to")) {
> -			ip = map->unmap_ip(map, sym->start);
> -			if (ip >= map->start && ip < map->end) {
> +			ip = map__unmap_ip(map, sym->start);
> +			if (ip >= map__start(map) && ip < map__end(map)) {
>  				switch_ip = ip;
>  				break;
>  			}
> @@ -2429,8 +2429,8 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
>  
>  	for (sym = start; sym; sym = dso__next_symbol(sym)) {
>  		if (!strcmp(sym->name, ptss)) {
> -			ip = map->unmap_ip(map, sym->start);
> -			if (ip >= map->start && ip < map->end) {
> +			ip = map__unmap_ip(map, sym->start);
> +			if (ip >= map__start(map) && ip < map__end(map)) {
>  				*ptss_ip = ip;
>  				break;
>  			}
> @@ -2965,7 +2965,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
>  static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
>  			     struct addr_location *al)
>  {
> -	if (!al->map || addr < al->map->start || addr >= al->map->end) {
> +	if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
>  		if (!thread__find_map(thread, cpumode, addr, al))
>  			return -1;
>  	}
> @@ -2996,12 +2996,12 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
>  			continue;
>  		}
>  
> -		if (!al.map->dso || !al.map->dso->auxtrace_cache)
> +		if (!map__dso(al.map) || !map__dso(al.map)->auxtrace_cache)
>  			continue;
>  
> -		offset = al.map->map_ip(al.map, addr);
> +		offset = map__map_ip(al.map, addr);
>  
> -		e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> +		e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
>  		if (!e)
>  			continue;
>  
> @@ -3014,9 +3014,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
>  			if (e->branch != INTEL_PT_BR_NO_BRANCH)
>  				return 0;
>  		} else {
> -			intel_pt_cache_invalidate(al.map->dso, machine, offset);
> +			intel_pt_cache_invalidate(map__dso(al.map), machine, offset);
>  			intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
> -				     al.map->dso->long_name, addr);
> +				     map__dso(al.map)->long_name, addr);
>  		}
>  	}
>  
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 88279008e761..940fb2a50dfd 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
>  
>  static struct dso *machine__kernel_dso(struct machine *machine)
>  {
> -	return machine->vmlinux_map->dso;
> +	return map__dso(machine->vmlinux_map);
>  }
>  
>  static void dsos__init(struct dsos *dsos)
> @@ -842,9 +842,10 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
>  	if (map != machine->vmlinux_map)
>  		maps__remove(machine__kernel_maps(machine), map);
>  	else {
> -		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
> +		sym = dso__find_symbol(map__dso(map),
> +				map__map_ip(map, map__start(map)));
>  		if (sym)
> -			dso__delete_symbol(map->dso, sym);
> +			dso__delete_symbol(map__dso(map), sym);
>  	}
>  
>  	return 0;
> @@ -880,7 +881,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
>  		return 0;
>  	}
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
>  		int ret;
>  
> @@ -889,7 +890,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
>  		 * must be done prior to using kernel maps.
>  		 */
>  		map__load(map);
> -		ret = dso__data_write_cache_addr(map->dso, map, machine,
> +		ret = dso__data_write_cache_addr(map__dso(map), map, machine,
>  						 event->text_poke.addr,
>  						 new_bytes,
>  						 event->text_poke.new_len);
> @@ -931,6 +932,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  	/* If maps__insert failed, return NULL. */
>  	if (err)
>  		map = NULL;
> +
>  out:
>  	/* put the dso here, corresponding to  machine__findnew_module_dso */
>  	dso__put(dso);
> @@ -1118,7 +1120,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
>  
>  	if (!err) {
>  		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> -			kmap->name, map->start, map->end);
> +			kmap->name, map__start(map), map__end(map));
>  	}
>  
>  	map__put(map);
> @@ -1178,9 +1180,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
>  		if (!kmap || !is_entry_trampoline(kmap->name))
>  			continue;
>  
> -		dest_map = maps__find(kmaps, map->pgoff);
> +		dest_map = maps__find(kmaps, map__pgoff(map));
>  		if (dest_map != map)
> -			map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
> +			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
>  		found = true;
>  	}
>  	if (found || machine->trampolines_mapped)
> @@ -1230,7 +1232,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  	if (machine->vmlinux_map == NULL)
>  		return -ENOMEM;
>  
> -	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> +	machine->vmlinux_map->map_ip = map__identity_ip;
> +	machine->vmlinux_map->unmap_ip = map__identity_ip;
>  	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
>  }
>  
> @@ -1329,10 +1332,10 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
>  int machine__load_kallsyms(struct machine *machine, const char *filename)
>  {
>  	struct map *map = machine__kernel_map(machine);
> -	int ret = __dso__load_kallsyms(map->dso, filename, map, true);
> +	int ret = __dso__load_kallsyms(map__dso(map), filename, map, true);
>  
>  	if (ret > 0) {
> -		dso__set_loaded(map->dso);
> +		dso__set_loaded(map__dso(map));
>  		/*
>  		 * Since /proc/kallsyms will have multiple sessions for the
>  		 * kernel, with modules between them, fixup the end of all
> @@ -1347,10 +1350,10 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
>  int machine__load_vmlinux_path(struct machine *machine)
>  {
>  	struct map *map = machine__kernel_map(machine);
> -	int ret = dso__load_vmlinux_path(map->dso, map);
> +	int ret = dso__load_vmlinux_path(map__dso(map), map);
>  
>  	if (ret > 0)
> -		dso__set_loaded(map->dso);
> +		dso__set_loaded(map__dso(map));
>  
>  	return ret;
>  }
> @@ -1401,16 +1404,16 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
>  	if (long_name == NULL)
>  		return -ENOMEM;
>  
> -	dso__set_long_name(map->dso, long_name, true);
> -	dso__kernel_module_get_build_id(map->dso, "");
> +	dso__set_long_name(map__dso(map), long_name, true);
> +	dso__kernel_module_get_build_id(map__dso(map), "");
>  
>  	/*
>  	 * Full name could reveal us kmod compression, so
>  	 * we need to update the symtab_type if needed.
>  	 */
> -	if (m->comp && is_kmod_dso(map->dso)) {
> -		map->dso->symtab_type++;
> -		map->dso->comp = m->comp;
> +	if (m->comp && is_kmod_dso(map__dso(map))) {
> +		map__dso(map)->symtab_type++;
> +		map__dso(map)->comp = m->comp;
>  	}
>  
>  	return 0;
> @@ -1509,8 +1512,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
>  		return -1;
>  	map->end = start + size;
>  
> -	dso__kernel_module_get_build_id(map->dso, machine->root_dir);
> -
> +	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
>  	return 0;
>  }
>  
> @@ -1619,7 +1621,7 @@ int machine__create_kernel_maps(struct machine *machine)
>  		struct map_rb_node *next = map_rb_node__next(rb_node);
>  
>  		if (next)
> -			machine__set_kernel_mmap(machine, start, next->map->start);
> +			machine__set_kernel_mmap(machine, start, map__start(next->map));
>  	}
>  
>  out_put:
> @@ -1683,10 +1685,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
>  		if (map == NULL)
>  			goto out_problem;
>  
> -		map->end = map->start + xm->end - xm->start;
> +		map->end = map__start(map) + xm->end - xm->start;
>  
>  		if (build_id__is_defined(bid))
> -			dso__set_build_id(map->dso, bid);
> +			dso__set_build_id(map__dso(map), bid);
>  
>  	} else if (is_kernel_mmap) {
>  		const char *symbol_name = (xm->name + strlen(machine->mmap_name));
> @@ -2148,14 +2150,14 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
>  	if (!map || callchain_param.key == CCKEY_FUNCTION)
>  		return srcline;
>  
> -	srcline = srcline__tree_find(&map->dso->srclines, ip);
> +	srcline = srcline__tree_find(&map__dso(map)->srclines, ip);
>  	if (!srcline) {
>  		bool show_sym = false;
>  		bool show_addr = callchain_param.key == CCKEY_ADDRESS;
>  
> -		srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
> +		srcline = get_srcline(map__dso(map), map__rip_2objdump(map, ip),
>  				      ms->sym, show_sym, show_addr, ip);
> -		srcline__tree_insert(&map->dso->srclines, ip, srcline);
> +		srcline__tree_insert(&map__dso(map)->srclines, ip, srcline);
>  	}
>  
>  	return srcline;
> @@ -2179,7 +2181,7 @@ static int add_callchain_ip(struct thread *thread,
>  {
>  	struct map_symbol ms;
>  	struct addr_location al;
> -	int nr_loop_iter = 0;
> +	int nr_loop_iter = 0, err;
>  	u64 iter_cycles = 0;
>  	const char *srcline = NULL;
>  
> @@ -2228,9 +2230,10 @@ static int add_callchain_ip(struct thread *thread,
>  		}
>  	}
>  
> -	if (symbol_conf.hide_unresolved && al.sym == NULL)
> +	if (symbol_conf.hide_unresolved && al.sym == NULL) {
> +		addr_location__put(&al);
>  		return 0;
> -
> +	}
>  	if (iter) {
>  		nr_loop_iter = iter->nr_loop_iter;
>  		iter_cycles = iter->cycles;
> @@ -2240,9 +2243,10 @@ static int add_callchain_ip(struct thread *thread,
>  	ms.map = al.map;
>  	ms.sym = al.sym;
>  	srcline = callchain_srcline(&ms, al.addr);
> -	return callchain_cursor_append(cursor, ip, &ms,
> -				       branch, flags, nr_loop_iter,
> -				       iter_cycles, branch_from, srcline);
> +	err = callchain_cursor_append(cursor, ip, &ms,
> +				      branch, flags, nr_loop_iter,
> +				      iter_cycles, branch_from, srcline);
> +	return err;
>  }
>  
>  struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
> @@ -2937,15 +2941,15 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
>  	if (!symbol_conf.inline_name || !map || !sym)
>  		return ret;
>  
> -	addr = map__map_ip(map, ip);
> +	addr = map__dso_map_ip(map, ip);
>  	addr = map__rip_2objdump(map, addr);
>  
> -	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
> +	inline_node = inlines__tree_find(&map__dso(map)->inlined_nodes, addr);
>  	if (!inline_node) {
> -		inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
> +		inline_node = dso__parse_addr_inlines(map__dso(map), addr, sym);
>  		if (!inline_node)
>  			return ret;
> -		inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
> +		inlines__tree_insert(&map__dso(map)->inlined_nodes, inline_node);
>  	}
>  
>  	list_for_each_entry(ilist, &inline_node->val, list) {
> @@ -2981,7 +2985,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
>  	 * its corresponding binary.
>  	 */
>  	if (entry->ms.map)
> -		addr = map__map_ip(entry->ms.map, entry->ip);
> +		addr = map__dso_map_ip(entry->ms.map, entry->ip);
>  
>  	srcline = callchain_srcline(&entry->ms, addr);
>  	return callchain_cursor_append(cursor, entry->ip, &entry->ms,
> @@ -3183,7 +3187,7 @@ int machine__get_kernel_start(struct machine *machine)
>  		 * kernel_start = 1ULL << 63 for x86_64.
>  		 */
>  		if (!err && !machine__is(machine, "x86_64"))
> -			machine->kernel_start = map->start;
> +			machine->kernel_start = map__start(map);
>  	}
>  	return err;
>  }
> @@ -3234,8 +3238,8 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
>  	if (sym == NULL)
>  		return NULL;
>  
> -	*modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
> -	*addrp = map->unmap_ip(map, sym->start);
> +	*modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
> +	*addrp = map__unmap_ip(map, sym->start);
>  	return sym->name;
>  }
>  
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 57e926ce115f..47d81e361e29 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
>  	map->pgoff    = pgoff;
>  	map->reloc    = 0;
>  	map->dso      = dso__get(dso);
> -	map->map_ip   = map__map_ip;
> -	map->unmap_ip = map__unmap_ip;
> +	map->map_ip   = map__dso_map_ip;
> +	map->unmap_ip = map__dso_unmap_ip;
>  	map->erange_warned = false;
>  	refcount_set(&map->refcnt, 1);
>  }
> @@ -120,10 +120,11 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  		     u32 prot, u32 flags, struct build_id *bid,
>  		     char *filename, struct thread *thread)
>  {
> -	struct map *map = malloc(sizeof(*map));
> +	struct map *map;
>  	struct nsinfo *nsi = NULL;
>  	struct nsinfo *nnsi;
>  
> +	map = malloc(sizeof(*map));
>  	if (map != NULL) {
>  		char newfilename[PATH_MAX];
>  		struct dso *dso;
> @@ -170,7 +171,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  		map__init(map, start, start + len, pgoff, dso);
>  
>  		if (anon || no_dso) {
> -			map->map_ip = map->unmap_ip = identity__map_ip;
> +			map->map_ip = map->unmap_ip = map__identity_ip;
>  
>  			/*
>  			 * Set memory without DSO as loaded. All map__find_*
> @@ -204,8 +205,9 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>   */
>  struct map *map__new2(u64 start, struct dso *dso)
>  {
> -	struct map *map = calloc(1, (sizeof(*map) +
> -				     (dso->kernel ? sizeof(struct kmap) : 0)));
> +	struct map *map;
> +
> +	map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
>  	if (map != NULL) {
>  		/*
>  		 * ->end will be filled after we load all the symbols
> @@ -218,7 +220,7 @@ struct map *map__new2(u64 start, struct dso *dso)
>  
>  bool __map__is_kernel(const struct map *map)
>  {
> -	if (!map->dso->kernel)
> +	if (!map__dso(map)->kernel)
>  		return false;
>  	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
>  }
> @@ -234,7 +236,7 @@ bool __map__is_bpf_prog(const struct map *map)
>  {
>  	const char *name;
>  
> -	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> +	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
>  		return true;
>  
>  	/*
> @@ -242,7 +244,7 @@ bool __map__is_bpf_prog(const struct map *map)
>  	 * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
>  	 * guess the type based on name.
>  	 */
> -	name = map->dso->short_name;
> +	name = map__dso(map)->short_name;
>  	return name && (strstr(name, "bpf_prog_") == name);
>  }
>  
> @@ -250,7 +252,7 @@ bool __map__is_bpf_image(const struct map *map)
>  {
>  	const char *name;
>  
> -	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> +	if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
>  		return true;
>  
>  	/*
> @@ -258,18 +260,19 @@ bool __map__is_bpf_image(const struct map *map)
>  	 * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
>  	 * guess the type based on name.
>  	 */
> -	name = map->dso->short_name;
> +	name = map__dso(map)->short_name;
>  	return name && is_bpf_image(name);
>  }
>  
>  bool __map__is_ool(const struct map *map)
>  {
> -	return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
> +	return map__dso(map) &&
> +	       map__dso(map)->binary_type == DSO_BINARY_TYPE__OOL;
>  }
>  
>  bool map__has_symbols(const struct map *map)
>  {
> -	return dso__has_symbols(map->dso);
> +	return dso__has_symbols(map__dso(map));
>  }
>  
>  static void map__exit(struct map *map)
> @@ -292,7 +295,7 @@ void map__put(struct map *map)
>  
>  void map__fixup_start(struct map *map)
>  {
> -	struct rb_root_cached *symbols = &map->dso->symbols;
> +	struct rb_root_cached *symbols = &map__dso(map)->symbols;
>  	struct rb_node *nd = rb_first_cached(symbols);
>  	if (nd != NULL) {
>  		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> @@ -302,7 +305,7 @@ void map__fixup_start(struct map *map)
>  
>  void map__fixup_end(struct map *map)
>  {
> -	struct rb_root_cached *symbols = &map->dso->symbols;
> +	struct rb_root_cached *symbols = &map__dso(map)->symbols;
>  	struct rb_node *nd = rb_last(&symbols->rb_root);
>  	if (nd != NULL) {
>  		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> @@ -314,18 +317,18 @@ void map__fixup_end(struct map *map)
>  
>  int map__load(struct map *map)
>  {
> -	const char *name = map->dso->long_name;
> +	const char *name = map__dso(map)->long_name;
>  	int nr;
>  
> -	if (dso__loaded(map->dso))
> +	if (dso__loaded(map__dso(map)))
>  		return 0;
>  
> -	nr = dso__load(map->dso, map);
> +	nr = dso__load(map__dso(map), map);
>  	if (nr < 0) {
> -		if (map->dso->has_build_id) {
> +		if (map__dso(map)->has_build_id) {
>  			char sbuild_id[SBUILD_ID_SIZE];
>  
> -			build_id__sprintf(&map->dso->bid, sbuild_id);
> +			build_id__sprintf(&map__dso(map)->bid, sbuild_id);
>  			pr_debug("%s with build id %s not found", name, sbuild_id);
>  		} else
>  			pr_debug("Failed to open %s", name);
> @@ -357,7 +360,7 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
>  	if (map__load(map) < 0)
>  		return NULL;
>  
> -	return dso__find_symbol(map->dso, addr);
> +	return dso__find_symbol(map__dso(map), addr);
>  }
>  
>  struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> @@ -365,24 +368,24 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
>  	if (map__load(map) < 0)
>  		return NULL;
>  
> -	if (!dso__sorted_by_name(map->dso))
> -		dso__sort_by_name(map->dso);
> +	if (!dso__sorted_by_name(map__dso(map)))
> +		dso__sort_by_name(map__dso(map));
>  
> -	return dso__find_symbol_by_name(map->dso, name);
> +	return dso__find_symbol_by_name(map__dso(map), name);
>  }
>  
>  struct map *map__clone(struct map *from)
>  {
> -	size_t size = sizeof(struct map);
>  	struct map *map;
> +	size_t size = sizeof(struct map);
>  
> -	if (from->dso && from->dso->kernel)
> +	if (map__dso(from) && map__dso(from)->kernel)
>  		size += sizeof(struct kmap);
>  
>  	map = memdup(from, size);
>  	if (map != NULL) {
>  		refcount_set(&map->refcnt, 1);
> -		dso__get(map->dso);
> +		map->dso = dso__get(map->dso);
>  	}
>  
>  	return map;
> @@ -391,7 +394,8 @@ struct map *map__clone(struct map *from)
>  size_t map__fprintf(struct map *map, FILE *fp)
>  {
>  	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
> -		       map->start, map->end, map->pgoff, map->dso->name);
> +		       map__start(map), map__end(map),
> +		       map__pgoff(map), map__dso(map)->name);
>  }
>  
>  size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> @@ -399,11 +403,11 @@ size_t map__fprintf_dsoname(struct map *map, FILE *fp)
>  	char buf[symbol_conf.pad_output_len_dso + 1];
>  	const char *dsoname = "[unknown]";
>  
> -	if (map && map->dso) {
> -		if (symbol_conf.show_kernel_path && map->dso->long_name)
> -			dsoname = map->dso->long_name;
> +	if (map && map__dso(map)) {
> +		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> +			dsoname = map__dso(map)->long_name;
>  		else
> -			dsoname = map->dso->name;
> +			dsoname = map__dso(map)->name;
>  	}
>  
>  	if (symbol_conf.pad_output_len_dso) {
> @@ -418,7 +422,8 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
>  {
>  	if (map == NULL)
>  		return SRCLINE_UNKNOWN;
> -	return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
> +	return get_srcline(map__dso(map), map__rip_2objdump(map, addr),
> +			   sym, true, true, addr);
>  }
>  
>  int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> @@ -426,7 +431,7 @@ int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
>  {
>  	int ret = 0;
>  
> -	if (map && map->dso) {
> +	if (map && map__dso(map)) {
>  		char *srcline = map__srcline(map, addr, NULL);
>  		if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
>  			ret = fprintf(fp, "%s%s", prefix, srcline);
> @@ -472,20 +477,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
>  		}
>  	}
>  
> -	if (!map->dso->adjust_symbols)
> +	if (!map__dso(map)->adjust_symbols)
>  		return rip;
>  
> -	if (map->dso->rel)
> -		return rip - map->pgoff;
> +	if (map__dso(map)->rel)
> +		return rip - map__pgoff(map);
>  
>  	/*
>  	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
>  	 * but all kernel modules are ET_REL, so won't get here.
>  	 */
> -	if (map->dso->kernel == DSO_SPACE__USER)
> -		return rip + map->dso->text_offset;
> +	if (map__dso(map)->kernel == DSO_SPACE__USER)
> +		return rip + map__dso(map)->text_offset;
>  
> -	return map->unmap_ip(map, rip) - map->reloc;
> +	return map__unmap_ip(map, rip) - map__reloc(map);
>  }
>  
>  /**
> @@ -502,34 +507,34 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
>   */
>  u64 map__objdump_2mem(struct map *map, u64 ip)
>  {
> -	if (!map->dso->adjust_symbols)
> -		return map->unmap_ip(map, ip);
> +	if (!map__dso(map)->adjust_symbols)
> +		return map__unmap_ip(map, ip);
>  
> -	if (map->dso->rel)
> -		return map->unmap_ip(map, ip + map->pgoff);
> +	if (map__dso(map)->rel)
> +		return map__unmap_ip(map, ip + map__pgoff(map));
>  
>  	/*
>  	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
>  	 * but all kernel modules are ET_REL, so won't get here.
>  	 */
> -	if (map->dso->kernel == DSO_SPACE__USER)
> -		return map->unmap_ip(map, ip - map->dso->text_offset);
> +	if (map__dso(map)->kernel == DSO_SPACE__USER)
> +		return map__unmap_ip(map, ip - map__dso(map)->text_offset);
>  
> -	return ip + map->reloc;
> +	return ip + map__reloc(map);
>  }
>  
>  bool map__contains_symbol(const struct map *map, const struct symbol *sym)
>  {
> -	u64 ip = map->unmap_ip(map, sym->start);
> +	u64 ip = map__unmap_ip(map, sym->start);
>  
> -	return ip >= map->start && ip < map->end;
> +	return ip >= map__start(map) && ip < map__end(map);
>  }
>  
>  struct kmap *__map__kmap(struct map *map)
>  {
> -	if (!map->dso || !map->dso->kernel)
> +	if (!map__dso(map) || !map__dso(map)->kernel)
>  		return NULL;
> -	return (struct kmap *)(map + 1);
> +	return (struct kmap *)(&map[1]);
>  }
>  
>  struct kmap *map__kmap(struct map *map)
> @@ -552,17 +557,17 @@ struct maps *map__kmaps(struct map *map)
>  	return kmap->kmaps;
>  }
>  
> -u64 map__map_ip(const struct map *map, u64 ip)
> +u64 map__dso_map_ip(const struct map *map, u64 ip)
>  {
> -	return ip - map->start + map->pgoff;
> +	return ip - map__start(map) + map__pgoff(map);
>  }
>  
> -u64 map__unmap_ip(const struct map *map, u64 ip)
> +u64 map__dso_unmap_ip(const struct map *map, u64 ip)
>  {
> -	return ip + map->start - map->pgoff;
> +	return ip + map__start(map) - map__pgoff(map);
>  }
>  
> -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
> +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip)
>  {
>  	return ip;
>  }
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index d1a6f85fd31d..99ef0464a357 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -41,15 +41,65 @@ struct kmap *map__kmap(struct map *map);
>  struct maps *map__kmaps(struct map *map);
>  
>  /* ip -> dso rip */
> -u64 map__map_ip(const struct map *map, u64 ip);
> +u64 map__dso_map_ip(const struct map *map, u64 ip);
>  /* dso rip -> ip */
> -u64 map__unmap_ip(const struct map *map, u64 ip);
> +u64 map__dso_unmap_ip(const struct map *map, u64 ip);
>  /* Returns ip */
> -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
> +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
> +
> +static inline struct dso *map__dso(const struct map *map)
> +{
> +	return map->dso;
> +}
> +
> +static inline u64 map__map_ip(const struct map *map, u64 ip)
> +{
> +	return map->map_ip(map, ip);
> +}
> +
> +static inline u64 map__unmap_ip(const struct map *map, u64 ip)
> +{
> +	return map->unmap_ip(map, ip);
> +}
> +
> +static inline u64 map__start(const struct map *map)
> +{
> +	return map->start;
> +}
> +
> +static inline u64 map__end(const struct map *map)
> +{
> +	return map->end;
> +}
> +
> +static inline u64 map__pgoff(const struct map *map)
> +{
> +	return map->pgoff;
> +}
> +
> +static inline u64 map__reloc(const struct map *map)
> +{
> +	return map->reloc;
> +}
> +
> +static inline u32 map__flags(const struct map *map)
> +{
> +	return map->flags;
> +}
> +
> +static inline u32 map__prot(const struct map *map)
> +{
> +	return map->prot;
> +}
> +
> +static inline bool map__priv(const struct map *map)
> +{
> +	return map->priv;
> +}
>  
>  static inline size_t map__size(const struct map *map)
>  {
> -	return map->end - map->start;
> +	return map__end(map) - map__start(map);
>  }
>  
>  /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
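
The new map.h accessors look fine. For reference, a sketch of what a caller
looks like once it only goes through them; print_map() below is a
hypothetical example, not part of the patch, and simply mirrors
map__fprintf() plus the "[unknown]" fallback from map__fprintf_dsoname()
(assumes <stdio.h>, <inttypes.h> and "map.h"):

	/* Hypothetical helper: dump a map using only the new accessors. */
	static size_t print_map(struct map *map, FILE *fp)
	{
		const struct dso *dso = map__dso(map);

		return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
			       map__start(map), map__end(map), map__pgoff(map),
			       dso ? dso->name : "[unknown]");
	}

Renaming the arithmetic helpers to map__dso_map_ip()/map__dso_unmap_ip()
while map__map_ip()/map__unmap_ip() dispatch through the map's function
pointers keeps the existing behaviour at the converted call sites and makes
the distinction explicit, which seems like the right trade-off.
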
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> index 9fc3e7186b8e..6efbcb79131c 100644
> --- a/tools/perf/util/maps.c
> +++ b/tools/perf/util/maps.c
> @@ -30,24 +30,24 @@ static void __maps__free_maps_by_name(struct maps *maps)
>  	maps->nr_maps_allocated = 0;
>  }
>  
> -static int __maps__insert(struct maps *maps, struct map *map)
> +static struct map *__maps__insert(struct maps *maps, struct map *map)
>  {
>  	struct rb_node **p = &maps__entries(maps)->rb_node;
>  	struct rb_node *parent = NULL;
> -	const u64 ip = map->start;
> +	const u64 ip = map__start(map);
>  	struct map_rb_node *m, *new_rb_node;
>  
>  	new_rb_node = malloc(sizeof(*new_rb_node));
>  	if (!new_rb_node)
> -		return -ENOMEM;
> +		return NULL;
>  
>  	RB_CLEAR_NODE(&new_rb_node->rb_node);
> -	new_rb_node->map = map;
> +	new_rb_node->map = map__get(map);
>  
>  	while (*p != NULL) {
>  		parent = *p;
>  		m = rb_entry(parent, struct map_rb_node, rb_node);
> -		if (ip < m->map->start)
> +		if (ip < map__start(m->map))
>  			p = &(*p)->rb_left;
>  		else
>  			p = &(*p)->rb_right;
> @@ -55,22 +55,23 @@ static int __maps__insert(struct maps *maps, struct map *map)
>  
>  	rb_link_node(&new_rb_node->rb_node, parent, p);
>  	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
> -	map__get(map);
> -	return 0;
> +	return new_rb_node->map;
>  }
>  
>  int maps__insert(struct maps *maps, struct map *map)
>  {
> -	int err;
> +	int err = 0;
>  
>  	down_write(maps__lock(maps));
> -	err = __maps__insert(maps, map);
> -	if (err)
> +	map = __maps__insert(maps, map);
> +	if (!map) {
> +		err = -ENOMEM;
>  		goto out;
> +	}
>  
>  	++maps->nr_maps;
>  
> -	if (map->dso && map->dso->kernel) {
> +	if (map__dso(map) && map__dso(map)->kernel) {
>  		struct kmap *kmap = map__kmap(map);
>  
>  		if (kmap)
> @@ -193,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
>  	if (map != NULL && map__load(map) >= 0) {
>  		if (mapp != NULL)
>  			*mapp = map;
> -		return map__find_symbol(map, map->map_ip(map, addr));
> +		return map__find_symbol(map, map__map_ip(map, addr));
>  	}
>  
>  	return NULL;
> @@ -228,7 +229,8 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
>  
>  int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
>  {
> -	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> +	if (ams->addr < map__start(ams->ms.map) ||
> +	    ams->addr >= map__end(ams->ms.map)) {
>  		if (maps == NULL)
>  			return -1;
>  		ams->ms.map = maps__find(maps, ams->addr);
> @@ -236,7 +238,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
>  			return -1;
>  	}
>  
> -	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> +	ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
>  	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
>  
>  	return ams->ms.sym ? 0 : -1;
> @@ -253,7 +255,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
>  		printed += fprintf(fp, "Map:");
>  		printed += map__fprintf(pos->map, fp);
>  		if (verbose > 2) {
> -			printed += dso__fprintf(pos->map->dso, fp);
> +			printed += dso__fprintf(map__dso(pos->map), fp);
>  			printed += fprintf(fp, "--\n");
>  		}
>  	}
> @@ -282,9 +284,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  	while (next) {
>  		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
>  
> -		if (pos->map->end > map->start) {
> +		if (map__end(pos->map) > map__start(map)) {
>  			first = next;
> -			if (pos->map->start <= map->start)
> +			if (map__start(pos->map) <= map__start(map))
>  				break;
>  			next = next->rb_left;
>  		} else
> @@ -300,14 +302,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  		 * Stop if current map starts after map->end.
>  		 * Maps are ordered by start: next will not overlap for sure.
>  		 */
> -		if (pos->map->start >= map->end)
> +		if (map__start(pos->map) >= map__end(map))
>  			break;
>  
>  		if (verbose >= 2) {
>  
>  			if (use_browser) {
>  				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> -					   map->dso->name);
> +					   map__dso(map)->name);
>  			} else {
>  				fputs("overlapping maps:\n", fp);
>  				map__fprintf(map, fp);
> @@ -320,7 +322,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  		 * Now check if we need to create new maps for areas not
>  		 * overlapped by the new map:
>  		 */
> -		if (map->start > pos->map->start) {
> +		if (map__start(map) > map__start(pos->map)) {
>  			struct map *before = map__clone(pos->map);
>  
>  			if (before == NULL) {
> @@ -328,17 +330,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  				goto put_map;
>  			}
>  
> -			before->end = map->start;
> -			err = __maps__insert(maps, before);
> -			if (err)
> +			before->end = map__start(map);
> +			if (!__maps__insert(maps, before)) {
> +				map__put(before);
> +				err = -ENOMEM;
>  				goto put_map;
> +			}
>  
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(before, fp);
>  			map__put(before);
>  		}
>  
> -		if (map->end < pos->map->end) {
> +		if (map__end(map) < map__end(pos->map)) {
>  			struct map *after = map__clone(pos->map);
>  
>  			if (after == NULL) {
> @@ -346,14 +350,15 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  				goto put_map;
>  			}
>  
> -			after->start = map->end;
> -			after->pgoff += map->end - pos->map->start;
> -			assert(pos->map->map_ip(pos->map, map->end) ==
> -				after->map_ip(after, map->end));
> -			err = __maps__insert(maps, after);
> -			if (err)
> +			after->start = map__end(map);
> +			after->pgoff += map__end(map) - map__start(pos->map);
> +			assert(map__map_ip(pos->map, map__end(map)) ==
> +				map__map_ip(after, map__end(map)));
> +			if (!__maps__insert(maps, after)) {
> +				map__put(after);
> +				err = -ENOMEM;
>  				goto put_map;
> -
> +			}
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(after, fp);
>  			map__put(after);
> @@ -377,7 +382,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  int maps__clone(struct thread *thread, struct maps *parent)
>  {
>  	struct maps *maps = thread->maps;
> -	int err;
> +	int err = 0;
>  	struct map_rb_node *rb_node;
>  
>  	down_read(maps__lock(parent));
> @@ -391,17 +396,13 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  		}
>  
>  		err = unwind__prepare_access(maps, new, NULL);
> -		if (err)
> -			goto out_unlock;
> +		if (!err)
> +			err = maps__insert(maps, new);
>  
> -		err = maps__insert(maps, new);
> +		map__put(new);
>  		if (err)
>  			goto out_unlock;
> -
> -		map__put(new);
>  	}
> -
> -	err = 0;
>  out_unlock:
>  	up_read(maps__lock(parent));
>  	return err;
> @@ -428,9 +429,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
>  	p = maps__entries(maps)->rb_node;
>  	while (p != NULL) {
>  		m = rb_entry(p, struct map_rb_node, rb_node);
> -		if (ip < m->map->start)
> +		if (ip < map__start(m->map))
>  			p = p->rb_left;
> -		else if (ip >= m->map->end)
> +		else if (ip >= map__end(m->map))
>  			p = p->rb_right;
>  		else
>  			goto out;
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index f9fbf611f2bf..1a93dca50a4c 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -134,15 +134,15 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
>  	/* ref_reloc_sym is just a label. Need a special fix*/
>  	reloc_sym = kernel_get_ref_reloc_sym(&map);
>  	if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
> -		*addr = (!map->reloc || reloc) ? reloc_sym->addr :
> +		*addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
>  			reloc_sym->unrelocated_addr;
>  	else {
>  		sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
>  		if (!sym)
>  			return -ENOENT;
> -		*addr = map->unmap_ip(map, sym->start) -
> -			((reloc) ? 0 : map->reloc) -
> -			((reladdr) ? map->start : 0);
> +		*addr = map__unmap_ip(map, sym->start) -
> +			((reloc) ? 0 : map__reloc(map)) -
> +			((reladdr) ? map__start(map) : 0);
>  	}
>  	return 0;
>  }
> @@ -164,8 +164,8 @@ static struct map *kernel_get_module_map(const char *module)
>  
>  	maps__for_each_entry(maps, pos) {
>  		/* short_name is "[module]" */
> -		const char *short_name = pos->map->dso->short_name;
> -		u16 short_name_len =  pos->map->dso->short_name_len;
> +		const char *short_name = map__dso(pos->map)->short_name;
> +		u16 short_name_len =  map__dso(pos->map)->short_name_len;
>  
>  		if (strncmp(short_name + 1, module,
>  			    short_name_len - 2) == 0 &&
> @@ -183,11 +183,11 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
>  		struct map *map;
>  
>  		map = dso__new_map(target);
> -		if (map && map->dso) {
> -			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
> -			nsinfo__put(map->dso->nsinfo);
> -			map->dso->nsinfo = nsinfo__get(nsi);
> -			pthread_mutex_unlock(&map->dso->lock);
> +		if (map && map__dso(map)) {
> +			BUG_ON(pthread_mutex_lock(&map__dso(map)->lock) != 0);
> +			nsinfo__put(map__dso(map)->nsinfo);
> +			map__dso(map)->nsinfo = nsinfo__get(nsi);
> +			pthread_mutex_unlock(&map__dso(map)->lock);
>  		}
>  		return map;
>  	} else {
> @@ -253,7 +253,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
>  
>  	map = kernel_get_module_map(NULL);
>  	if (map) {
> -		ret = address <= map->start || map->end < address;
> +		ret = address <= map__start(map) || map__end(map) < address;
>  		if (ret)
>  			pr_warning("%s is out of .text, skip it.\n", symbol);
>  		map__put(map);
> @@ -340,7 +340,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
>  		snprintf(module_name, sizeof(module_name), "[%s]", module);
>  		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
>  		if (map) {
> -			dso = map->dso;
> +			dso = map__dso(map);
>  			goto found;
>  		}
>  		pr_debug("Failed to find module %s.\n", module);
> @@ -348,7 +348,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
>  	}
>  
>  	map = machine__kernel_map(host_machine);
> -	dso = map->dso;
> +	dso = map__dso(map);
>  	if (!dso->has_build_id)
>  		dso__read_running_kernel_build_id(dso, host_machine);
>  
> @@ -396,7 +396,8 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
>  					   "Consider identifying the final function used at run time and set the probe directly on that.\n",
>  					   pp->function);
>  		} else
> -			address = map->unmap_ip(map, sym->start) - map->reloc;
> +			address = map__unmap_ip(map, sym->start) -
> +				  map__reloc(map);
>  		break;
>  	}
>  	if (!address) {
> @@ -862,8 +863,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
>  			free(tevs[i].point.symbol);
>  		tevs[i].point.symbol = tmp;
>  		tevs[i].point.offset = tevs[i].point.address -
> -			(map->reloc ? reloc_sym->unrelocated_addr :
> -				      reloc_sym->addr);
> +			(map__reloc(map) ? reloc_sym->unrelocated_addr : reloc_sym->addr);
>  	}
>  	return skipped;
>  }
> @@ -2243,7 +2243,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
>  		goto out;
>  
>  	pp->retprobe = tp->retprobe;
> -	pp->offset = addr - map->unmap_ip(map, sym->start);
> +	pp->offset = addr - map__unmap_ip(map, sym->start);
>  	pp->function = strdup(sym->name);
>  	ret = pp->function ? 0 : -ENOMEM;
>  
> @@ -3117,7 +3117,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
>  			goto err_out;
>  		}
>  		/* Add one probe point */
> -		tp->address = map->unmap_ip(map, sym->start) + pp->offset;
> +		tp->address = map__unmap_ip(map, sym->start) + pp->offset;
>  
>  		/* Check the kprobe (not in module) is within .text  */
>  		if (!pev->uprobes && !pev->target &&
> @@ -3759,13 +3759,13 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
>  			       (target) ? : "kernel");
>  		goto end;
>  	}
> -	if (!dso__sorted_by_name(map->dso))
> -		dso__sort_by_name(map->dso);
> +	if (!dso__sorted_by_name(map__dso(map)))
> +		dso__sort_by_name(map__dso(map));
>  
>  	/* Show all (filtered) symbols */
>  	setup_pager();
>  
> -	for (nd = rb_first_cached(&map->dso->symbol_names); nd;
> +	for (nd = rb_first_cached(&map__dso(map)->symbol_names); nd;
>  	     nd = rb_next(nd)) {
>  		struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
>  
> diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
> index a5d945415bbc..1282fb9b45e1 100644
> --- a/tools/perf/util/scripting-engines/trace-event-perl.c
> +++ b/tools/perf/util/scripting-engines/trace-event-perl.c
> @@ -315,11 +315,12 @@ static SV *perl_process_callchain(struct perf_sample *sample,
>  		if (node->ms.map) {
>  			struct map *map = node->ms.map;
>  			const char *dsoname = "[unknown]";
> -			if (map && map->dso) {
> -				if (symbol_conf.show_kernel_path && map->dso->long_name)
> -					dsoname = map->dso->long_name;
> +			if (map && map__dso(map)) {
> +				if (symbol_conf.show_kernel_path &&
> +				    map__dso(map)->long_name)
> +					dsoname = map__dso(map)->long_name;
>  				else
> -					dsoname = map->dso->name;
> +					dsoname = map__dso(map)->name;
>  			}
>  			if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
>  				hv_undef(elem);
> diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> index 0290dc3a6258..559b2ac5cac3 100644
> --- a/tools/perf/util/scripting-engines/trace-event-python.c
> +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> @@ -382,11 +382,11 @@ static const char *get_dsoname(struct map *map)
>  {
>  	const char *dsoname = "[unknown]";
>  
> -	if (map && map->dso) {
> -		if (symbol_conf.show_kernel_path && map->dso->long_name)
> -			dsoname = map->dso->long_name;
> +	if (map && map__dso(map)) {
> +		if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> +			dsoname = map__dso(map)->long_name;
>  		else
> -			dsoname = map->dso->name;
> +			dsoname = map__dso(map)->name;
>  	}
>  
>  	return dsoname;
> @@ -527,7 +527,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
>  	if (al->addr < sym->end)
>  		offset = al->addr - sym->start;
>  	else
> -		offset = al->addr - al->map->start - sym->start;
> +		offset = al->addr - map__start(al->map) - sym->start;
>  
>  	return offset;
>  }
> @@ -741,7 +741,7 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
>  {
>  	if (al->map) {
>  		pydict_set_item_string_decref(dict, dso_field,
> -			_PyUnicode_FromString(al->map->dso->name));
> +			_PyUnicode_FromString(map__dso(al->map)->name));
>  	}
>  	if (al->sym) {
>  		pydict_set_item_string_decref(dict, sym_field,
> diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> index 25686d67ee6f..6d19bbcd30df 100644
> --- a/tools/perf/util/sort.c
> +++ b/tools/perf/util/sort.c
> @@ -173,8 +173,8 @@ struct sort_entry sort_comm = {
>  
>  static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
>  {
> -	struct dso *dso_l = map_l ? map_l->dso : NULL;
> -	struct dso *dso_r = map_r ? map_r->dso : NULL;
> +	struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
> +	struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
>  	const char *dso_name_l, *dso_name_r;
>  
>  	if (!dso_l || !dso_r)
> @@ -200,9 +200,9 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
>  static int _hist_entry__dso_snprintf(struct map *map, char *bf,
>  				     size_t size, unsigned int width)
>  {
> -	if (map && map->dso) {
> -		const char *dso_name = verbose > 0 ? map->dso->long_name :
> -			map->dso->short_name;
> +	if (map && map__dso(map)) {
> +		const char *dso_name = verbose > 0 ? map__dso(map)->long_name :
> +			map__dso(map)->short_name;
>  		return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
>  	}
>  
> @@ -222,7 +222,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
>  	if (type != HIST_FILTER__DSO)
>  		return -1;
>  
> -	return dso && (!he->ms.map || he->ms.map->dso != dso);
> +	return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
>  }
>  
>  struct sort_entry sort_dso = {
> @@ -302,12 +302,12 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
>  	size_t ret = 0;
>  
>  	if (verbose > 0) {
> -		char o = map ? dso__symtab_origin(map->dso) : '!';
> +		char o = map ? dso__symtab_origin(map__dso(map)) : '!';
>  		u64 rip = ip;
>  
> -		if (map && map->dso && map->dso->kernel
> -		    && map->dso->adjust_symbols)
> -			rip = map->unmap_ip(map, ip);
> +		if (map && map__dso(map) && map__dso(map)->kernel
> +		    && map__dso(map)->adjust_symbols)
> +			rip = map__unmap_ip(map, ip);
>  
>  		ret += repsep_snprintf(bf, size, "%-#*llx %c ",
>  				       BITS_PER_LONG / 4 + 2, rip, o);
> @@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
>  		if (sym->type == STT_OBJECT) {
>  			ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
>  			ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
> -					ip - map->unmap_ip(map, sym->start));
> +					ip - map__unmap_ip(map, sym->start));
>  		} else {
>  			ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
>  					       width - ret,
> @@ -517,7 +517,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
>  	if (!map)
>  		return no_srcfile;
>  
> -	sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
> +	sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
>  			 e->ms.sym, false, true, true, e->ip);
>  	if (!strcmp(sf, SRCLINE_UNKNOWN))
>  		return no_srcfile;
> @@ -838,7 +838,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
>  		return -1;
>  
>  	return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
> -		       he->branch_info->from.ms.map->dso != dso);
> +		map__dso(he->branch_info->from.ms.map) != dso);
>  }
>  
>  static int64_t
> @@ -870,7 +870,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
>  		return -1;
>  
>  	return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
> -		       he->branch_info->to.ms.map->dso != dso);
> +		map__dso(he->branch_info->to.ms.map) != dso);
>  }
>  
>  static int64_t
> @@ -1259,7 +1259,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
>  	if (!l_map) return -1;
>  	if (!r_map) return 1;
>  
> -	rc = dso__cmp_id(l_map->dso, r_map->dso);
> +	rc = dso__cmp_id(map__dso(l_map), map__dso(r_map));
>  	if (rc)
>  		return rc;
>  	/*
> @@ -1271,9 +1271,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
>  	 */
>  
>  	if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
> -	    (!(l_map->flags & MAP_SHARED)) &&
> -	    !l_map->dso->id.maj && !l_map->dso->id.min &&
> -	    !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
> +	    (!(map__flags(l_map) & MAP_SHARED)) &&
> +	    !map__dso(l_map)->id.maj && !map__dso(l_map)->id.min &&
> +	    !map__dso(l_map)->id.ino && !map__dso(l_map)->id.ino_generation) {
>  		/* userspace anonymous */
>  
>  		if (left->thread->pid_ > right->thread->pid_) return -1;
> @@ -1307,10 +1307,10 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
>  
>  		/* print [s] for shared data mmaps */
>  		if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
> -		     map && !(map->prot & PROT_EXEC) &&
> -		    (map->flags & MAP_SHARED) &&
> -		    (map->dso->id.maj || map->dso->id.min ||
> -		     map->dso->id.ino || map->dso->id.ino_generation))
> +		    map && !(map__prot(map) & PROT_EXEC) &&
> +		    (map__flags(map) & MAP_SHARED) &&
> +		    (map__dso(map)->id.maj || map__dso(map)->id.min ||
> +		     map__dso(map)->id.ino || map__dso(map)->id.ino_generation))
>  			level = 's';
>  		else if (!map)
>  			level = 'X';
> @@ -1806,7 +1806,7 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
>  static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
>  					  size_t bf_size, unsigned int width)
>  {
> -	if (map && map->dso)
> +	if (map && map__dso(map))
>  		return repsep_snprintf(bf, bf_size, "%*d", width,
>  				       map__size(map));
>  
> diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> index 3ca9a0968345..056405d3d655 100644
> --- a/tools/perf/util/symbol-elf.c
> +++ b/tools/perf/util/symbol-elf.c
> @@ -970,7 +970,7 @@ void __weak arch__sym_update(struct symbol *s __maybe_unused,
>  static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  				      GElf_Sym *sym, GElf_Shdr *shdr,
>  				      struct maps *kmaps, struct kmap *kmap,
> -				      struct dso **curr_dsop, struct map **curr_mapp,
> +				      struct dso **curr_dsop,
>  				      const char *section_name,
>  				      bool adjust_kernel_syms, bool kmodule, bool *remap_kernel)
>  {
> @@ -994,18 +994,18 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		if (*remap_kernel && dso->kernel && !kmodule) {
>  			*remap_kernel = false;
>  			map->start = shdr->sh_addr + ref_reloc(kmap);
> -			map->end = map->start + shdr->sh_size;
> +			map->end = map__start(map) + shdr->sh_size;
>  			map->pgoff = shdr->sh_offset;
> -			map->map_ip = map__map_ip;
> -			map->unmap_ip = map__unmap_ip;
> +			map->map_ip = map__dso_map_ip;
> +			map->unmap_ip = map__dso_unmap_ip;
>  			/* Ensure maps are correctly ordered */
>  			if (kmaps) {
>  				int err;
> +				struct map *updated = map__get(map);
>  
> -				map__get(map);
>  				maps__remove(kmaps, map);
> -				err = maps__insert(kmaps, map);
> -				map__put(map);
> +				err = maps__insert(kmaps, updated);
> +				map__put(updated);
>  				if (err)
>  					return err;
>  			}
> @@ -1021,7 +1021,6 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  			map->pgoff = shdr->sh_offset;
>  		}
>  
> -		*curr_mapp = map;
>  		*curr_dsop = dso;
>  		return 0;
>  	}
> @@ -1036,7 +1035,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		u64 start = sym->st_value;
>  
>  		if (kmodule)
> -			start += map->start + shdr->sh_offset;
> +			start += map__start(map) + shdr->sh_offset;
>  
>  		curr_dso = dso__new(dso_name);
>  		if (curr_dso == NULL)
> @@ -1054,10 +1053,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  
>  		if (adjust_kernel_syms) {
>  			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
> -			curr_map->end	 = curr_map->start + shdr->sh_size;
> -			curr_map->pgoff	 = shdr->sh_offset;
> +			curr_map->end	= map__start(curr_map) + shdr->sh_size;
> +			curr_map->pgoff	= shdr->sh_offset;
>  		} else {
> -			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> +			curr_map->map_ip = map__identity_ip;
> +			curr_map->unmap_ip = map__identity_ip;
>  		}
>  		curr_dso->symtab_type = dso->symtab_type;
>  		if (maps__insert(kmaps, curr_map))
> @@ -1068,13 +1068,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  		 * *curr_map->dso.
>  		 */
>  		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
> -		/* kmaps already got it */
> -		map__put(curr_map);
>  		dso__set_loaded(curr_dso);
> -		*curr_mapp = curr_map;
>  		*curr_dsop = curr_dso;
> +		map__put(curr_map);
>  	} else
> -		*curr_dsop = curr_map->dso;
> +		*curr_dsop = map__dso(curr_map);
>  
>  	return 0;
>  }
> @@ -1085,7 +1083,6 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  {
>  	struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
>  	struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
> -	struct map *curr_map = map;
>  	struct dso *curr_dso = dso;
>  	Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
>  	uint32_t nr_syms;
> @@ -1175,7 +1172,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  	 * attempted to prelink vdso to its virtual address.
>  	 */
>  	if (dso__is_vdso(dso))
> -		map->reloc = map->start - dso->text_offset;
> +		map->reloc = map__start(map) - dso->text_offset;
>  
>  	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
>  	/*
> @@ -1262,8 +1259,10 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
>  			--sym.st_value;
>  
>  		if (dso->kernel) {
> -			if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
> -						       section_name, adjust_kernel_syms, kmodule, &remap_kernel))
> +			if (dso__process_kernel_symbol(dso, map, &sym, &shdr,
> +						       kmaps, kmap, &curr_dso,
> +						       section_name, adjust_kernel_syms,
> +						       kmodule, &remap_kernel))
>  				goto out_elf_end;
>  		} else if ((used_opd && runtime_ss->adjust_symbols) ||
>  			   (!used_opd && syms_ss->adjust_symbols)) {
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 9b51e669a722..6289b3028b91 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -252,8 +252,8 @@ void maps__fixup_end(struct maps *maps)
>  	down_write(maps__lock(maps));
>  
>  	maps__for_each_entry(maps, curr) {
> -		if (prev != NULL && !prev->map->end)
> -			prev->map->end = curr->map->start;
> +		if (prev != NULL && !map__end(prev->map))
> +			prev->map->end = map__start(curr->map);
>  
>  		prev = curr;
>  	}
> @@ -262,7 +262,7 @@ void maps__fixup_end(struct maps *maps)
>  	 * We still haven't the actual symbols, so guess the
>  	 * last map final address.
>  	 */
> -	if (curr && !curr->map->end)
> +	if (curr && !map__end(curr->map))
>  		curr->map->end = ~0ULL;
>  
>  	up_write(maps__lock(maps));
> @@ -778,12 +778,12 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
>  			continue;
>  		}
>  
> -		pos->start -= curr_map->start - curr_map->pgoff;
> -		if (pos->end > curr_map->end)
> -			pos->end = curr_map->end;
> +		pos->start -= map__start(curr_map) - map__pgoff(curr_map);
> +		if (pos->end > map__end(curr_map))
> +			pos->end = map__end(curr_map);
>  		if (pos->end)
> -			pos->end -= curr_map->start - curr_map->pgoff;
> -		symbols__insert(&curr_map->dso->symbols, pos);
> +			pos->end -= map__start(curr_map) - map__pgoff(curr_map);
> +		symbols__insert(&map__dso(curr_map)->symbols, pos);
>  		++count;
>  	}
>  
> @@ -830,7 +830,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  
>  			*module++ = '\0';
>  
> -			if (strcmp(curr_map->dso->short_name, module)) {
> +			if (strcmp(map__dso(curr_map)->short_name, module)) {
>  				if (curr_map != initial_map &&
>  				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
>  				    machine__is_default_guest(machine)) {
> @@ -841,7 +841,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  					 * symbols are in its kmap. Mark it as
>  					 * loaded.
>  					 */
> -					dso__set_loaded(curr_map->dso);
> +					dso__set_loaded(map__dso(curr_map));
>  				}
>  
>  				curr_map = maps__find_by_name(kmaps, module);
> @@ -854,7 +854,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  					goto discard_symbol;
>  				}
>  
> -				if (curr_map->dso->loaded &&
> +				if (map__dso(curr_map)->loaded &&
>  				    !machine__is_default_guest(machine))
>  					goto discard_symbol;
>  			}
> @@ -862,8 +862,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  			 * So that we look just like we get from .ko files,
>  			 * i.e. not prelinked, relative to initial_map->start.
>  			 */
> -			pos->start = curr_map->map_ip(curr_map, pos->start);
> -			pos->end   = curr_map->map_ip(curr_map, pos->end);
> +			pos->start = map__map_ip(curr_map, pos->start);
> +			pos->end   = map__map_ip(curr_map, pos->end);
>  		} else if (x86_64 && is_entry_trampoline(pos->name)) {
>  			/*
>  			 * These symbols are not needed anymore since the
> @@ -910,7 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  				return -1;
>  			}
>  
> -			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> +			curr_map->map_ip = map__identity_ip;
> +			curr_map->unmap_ip = map__identity_ip;
>  			if (maps__insert(kmaps, curr_map)) {
>  				dso__put(ndso);
>  				return -1;
> @@ -924,7 +925,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  add_symbol:
>  		if (curr_map != initial_map) {
>  			rb_erase_cached(&pos->rb_node, root);
> -			symbols__insert(&curr_map->dso->symbols, pos);
> +			symbols__insert(&map__dso(curr_map)->symbols, pos);
>  			++moved;
>  		} else
>  			++count;
> @@ -938,7 +939,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  	if (curr_map != initial_map &&
>  	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
>  	    machine__is_default_guest(maps__machine(kmaps))) {
> -		dso__set_loaded(curr_map->dso);
> +		dso__set_loaded(map__dso(curr_map));
>  	}
>  
>  	return count + moved;
> @@ -1118,8 +1119,8 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
>  		}
>  
>  		/* Module must be in memory at the same address */
> -		mi = find_module(old_map->dso->short_name, &modules);
> -		if (!mi || mi->start != old_map->start) {
> +		mi = find_module(map__dso(old_map)->short_name, &modules);
> +		if (!mi || mi->start != map__start(old_map)) {
>  			err = -EINVAL;
>  			goto out;
>  		}
> @@ -1214,7 +1215,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
>  		return -ENOMEM;
>  	}
>  
> -	list_node->map->end = list_node->map->start + len;
> +	list_node->map->end = map__start(list_node->map) + len;
>  	list_node->map->pgoff = pgoff;
>  
>  	list_add(&list_node->node, &md->maps);
> @@ -1236,21 +1237,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  		struct map *old_map = rb_node->map;
>  
>  		/* no overload with this one */
> -		if (new_map->end < old_map->start ||
> -		    new_map->start >= old_map->end)
> +		if (map__end(new_map) < map__start(old_map) ||
> +		    map__start(new_map) >= map__end(old_map))
>  			continue;
>  
> -		if (new_map->start < old_map->start) {
> +		if (map__start(new_map) < map__start(old_map)) {
>  			/*
>  			 * |new......
>  			 *       |old....
>  			 */
> -			if (new_map->end < old_map->end) {
> +			if (map__end(new_map) < map__end(old_map)) {
>  				/*
>  				 * |new......|     -> |new..|
>  				 *       |old....| ->       |old....|
>  				 */
> -				new_map->end = old_map->start;
> +				new_map->end = map__start(old_map);
>  			} else {
>  				/*
>  				 * |new.............| -> |new..|       |new..|
> @@ -1271,17 +1272,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  					goto out;
>  				}
>  
> -				m->map->end = old_map->start;
> +				m->map->end = map__start(old_map);
>  				list_add_tail(&m->node, &merged);
> -				new_map->pgoff += old_map->end - new_map->start;
> -				new_map->start = old_map->end;
> +				new_map->pgoff +=
> +					map__end(old_map) - map__start(new_map);
> +				new_map->start = map__end(old_map);
>  			}
>  		} else {
>  			/*
>  			 *      |new......
>  			 * |old....
>  			 */
> -			if (new_map->end < old_map->end) {
> +			if (map__end(new_map) < map__end(old_map)) {
>  				/*
>  				 *      |new..|   -> x
>  				 * |old.........| -> |old.........|
> @@ -1294,8 +1296,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  				 *      |new......| ->         |new...|
>  				 * |old....|        -> |old....|
>  				 */
> -				new_map->pgoff += old_map->end - new_map->start;
> -				new_map->start = old_map->end;
> +				new_map->pgoff +=
> +					map__end(old_map) - map__start(new_map);
> +				new_map->start = map__end(old_map);
>  			}
>  		}
>  	}
> @@ -1361,7 +1364,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  	}
>  
>  	/* Read new maps into temporary lists */
> -	err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
> +	err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
>  			      &is_64_bit);
>  	if (err)
>  		goto out_err;
> @@ -1391,7 +1394,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  		struct map_list_node *new_node;
>  
>  		list_for_each_entry(new_node, &md.maps, node) {
> -			if (stext >= new_node->map->start && stext < new_node->map->end) {
> +			if (stext >= map__start(new_node->map) &&
> +			    stext < map__end(new_node->map)) {
>  				replacement_map = new_node->map;
>  				break;
>  			}
> @@ -1408,16 +1412,18 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  		new_node = list_entry(md.maps.next, struct map_list_node, node);
>  		list_del_init(&new_node->node);
>  		if (new_node->map == replacement_map) {
> -			map->start	= new_node->map->start;
> -			map->end	= new_node->map->end;
> -			map->pgoff	= new_node->map->pgoff;
> -			map->map_ip	= new_node->map->map_ip;
> -			map->unmap_ip	= new_node->map->unmap_ip;
> +			struct  map *updated;
> +
> +			map->start = map__start(new_node->map);
> +			map->end   = map__end(new_node->map);
> +			map->pgoff = map__pgoff(new_node->map);
> +			map->map_ip = new_node->map->map_ip;
> +			map->unmap_ip = new_node->map->unmap_ip;
>  			/* Ensure maps are correctly ordered */
> -			map__get(map);
> +			updated = map__get(map);
>  			maps__remove(kmaps, map);
> -			err = maps__insert(kmaps, map);
> -			map__put(map);
> +			err = maps__insert(kmaps, updated);
> +			map__put(updated);
>  			map__put(new_node->map);
>  			if (err)
>  				goto out_err;
> @@ -1460,7 +1466,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  
>  	close(fd);
>  
> -	if (map->prot & PROT_EXEC)
> +	if (map__prot(map) & PROT_EXEC)
>  		pr_debug("Using %s for kernel object code\n", kcore_filename);
>  	else
>  		pr_debug("Using %s for kernel data\n", kcore_filename);
> @@ -1995,13 +2001,13 @@ int dso__load(struct dso *dso, struct map *map)
>  static int map__strcmp(const void *a, const void *b)
>  {
>  	const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
> -	return strcmp(ma->dso->short_name, mb->dso->short_name);
> +	return strcmp(map__dso(ma)->short_name, map__dso(mb)->short_name);
>  }
>  
>  static int map__strcmp_name(const void *name, const void *b)
>  {
>  	const struct map *map = *(const struct map **)b;
> -	return strcmp(name, map->dso->short_name);
> +	return strcmp(name, map__dso(map)->short_name);
>  }
>  
>  void __maps__sort_by_name(struct maps *maps)
> @@ -2052,7 +2058,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	down_read(maps__lock(maps));
>  
>  	if (maps->last_search_by_name &&
> -	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
> +	    strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
>  		map = maps->last_search_by_name;
>  		goto out_unlock;
>  	}
> @@ -2068,7 +2074,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  	/* Fallback to traversing the rbtree... */
>  	maps__for_each_entry(maps, rb_node) {
>  		map = rb_node->map;
> -		if (strcmp(map->dso->short_name, name) == 0) {
> +		if (strcmp(map__dso(map)->short_name, name) == 0) {
>  			maps->last_search_by_name = map;
>  			goto out_unlock;
>  		}
> diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
> index 2664fb65e47a..d9e5ad040b6a 100644
> --- a/tools/perf/util/symbol_fprintf.c
> +++ b/tools/perf/util/symbol_fprintf.c
> @@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
>  			if (al->addr < sym->end)
>  				offset = al->addr - sym->start;
>  			else
> -				offset = al->addr - al->map->start - sym->start;
> +				offset = al->addr - map__start(al->map) - sym->start;
>  			length += fprintf(fp, "+0x%lx", offset);
>  		}
>  		return length;
> diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> index ed2d55d224aa..437fd57c2084 100644
> --- a/tools/perf/util/synthetic-events.c
> +++ b/tools/perf/util/synthetic-events.c
> @@ -668,33 +668,33 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
>  			continue;
>  
>  		if (symbol_conf.buildid_mmap2) {
> -			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
>  			event->mmap2.header.type = PERF_RECORD_MMAP2;
>  			event->mmap2.header.size = (sizeof(event->mmap2) -
>  						(sizeof(event->mmap2.filename) - size));
>  			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
>  			event->mmap2.header.size += machine->id_hdr_size;
> -			event->mmap2.start = map->start;
> -			event->mmap2.len   = map->end - map->start;
> +			event->mmap2.start = map__start(map);
> +			event->mmap2.len   = map__end(map) - map__start(map);
>  			event->mmap2.pid   = machine->pid;
>  
> -			memcpy(event->mmap2.filename, map->dso->long_name,
> -			       map->dso->long_name_len + 1);
> +			memcpy(event->mmap2.filename, map__dso(map)->long_name,
> +			       map__dso(map)->long_name_len + 1);
>  
>  			perf_record_mmap2__read_build_id(&event->mmap2, false);
>  		} else {
> -			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
>  			event->mmap.header.type = PERF_RECORD_MMAP;
>  			event->mmap.header.size = (sizeof(event->mmap) -
>  						(sizeof(event->mmap.filename) - size));
>  			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
>  			event->mmap.header.size += machine->id_hdr_size;
> -			event->mmap.start = map->start;
> -			event->mmap.len   = map->end - map->start;
> +			event->mmap.start = map__start(map);
> +			event->mmap.len   = map__end(map) - map__start(map);
>  			event->mmap.pid   = machine->pid;
>  
> -			memcpy(event->mmap.filename, map->dso->long_name,
> -			       map->dso->long_name_len + 1);
> +			memcpy(event->mmap.filename, map__dso(map)->long_name,
> +			       map__dso(map)->long_name_len + 1);
>  		}
>  
>  		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> @@ -1112,8 +1112,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
>  		event->mmap2.header.size = (sizeof(event->mmap2) -
>  				(sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
>  		event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
> -		event->mmap2.start = map->start;
> -		event->mmap2.len   = map->end - event->mmap.start;
> +		event->mmap2.start = map__start(map);
> +		event->mmap2.len   = map__end(map) - event->mmap.start;
>  		event->mmap2.pid   = machine->pid;
>  
>  		perf_record_mmap2__read_build_id(&event->mmap2, true);
> @@ -1125,8 +1125,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
>  		event->mmap.header.size = (sizeof(event->mmap) -
>  				(sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
>  		event->mmap.pgoff = kmap->ref_reloc_sym->addr;
> -		event->mmap.start = map->start;
> -		event->mmap.len   = map->end - event->mmap.start;
> +		event->mmap.start = map__start(map);
> +		event->mmap.len   = map__end(map) - event->mmap.start;
>  		event->mmap.pid   = machine->pid;
>  	}
>  
> diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> index c2256777b813..6fbcc115cc6d 100644
> --- a/tools/perf/util/thread.c
> +++ b/tools/perf/util/thread.c
> @@ -434,23 +434,23 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
>  int thread__memcpy(struct thread *thread, struct machine *machine,
>  		   void *buf, u64 ip, int len, bool *is64bit)
>  {
> -       u8 cpumode = PERF_RECORD_MISC_USER;
> -       struct addr_location al;
> -       long offset;
> +	u8 cpumode = PERF_RECORD_MISC_USER;
> +	struct addr_location al;
> +	long offset;
>  
> -       if (machine__kernel_ip(machine, ip))
> -               cpumode = PERF_RECORD_MISC_KERNEL;
> +	if (machine__kernel_ip(machine, ip))
> +		cpumode = PERF_RECORD_MISC_KERNEL;
>  
> -       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
> -	   al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
> -	   map__load(al.map) < 0)
> -               return -1;
> +	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
> +		map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR ||
> +		map__load(al.map) < 0)
> +		return -1;
>  
> -       offset = al.map->map_ip(al.map, ip);
> -       if (is64bit)
> -               *is64bit = al.map->dso->is_64_bit;
> +	offset = map__map_ip(al.map, ip);
> +	if (is64bit)
> +		*is64bit = map__dso(al.map)->is_64_bit;
>  
> -       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
> +	return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
>  }
>  
>  void thread__free_stitch_list(struct thread *thread)
> diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
> index 7e6c59811292..841ac84a93ab 100644
> --- a/tools/perf/util/unwind-libunwind-local.c
> +++ b/tools/perf/util/unwind-libunwind-local.c
> @@ -381,20 +381,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
>  	int ret = -EINVAL;
>  
>  	map = find_map(ip, ui);
> -	if (!map || !map->dso)
> +	if (!map || !map__dso(map))
>  		return -EINVAL;
>  
> -	pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
> +	pr_debug("unwind: %s dso %s\n", __func__, map__dso(map)->name);
>  
>  	/* Check the .eh_frame section for unwinding info */
> -	if (!read_unwind_spec_eh_frame(map->dso, ui->machine,
> +	if (!read_unwind_spec_eh_frame(map__dso(map), ui->machine,
>  				       &table_data, &segbase, &fde_count)) {
>  		memset(&di, 0, sizeof(di));
>  		di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
> -		di.start_ip = map->start;
> -		di.end_ip   = map->end;
> -		di.u.rti.segbase    = map->start + segbase - map->pgoff;
> -		di.u.rti.table_data = map->start + table_data - map->pgoff;
> +		di.start_ip = map__start(map);
> +		di.end_ip   = map__end(map);
> +		di.u.rti.segbase    = map__start(map) + segbase - map__pgoff(map);
> +		di.u.rti.table_data = map__start(map) + table_data - map__pgoff(map);
>  		di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
>  				      / sizeof(unw_word_t);
>  		ret = dwarf_search_unwind_table(as, ip, &di, pi,
> @@ -404,20 +404,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
>  #ifndef NO_LIBUNWIND_DEBUG_FRAME
>  	/* Check the .debug_frame section for unwinding info */
>  	if (ret < 0 &&
> -	    !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
> -		int fd = dso__data_get_fd(map->dso, ui->machine);
> -		int is_exec = elf_is_exec(fd, map->dso->name);
> -		unw_word_t base = is_exec ? 0 : map->start;
> +	    !read_unwind_spec_debug_frame(map__dso(map), ui->machine, &segbase)) {
> +		int fd = dso__data_get_fd(map__dso(map), ui->machine);
> +		int is_exec = elf_is_exec(fd, map__dso(map)->name);
> +		unw_word_t base = is_exec ? 0 : map__start(map);
>  		const char *symfile;
>  
>  		if (fd >= 0)
> -			dso__data_put_fd(map->dso);
> +			dso__data_put_fd(map__dso(map));
>  
> -		symfile = map->dso->symsrc_filename ?: map->dso->name;
> +		symfile = map__dso(map)->symsrc_filename ?: map__dso(map)->name;
>  
>  		memset(&di, 0, sizeof(di));
>  		if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
> -					   map->start, map->end))
> +					   map__start(map), map__end(map)))
>  			return dwarf_search_unwind_table(as, ip, &di, pi,
>  							 need_unwind_info, arg);
>  	}
> @@ -473,10 +473,10 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
>  		return -1;
>  	}
>  
> -	if (!map->dso)
> +	if (!map__dso(map))
>  		return -1;
>  
> -	size = dso__data_read_addr(map->dso, map, ui->machine,
> +	size = dso__data_read_addr(map__dso(map), map, ui->machine,
>  				   addr, (u8 *) data, sizeof(*data));
>  
>  	return !(size == sizeof(*data));
> @@ -583,7 +583,7 @@ static int entry(u64 ip, struct thread *thread,
>  	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
>  		 al.sym ? al.sym->name : "''",
>  		 ip,
> -		 al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
> +		 al.map ? map__map_ip(al.map, ip) : (u64) 0);
>  
>  	return cb(&e, arg);
>  }
> diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
> index 7b797ffadd19..cece1ee89031 100644
> --- a/tools/perf/util/unwind-libunwind.c
> +++ b/tools/perf/util/unwind-libunwind.c
> @@ -30,7 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  
>  	if (maps__addr_space(maps)) {
>  		pr_debug("unwind: thread map already set, dso=%s\n",
> -			 map->dso->name);
> +			 map__dso(map)->name);
>  		if (initialized)
>  			*initialized = true;
>  		return 0;
> @@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
>  	if (!machine->env || !machine->env->arch)
>  		goto out_register;
>  
> -	dso_type = dso__type(map->dso, machine);
> +	dso_type = dso__type(map__dso(map), machine);
>  	if (dso_type == DSO__TYPE_UNKNOWN)
>  		return 0;
>  
> diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> index 835c39efb80d..ec777ee11493 100644
> --- a/tools/perf/util/vdso.c
> +++ b/tools/perf/util/vdso.c
> @@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
>  	struct map_rb_node *rb_node;
>  
>  	maps__for_each_entry(thread->maps, rb_node) {
> -		struct dso *dso = rb_node->map->dso;
> +		struct dso *dso = map__dso(rb_node->map);
>  
>  		if (!dso || dso->long_name[0] != '/')
>  			continue;
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 17:13   ` Arnaldo Carvalho de Melo
@ 2022-02-11 17:43     ` Ian Rogers
  2022-02-11 19:21       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 17:43 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> On Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers wrote:
> > Make the pthread mutex on dso use the error check type. This allows
> > deadlock checking via the return value. Assert the returned value from
> > mutex lock is always 0.
>
> I think this is too blunt/pervasive source-code-wise. Perhaps we should
> wrap this like it's done with rwsem in tools/perf/util/rwsem.h, to get
> away from the pthreads primitives and make the source code look more
> like kernel code, and then, taking advantage of that (so far seemingly
> needless) indirection, add this BUG_ON only when we build with "DEBUG=1"
> or something, wdyt?
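
For concreteness, the kind of wrapper being suggested might look roughly
like the sketch below. This is illustrative only: the mutex.h name, the
DEBUG guard and the struct layout are assumptions rather than code from
this series; BUG_ON() is the helper already used in the patch above.

#include <pthread.h>

struct mutex {
	pthread_mutex_t lock;
};

static inline void mutex_lock(struct mutex *mutex)
{
#ifdef DEBUG	/* e.g. only defined for DEBUG=1 builds */
	BUG_ON(pthread_mutex_lock(&mutex->lock) != 0);
#else
	pthread_mutex_lock(&mutex->lock);
#endif
}

static inline void mutex_unlock(struct mutex *mutex)
{
#ifdef DEBUG
	BUG_ON(pthread_mutex_unlock(&mutex->lock) != 0);
#else
	pthread_mutex_unlock(&mutex->lock);
#endif
}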

My concern with semaphores is that they are a concurrency primitive
with more flexibility and power than a mutex. I like a mutex as it is
quite obvious what is going on, and that is good from a tooling point
of view. A deadlock with two mutexes is easy to understand. With a
semaphore, were we using it like a condition variable? There's more to
figure out. I also like the idea of compiling the perf command with
Emscripten; we could then generate, say, perf annotate output in your
web browser. Emscripten has implementations of standard POSIX
libraries including pthreads, but we may need two approaches in the
perf code if we want to compile with Emscripten and use semaphores
when targeting Linux.

Where this change comes from is that I worried that extending the
locked regions to cover the race that had been found would then expose
the kind of recursive deadlock that pthread mutexes all too willingly
allow. With this code we at least see the bug and don't just hang. I
don't think the mutex type change is strictly required here, but we do
need to extend the locked regions to fix the data race.
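
As a standalone illustration of that failure mode: with
PTHREAD_MUTEX_ERRORCHECK a recursive lock attempt returns EDEADLK
instead of hanging, so checking the return value turns a silent
deadlock into a visible failure. This is a toy example, not perf code:

#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

int main(void)
{
	pthread_mutexattr_t attr;
	pthread_mutex_t lock;
	int err;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&lock, &attr);
	pthread_mutexattr_destroy(&attr);

	assert(pthread_mutex_lock(&lock) == 0);
	err = pthread_mutex_lock(&lock);	/* relock from the same thread */
	printf("relock returned %d (%s)\n", err,
	       err == EDEADLK ? "EDEADLK" : "unexpected");

	pthread_mutex_unlock(&lock);
	pthread_mutex_destroy(&lock);
	return 0;
}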

Let me know how you prefer it and I can roll it into a v4 version.

Thanks,
Ian

> - Arnaldo
>
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/util/dso.c    | 12 +++++++++---
> >  tools/perf/util/symbol.c |  2 +-
> >  2 files changed, 10 insertions(+), 4 deletions(-)
> >
> > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > index 9cc8a1772b4b..6beccffeef7b 100644
> > --- a/tools/perf/util/dso.c
> > +++ b/tools/perf/util/dso.c
> > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> >       struct rb_root *root = &dso->data.cache;
> >       struct rb_node *next = rb_first(root);
> >
> > -     pthread_mutex_lock(&dso->lock);
> > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> >       while (next) {
> >               struct dso_cache *cache;
> >
> > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> >       struct dso_cache *cache;
> >       u64 offset = new->offset;
> >
> > -     pthread_mutex_lock(&dso->lock);
> > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> >       while (*p != NULL) {
> >               u64 end;
> >
> > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> >
> >       if (dso != NULL) {
> > +             pthread_mutexattr_t lock_attr;
> > +
> >               strcpy(dso->name, name);
> >               if (id)
> >                       dso->id = *id;
> > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> >               dso->root = NULL;
> >               INIT_LIST_HEAD(&dso->node);
> >               INIT_LIST_HEAD(&dso->data.open_entry);
> > -             pthread_mutex_init(&dso->lock, NULL);
> > +             pthread_mutexattr_init(&lock_attr);
> > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > +             pthread_mutexattr_destroy(&lock_attr);
> >               refcount_set(&dso->refcnt, 1);
> > +
> >       }
> >
> >       return dso;
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index b2ed3140a1fa..43f47532696f 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> >       }
> >
> >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > -     pthread_mutex_lock(&dso->lock);
> > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> >
> >       /* check again under the dso->lock */
> >       if (dso__loaded(dso)) {
> > --
> > 2.35.1.265.g69c8d7142f-goog
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 15/22] perf map: Use functions to access the variables in map
  2022-02-11 17:36   ` Arnaldo Carvalho de Melo
@ 2022-02-11 17:54     ` Ian Rogers
  2022-02-11 19:22       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 17:54 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 9:36 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> On Fri, Feb 11, 2022 at 02:34:08AM -0800, Ian Rogers wrote:
> > The use of functions enables easier reference count
> > checking. Some minor changes to map_ip and unmap_ip to make the
> > naming a little clearer. __maps_insert is modified to return the
> > inserted map, which simplifies the reference checking
> > wrapping. maps__fixup_overlappings has some minor tweaks so that
> > puts occur on error paths. dso__process_kernel_symbol has the
> > unused curr_mapp argument removed.
> >
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/arch/s390/annotate/instructions.c  |   4 +-
> >  tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
> >  tools/perf/arch/x86/util/event.c              |   6 +-
> >  tools/perf/builtin-annotate.c                 |   8 +-
> >  tools/perf/builtin-inject.c                   |   8 +-
> >  tools/perf/builtin-kallsyms.c                 |   6 +-
> >  tools/perf/builtin-kmem.c                     |   4 +-
> >  tools/perf/builtin-mem.c                      |   4 +-
> >  tools/perf/builtin-report.c                   |  20 +--
> >  tools/perf/builtin-script.c                   |  26 ++--
> >  tools/perf/builtin-top.c                      |  12 +-
> >  tools/perf/builtin-trace.c                    |   2 +-
> >  .../scripts/python/Perf-Trace-Util/Context.c  |   7 +-
> >  tools/perf/tests/code-reading.c               |  32 ++---
> >  tools/perf/tests/hists_common.c               |   4 +-
> >  tools/perf/tests/vmlinux-kallsyms.c           |  35 +++---
> >  tools/perf/ui/browsers/annotate.c             |   7 +-
> >  tools/perf/ui/browsers/hists.c                |  18 +--
> >  tools/perf/ui/browsers/map.c                  |   4 +-
> >  tools/perf/util/annotate.c                    |  38 +++---
> >  tools/perf/util/auxtrace.c                    |   2 +-
> >  tools/perf/util/block-info.c                  |   4 +-
> >  tools/perf/util/bpf-event.c                   |   8 +-
> >  tools/perf/util/build-id.c                    |   2 +-
> >  tools/perf/util/callchain.c                   |  10 +-
> >  tools/perf/util/data-convert-json.c           |   4 +-
> >  tools/perf/util/db-export.c                   |   4 +-
> >  tools/perf/util/dlfilter.c                    |  21 ++--
> >  tools/perf/util/dso.c                         |   4 +-
> >  tools/perf/util/event.c                       |  14 +--
> >  tools/perf/util/evsel_fprintf.c               |   4 +-
> >  tools/perf/util/hist.c                        |  10 +-
> >  tools/perf/util/intel-pt.c                    |  48 +++----
> >  tools/perf/util/machine.c                     |  84 +++++++------
> >  tools/perf/util/map.c                         | 117 +++++++++---------
> >  tools/perf/util/map.h                         |  58 ++++++++-
> >  tools/perf/util/maps.c                        |  83 +++++++------
> >  tools/perf/util/probe-event.c                 |  44 +++----
> >  .../util/scripting-engines/trace-event-perl.c |   9 +-
> >  .../scripting-engines/trace-event-python.c    |  12 +-
> >  tools/perf/util/sort.c                        |  46 +++----
> >  tools/perf/util/symbol-elf.c                  |  39 +++---
> >  tools/perf/util/symbol.c                      |  96 +++++++-------
> >  tools/perf/util/symbol_fprintf.c              |   2 +-
> >  tools/perf/util/synthetic-events.c            |  28 ++---
> >  tools/perf/util/thread.c                      |  26 ++--
> >  tools/perf/util/unwind-libunwind-local.c      |  34 ++---
> >  tools/perf/util/unwind-libunwind.c            |   4 +-
> >  tools/perf/util/vdso.c                        |   2 +-
> >  49 files changed, 577 insertions(+), 489 deletions(-)
> >
> > diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
> > index 0e136630659e..740f1a63bc04 100644
> > --- a/tools/perf/arch/s390/annotate/instructions.c
> > +++ b/tools/perf/arch/s390/annotate/instructions.c
> > @@ -39,7 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
> >       target.addr = map__objdump_2mem(map, ops->target.addr);
> >
> >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > +         map__rip_2objdump(target.ms.map,
> > +                           map->map_ip(target.ms.map, target.addr)
> > +                          ) == ops->target.addr)
>
>
> This changes nothing, right? Please try not to do this in the v2 for
> this patch.

Agreed. The original code here looks wrong to me. I would have
translated this into map__map_ip, but that would have changed the
function pointer used from map->map_ip to target.ms.map->map_ip. The
reformatting is so that when I later add a reference count check here
the lines don't need reformatting again, keeping that change minimal
and obvious. I think the right thing is really to use map__map_ip, but
that goes beyond what this change was trying to do, and I lack a means
to test this code. Could you investigate? If I switch this to
map__map_ip in v2 then it resolves this issue and is most likely the
right thing; it's just that it's a behavioral change that I was trying
to avoid here.
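
To make the distinction concrete, an illustrative fragment (not code
from the series) using the map__map_ip() accessor added by this patch:

/* existing s390 code: uses 'map's function pointer, passing target.ms.map */
u64 before = map->map_ip(target.ms.map, target.addr);

/* the map__map_ip() accessor would use target.ms.map's own pointer */
u64 after = map__map_ip(target.ms.map, target.addr);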

Thanks,
Ian

> - Arnaldo
>
> >               ops->target.sym = target.ms.sym;
> >
> >       return 0;
> > diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
> > index a54dea7c112f..497593be80f2 100644
> > --- a/tools/perf/arch/x86/tests/dwarf-unwind.c
> > +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
> > @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
> >               return -1;
> >       }
> >
> > -     stack_size = map->end - sp;
> > +     stack_size = map__end(map) - sp;
> >       stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
> >
> >       memcpy(buf, (void *) sp, stack_size);
> > diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> > index 7b6b0c98fb36..c790c682b76e 100644
> > --- a/tools/perf/arch/x86/util/event.c
> > +++ b/tools/perf/arch/x86/util/event.c
> > @@ -57,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
> >
> >               event->mmap.header.size = size;
> >
> > -             event->mmap.start = map->start;
> > -             event->mmap.len   = map->end - map->start;
> > -             event->mmap.pgoff = map->pgoff;
> > +             event->mmap.start = map__start(map);
> > +             event->mmap.len   = map__size(map);
> > +             event->mmap.pgoff = map__pgoff(map);
> >               event->mmap.pid   = machine->pid;
> >
> >               strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> > diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
> > index 490bb9b8cf17..49d3ae36fd89 100644
> > --- a/tools/perf/builtin-annotate.c
> > +++ b/tools/perf/builtin-annotate.c
> > @@ -199,7 +199,7 @@ static int process_branch_callback(struct evsel *evsel,
> >               return 0;
> >
> >       if (a.map != NULL)
> > -             a.map->dso->hit = 1;
> > +             map__dso(a.map)->hit = 1;
> >
> >       hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
> >
> > @@ -231,9 +231,9 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
> >                */
> >               if (al->sym != NULL) {
> >                       rb_erase_cached(&al->sym->rb_node,
> > -                              &al->map->dso->symbols);
> > +                                     &map__dso(al->map)->symbols);
> >                       symbol__delete(al->sym);
> > -                     dso__reset_find_symbol_cache(al->map->dso);
> > +                     dso__reset_find_symbol_cache(map__dso(al->map));
> >               }
> >               return 0;
> >       }
> > @@ -315,7 +315,7 @@ static void hists__find_annotations(struct hists *hists,
> >               struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
> >               struct annotation *notes;
> >
> > -             if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
> > +             if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
> >                       goto find_next;
> >
> >               if (ann->sym_hist_filter &&
> > diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> > index f7917c390e96..92a9dbc3d4cd 100644
> > --- a/tools/perf/builtin-inject.c
> > +++ b/tools/perf/builtin-inject.c
> > @@ -600,10 +600,10 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
> >       }
> >
> >       if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
> > -             if (!al.map->dso->hit) {
> > -                     al.map->dso->hit = 1;
> > -                     dso__inject_build_id(al.map->dso, tool, machine,
> > -                                          sample->cpumode, al.map->flags);
> > +             if (!map__dso(al.map)->hit) {
> > +                     map__dso(al.map)->hit = 1;
> > +                     dso__inject_build_id(map__dso(al.map), tool, machine,
> > +                                          sample->cpumode, map__flags(al.map));
> >               }
> >       }
> >
> > diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
> > index c08ee81529e8..d940b60ce812 100644
> > --- a/tools/perf/builtin-kallsyms.c
> > +++ b/tools/perf/builtin-kallsyms.c
> > @@ -36,8 +36,10 @@ static int __cmd_kallsyms(int argc, const char **argv)
> >               }
> >
> >               printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
> > -                     symbol->name, map->dso->short_name, map->dso->long_name,
> > -                     map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
> > +                     symbol->name, map__dso(map)->short_name,
> > +                     map__dso(map)->long_name,
> > +                     map__unmap_ip(map, symbol->start),
> > +                     map__unmap_ip(map, symbol->end),
> >                       symbol->start, symbol->end);
> >       }
> >
> > diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
> > index 99d7ff9a8eff..d87d9c341a20 100644
> > --- a/tools/perf/builtin-kmem.c
> > +++ b/tools/perf/builtin-kmem.c
> > @@ -410,7 +410,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
> >               if (!caller) {
> >                       /* found */
> >                       if (node->ms.map)
> > -                             addr = map__unmap_ip(node->ms.map, node->ip);
> > +                             addr = map__dso_unmap_ip(node->ms.map, node->ip);
> >                       else
> >                               addr = node->ip;
> >
> > @@ -1012,7 +1012,7 @@ static void __print_slab_result(struct rb_root *root,
> >
> >               if (sym != NULL)
> >                       snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
> > -                              addr - map->unmap_ip(map, sym->start));
> > +                              addr - map__unmap_ip(map, sym->start));
> >               else
> >                       snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
> >               printf(" %-34s |", buf);
> > diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
> > index fcf65a59bea2..d18083f57303 100644
> > --- a/tools/perf/builtin-mem.c
> > +++ b/tools/perf/builtin-mem.c
> > @@ -200,7 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
> >               goto out_put;
> >
> >       if (al.map != NULL)
> > -             al.map->dso->hit = 1;
> > +             map__dso(al.map)->hit = 1;
> >
> >       field_sep = symbol_conf.field_sep;
> >       if (field_sep) {
> > @@ -241,7 +241,7 @@ dump_raw_samples(struct perf_tool *tool,
> >               symbol_conf.field_sep,
> >               sample->data_src,
> >               symbol_conf.field_sep,
> > -             al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
> > +             al.map && map__dso(al.map) ? map__dso(al.map)->long_name : "???",
> >               al.sym ? al.sym->name : "???");
> >  out_put:
> >       addr_location__put(&al);
> > diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> > index 57611ef725c3..9b92b2bbd7de 100644
> > --- a/tools/perf/builtin-report.c
> > +++ b/tools/perf/builtin-report.c
> > @@ -304,7 +304,7 @@ static int process_sample_event(struct perf_tool *tool,
> >       }
> >
> >       if (al.map != NULL)
> > -             al.map->dso->hit = 1;
> > +             map__dso(al.map)->hit = 1;
> >
> >       if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
> >               hist__account_cycles(sample->branch_stack, &al, sample,
> > @@ -579,7 +579,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
> >               return;
> >
> >       if (kernel_map == NULL ||
> > -         (kernel_map->dso->hit &&
> > +         (map__dso(kernel_map)->hit &&
> >            (kernel_kmap->ref_reloc_sym == NULL ||
> >             kernel_kmap->ref_reloc_sym->addr == 0))) {
> >               const char *desc =
> > @@ -805,13 +805,15 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
> >               struct map *map = rb_node->map;
> >
> >               printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
> > -                                indent, "", map->start, map->end,
> > -                                map->prot & PROT_READ ? 'r' : '-',
> > -                                map->prot & PROT_WRITE ? 'w' : '-',
> > -                                map->prot & PROT_EXEC ? 'x' : '-',
> > -                                map->flags & MAP_SHARED ? 's' : 'p',
> > -                                map->pgoff,
> > -                                map->dso->id.ino, map->dso->name);
> > +                                indent, "",
> > +                                map__start(map), map__end(map),
> > +                                map__prot(map) & PROT_READ ? 'r' : '-',
> > +                                map__prot(map) & PROT_WRITE ? 'w' : '-',
> > +                                map__prot(map) & PROT_EXEC ? 'x' : '-',
> > +                                map__flags(map) & MAP_SHARED ? 's' : 'p',
> > +                                map__pgoff(map),
> > +                                map__dso(map)->id.ino,
> > +                                map__dso(map)->name);
> >       }
> >
> >       return printed;
> > diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
> > index abae8184e171..4edfce95e137 100644
> > --- a/tools/perf/builtin-script.c
> > +++ b/tools/perf/builtin-script.c
> > @@ -972,12 +972,12 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
> >               to   = entries[i].to;
> >
> >               if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
> > -                 !alf.map->dso->adjust_symbols)
> > -                     from = map__map_ip(alf.map, from);
> > +                 !map__dso(alf.map)->adjust_symbols)
> > +                     from = map__dso_map_ip(alf.map, from);
> >
> >               if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
> > -                 !alt.map->dso->adjust_symbols)
> > -                     to = map__map_ip(alt.map, to);
> > +                 !map__dso(alt.map)->adjust_symbols)
> > +                     to = map__dso_map_ip(alt.map, to);
> >
> >               printed += fprintf(fp, " 0x%"PRIx64, from);
> >               if (PRINT_FIELD(DSO)) {
> > @@ -1039,11 +1039,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
> >               return 0;
> >       }
> >
> > -     if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
> > +     if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
> >               pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
> >               return 0;
> >       }
> > -     if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
> > +     if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR) {
> >               pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
> >               return 0;
> >       }
> > @@ -1051,11 +1051,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
> >       /* Load maps to ensure dso->is_64_bit has been updated */
> >       map__load(al.map);
> >
> > -     offset = al.map->map_ip(al.map, start);
> > -     len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
> > -                                 end - start + MAXINSN);
> > +     offset = map__map_ip(al.map, start);
> > +     len = dso__data_read_offset(map__dso(al.map), machine, offset,
> > +                                 (u8 *)buffer, end - start + MAXINSN);
> >
> > -     *is64bit = al.map->dso->is_64_bit;
> > +     *is64bit = map__dso(al.map)->is_64_bit;
> >       if (len <= 0)
> >               pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
> >                       start, end);
> > @@ -1070,9 +1070,9 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
> >       int len;
> >       char *srccode;
> >
> > -     if (!map || !map->dso)
> > +     if (!map || !map__dso(map))
> >               return 0;
> > -     srcfile = get_srcline_split(map->dso,
> > +     srcfile = get_srcline_split(map__dso(map),
> >                                   map__rip_2objdump(map, addr),
> >                                   &line);
> >       if (!srcfile)
> > @@ -1164,7 +1164,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
> >       if (al.addr < al.sym->end)
> >               off = al.addr - al.sym->start;
> >       else
> > -             off = al.addr - al.map->start - al.sym->start;
> > +             off = al.addr - map__start(al.map) - al.sym->start;
> >       printed += fprintf(fp, "\t%s", al.sym->name);
> >       if (off)
> >               printed += fprintf(fp, "%+d", off);
> > diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
> > index 1fc390f136dd..8db1df7bdabe 100644
> > --- a/tools/perf/builtin-top.c
> > +++ b/tools/perf/builtin-top.c
> > @@ -127,8 +127,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
> >       /*
> >        * We can't annotate with just /proc/kallsyms
> >        */
> > -     if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > -         !dso__is_kcore(map->dso)) {
> > +     if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > +         !dso__is_kcore(map__dso(map))) {
> >               pr_err("Can't annotate %s: No vmlinux file was found in the "
> >                      "path\n", sym->name);
> >               sleep(1);
> > @@ -180,8 +180,9 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
> >                   "Tools:  %s\n\n"
> >                   "Not all samples will be on the annotation output.\n\n"
> >                   "Please report to linux-kernel@vger.kernel.org\n",
> > -                 ip, map->dso->long_name, dso__symtab_origin(map->dso),
> > -                 map->start, map->end, sym->start, sym->end,
> > +                 ip, map__dso(map)->long_name,
> > +                 dso__symtab_origin(map__dso(map)),
> > +                 map__start(map), map__end(map), sym->start, sym->end,
> >                   sym->binding == STB_GLOBAL ? 'g' :
> >                   sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
> >                   err ? "[unknown]" : uts.machine,
> > @@ -810,7 +811,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
> >                   __map__is_kernel(al.map) && map__has_symbols(al.map)) {
> >                       if (symbol_conf.vmlinux_name) {
> >                               char serr[256];
> > -                             dso__strerror_load(al.map->dso, serr, sizeof(serr));
> > +                             dso__strerror_load(map__dso(al.map),
> > +                                                serr, sizeof(serr));
> >                               ui__warning("The %s file can't be used: %s\n%s",
> >                                           symbol_conf.vmlinux_name, serr, msg);
> >                       } else {
> > diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
> > index 32844d8a0ea5..0134f24da3e3 100644
> > --- a/tools/perf/builtin-trace.c
> > +++ b/tools/perf/builtin-trace.c
> > @@ -2862,7 +2862,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
> >  {
> >
> >       if ((verbose > 0 || print_dso) && al->map)
> > -             fprintf(f, "%s@", al->map->dso->long_name);
> > +             fprintf(f, "%s@", map__dso(al->map)->long_name);
> >
> >       if ((verbose > 0 || print_sym) && al->sym)
> >               fprintf(f, "%s+0x%" PRIx64, al->sym->name,
> > diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > index b64013a87c54..b83b62d33945 100644
> > --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > @@ -152,9 +152,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
> >       map = c->al->map;
> >       addr = c->al->addr;
> >
> > -     if (map && map->dso)
> > -             srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> > -
> > +     if (map && map__dso(map)) {
> > +             srcfile = get_srcline_split(map__dso(map),
> > +                                         map__rip_2objdump(map, addr), &line);
> > +     }
> >       if (get_srccode) {
> >               if (srcfile)
> >                       srccode = find_sourceline(srcfile, line, &len);
> > diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
> > index 6eafe36a8704..9cb7d3f577d7 100644
> > --- a/tools/perf/tests/code-reading.c
> > +++ b/tools/perf/tests/code-reading.c
> > @@ -240,7 +240,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >
> >       pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
> >
> > -     if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
> > +     if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
> >               if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
> >                       pr_debug("Hypervisor address can not be resolved - skipping\n");
> >                       return 0;
> > @@ -250,10 +250,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >               return -1;
> >       }
> >
> > -     pr_debug("File is: %s\n", al.map->dso->long_name);
> > +     pr_debug("File is: %s\n", map__dso(al.map)->long_name);
> >
> > -     if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > -         !dso__is_kcore(al.map->dso)) {
> > +     if (map__dso(al.map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > +         !dso__is_kcore(map__dso(al.map))) {
> >               pr_debug("Unexpected kernel address - skipping\n");
> >               return 0;
> >       }
> > @@ -264,11 +264,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >               len = BUFSZ;
> >
> >       /* Do not go off the map */
> > -     if (addr + len > al.map->end)
> > -             len = al.map->end - addr;
> > +     if (addr + len > map__end(al.map))
> > +             len = map__end(al.map) - addr;
> >
> >       /* Read the object code using perf */
> > -     ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
> > +     ret_len = dso__data_read_offset(map__dso(al.map), maps__machine(thread->maps),
> >                                       al.addr, buf1, len);
> >       if (ret_len != len) {
> >               pr_debug("dso__data_read_offset failed\n");
> > @@ -283,11 +283,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >               return -1;
> >
> >       /* objdump struggles with kcore - try each map only once */
> > -     if (dso__is_kcore(al.map->dso)) {
> > +     if (dso__is_kcore(map__dso(al.map))) {
> >               size_t d;
> >
> >               for (d = 0; d < state->done_cnt; d++) {
> > -                     if (state->done[d] == al.map->start) {
> > +                     if (state->done[d] == map__start(al.map)) {
> >                               pr_debug("kcore map tested already");
> >                               pr_debug(" - skipping\n");
> >                               return 0;
> > @@ -297,12 +297,12 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >                       pr_debug("Too many kcore maps - skipping\n");
> >                       return 0;
> >               }
> > -             state->done[state->done_cnt++] = al.map->start;
> > +             state->done[state->done_cnt++] = map__start(al.map);
> >       }
> >
> > -     objdump_name = al.map->dso->long_name;
> > -     if (dso__needs_decompress(al.map->dso)) {
> > -             if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
> > +     objdump_name = map__dso(al.map)->long_name;
> > +     if (dso__needs_decompress(map__dso(al.map))) {
> > +             if (dso__decompress_kmodule_path(map__dso(al.map), objdump_name,
> >                                                decomp_name,
> >                                                sizeof(decomp_name)) < 0) {
> >                       pr_debug("decompression failed\n");
> > @@ -330,7 +330,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> >                       len -= ret;
> >                       if (len) {
> >                               pr_debug("Reducing len to %zu\n", len);
> > -                     } else if (dso__is_kcore(al.map->dso)) {
> > +                     } else if (dso__is_kcore(map__dso(al.map))) {
> >                               /*
> >                                * objdump cannot handle very large segments
> >                                * that may be found in kcore.
> > @@ -588,8 +588,8 @@ static int do_test_code_reading(bool try_kcore)
> >               pr_debug("map__load failed\n");
> >               goto out_err;
> >       }
> > -     have_vmlinux = dso__is_vmlinux(map->dso);
> > -     have_kcore = dso__is_kcore(map->dso);
> > +     have_vmlinux = dso__is_vmlinux(map__dso(map));
> > +     have_kcore = dso__is_kcore(map__dso(map));
> >
> >       /* 2nd time through we just try kcore */
> >       if (try_kcore && !have_kcore)
> > diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
> > index 6f34d08b84e5..40eccc659767 100644
> > --- a/tools/perf/tests/hists_common.c
> > +++ b/tools/perf/tests/hists_common.c
> > @@ -181,7 +181,7 @@ void print_hists_in(struct hists *hists)
> >               if (!he->filtered) {
> >                       pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
> >                               i, thread__comm_str(he->thread),
> > -                             he->ms.map->dso->short_name,
> > +                             map__dso(he->ms.map)->short_name,
> >                               he->ms.sym->name, he->stat.period);
> >               }
> >
> > @@ -208,7 +208,7 @@ void print_hists_out(struct hists *hists)
> >               if (!he->filtered) {
> >                       pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
> >                               i, thread__comm_str(he->thread), he->thread->tid,
> > -                             he->ms.map->dso->short_name,
> > +                             map__dso(he->ms.map)->short_name,
> >                               he->ms.sym->name, he->stat.period,
> >                               he->stat_acc ? he->stat_acc->period : 0);
> >               }
> > diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> > index 11a230ee5894..5afab21455f1 100644
> > --- a/tools/perf/tests/vmlinux-kallsyms.c
> > +++ b/tools/perf/tests/vmlinux-kallsyms.c
> > @@ -13,7 +13,7 @@
> >  #include "debug.h"
> >  #include "machine.h"
> >
> > -#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
> > +#define UM(x) map__unmap_ip(kallsyms_map, (x))
> >
> >  static bool is_ignored_symbol(const char *name, char type)
> >  {
> > @@ -216,8 +216,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >               if (sym->start == sym->end)
> >                       continue;
> >
> > -             mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
> > -             mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
> > +             mem_start = map__unmap_ip(vmlinux_map, sym->start);
> > +             mem_end = map__unmap_ip(vmlinux_map, sym->end);
> >
> >               first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
> >               pair = first_pair;
> > @@ -262,7 +262,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >
> >                               continue;
> >                       }
> > -             } else if (mem_start == kallsyms.vmlinux_map->end) {
> > +             } else if (mem_start == map__end(kallsyms.vmlinux_map)) {
> >                       /*
> >                        * Ignore aliases to _etext, i.e. to the end of the kernel text area,
> >                        * such as __indirect_thunk_end.
> > @@ -294,9 +294,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >                * so use the short name, less descriptive but the same ("[kernel]" in
> >                * both cases.
> >                */
> > -             struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> > -                                                             map->dso->short_name :
> > -                                                             map->dso->name));
> > +             struct map *pair = maps__find_by_name(kallsyms.kmaps,
> > +                                             map__dso(map)->kernel
> > +                                             ? map__dso(map)->short_name
> > +                                             : map__dso(map)->name);
> >               if (pair) {
> >                       pair->priv = 1;
> >               } else {
> > @@ -313,25 +314,27 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >       maps__for_each_entry(maps, rb_node) {
> >               struct map *pair, *map = rb_node->map;
> >
> > -             mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
> > -             mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> > +             mem_start = map__unmap_ip(vmlinux_map, map__start(map));
> > +             mem_end = map__unmap_ip(vmlinux_map, map__end(map));
> >
> >               pair = maps__find(kallsyms.kmaps, mem_start);
> > -             if (pair == NULL || pair->priv)
> > +             if (pair == NULL || map__priv(pair))
> >                       continue;
> >
> > -             if (pair->start == mem_start) {
> > +             if (map__start(pair) == mem_start) {
> >                       if (!header_printed) {
> >                               pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
> >                               header_printed = true;
> >                       }
> >
> >                       pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
> > -                             map->start, map->end, map->pgoff, map->dso->name);
> > -                     if (mem_end != pair->end)
> > +                             map__start(map), map__end(map),
> > +                             map__pgoff(map), map__dso(map)->name);
> > +                     if (mem_end != map__end(pair))
> >                               pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
> > -                                     pair->start, pair->end, pair->pgoff);
> > -                     pr_info(" %s\n", pair->dso->name);
> > +                                     map__start(pair), map__end(pair),
> > +                                     map__pgoff(pair));
> > +                     pr_info(" %s\n", map__dso(pair)->name);
> >                       pair->priv = 1;
> >               }
> >       }
> > @@ -343,7 +346,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >       maps__for_each_entry(maps, rb_node) {
> >               struct map *map = rb_node->map;
> >
> > -             if (!map->priv) {
> > +             if (!map__priv(map)) {
> >                       if (!header_printed) {
> >                               pr_info("WARN: Maps only in kallsyms:\n");
> >                               header_printed = true;
> > diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
> > index 44ba900828f6..7d51d92302dc 100644
> > --- a/tools/perf/ui/browsers/annotate.c
> > +++ b/tools/perf/ui/browsers/annotate.c
> > @@ -446,7 +446,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
> >  static int sym_title(struct symbol *sym, struct map *map, char *title,
> >                    size_t sz, int percent_type)
> >  {
> > -     return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
> > +     return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
> > +                     map__dso(map)->long_name,
> >                       percent_type_str(percent_type));
> >  }
> >
> > @@ -971,14 +972,14 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
> >       if (sym == NULL)
> >               return -1;
> >
> > -     if (ms->map->dso->annotate_warned)
> > +     if (map__dso(ms->map)->annotate_warned)
> >               return -1;
> >
> >       if (not_annotated) {
> >               err = symbol__annotate2(ms, evsel, opts, &browser.arch);
> >               if (err) {
> >                       char msg[BUFSIZ];
> > -                     ms->map->dso->annotate_warned = true;
> > +                     map__dso(ms->map)->annotate_warned = true;
> >                       symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
> >                       ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
> >                       goto out_free_offsets;
> > diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> > index 572ff38ceb0f..2241447e9bfb 100644
> > --- a/tools/perf/ui/browsers/hists.c
> > +++ b/tools/perf/ui/browsers/hists.c
> > @@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
> >                       return NULL;
> >               }
> >
> > -             dso__insert_symbol(map->dso, sym);
> > +             dso__insert_symbol(map__dso(map), sym);
> >       }
> >
> >       return sym;
> > @@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
> >                struct map_symbol *ms,
> >                u64 addr)
> >  {
> > -     if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
> > +     if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
> >               return 0;
> >
> >       if (!ms->sym)
> > @@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
> >               ui_helpline__pop();
> >       } else {
> >               ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
> > -                                __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
> > -             browser->hists->dso_filter = map->dso;
> > +                                __map__is_kernel(map)
> > +                                ? "the Kernel"
> > +                                : map__dso(map)->short_name);
> > +             browser->hists->dso_filter = map__dso(map);
> >               perf_hpp__set_elide(HISTC_DSO, true);
> >               pstack__push(browser->pstack, &browser->hists->dso_filter);
> >       }
> > @@ -2616,7 +2618,9 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
> >
> >       if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
> >                    browser->hists->dso_filter ? "out of" : "into",
> > -                  __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
> > +                  __map__is_kernel(map)
> > +                  ? "the Kernel"
> > +                  : map__dso(map)->short_name) < 0)
> >               return 0;
> >
> >       act->ms.map = map;
> > @@ -3091,8 +3095,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
> >
> >                       if (!browser->selection ||
> >                           !browser->selection->map ||
> > -                         !browser->selection->map->dso ||
> > -                         browser->selection->map->dso->annotate_warned) {
> > +                         !map__dso(browser->selection->map) ||
> > +                         map__dso(browser->selection->map)->annotate_warned) {
> >                               continue;
> >                       }
> >
> > diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
> > index 3d49b916c9e4..3d1b958d8832 100644
> > --- a/tools/perf/ui/browsers/map.c
> > +++ b/tools/perf/ui/browsers/map.c
> > @@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
> >  {
> >       int key;
> >
> > -     if (ui_browser__show(&browser->b, browser->map->dso->long_name,
> > +     if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
> >                            "Press ESC to exit, %s / to search",
> >                            verbose > 0 ? "" : "restart with -v to use") < 0)
> >               return -1;
> > @@ -106,7 +106,7 @@ int map__browse(struct map *map)
> >  {
> >       struct map_browser mb = {
> >               .b = {
> > -                     .entries = &map->dso->symbols,
> > +                     .entries = &map__dso(map)->symbols,
> >                       .refresh = ui_browser__rb_tree_refresh,
> >                       .seek    = ui_browser__rb_tree_seek,
> >                       .write   = map_browser__write,
> > diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
> > index 01900689dc00..3a7433d3e48a 100644
> > --- a/tools/perf/util/annotate.c
> > +++ b/tools/perf/util/annotate.c
> > @@ -280,7 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> >       target.addr = map__objdump_2mem(map, ops->target.addr);
> >
> >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > +         map__rip_2objdump(target.ms.map,
> > +                           map->map_ip(target.ms.map, target.addr)
> > +                           ) == ops->target.addr)
> >               ops->target.sym = target.ms.sym;
> >
> >       return 0;
> > @@ -384,8 +386,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> >       }
> >
> >       target.addr = map__objdump_2mem(map, ops->target.addr);
> > -     start = map->unmap_ip(map, sym->start),
> > -     end = map->unmap_ip(map, sym->end);
> > +     start = map__unmap_ip(map, sym->start),
> > +     end = map__unmap_ip(map, sym->end);
> >
> >       ops->target.outside = target.addr < start || target.addr > end;
> >
> > @@ -408,7 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> >        * the symbol searching and disassembly should be done.
> >        */
> >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > +         map__rip_2objdump(target.ms.map,
> > +                           map->map_ip(target.ms.map, target.addr)
> > +                           ) == ops->target.addr)
> >               ops->target.sym = target.ms.sym;
> >
> >       if (!ops->target.outside) {
> > @@ -889,7 +893,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
> >       unsigned offset;
> >       struct sym_hist *h;
> >
> > -     pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
> > +     pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
> >
> >       if ((addr < sym->start || addr >= sym->end) &&
> >           (addr != sym->end || sym->start != sym->end)) {
> > @@ -1016,13 +1020,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
> >       if (start &&
> >               (start->ms.sym == ams->ms.sym ||
> >                (ams->ms.sym &&
> > -                start->addr == ams->ms.sym->start + ams->ms.map->start)))
> > +               start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
> >               saddr = start->al_addr;
> >       if (saddr == 0)
> >               pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
> >                       ams->addr,
> >                       start ? start->addr : 0,
> > -                     ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
> > +                     ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
> >                       saddr);
> >       err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
> >       if (err)
> > @@ -1593,7 +1597,7 @@ static void delete_last_nop(struct symbol *sym)
> >
> >  int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
> >  {
> > -     struct dso *dso = ms->map->dso;
> > +     struct dso *dso = map__dso(ms->map);
> >
> >       BUG_ON(buflen == 0);
> >
> > @@ -1723,7 +1727,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
> >       struct map *map = args->ms.map;
> >       struct perf_bpil *info_linear;
> >       struct disassemble_info info;
> > -     struct dso *dso = map->dso;
> > +     struct dso *dso = map__dso(map);
> >       int pc = 0, count, sub_id;
> >       struct btf *btf = NULL;
> >       char tpath[PATH_MAX];
> > @@ -1946,7 +1950,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
> >  {
> >       struct annotation_options *opts = args->options;
> >       struct map *map = args->ms.map;
> > -     struct dso *dso = map->dso;
> > +     struct dso *dso = map__dso(map);
> >       char *command;
> >       FILE *file;
> >       char symfs_filename[PATH_MAX];
> > @@ -1973,8 +1977,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
> >               return err;
> >
> >       pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
> > -              symfs_filename, sym->name, map->unmap_ip(map, sym->start),
> > -              map->unmap_ip(map, sym->end));
> > +              symfs_filename, sym->name, map__unmap_ip(map, sym->start),
> > +              map__unmap_ip(map, sym->end));
> >
> >       pr_debug("annotating [%p] %30s : [%p] %30s\n",
> >                dso, dso->long_name, sym, sym->name);
> > @@ -2386,7 +2390,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
> >  {
> >       struct map *map = ms->map;
> >       struct symbol *sym = ms->sym;
> > -     struct dso *dso = map->dso;
> > +     struct dso *dso = map__dso(map);
> >       char *filename;
> >       const char *d_filename;
> >       const char *evsel_name = evsel__name(evsel);
> > @@ -2569,7 +2573,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
> >       }
> >
> >       fprintf(fp, "%s() %s\nEvent: %s\n\n",
> > -             ms->sym->name, ms->map->dso->long_name, ev_name);
> > +             ms->sym->name, map__dso(ms->map)->long_name, ev_name);
> >       symbol__annotate_fprintf2(ms->sym, fp, opts);
> >
> >       fclose(fp);
> > @@ -2781,7 +2785,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
> >               if (percent_max <= 0.5)
> >                       continue;
> >
> > -             al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
> > +             al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
> >                                      false, true, notes->start + al->offset);
> >               insert_source_line(&tmp_root, al, opts);
> >       }
> > @@ -2800,7 +2804,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
> >  int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
> >                         struct annotation_options *opts)
> >  {
> > -     struct dso *dso = ms->map->dso;
> > +     struct dso *dso = map__dso(ms->map);
> >       struct symbol *sym = ms->sym;
> >       struct rb_root source_line = RB_ROOT;
> >       struct hists *hists = evsel__hists(evsel);
> > @@ -2836,7 +2840,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
> >  int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
> >                        struct annotation_options *opts)
> >  {
> > -     struct dso *dso = ms->map->dso;
> > +     struct dso *dso = map__dso(ms->map);
> >       struct symbol *sym = ms->sym;
> >       struct rb_root source_line = RB_ROOT;
> >       int err;
> > diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
> > index 825336304a37..2e864c9bdef3 100644
> > --- a/tools/perf/util/auxtrace.c
> > +++ b/tools/perf/util/auxtrace.c
> > @@ -2478,7 +2478,7 @@ static struct dso *load_dso(const char *name)
> >       if (map__load(map) < 0)
> >               pr_err("File '%s' not found or has no symbols.\n", name);
> >
> > -     dso = dso__get(map->dso);
> > +     dso = dso__get(map__dso(map));
> >
> >       map__put(map);
> >
> > diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
> > index 5ecd4f401f32..16a7b4adcf18 100644
> > --- a/tools/perf/util/block-info.c
> > +++ b/tools/perf/util/block-info.c
> > @@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
> >       struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
> >       struct map *map = he->ms.map;
> >
> > -     if (map && map->dso) {
> > +     if (map && map__dso(map)) {
> >               return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> > -                              map->dso->short_name);
> > +                              map__dso(map)->short_name);
> >       }
> >
> >       return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> > diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
> > index 33257b594a71..5717933be116 100644
> > --- a/tools/perf/util/bpf-event.c
> > +++ b/tools/perf/util/bpf-event.c
> > @@ -95,10 +95,10 @@ static int machine__process_bpf_event_load(struct machine *machine,
> >               struct map *map = maps__find(machine__kernel_maps(machine), addr);
> >
> >               if (map) {
> > -                     map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> > -                     map->dso->bpf_prog.id = id;
> > -                     map->dso->bpf_prog.sub_id = i;
> > -                     map->dso->bpf_prog.env = env;
> > +                     map__dso(map)->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> > +                     map__dso(map)->bpf_prog.id = id;
> > +                     map__dso(map)->bpf_prog.sub_id = i;
> > +                     map__dso(map)->bpf_prog.env = env;
> >               }
> >       }
> >       return 0;
> > diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
> > index 7a5821c87f94..274b705dd941 100644
> > --- a/tools/perf/util/build-id.c
> > +++ b/tools/perf/util/build-id.c
> > @@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
> >       }
> >
> >       if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
> > -             al.map->dso->hit = 1;
> > +             map__dso(al.map)->hit = 1;
> >
> >       thread__put(thread);
> >       return 0;
> > diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> > index 61bb3fb2107a..a8cfd31a3ff0 100644
> > --- a/tools/perf/util/callchain.c
> > +++ b/tools/perf/util/callchain.c
> > @@ -695,8 +695,8 @@ static enum match_result match_chain_strings(const char *left,
> >  static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
> >                                                  struct map *right_map, u64 right_ip)
> >  {
> > -     struct dso *left_dso = left_map ? left_map->dso : NULL;
> > -     struct dso *right_dso = right_map ? right_map->dso : NULL;
> > +     struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
> > +     struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
> >
> >       if (left_dso != right_dso)
> >               return left_dso < right_dso ? MATCH_LT : MATCH_GT;
> > @@ -1167,9 +1167,9 @@ char *callchain_list__sym_name(struct callchain_list *cl,
> >
> >       if (show_dso)
> >               scnprintf(bf + printed, bfsize - printed, " %s",
> > -                       cl->ms.map ?
> > -                       cl->ms.map->dso->short_name :
> > -                       "unknown");
> > +                       cl->ms.map
> > +                       ? map__dso(cl->ms.map)->short_name
> > +                       : "unknown");
> >
> >       return bf;
> >  }
> > diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
> > index f1ab6edba446..9c83228bb9f1 100644
> > --- a/tools/perf/util/data-convert-json.c
> > +++ b/tools/perf/util/data-convert-json.c
> > @@ -127,8 +127,8 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
> >               fputc(',', out);
> >               output_json_key_string(out, false, 5, "symbol", al->sym->name);
> >
> > -             if (al->map && al->map->dso) {
> > -                     const char *dso = al->map->dso->short_name;
> > +             if (al->map && map__dso(al->map)) {
> > +                     const char *dso = map__dso(al->map)->short_name;
> >
> >                       if (dso && strlen(dso) > 0) {
> >                               fputc(',', out);
> > diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
> > index 1cfcfdd3cf52..84c970c11794 100644
> > --- a/tools/perf/util/db-export.c
> > +++ b/tools/perf/util/db-export.c
> > @@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
> >       int err;
> >
> >       if (al->map) {
> > -             struct dso *dso = al->map->dso;
> > +             struct dso *dso = map__dso(al->map);
> >
> >               err = db_export__dso(dbe, dso, maps__machine(al->maps));
> >               if (err)
> > @@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
> >               al.addr = node->ip;
> >
> >               if (al.map && !al.sym)
> > -                     al.sym = dso__find_symbol(al.map->dso, al.addr);
> > +                     al.sym = dso__find_symbol(map__dso(al.map), al.addr);
> >
> >               db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
> >
> > diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
> > index d59462af15f1..f1d9dd7065e6 100644
> > --- a/tools/perf/util/dlfilter.c
> > +++ b/tools/perf/util/dlfilter.c
> > @@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
> >
> >       d_al->size = sizeof(*d_al);
> >       if (al->map) {
> > -             struct dso *dso = al->map->dso;
> > +             struct dso *dso = map__dso(al->map);
> >
> >               if (symbol_conf.show_kernel_path && dso->long_name)
> >                       d_al->dso = dso->long_name;
> > @@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
> >               if (al->addr < sym->end)
> >                       d_al->symoff = al->addr - sym->start;
> >               else
> > -                     d_al->symoff = al->addr - al->map->start - sym->start;
> > +                     d_al->symoff = al->addr - map__start(al->map) - sym->start;
> >               d_al->sym_binding = sym->binding;
> >       } else {
> >               d_al->sym = NULL;
> > @@ -232,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
> >       map = al->map;
> >       addr = al->addr;
> >
> > -     if (map && map->dso)
> > -             srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> > -
> > +     if (map && map__dso(map)) {
> > +             srcfile = get_srcline_split(map__dso(map),
> > +                                         map__rip_2objdump(map, addr), &line);
> > +     }
> >       *line_no = line;
> >       return srcfile;
> >  }
> > @@ -266,7 +267,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
> >
> >       map = al->map;
> >
> > -     if (map && ip >= map->start && ip < map->end &&
> > +     if (map && ip >= map__start(map) && ip < map__end(map) &&
> >           machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
> >               goto have_map;
> >
> > @@ -276,10 +277,10 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
> >
> >       map = a.map;
> >  have_map:
> > -     offset = map->map_ip(map, ip);
> > -     if (ip + len >= map->end)
> > -             len = map->end - ip;
> > -     return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
> > +     offset = map__map_ip(map, ip);
> > +     if (ip + len >= map__end(map))
> > +             len = map__end(map) - ip;
> > +     return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
> >  }
> >
> >  static const struct perf_dlfilter_fns perf_dlfilter_fns = {
> > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > index b2f570adba35..1115bc51a261 100644
> > --- a/tools/perf/util/dso.c
> > +++ b/tools/perf/util/dso.c
> > @@ -1109,7 +1109,7 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
> >                           struct machine *machine, u64 addr,
> >                           u8 *data, ssize_t size)
> >  {
> > -     u64 offset = map->map_ip(map, addr);
> > +     u64 offset = map__map_ip(map, addr);
> >       return dso__data_read_offset(dso, machine, offset, data, size);
> >  }
> >
> > @@ -1149,7 +1149,7 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
> >                                  struct machine *machine, u64 addr,
> >                                  const u8 *data, ssize_t size)
> >  {
> > -     u64 offset = map->map_ip(map, addr);
> > +     u64 offset = map__map_ip(map, addr);
> >       return dso__data_write_cache_offs(dso, machine, offset, data, size);
> >  }
> >
> > diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> > index 40a3b1a35613..54a1d4df5f70 100644
> > --- a/tools/perf/util/event.c
> > +++ b/tools/perf/util/event.c
> > @@ -486,7 +486,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
> >
> >               al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> >               if (al.map && map__load(al.map) >= 0) {
> > -                     al.addr = al.map->map_ip(al.map, tp->addr);
> > +                     al.addr = map__map_ip(al.map, tp->addr);
> >                       al.sym = map__find_symbol(al.map, al.addr);
> >                       if (al.sym)
> >                               ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> > @@ -621,7 +621,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> >                */
> >               if (load_map)
> >                       map__load(al->map);
> > -             al->addr = al->map->map_ip(al->map, al->addr);
> > +             al->addr = map__map_ip(al->map, al->addr);
> >       }
> >
> >       return al->map;
> > @@ -692,8 +692,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> >       dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
> >       thread__find_map(thread, sample->cpumode, sample->ip, al);
> >       dump_printf(" ...... dso: %s\n",
> > -                 al->map ? al->map->dso->long_name :
> > -                     al->level == 'H' ? "[hypervisor]" : "<not found>");
> > +                 al->map ? map__dso(al->map)->long_name
> > +                         : al->level == 'H' ? "[hypervisor]" : "<not found>");
> >
> >       if (thread__is_filtered(thread))
> >               al->filtered |= (1 << HIST_FILTER__THREAD);
> > @@ -711,7 +711,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> >       }
> >
> >       if (al->map) {
> > -             struct dso *dso = al->map->dso;
> > +             struct dso *dso = map__dso(al->map);
> >
> >               if (symbol_conf.dso_list &&
> >                   (!dso || !(strlist__has_entry(symbol_conf.dso_list,
> > @@ -738,12 +738,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> >               }
> >               if (!ret && al->sym) {
> >                       snprintf(al_addr_str, sz, "0x%"PRIx64,
> > -                             al->map->unmap_ip(al->map, al->sym->start));
> > +                              map__unmap_ip(al->map, al->sym->start));
> >                       ret = strlist__has_entry(symbol_conf.sym_list,
> >                                               al_addr_str);
> >               }
> >               if (!ret && symbol_conf.addr_list && al->map) {
> > -                     unsigned long addr = al->map->unmap_ip(al->map, al->addr);
> > +                     unsigned long addr = map__unmap_ip(al->map, al->addr);
> >
> >                       ret = intlist__has_entry(symbol_conf.addr_list, addr);
> >                       if (!ret && symbol_conf.addr_range) {
> > diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
> > index 8c2ea8001329..ac6fef9d8906 100644
> > --- a/tools/perf/util/evsel_fprintf.c
> > +++ b/tools/perf/util/evsel_fprintf.c
> > @@ -146,11 +146,11 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
> >                               printed += fprintf(fp, " <-");
> >
> >                       if (map)
> > -                             addr = map->map_ip(map, node->ip);
> > +                             addr = map__map_ip(map, node->ip);
> >
> >                       if (print_ip) {
> >                               /* Show binary offset for userspace addr */
> > -                             if (map && !map->dso->kernel)
> > +                             if (map && !map__dso(map)->kernel)
> >                                       printed += fprintf(fp, "%c%16" PRIx64, s, addr);
> >                               else
> >                                       printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
> > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > index 78f9fbb925a7..f19ac6eb4775 100644
> > --- a/tools/perf/util/hist.c
> > +++ b/tools/perf/util/hist.c
> > @@ -105,7 +105,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> >               hists__set_col_len(hists, HISTC_THREAD, len + 8);
> >
> >       if (h->ms.map) {
> > -             len = dso__name_len(h->ms.map->dso);
> > +             len = dso__name_len(map__dso(h->ms.map));
> >               hists__new_col_len(hists, HISTC_DSO, len);
> >       }
> >
> > @@ -119,7 +119,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> >                               symlen += BITS_PER_LONG / 4 + 2 + 3;
> >                       hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
> >
> > -                     symlen = dso__name_len(h->branch_info->from.ms.map->dso);
> > +                     symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
> >                       hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
> >               } else {
> >                       symlen = unresolved_col_width + 4 + 2;
> > @@ -133,7 +133,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> >                               symlen += BITS_PER_LONG / 4 + 2 + 3;
> >                       hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
> >
> > -                     symlen = dso__name_len(h->branch_info->to.ms.map->dso);
> > +                     symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
> >                       hists__new_col_len(hists, HISTC_DSO_TO, symlen);
> >               } else {
> >                       symlen = unresolved_col_width + 4 + 2;
> > @@ -177,7 +177,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> >               }
> >
> >               if (h->mem_info->daddr.ms.map) {
> > -                     symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
> > +                     symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
> >                       hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
> >                                          symlen);
> >               } else {
> > @@ -2096,7 +2096,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
> >                                      struct hist_entry *he)
> >  {
> >       if (hists->dso_filter != NULL &&
> > -         (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
> > +         (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
> >               he->filtered |= (1 << HIST_FILTER__DSO);
> >               return true;
> >       }
> > diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
> > index e8613cbda331..c88f112c0a06 100644
> > --- a/tools/perf/util/intel-pt.c
> > +++ b/tools/perf/util/intel-pt.c
> > @@ -731,20 +731,20 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> >       }
> >
> >       while (1) {
> > -             if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso)
> > +             if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map))
> >                       return -EINVAL;
> >
> > -             if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
> > -                 dso__data_status_seen(al.map->dso,
> > +             if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR &&
> > +                 dso__data_status_seen(map__dso(al.map),
> >                                         DSO_DATA_STATUS_SEEN_ITRACE))
> >                       return -ENOENT;
> >
> > -             offset = al.map->map_ip(al.map, *ip);
> > +             offset = map__map_ip(al.map, *ip);
> >
> >               if (!to_ip && one_map) {
> >                       struct intel_pt_cache_entry *e;
> >
> > -                     e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> > +                     e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
> >                       if (e &&
> >                           (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
> >                               *insn_cnt_ptr = e->insn_cnt;
> > @@ -766,10 +766,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> >               /* Load maps to ensure dso->is_64_bit has been updated */
> >               map__load(al.map);
> >
> > -             x86_64 = al.map->dso->is_64_bit;
> > +             x86_64 = map__dso(al.map)->is_64_bit;
> >
> >               while (1) {
> > -                     len = dso__data_read_offset(al.map->dso, machine,
> > +                     len = dso__data_read_offset(map__dso(al.map), machine,
> >                                                   offset, buf,
> >                                                   INTEL_PT_INSN_BUF_SZ);
> >                       if (len <= 0)
> > @@ -795,7 +795,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> >                               goto out_no_cache;
> >                       }
> >
> > -                     if (*ip >= al.map->end)
> > +                     if (*ip >= map__end(al.map))
> >                               break;
> >
> >                       offset += intel_pt_insn->length;
> > @@ -815,13 +815,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> >       if (to_ip) {
> >               struct intel_pt_cache_entry *e;
> >
> > -             e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
> > +             e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
> >               if (e)
> >                       return 0;
> >       }
> >
> >       /* Ignore cache errors */
> > -     intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
> > +     intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
> >                          *ip - start_ip, intel_pt_insn);
> >
> >       return 0;
> > @@ -892,13 +892,13 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
> >       if (!thread)
> >               return -EINVAL;
> >
> > -     if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
> > +     if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
> >               return -EINVAL;
> >
> > -     offset = al.map->map_ip(al.map, ip);
> > +     offset = map__map_ip(al.map, ip);
> >
> >       return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
> > -                                  al.map->dso->long_name);
> > +                                  map__dso(al.map)->long_name);
> >  }
> >
> >  static bool intel_pt_pgd_ip(uint64_t ip, void *data)
> > @@ -2406,13 +2406,13 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
> >       if (map__load(map))
> >               return 0;
> >
> > -     start = dso__first_symbol(map->dso);
> > +     start = dso__first_symbol(map__dso(map));
> >
> >       for (sym = start; sym; sym = dso__next_symbol(sym)) {
> >               if (sym->binding == STB_GLOBAL &&
> >                   !strcmp(sym->name, "__switch_to")) {
> > -                     ip = map->unmap_ip(map, sym->start);
> > -                     if (ip >= map->start && ip < map->end) {
> > +                     ip = map__unmap_ip(map, sym->start);
> > +                     if (ip >= map__start(map) && ip < map__end(map)) {
> >                               switch_ip = ip;
> >                               break;
> >                       }
> > @@ -2429,8 +2429,8 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
> >
> >       for (sym = start; sym; sym = dso__next_symbol(sym)) {
> >               if (!strcmp(sym->name, ptss)) {
> > -                     ip = map->unmap_ip(map, sym->start);
> > -                     if (ip >= map->start && ip < map->end) {
> > +                     ip = map__unmap_ip(map, sym->start);
> > +                     if (ip >= map__start(map) && ip < map__end(map)) {
> >                               *ptss_ip = ip;
> >                               break;
> >                       }
> > @@ -2965,7 +2965,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
> >  static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
> >                            struct addr_location *al)
> >  {
> > -     if (!al->map || addr < al->map->start || addr >= al->map->end) {
> > +     if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
> >               if (!thread__find_map(thread, cpumode, addr, al))
> >                       return -1;
> >       }
> > @@ -2996,12 +2996,12 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
> >                       continue;
> >               }
> >
> > -             if (!al.map->dso || !al.map->dso->auxtrace_cache)
> > +             if (!map__dso(al.map) || !map__dso(al.map)->auxtrace_cache)
> >                       continue;
> >
> > -             offset = al.map->map_ip(al.map, addr);
> > +             offset = map__map_ip(al.map, addr);
> >
> > -             e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> > +             e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
> >               if (!e)
> >                       continue;
> >
> > @@ -3014,9 +3014,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
> >                       if (e->branch != INTEL_PT_BR_NO_BRANCH)
> >                               return 0;
> >               } else {
> > -                     intel_pt_cache_invalidate(al.map->dso, machine, offset);
> > +                     intel_pt_cache_invalidate(map__dso(al.map), machine, offset);
> >                       intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
> > -                                  al.map->dso->long_name, addr);
> > +                                  map__dso(al.map)->long_name, addr);
> >               }
> >       }
> >
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 88279008e761..940fb2a50dfd 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
> >
> >  static struct dso *machine__kernel_dso(struct machine *machine)
> >  {
> > -     return machine->vmlinux_map->dso;
> > +     return map__dso(machine->vmlinux_map);
> >  }
> >
> >  static void dsos__init(struct dsos *dsos)
> > @@ -842,9 +842,10 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
> >       if (map != machine->vmlinux_map)
> >               maps__remove(machine__kernel_maps(machine), map);
> >       else {
> > -             sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
> > +             sym = dso__find_symbol(map__dso(map),
> > +                             map__map_ip(map, map__start(map)));
> >               if (sym)
> > -                     dso__delete_symbol(map->dso, sym);
> > +                     dso__delete_symbol(map__dso(map), sym);
> >       }
> >
> >       return 0;
> > @@ -880,7 +881,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
> >               return 0;
> >       }
> >
> > -     if (map && map->dso) {
> > +     if (map && map__dso(map)) {
> >               u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
> >               int ret;
> >
> > @@ -889,7 +890,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
> >                * must be done prior to using kernel maps.
> >                */
> >               map__load(map);
> > -             ret = dso__data_write_cache_addr(map->dso, map, machine,
> > +             ret = dso__data_write_cache_addr(map__dso(map), map, machine,
> >                                                event->text_poke.addr,
> >                                                new_bytes,
> >                                                event->text_poke.new_len);
> > @@ -931,6 +932,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> >       /* If maps__insert failed, return NULL. */
> >       if (err)
> >               map = NULL;
> > +
> >  out:
> >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> >       dso__put(dso);
> > @@ -1118,7 +1120,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
> >
> >       if (!err) {
> >               pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> > -                     kmap->name, map->start, map->end);
> > +                     kmap->name, map__start(map), map__end(map));
> >       }
> >
> >       map__put(map);
> > @@ -1178,9 +1180,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
> >               if (!kmap || !is_entry_trampoline(kmap->name))
> >                       continue;
> >
> > -             dest_map = maps__find(kmaps, map->pgoff);
> > +             dest_map = maps__find(kmaps, map__pgoff(map));
> >               if (dest_map != map)
> > -                     map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
> > +                     map->pgoff = map__map_ip(dest_map, map__pgoff(map));
> >               found = true;
> >       }
> >       if (found || machine->trampolines_mapped)
> > @@ -1230,7 +1232,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> >       if (machine->vmlinux_map == NULL)
> >               return -ENOMEM;
> >
> > -     machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> > +     machine->vmlinux_map->map_ip = map__identity_ip;
> > +     machine->vmlinux_map->unmap_ip = map__identity_ip;
> >       return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
> >  }
> >
> > @@ -1329,10 +1332,10 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
> >  int machine__load_kallsyms(struct machine *machine, const char *filename)
> >  {
> >       struct map *map = machine__kernel_map(machine);
> > -     int ret = __dso__load_kallsyms(map->dso, filename, map, true);
> > +     int ret = __dso__load_kallsyms(map__dso(map), filename, map, true);
> >
> >       if (ret > 0) {
> > -             dso__set_loaded(map->dso);
> > +             dso__set_loaded(map__dso(map));
> >               /*
> >                * Since /proc/kallsyms will have multiple sessions for the
> >                * kernel, with modules between them, fixup the end of all
> > @@ -1347,10 +1350,10 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
> >  int machine__load_vmlinux_path(struct machine *machine)
> >  {
> >       struct map *map = machine__kernel_map(machine);
> > -     int ret = dso__load_vmlinux_path(map->dso, map);
> > +     int ret = dso__load_vmlinux_path(map__dso(map), map);
> >
> >       if (ret > 0)
> > -             dso__set_loaded(map->dso);
> > +             dso__set_loaded(map__dso(map));
> >
> >       return ret;
> >  }
> > @@ -1401,16 +1404,16 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
> >       if (long_name == NULL)
> >               return -ENOMEM;
> >
> > -     dso__set_long_name(map->dso, long_name, true);
> > -     dso__kernel_module_get_build_id(map->dso, "");
> > +     dso__set_long_name(map__dso(map), long_name, true);
> > +     dso__kernel_module_get_build_id(map__dso(map), "");
> >
> >       /*
> >        * Full name could reveal us kmod compression, so
> >        * we need to update the symtab_type if needed.
> >        */
> > -     if (m->comp && is_kmod_dso(map->dso)) {
> > -             map->dso->symtab_type++;
> > -             map->dso->comp = m->comp;
> > +     if (m->comp && is_kmod_dso(map__dso(map))) {
> > +             map__dso(map)->symtab_type++;
> > +             map__dso(map)->comp = m->comp;
> >       }
> >
> >       return 0;
> > @@ -1509,8 +1512,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
> >               return -1;
> >       map->end = start + size;
> >
> > -     dso__kernel_module_get_build_id(map->dso, machine->root_dir);
> > -
> > +     dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> >       return 0;
> >  }
> >
> > @@ -1619,7 +1621,7 @@ int machine__create_kernel_maps(struct machine *machine)
> >               struct map_rb_node *next = map_rb_node__next(rb_node);
> >
> >               if (next)
> > -                     machine__set_kernel_mmap(machine, start, next->map->start);
> > +                     machine__set_kernel_mmap(machine, start, map__start(next->map));
> >       }
> >
> >  out_put:
> > @@ -1683,10 +1685,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
> >               if (map == NULL)
> >                       goto out_problem;
> >
> > -             map->end = map->start + xm->end - xm->start;
> > +             map->end = map__start(map) + xm->end - xm->start;
> >
> >               if (build_id__is_defined(bid))
> > -                     dso__set_build_id(map->dso, bid);
> > +                     dso__set_build_id(map__dso(map), bid);
> >
> >       } else if (is_kernel_mmap) {
> >               const char *symbol_name = (xm->name + strlen(machine->mmap_name));
> > @@ -2148,14 +2150,14 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
> >       if (!map || callchain_param.key == CCKEY_FUNCTION)
> >               return srcline;
> >
> > -     srcline = srcline__tree_find(&map->dso->srclines, ip);
> > +     srcline = srcline__tree_find(&map__dso(map)->srclines, ip);
> >       if (!srcline) {
> >               bool show_sym = false;
> >               bool show_addr = callchain_param.key == CCKEY_ADDRESS;
> >
> > -             srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
> > +             srcline = get_srcline(map__dso(map), map__rip_2objdump(map, ip),
> >                                     ms->sym, show_sym, show_addr, ip);
> > -             srcline__tree_insert(&map->dso->srclines, ip, srcline);
> > +             srcline__tree_insert(&map__dso(map)->srclines, ip, srcline);
> >       }
> >
> >       return srcline;
> > @@ -2179,7 +2181,7 @@ static int add_callchain_ip(struct thread *thread,
> >  {
> >       struct map_symbol ms;
> >       struct addr_location al;
> > -     int nr_loop_iter = 0;
> > +     int nr_loop_iter = 0, err;
> >       u64 iter_cycles = 0;
> >       const char *srcline = NULL;
> >
> > @@ -2228,9 +2230,10 @@ static int add_callchain_ip(struct thread *thread,
> >               }
> >       }
> >
> > -     if (symbol_conf.hide_unresolved && al.sym == NULL)
> > +     if (symbol_conf.hide_unresolved && al.sym == NULL) {
> > +             addr_location__put(&al);
> >               return 0;
> > -
> > +     }
> >       if (iter) {
> >               nr_loop_iter = iter->nr_loop_iter;
> >               iter_cycles = iter->cycles;
> > @@ -2240,9 +2243,10 @@ static int add_callchain_ip(struct thread *thread,
> >       ms.map = al.map;
> >       ms.sym = al.sym;
> >       srcline = callchain_srcline(&ms, al.addr);
> > -     return callchain_cursor_append(cursor, ip, &ms,
> > -                                    branch, flags, nr_loop_iter,
> > -                                    iter_cycles, branch_from, srcline);
> > +     err = callchain_cursor_append(cursor, ip, &ms,
> > +                                   branch, flags, nr_loop_iter,
> > +                                   iter_cycles, branch_from, srcline);
> > +     return err;
> >  }
> >
> >  struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
> > @@ -2937,15 +2941,15 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
> >       if (!symbol_conf.inline_name || !map || !sym)
> >               return ret;
> >
> > -     addr = map__map_ip(map, ip);
> > +     addr = map__dso_map_ip(map, ip);
> >       addr = map__rip_2objdump(map, addr);
> >
> > -     inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
> > +     inline_node = inlines__tree_find(&map__dso(map)->inlined_nodes, addr);
> >       if (!inline_node) {
> > -             inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
> > +             inline_node = dso__parse_addr_inlines(map__dso(map), addr, sym);
> >               if (!inline_node)
> >                       return ret;
> > -             inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
> > +             inlines__tree_insert(&map__dso(map)->inlined_nodes, inline_node);
> >       }
> >
> >       list_for_each_entry(ilist, &inline_node->val, list) {
> > @@ -2981,7 +2985,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
> >        * its corresponding binary.
> >        */
> >       if (entry->ms.map)
> > -             addr = map__map_ip(entry->ms.map, entry->ip);
> > +             addr = map__dso_map_ip(entry->ms.map, entry->ip);
> >
> >       srcline = callchain_srcline(&entry->ms, addr);
> >       return callchain_cursor_append(cursor, entry->ip, &entry->ms,
> > @@ -3183,7 +3187,7 @@ int machine__get_kernel_start(struct machine *machine)
> >                * kernel_start = 1ULL << 63 for x86_64.
> >                */
> >               if (!err && !machine__is(machine, "x86_64"))
> > -                     machine->kernel_start = map->start;
> > +                     machine->kernel_start = map__start(map);
> >       }
> >       return err;
> >  }
> > @@ -3234,8 +3238,8 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
> >       if (sym == NULL)
> >               return NULL;
> >
> > -     *modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
> > -     *addrp = map->unmap_ip(map, sym->start);
> > +     *modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
> > +     *addrp = map__unmap_ip(map, sym->start);
> >       return sym->name;
> >  }
> >
> > diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> > index 57e926ce115f..47d81e361e29 100644
> > --- a/tools/perf/util/map.c
> > +++ b/tools/perf/util/map.c
> > @@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
> >       map->pgoff    = pgoff;
> >       map->reloc    = 0;
> >       map->dso      = dso__get(dso);
> > -     map->map_ip   = map__map_ip;
> > -     map->unmap_ip = map__unmap_ip;
> > +     map->map_ip   = map__dso_map_ip;
> > +     map->unmap_ip = map__dso_unmap_ip;
> >       map->erange_warned = false;
> >       refcount_set(&map->refcnt, 1);
> >  }
> > @@ -120,10 +120,11 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> >                    u32 prot, u32 flags, struct build_id *bid,
> >                    char *filename, struct thread *thread)
> >  {
> > -     struct map *map = malloc(sizeof(*map));
> > +     struct map *map;
> >       struct nsinfo *nsi = NULL;
> >       struct nsinfo *nnsi;
> >
> > +     map = malloc(sizeof(*map));
> >       if (map != NULL) {
> >               char newfilename[PATH_MAX];
> >               struct dso *dso;
> > @@ -170,7 +171,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> >               map__init(map, start, start + len, pgoff, dso);
> >
> >               if (anon || no_dso) {
> > -                     map->map_ip = map->unmap_ip = identity__map_ip;
> > +                     map->map_ip = map->unmap_ip = map__identity_ip;
> >
> >                       /*
> >                        * Set memory without DSO as loaded. All map__find_*
> > @@ -204,8 +205,9 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> >   */
> >  struct map *map__new2(u64 start, struct dso *dso)
> >  {
> > -     struct map *map = calloc(1, (sizeof(*map) +
> > -                                  (dso->kernel ? sizeof(struct kmap) : 0)));
> > +     struct map *map;
> > +
> > +     map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
> >       if (map != NULL) {
> >               /*
> >                * ->end will be filled after we load all the symbols
> > @@ -218,7 +220,7 @@ struct map *map__new2(u64 start, struct dso *dso)
> >
> >  bool __map__is_kernel(const struct map *map)
> >  {
> > -     if (!map->dso->kernel)
> > +     if (!map__dso(map)->kernel)
> >               return false;
> >       return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
> >  }
> > @@ -234,7 +236,7 @@ bool __map__is_bpf_prog(const struct map *map)
> >  {
> >       const char *name;
> >
> > -     if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> > +     if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> >               return true;
> >
> >       /*
> > @@ -242,7 +244,7 @@ bool __map__is_bpf_prog(const struct map *map)
> >        * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
> >        * guess the type based on name.
> >        */
> > -     name = map->dso->short_name;
> > +     name = map__dso(map)->short_name;
> >       return name && (strstr(name, "bpf_prog_") == name);
> >  }
> >
> > @@ -250,7 +252,7 @@ bool __map__is_bpf_image(const struct map *map)
> >  {
> >       const char *name;
> >
> > -     if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> > +     if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> >               return true;
> >
> >       /*
> > @@ -258,18 +260,19 @@ bool __map__is_bpf_image(const struct map *map)
> >        * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
> >        * guess the type based on name.
> >        */
> > -     name = map->dso->short_name;
> > +     name = map__dso(map)->short_name;
> >       return name && is_bpf_image(name);
> >  }
> >
> >  bool __map__is_ool(const struct map *map)
> >  {
> > -     return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
> > +     return map__dso(map) &&
> > +            map__dso(map)->binary_type == DSO_BINARY_TYPE__OOL;
> >  }
> >
> >  bool map__has_symbols(const struct map *map)
> >  {
> > -     return dso__has_symbols(map->dso);
> > +     return dso__has_symbols(map__dso(map));
> >  }
> >
> >  static void map__exit(struct map *map)
> > @@ -292,7 +295,7 @@ void map__put(struct map *map)
> >
> >  void map__fixup_start(struct map *map)
> >  {
> > -     struct rb_root_cached *symbols = &map->dso->symbols;
> > +     struct rb_root_cached *symbols = &map__dso(map)->symbols;
> >       struct rb_node *nd = rb_first_cached(symbols);
> >       if (nd != NULL) {
> >               struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> > @@ -302,7 +305,7 @@ void map__fixup_start(struct map *map)
> >
> >  void map__fixup_end(struct map *map)
> >  {
> > -     struct rb_root_cached *symbols = &map->dso->symbols;
> > +     struct rb_root_cached *symbols = &map__dso(map)->symbols;
> >       struct rb_node *nd = rb_last(&symbols->rb_root);
> >       if (nd != NULL) {
> >               struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> > @@ -314,18 +317,18 @@ void map__fixup_end(struct map *map)
> >
> >  int map__load(struct map *map)
> >  {
> > -     const char *name = map->dso->long_name;
> > +     const char *name = map__dso(map)->long_name;
> >       int nr;
> >
> > -     if (dso__loaded(map->dso))
> > +     if (dso__loaded(map__dso(map)))
> >               return 0;
> >
> > -     nr = dso__load(map->dso, map);
> > +     nr = dso__load(map__dso(map), map);
> >       if (nr < 0) {
> > -             if (map->dso->has_build_id) {
> > +             if (map__dso(map)->has_build_id) {
> >                       char sbuild_id[SBUILD_ID_SIZE];
> >
> > -                     build_id__sprintf(&map->dso->bid, sbuild_id);
> > +                     build_id__sprintf(&map__dso(map)->bid, sbuild_id);
> >                       pr_debug("%s with build id %s not found", name, sbuild_id);
> >               } else
> >                       pr_debug("Failed to open %s", name);
> > @@ -357,7 +360,7 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
> >       if (map__load(map) < 0)
> >               return NULL;
> >
> > -     return dso__find_symbol(map->dso, addr);
> > +     return dso__find_symbol(map__dso(map), addr);
> >  }
> >
> >  struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> > @@ -365,24 +368,24 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> >       if (map__load(map) < 0)
> >               return NULL;
> >
> > -     if (!dso__sorted_by_name(map->dso))
> > -             dso__sort_by_name(map->dso);
> > +     if (!dso__sorted_by_name(map__dso(map)))
> > +             dso__sort_by_name(map__dso(map));
> >
> > -     return dso__find_symbol_by_name(map->dso, name);
> > +     return dso__find_symbol_by_name(map__dso(map), name);
> >  }
> >
> >  struct map *map__clone(struct map *from)
> >  {
> > -     size_t size = sizeof(struct map);
> >       struct map *map;
> > +     size_t size = sizeof(struct map);
> >
> > -     if (from->dso && from->dso->kernel)
> > +     if (map__dso(from) && map__dso(from)->kernel)
> >               size += sizeof(struct kmap);
> >
> >       map = memdup(from, size);
> >       if (map != NULL) {
> >               refcount_set(&map->refcnt, 1);
> > -             dso__get(map->dso);
> > +             map->dso = dso__get(map->dso);
> >       }
> >
> >       return map;
> > @@ -391,7 +394,8 @@ struct map *map__clone(struct map *from)
> >  size_t map__fprintf(struct map *map, FILE *fp)
> >  {
> >       return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
> > -                    map->start, map->end, map->pgoff, map->dso->name);
> > +                    map__start(map), map__end(map),
> > +                    map__pgoff(map), map__dso(map)->name);
> >  }
> >
> >  size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> > @@ -399,11 +403,11 @@ size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> >       char buf[symbol_conf.pad_output_len_dso + 1];
> >       const char *dsoname = "[unknown]";
> >
> > -     if (map && map->dso) {
> > -             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > -                     dsoname = map->dso->long_name;
> > +     if (map && map__dso(map)) {
> > +             if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> > +                     dsoname = map__dso(map)->long_name;
> >               else
> > -                     dsoname = map->dso->name;
> > +                     dsoname = map__dso(map)->name;
> >       }
> >
> >       if (symbol_conf.pad_output_len_dso) {
> > @@ -418,7 +422,8 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
> >  {
> >       if (map == NULL)
> >               return SRCLINE_UNKNOWN;
> > -     return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
> > +     return get_srcline(map__dso(map), map__rip_2objdump(map, addr),
> > +                        sym, true, true, addr);
> >  }
> >
> >  int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> > @@ -426,7 +431,7 @@ int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> >  {
> >       int ret = 0;
> >
> > -     if (map && map->dso) {
> > +     if (map && map__dso(map)) {
> >               char *srcline = map__srcline(map, addr, NULL);
> >               if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
> >                       ret = fprintf(fp, "%s%s", prefix, srcline);
> > @@ -472,20 +477,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
> >               }
> >       }
> >
> > -     if (!map->dso->adjust_symbols)
> > +     if (!map__dso(map)->adjust_symbols)
> >               return rip;
> >
> > -     if (map->dso->rel)
> > -             return rip - map->pgoff;
> > +     if (map__dso(map)->rel)
> > +             return rip - map__pgoff(map);
> >
> >       /*
> >        * kernel modules also have DSO_TYPE_USER in dso->kernel,
> >        * but all kernel modules are ET_REL, so won't get here.
> >        */
> > -     if (map->dso->kernel == DSO_SPACE__USER)
> > -             return rip + map->dso->text_offset;
> > +     if (map__dso(map)->kernel == DSO_SPACE__USER)
> > +             return rip + map__dso(map)->text_offset;
> >
> > -     return map->unmap_ip(map, rip) - map->reloc;
> > +     return map__unmap_ip(map, rip) - map__reloc(map);
> >  }
> >
> >  /**
> > @@ -502,34 +507,34 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
> >   */
> >  u64 map__objdump_2mem(struct map *map, u64 ip)
> >  {
> > -     if (!map->dso->adjust_symbols)
> > -             return map->unmap_ip(map, ip);
> > +     if (!map__dso(map)->adjust_symbols)
> > +             return map__unmap_ip(map, ip);
> >
> > -     if (map->dso->rel)
> > -             return map->unmap_ip(map, ip + map->pgoff);
> > +     if (map__dso(map)->rel)
> > +             return map__unmap_ip(map, ip + map__pgoff(map));
> >
> >       /*
> >        * kernel modules also have DSO_TYPE_USER in dso->kernel,
> >        * but all kernel modules are ET_REL, so won't get here.
> >        */
> > -     if (map->dso->kernel == DSO_SPACE__USER)
> > -             return map->unmap_ip(map, ip - map->dso->text_offset);
> > +     if (map__dso(map)->kernel == DSO_SPACE__USER)
> > +             return map__unmap_ip(map, ip - map__dso(map)->text_offset);
> >
> > -     return ip + map->reloc;
> > +     return ip + map__reloc(map);
> >  }
> >
> >  bool map__contains_symbol(const struct map *map, const struct symbol *sym)
> >  {
> > -     u64 ip = map->unmap_ip(map, sym->start);
> > +     u64 ip = map__unmap_ip(map, sym->start);
> >
> > -     return ip >= map->start && ip < map->end;
> > +     return ip >= map__start(map) && ip < map__end(map);
> >  }
> >
> >  struct kmap *__map__kmap(struct map *map)
> >  {
> > -     if (!map->dso || !map->dso->kernel)
> > +     if (!map__dso(map) || !map__dso(map)->kernel)
> >               return NULL;
> > -     return (struct kmap *)(map + 1);
> > +     return (struct kmap *)(&map[1]);
> >  }
> >
> >  struct kmap *map__kmap(struct map *map)
> > @@ -552,17 +557,17 @@ struct maps *map__kmaps(struct map *map)
> >       return kmap->kmaps;
> >  }
> >
> > -u64 map__map_ip(const struct map *map, u64 ip)
> > +u64 map__dso_map_ip(const struct map *map, u64 ip)
> >  {
> > -     return ip - map->start + map->pgoff;
> > +     return ip - map__start(map) + map__pgoff(map);
> >  }
> >
> > -u64 map__unmap_ip(const struct map *map, u64 ip)
> > +u64 map__dso_unmap_ip(const struct map *map, u64 ip)
> >  {
> > -     return ip + map->start - map->pgoff;
> > +     return ip + map__start(map) - map__pgoff(map);
> >  }
> >
> > -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
> > +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip)
> >  {
> >       return ip;
> >  }
> > diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> > index d1a6f85fd31d..99ef0464a357 100644
> > --- a/tools/perf/util/map.h
> > +++ b/tools/perf/util/map.h
> > @@ -41,15 +41,65 @@ struct kmap *map__kmap(struct map *map);
> >  struct maps *map__kmaps(struct map *map);
> >
> >  /* ip -> dso rip */
> > -u64 map__map_ip(const struct map *map, u64 ip);
> > +u64 map__dso_map_ip(const struct map *map, u64 ip);
> >  /* dso rip -> ip */
> > -u64 map__unmap_ip(const struct map *map, u64 ip);
> > +u64 map__dso_unmap_ip(const struct map *map, u64 ip);
> >  /* Returns ip */
> > -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
> > +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
> > +
> > +static inline struct dso *map__dso(const struct map *map)
> > +{
> > +     return map->dso;
> > +}
> > +
> > +static inline u64 map__map_ip(const struct map *map, u64 ip)
> > +{
> > +     return map->map_ip(map, ip);
> > +}
> > +
> > +static inline u64 map__unmap_ip(const struct map *map, u64 ip)
> > +{
> > +     return map->unmap_ip(map, ip);
> > +}
> > +
> > +static inline u64 map__start(const struct map *map)
> > +{
> > +     return map->start;
> > +}
> > +
> > +static inline u64 map__end(const struct map *map)
> > +{
> > +     return map->end;
> > +}
> > +
> > +static inline u64 map__pgoff(const struct map *map)
> > +{
> > +     return map->pgoff;
> > +}
> > +
> > +static inline u64 map__reloc(const struct map *map)
> > +{
> > +     return map->reloc;
> > +}
> > +
> > +static inline u32 map__flags(const struct map *map)
> > +{
> > +     return map->flags;
> > +}
> > +
> > +static inline u32 map__prot(const struct map *map)
> > +{
> > +     return map->prot;
> > +}
> > +
> > +static inline bool map__priv(const struct map *map)
> > +{
> > +     return map->priv;
> > +}
> >
> >  static inline size_t map__size(const struct map *map)
> >  {
> > -     return map->end - map->start;
> > +     return map__end(map) - map__start(map);
> >  }
> >
> >  /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
> > diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> > index 9fc3e7186b8e..6efbcb79131c 100644
> > --- a/tools/perf/util/maps.c
> > +++ b/tools/perf/util/maps.c
> > @@ -30,24 +30,24 @@ static void __maps__free_maps_by_name(struct maps *maps)
> >       maps->nr_maps_allocated = 0;
> >  }
> >
> > -static int __maps__insert(struct maps *maps, struct map *map)
> > +static struct map *__maps__insert(struct maps *maps, struct map *map)
> >  {
> >       struct rb_node **p = &maps__entries(maps)->rb_node;
> >       struct rb_node *parent = NULL;
> > -     const u64 ip = map->start;
> > +     const u64 ip = map__start(map);
> >       struct map_rb_node *m, *new_rb_node;
> >
> >       new_rb_node = malloc(sizeof(*new_rb_node));
> >       if (!new_rb_node)
> > -             return -ENOMEM;
> > +             return NULL;
> >
> >       RB_CLEAR_NODE(&new_rb_node->rb_node);
> > -     new_rb_node->map = map;
> > +     new_rb_node->map = map__get(map);
> >
> >       while (*p != NULL) {
> >               parent = *p;
> >               m = rb_entry(parent, struct map_rb_node, rb_node);
> > -             if (ip < m->map->start)
> > +             if (ip < map__start(m->map))
> >                       p = &(*p)->rb_left;
> >               else
> >                       p = &(*p)->rb_right;
> > @@ -55,22 +55,23 @@ static int __maps__insert(struct maps *maps, struct map *map)
> >
> >       rb_link_node(&new_rb_node->rb_node, parent, p);
> >       rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
> > -     map__get(map);
> > -     return 0;
> > +     return new_rb_node->map;
> >  }
> >
> >  int maps__insert(struct maps *maps, struct map *map)
> >  {
> > -     int err;
> > +     int err = 0;
> >
> >       down_write(maps__lock(maps));
> > -     err = __maps__insert(maps, map);
> > -     if (err)
> > +     map = __maps__insert(maps, map);
> > +     if (!map) {
> > +             err = -ENOMEM;
> >               goto out;
> > +     }
> >
> >       ++maps->nr_maps;
> >
> > -     if (map->dso && map->dso->kernel) {
> > +     if (map__dso(map) && map__dso(map)->kernel) {
> >               struct kmap *kmap = map__kmap(map);
> >
> >               if (kmap)
> > @@ -193,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> >       if (map != NULL && map__load(map) >= 0) {
> >               if (mapp != NULL)
> >                       *mapp = map;
> > -             return map__find_symbol(map, map->map_ip(map, addr));
> > +             return map__find_symbol(map, map__map_ip(map, addr));
> >       }
> >
> >       return NULL;
> > @@ -228,7 +229,8 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
> >
> >  int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> >  {
> > -     if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> > +     if (ams->addr < map__start(ams->ms.map) ||
> > +         ams->addr >= map__end(ams->ms.map)) {
> >               if (maps == NULL)
> >                       return -1;
> >               ams->ms.map = maps__find(maps, ams->addr);
> > @@ -236,7 +238,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> >                       return -1;
> >       }
> >
> > -     ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> > +     ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
> >       ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> >
> >       return ams->ms.sym ? 0 : -1;
> > @@ -253,7 +255,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
> >               printed += fprintf(fp, "Map:");
> >               printed += map__fprintf(pos->map, fp);
> >               if (verbose > 2) {
> > -                     printed += dso__fprintf(pos->map->dso, fp);
> > +                     printed += dso__fprintf(map__dso(pos->map), fp);
> >                       printed += fprintf(fp, "--\n");
> >               }
> >       }
> > @@ -282,9 +284,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >       while (next) {
> >               struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
> >
> > -             if (pos->map->end > map->start) {
> > +             if (map__end(pos->map) > map__start(map)) {
> >                       first = next;
> > -                     if (pos->map->start <= map->start)
> > +                     if (map__start(pos->map) <= map__start(map))
> >                               break;
> >                       next = next->rb_left;
> >               } else
> > @@ -300,14 +302,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                * Stop if current map starts after map->end.
> >                * Maps are ordered by start: next will not overlap for sure.
> >                */
> > -             if (pos->map->start >= map->end)
> > +             if (map__start(pos->map) >= map__end(map))
> >                       break;
> >
> >               if (verbose >= 2) {
> >
> >                       if (use_browser) {
> >                               pr_debug("overlapping maps in %s (disable tui for more info)\n",
> > -                                        map->dso->name);
> > +                                        map__dso(map)->name);
> >                       } else {
> >                               fputs("overlapping maps:\n", fp);
> >                               map__fprintf(map, fp);
> > @@ -320,7 +322,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                * Now check if we need to create new maps for areas not
> >                * overlapped by the new map:
> >                */
> > -             if (map->start > pos->map->start) {
> > +             if (map__start(map) > map__start(pos->map)) {
> >                       struct map *before = map__clone(pos->map);
> >
> >                       if (before == NULL) {
> > @@ -328,17 +330,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                               goto put_map;
> >                       }
> >
> > -                     before->end = map->start;
> > -                     err = __maps__insert(maps, before);
> > -                     if (err)
> > +                     before->end = map__start(map);
> > +                     if (!__maps__insert(maps, before)) {
> > +                             map__put(before);
> > +                             err = -ENOMEM;
> >                               goto put_map;
> > +                     }
> >
> >                       if (verbose >= 2 && !use_browser)
> >                               map__fprintf(before, fp);
> >                       map__put(before);
> >               }
> >
> > -             if (map->end < pos->map->end) {
> > +             if (map__end(map) < map__end(pos->map)) {
> >                       struct map *after = map__clone(pos->map);
> >
> >                       if (after == NULL) {
> > @@ -346,14 +350,15 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                               goto put_map;
> >                       }
> >
> > -                     after->start = map->end;
> > -                     after->pgoff += map->end - pos->map->start;
> > -                     assert(pos->map->map_ip(pos->map, map->end) ==
> > -                             after->map_ip(after, map->end));
> > -                     err = __maps__insert(maps, after);
> > -                     if (err)
> > +                     after->start = map__end(map);
> > +                     after->pgoff += map__end(map) - map__start(pos->map);
> > +                     assert(map__map_ip(pos->map, map__end(map)) ==
> > +                             map__map_ip(after, map__end(map)));
> > +                     if (!__maps__insert(maps, after)) {
> > +                             map__put(after);
> > +                             err = -ENOMEM;
> >                               goto put_map;
> > -
> > +                     }
> >                       if (verbose >= 2 && !use_browser)
> >                               map__fprintf(after, fp);
> >                       map__put(after);
> > @@ -377,7 +382,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >  int maps__clone(struct thread *thread, struct maps *parent)
> >  {
> >       struct maps *maps = thread->maps;
> > -     int err;
> > +     int err = 0;
> >       struct map_rb_node *rb_node;
> >
> >       down_read(maps__lock(parent));
> > @@ -391,17 +396,13 @@ int maps__clone(struct thread *thread, struct maps *parent)
> >               }
> >
> >               err = unwind__prepare_access(maps, new, NULL);
> > -             if (err)
> > -                     goto out_unlock;
> > +             if (!err)
> > +                     err = maps__insert(maps, new);
> >
> > -             err = maps__insert(maps, new);
> > +             map__put(new);
> >               if (err)
> >                       goto out_unlock;
> > -
> > -             map__put(new);
> >       }
> > -
> > -     err = 0;
> >  out_unlock:
> >       up_read(maps__lock(parent));
> >       return err;
> > @@ -428,9 +429,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
> >       p = maps__entries(maps)->rb_node;
> >       while (p != NULL) {
> >               m = rb_entry(p, struct map_rb_node, rb_node);
> > -             if (ip < m->map->start)
> > +             if (ip < map__start(m->map))
> >                       p = p->rb_left;
> > -             else if (ip >= m->map->end)
> > +             else if (ip >= map__end(m->map))
> >                       p = p->rb_right;
> >               else
> >                       goto out;
> > diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> > index f9fbf611f2bf..1a93dca50a4c 100644
> > --- a/tools/perf/util/probe-event.c
> > +++ b/tools/perf/util/probe-event.c
> > @@ -134,15 +134,15 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
> >       /* ref_reloc_sym is just a label. Need a special fix*/
> >       reloc_sym = kernel_get_ref_reloc_sym(&map);
> >       if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
> > -             *addr = (!map->reloc || reloc) ? reloc_sym->addr :
> > +             *addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
> >                       reloc_sym->unrelocated_addr;
> >       else {
> >               sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
> >               if (!sym)
> >                       return -ENOENT;
> > -             *addr = map->unmap_ip(map, sym->start) -
> > -                     ((reloc) ? 0 : map->reloc) -
> > -                     ((reladdr) ? map->start : 0);
> > +             *addr = map__unmap_ip(map, sym->start) -
> > +                     ((reloc) ? 0 : map__reloc(map)) -
> > +                     ((reladdr) ? map__start(map) : 0);
> >       }
> >       return 0;
> >  }
> > @@ -164,8 +164,8 @@ static struct map *kernel_get_module_map(const char *module)
> >
> >       maps__for_each_entry(maps, pos) {
> >               /* short_name is "[module]" */
> > -             const char *short_name = pos->map->dso->short_name;
> > -             u16 short_name_len =  pos->map->dso->short_name_len;
> > +             const char *short_name = map__dso(pos->map)->short_name;
> > +             u16 short_name_len =  map__dso(pos->map)->short_name_len;
> >
> >               if (strncmp(short_name + 1, module,
> >                           short_name_len - 2) == 0 &&
> > @@ -183,11 +183,11 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
> >               struct map *map;
> >
> >               map = dso__new_map(target);
> > -             if (map && map->dso) {
> > -                     BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
> > -                     nsinfo__put(map->dso->nsinfo);
> > -                     map->dso->nsinfo = nsinfo__get(nsi);
> > -                     pthread_mutex_unlock(&map->dso->lock);
> > +             if (map && map__dso(map)) {
> > +                     BUG_ON(pthread_mutex_lock(&map__dso(map)->lock) != 0);
> > +                     nsinfo__put(map__dso(map)->nsinfo);
> > +                     map__dso(map)->nsinfo = nsinfo__get(nsi);
> > +                     pthread_mutex_unlock(&map__dso(map)->lock);
> >               }
> >               return map;
> >       } else {
> > @@ -253,7 +253,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
> >
> >       map = kernel_get_module_map(NULL);
> >       if (map) {
> > -             ret = address <= map->start || map->end < address;
> > +             ret = address <= map__start(map) || map__end(map) < address;
> >               if (ret)
> >                       pr_warning("%s is out of .text, skip it.\n", symbol);
> >               map__put(map);
> > @@ -340,7 +340,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
> >               snprintf(module_name, sizeof(module_name), "[%s]", module);
> >               map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
> >               if (map) {
> > -                     dso = map->dso;
> > +                     dso = map__dso(map);
> >                       goto found;
> >               }
> >               pr_debug("Failed to find module %s.\n", module);
> > @@ -348,7 +348,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
> >       }
> >
> >       map = machine__kernel_map(host_machine);
> > -     dso = map->dso;
> > +     dso = map__dso(map);
> >       if (!dso->has_build_id)
> >               dso__read_running_kernel_build_id(dso, host_machine);
> >
> > @@ -396,7 +396,8 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
> >                                          "Consider identifying the final function used at run time and set the probe directly on that.\n",
> >                                          pp->function);
> >               } else
> > -                     address = map->unmap_ip(map, sym->start) - map->reloc;
> > +                     address = map__unmap_ip(map, sym->start) -
> > +                               map__reloc(map);
> >               break;
> >       }
> >       if (!address) {
> > @@ -862,8 +863,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
> >                       free(tevs[i].point.symbol);
> >               tevs[i].point.symbol = tmp;
> >               tevs[i].point.offset = tevs[i].point.address -
> > -                     (map->reloc ? reloc_sym->unrelocated_addr :
> > -                                   reloc_sym->addr);
> > +                     (map__reloc(map) ? reloc_sym->unrelocated_addr : reloc_sym->addr);
> >       }
> >       return skipped;
> >  }
> > @@ -2243,7 +2243,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
> >               goto out;
> >
> >       pp->retprobe = tp->retprobe;
> > -     pp->offset = addr - map->unmap_ip(map, sym->start);
> > +     pp->offset = addr - map__unmap_ip(map, sym->start);
> >       pp->function = strdup(sym->name);
> >       ret = pp->function ? 0 : -ENOMEM;
> >
> > @@ -3117,7 +3117,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
> >                       goto err_out;
> >               }
> >               /* Add one probe point */
> > -             tp->address = map->unmap_ip(map, sym->start) + pp->offset;
> > +             tp->address = map__unmap_ip(map, sym->start) + pp->offset;
> >
> >               /* Check the kprobe (not in module) is within .text  */
> >               if (!pev->uprobes && !pev->target &&
> > @@ -3759,13 +3759,13 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
> >                              (target) ? : "kernel");
> >               goto end;
> >       }
> > -     if (!dso__sorted_by_name(map->dso))
> > -             dso__sort_by_name(map->dso);
> > +     if (!dso__sorted_by_name(map__dso(map)))
> > +             dso__sort_by_name(map__dso(map));
> >
> >       /* Show all (filtered) symbols */
> >       setup_pager();
> >
> > -     for (nd = rb_first_cached(&map->dso->symbol_names); nd;
> > +     for (nd = rb_first_cached(&map__dso(map)->symbol_names); nd;
> >            nd = rb_next(nd)) {
> >               struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
> >
> > diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
> > index a5d945415bbc..1282fb9b45e1 100644
> > --- a/tools/perf/util/scripting-engines/trace-event-perl.c
> > +++ b/tools/perf/util/scripting-engines/trace-event-perl.c
> > @@ -315,11 +315,12 @@ static SV *perl_process_callchain(struct perf_sample *sample,
> >               if (node->ms.map) {
> >                       struct map *map = node->ms.map;
> >                       const char *dsoname = "[unknown]";
> > -                     if (map && map->dso) {
> > -                             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > -                                     dsoname = map->dso->long_name;
> > +                     if (map && map__dso(map)) {
> > +                             if (symbol_conf.show_kernel_path &&
> > +                                 map__dso(map)->long_name)
> > +                                     dsoname = map__dso(map)->long_name;
> >                               else
> > -                                     dsoname = map->dso->name;
> > +                                     dsoname = map__dso(map)->name;
> >                       }
> >                       if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
> >                               hv_undef(elem);
> > diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> > index 0290dc3a6258..559b2ac5cac3 100644
> > --- a/tools/perf/util/scripting-engines/trace-event-python.c
> > +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> > @@ -382,11 +382,11 @@ static const char *get_dsoname(struct map *map)
> >  {
> >       const char *dsoname = "[unknown]";
> >
> > -     if (map && map->dso) {
> > -             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > -                     dsoname = map->dso->long_name;
> > +     if (map && map__dso(map)) {
> > +             if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> > +                     dsoname = map__dso(map)->long_name;
> >               else
> > -                     dsoname = map->dso->name;
> > +                     dsoname = map__dso(map)->name;
> >       }
> >
> >       return dsoname;
> > @@ -527,7 +527,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
> >       if (al->addr < sym->end)
> >               offset = al->addr - sym->start;
> >       else
> > -             offset = al->addr - al->map->start - sym->start;
> > +             offset = al->addr - map__start(al->map) - sym->start;
> >
> >       return offset;
> >  }
> > @@ -741,7 +741,7 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
> >  {
> >       if (al->map) {
> >               pydict_set_item_string_decref(dict, dso_field,
> > -                     _PyUnicode_FromString(al->map->dso->name));
> > +                     _PyUnicode_FromString(map__dso(al->map)->name));
> >       }
> >       if (al->sym) {
> >               pydict_set_item_string_decref(dict, sym_field,
> > diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> > index 25686d67ee6f..6d19bbcd30df 100644
> > --- a/tools/perf/util/sort.c
> > +++ b/tools/perf/util/sort.c
> > @@ -173,8 +173,8 @@ struct sort_entry sort_comm = {
> >
> >  static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
> >  {
> > -     struct dso *dso_l = map_l ? map_l->dso : NULL;
> > -     struct dso *dso_r = map_r ? map_r->dso : NULL;
> > +     struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
> > +     struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
> >       const char *dso_name_l, *dso_name_r;
> >
> >       if (!dso_l || !dso_r)
> > @@ -200,9 +200,9 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
> >  static int _hist_entry__dso_snprintf(struct map *map, char *bf,
> >                                    size_t size, unsigned int width)
> >  {
> > -     if (map && map->dso) {
> > -             const char *dso_name = verbose > 0 ? map->dso->long_name :
> > -                     map->dso->short_name;
> > +     if (map && map__dso(map)) {
> > +             const char *dso_name = verbose > 0 ? map__dso(map)->long_name :
> > +                     map__dso(map)->short_name;
> >               return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
> >       }
> >
> > @@ -222,7 +222,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
> >       if (type != HIST_FILTER__DSO)
> >               return -1;
> >
> > -     return dso && (!he->ms.map || he->ms.map->dso != dso);
> > +     return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
> >  }
> >
> >  struct sort_entry sort_dso = {
> > @@ -302,12 +302,12 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
> >       size_t ret = 0;
> >
> >       if (verbose > 0) {
> > -             char o = map ? dso__symtab_origin(map->dso) : '!';
> > +             char o = map ? dso__symtab_origin(map__dso(map)) : '!';
> >               u64 rip = ip;
> >
> > -             if (map && map->dso && map->dso->kernel
> > -                 && map->dso->adjust_symbols)
> > -                     rip = map->unmap_ip(map, ip);
> > +             if (map && map__dso(map) && map__dso(map)->kernel
> > +                 && map__dso(map)->adjust_symbols)
> > +                     rip = map__unmap_ip(map, ip);
> >
> >               ret += repsep_snprintf(bf, size, "%-#*llx %c ",
> >                                      BITS_PER_LONG / 4 + 2, rip, o);
> > @@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
> >               if (sym->type == STT_OBJECT) {
> >                       ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
> >                       ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
> > -                                     ip - map->unmap_ip(map, sym->start));
> > +                                     ip - map__unmap_ip(map, sym->start));
> >               } else {
> >                       ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
> >                                              width - ret,
> > @@ -517,7 +517,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
> >       if (!map)
> >               return no_srcfile;
> >
> > -     sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
> > +     sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
> >                        e->ms.sym, false, true, true, e->ip);
> >       if (!strcmp(sf, SRCLINE_UNKNOWN))
> >               return no_srcfile;
> > @@ -838,7 +838,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
> >               return -1;
> >
> >       return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
> > -                    he->branch_info->from.ms.map->dso != dso);
> > +             map__dso(he->branch_info->from.ms.map) != dso);
> >  }
> >
> >  static int64_t
> > @@ -870,7 +870,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
> >               return -1;
> >
> >       return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
> > -                    he->branch_info->to.ms.map->dso != dso);
> > +             map__dso(he->branch_info->to.ms.map) != dso);
> >  }
> >
> >  static int64_t
> > @@ -1259,7 +1259,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
> >       if (!l_map) return -1;
> >       if (!r_map) return 1;
> >
> > -     rc = dso__cmp_id(l_map->dso, r_map->dso);
> > +     rc = dso__cmp_id(map__dso(l_map), map__dso(r_map));
> >       if (rc)
> >               return rc;
> >       /*
> > @@ -1271,9 +1271,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
> >        */
> >
> >       if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
> > -         (!(l_map->flags & MAP_SHARED)) &&
> > -         !l_map->dso->id.maj && !l_map->dso->id.min &&
> > -         !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
> > +         (!(map__flags(l_map) & MAP_SHARED)) &&
> > +         !map__dso(l_map)->id.maj && !map__dso(l_map)->id.min &&
> > +         !map__dso(l_map)->id.ino && !map__dso(l_map)->id.ino_generation) {
> >               /* userspace anonymous */
> >
> >               if (left->thread->pid_ > right->thread->pid_) return -1;
> > @@ -1307,10 +1307,10 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
> >
> >               /* print [s] for shared data mmaps */
> >               if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
> > -                  map && !(map->prot & PROT_EXEC) &&
> > -                 (map->flags & MAP_SHARED) &&
> > -                 (map->dso->id.maj || map->dso->id.min ||
> > -                  map->dso->id.ino || map->dso->id.ino_generation))
> > +                 map && !(map__prot(map) & PROT_EXEC) &&
> > +                 (map__flags(map) & MAP_SHARED) &&
> > +                 (map__dso(map)->id.maj || map__dso(map)->id.min ||
> > +                  map__dso(map)->id.ino || map__dso(map)->id.ino_generation))
> >                       level = 's';
> >               else if (!map)
> >                       level = 'X';
> > @@ -1806,7 +1806,7 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
> >  static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
> >                                         size_t bf_size, unsigned int width)
> >  {
> > -     if (map && map->dso)
> > +     if (map && map__dso(map))
> >               return repsep_snprintf(bf, bf_size, "%*d", width,
> >                                      map__size(map));
> >
> > diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> > index 3ca9a0968345..056405d3d655 100644
> > --- a/tools/perf/util/symbol-elf.c
> > +++ b/tools/perf/util/symbol-elf.c
> > @@ -970,7 +970,7 @@ void __weak arch__sym_update(struct symbol *s __maybe_unused,
> >  static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >                                     GElf_Sym *sym, GElf_Shdr *shdr,
> >                                     struct maps *kmaps, struct kmap *kmap,
> > -                                   struct dso **curr_dsop, struct map **curr_mapp,
> > +                                   struct dso **curr_dsop,
> >                                     const char *section_name,
> >                                     bool adjust_kernel_syms, bool kmodule, bool *remap_kernel)
> >  {
> > @@ -994,18 +994,18 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >               if (*remap_kernel && dso->kernel && !kmodule) {
> >                       *remap_kernel = false;
> >                       map->start = shdr->sh_addr + ref_reloc(kmap);
> > -                     map->end = map->start + shdr->sh_size;
> > +                     map->end = map__start(map) + shdr->sh_size;
> >                       map->pgoff = shdr->sh_offset;
> > -                     map->map_ip = map__map_ip;
> > -                     map->unmap_ip = map__unmap_ip;
> > +                     map->map_ip = map__dso_map_ip;
> > +                     map->unmap_ip = map__dso_unmap_ip;
> >                       /* Ensure maps are correctly ordered */
> >                       if (kmaps) {
> >                               int err;
> > +                             struct map *updated = map__get(map);
> >
> > -                             map__get(map);
> >                               maps__remove(kmaps, map);
> > -                             err = maps__insert(kmaps, map);
> > -                             map__put(map);
> > +                             err = maps__insert(kmaps, updated);
> > +                             map__put(updated);
> >                               if (err)
> >                                       return err;
> >                       }
> > @@ -1021,7 +1021,6 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >                       map->pgoff = shdr->sh_offset;
> >               }
> >
> > -             *curr_mapp = map;
> >               *curr_dsop = dso;
> >               return 0;
> >       }
> > @@ -1036,7 +1035,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >               u64 start = sym->st_value;
> >
> >               if (kmodule)
> > -                     start += map->start + shdr->sh_offset;
> > +                     start += map__start(map) + shdr->sh_offset;
> >
> >               curr_dso = dso__new(dso_name);
> >               if (curr_dso == NULL)
> > @@ -1054,10 +1053,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >
> >               if (adjust_kernel_syms) {
> >                       curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
> > -                     curr_map->end    = curr_map->start + shdr->sh_size;
> > -                     curr_map->pgoff  = shdr->sh_offset;
> > +                     curr_map->end   = map__start(curr_map) + shdr->sh_size;
> > +                     curr_map->pgoff = shdr->sh_offset;
> >               } else {
> > -                     curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> > +                     curr_map->map_ip = map__identity_ip;
> > +                     curr_map->unmap_ip = map__identity_ip;
> >               }
> >               curr_dso->symtab_type = dso->symtab_type;
> >               if (maps__insert(kmaps, curr_map))
> > @@ -1068,13 +1068,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >                * *curr_map->dso.
> >                */
> >               dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
> > -             /* kmaps already got it */
> > -             map__put(curr_map);
> >               dso__set_loaded(curr_dso);
> > -             *curr_mapp = curr_map;
> >               *curr_dsop = curr_dso;
> > +             map__put(curr_map);
> >       } else
> > -             *curr_dsop = curr_map->dso;
> > +             *curr_dsop = map__dso(curr_map);
> >
> >       return 0;
> >  }
> > @@ -1085,7 +1083,6 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> >  {
> >       struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
> >       struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
> > -     struct map *curr_map = map;
> >       struct dso *curr_dso = dso;
> >       Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
> >       uint32_t nr_syms;
> > @@ -1175,7 +1172,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> >        * attempted to prelink vdso to its virtual address.
> >        */
> >       if (dso__is_vdso(dso))
> > -             map->reloc = map->start - dso->text_offset;
> > +             map->reloc = map__start(map) - dso->text_offset;
> >
> >       dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
> >       /*
> > @@ -1262,8 +1259,10 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> >                       --sym.st_value;
> >
> >               if (dso->kernel) {
> > -                     if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
> > -                                                    section_name, adjust_kernel_syms, kmodule, &remap_kernel))
> > +                     if (dso__process_kernel_symbol(dso, map, &sym, &shdr,
> > +                                                    kmaps, kmap, &curr_dso,
> > +                                                    section_name, adjust_kernel_syms,
> > +                                                    kmodule, &remap_kernel))
> >                               goto out_elf_end;
> >               } else if ((used_opd && runtime_ss->adjust_symbols) ||
> >                          (!used_opd && syms_ss->adjust_symbols)) {
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index 9b51e669a722..6289b3028b91 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -252,8 +252,8 @@ void maps__fixup_end(struct maps *maps)
> >       down_write(maps__lock(maps));
> >
> >       maps__for_each_entry(maps, curr) {
> > -             if (prev != NULL && !prev->map->end)
> > -                     prev->map->end = curr->map->start;
> > +             if (prev != NULL && !map__end(prev->map))
> > +                     prev->map->end = map__start(curr->map);
> >
> >               prev = curr;
> >       }
> > @@ -262,7 +262,7 @@ void maps__fixup_end(struct maps *maps)
> >        * We still haven't the actual symbols, so guess the
> >        * last map final address.
> >        */
> > -     if (curr && !curr->map->end)
> > +     if (curr && !map__end(curr->map))
> >               curr->map->end = ~0ULL;
> >
> >       up_write(maps__lock(maps));
> > @@ -778,12 +778,12 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
> >                       continue;
> >               }
> >
> > -             pos->start -= curr_map->start - curr_map->pgoff;
> > -             if (pos->end > curr_map->end)
> > -                     pos->end = curr_map->end;
> > +             pos->start -= map__start(curr_map) - map__pgoff(curr_map);
> > +             if (pos->end > map__end(curr_map))
> > +                     pos->end = map__end(curr_map);
> >               if (pos->end)
> > -                     pos->end -= curr_map->start - curr_map->pgoff;
> > -             symbols__insert(&curr_map->dso->symbols, pos);
> > +                     pos->end -= map__start(curr_map) - map__pgoff(curr_map);
> > +             symbols__insert(&map__dso(curr_map)->symbols, pos);
> >               ++count;
> >       }
> >
> > @@ -830,7 +830,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >
> >                       *module++ = '\0';
> >
> > -                     if (strcmp(curr_map->dso->short_name, module)) {
> > +                     if (strcmp(map__dso(curr_map)->short_name, module)) {
> >                               if (curr_map != initial_map &&
> >                                   dso->kernel == DSO_SPACE__KERNEL_GUEST &&
> >                                   machine__is_default_guest(machine)) {
> > @@ -841,7 +841,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >                                        * symbols are in its kmap. Mark it as
> >                                        * loaded.
> >                                        */
> > -                                     dso__set_loaded(curr_map->dso);
> > +                                     dso__set_loaded(map__dso(curr_map));
> >                               }
> >
> >                               curr_map = maps__find_by_name(kmaps, module);
> > @@ -854,7 +854,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >                                       goto discard_symbol;
> >                               }
> >
> > -                             if (curr_map->dso->loaded &&
> > +                             if (map__dso(curr_map)->loaded &&
> >                                   !machine__is_default_guest(machine))
> >                                       goto discard_symbol;
> >                       }
> > @@ -862,8 +862,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >                        * So that we look just like we get from .ko files,
> >                        * i.e. not prelinked, relative to initial_map->start.
> >                        */
> > -                     pos->start = curr_map->map_ip(curr_map, pos->start);
> > -                     pos->end   = curr_map->map_ip(curr_map, pos->end);
> > +                     pos->start = map__map_ip(curr_map, pos->start);
> > +                     pos->end   = map__map_ip(curr_map, pos->end);
> >               } else if (x86_64 && is_entry_trampoline(pos->name)) {
> >                       /*
> >                        * These symbols are not needed anymore since the
> > @@ -910,7 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >                               return -1;
> >                       }
> >
> > -                     curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> > +                     curr_map->map_ip = map__identity_ip;
> > +                     curr_map->unmap_ip = map__identity_ip;
> >                       if (maps__insert(kmaps, curr_map)) {
> >                               dso__put(ndso);
> >                               return -1;
> > @@ -924,7 +925,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >  add_symbol:
> >               if (curr_map != initial_map) {
> >                       rb_erase_cached(&pos->rb_node, root);
> > -                     symbols__insert(&curr_map->dso->symbols, pos);
> > +                     symbols__insert(&map__dso(curr_map)->symbols, pos);
> >                       ++moved;
> >               } else
> >                       ++count;
> > @@ -938,7 +939,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >       if (curr_map != initial_map &&
> >           dso->kernel == DSO_SPACE__KERNEL_GUEST &&
> >           machine__is_default_guest(maps__machine(kmaps))) {
> > -             dso__set_loaded(curr_map->dso);
> > +             dso__set_loaded(map__dso(curr_map));
> >       }
> >
> >       return count + moved;
> > @@ -1118,8 +1119,8 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
> >               }
> >
> >               /* Module must be in memory at the same address */
> > -             mi = find_module(old_map->dso->short_name, &modules);
> > -             if (!mi || mi->start != old_map->start) {
> > +             mi = find_module(map__dso(old_map)->short_name, &modules);
> > +             if (!mi || mi->start != map__start(old_map)) {
> >                       err = -EINVAL;
> >                       goto out;
> >               }
> > @@ -1214,7 +1215,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
> >               return -ENOMEM;
> >       }
> >
> > -     list_node->map->end = list_node->map->start + len;
> > +     list_node->map->end = map__start(list_node->map) + len;
> >       list_node->map->pgoff = pgoff;
> >
> >       list_add(&list_node->node, &md->maps);
> > @@ -1236,21 +1237,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >               struct map *old_map = rb_node->map;
> >
> >               /* no overload with this one */
> > -             if (new_map->end < old_map->start ||
> > -                 new_map->start >= old_map->end)
> > +             if (map__end(new_map) < map__start(old_map) ||
> > +                 map__start(new_map) >= map__end(old_map))
> >                       continue;
> >
> > -             if (new_map->start < old_map->start) {
> > +             if (map__start(new_map) < map__start(old_map)) {
> >                       /*
> >                        * |new......
> >                        *       |old....
> >                        */
> > -                     if (new_map->end < old_map->end) {
> > +                     if (map__end(new_map) < map__end(old_map)) {
> >                               /*
> >                                * |new......|     -> |new..|
> >                                *       |old....| ->       |old....|
> >                                */
> > -                             new_map->end = old_map->start;
> > +                             new_map->end = map__start(old_map);
> >                       } else {
> >                               /*
> >                                * |new.............| -> |new..|       |new..|
> > @@ -1271,17 +1272,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >                                       goto out;
> >                               }
> >
> > -                             m->map->end = old_map->start;
> > +                             m->map->end = map__start(old_map);
> >                               list_add_tail(&m->node, &merged);
> > -                             new_map->pgoff += old_map->end - new_map->start;
> > -                             new_map->start = old_map->end;
> > +                             new_map->pgoff +=
> > +                                     map__end(old_map) - map__start(new_map);
> > +                             new_map->start = map__end(old_map);
> >                       }
> >               } else {
> >                       /*
> >                        *      |new......
> >                        * |old....
> >                        */
> > -                     if (new_map->end < old_map->end) {
> > +                     if (map__end(new_map) < map__end(old_map)) {
> >                               /*
> >                                *      |new..|   -> x
> >                                * |old.........| -> |old.........|
> > @@ -1294,8 +1296,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >                                *      |new......| ->         |new...|
> >                                * |old....|        -> |old....|
> >                                */
> > -                             new_map->pgoff += old_map->end - new_map->start;
> > -                             new_map->start = old_map->end;
> > +                             new_map->pgoff +=
> > +                                     map__end(old_map) - map__start(new_map);
> > +                             new_map->start = map__end(old_map);
> >                       }
> >               }
> >       }
> > @@ -1361,7 +1364,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >       }
> >
> >       /* Read new maps into temporary lists */
> > -     err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
> > +     err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
> >                             &is_64_bit);
> >       if (err)
> >               goto out_err;
> > @@ -1391,7 +1394,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >               struct map_list_node *new_node;
> >
> >               list_for_each_entry(new_node, &md.maps, node) {
> > -                     if (stext >= new_node->map->start && stext < new_node->map->end) {
> > +                     if (stext >= map__start(new_node->map) &&
> > +                         stext < map__end(new_node->map)) {
> >                               replacement_map = new_node->map;
> >                               break;
> >                       }
> > @@ -1408,16 +1412,18 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >               new_node = list_entry(md.maps.next, struct map_list_node, node);
> >               list_del_init(&new_node->node);
> >               if (new_node->map == replacement_map) {
> > -                     map->start      = new_node->map->start;
> > -                     map->end        = new_node->map->end;
> > -                     map->pgoff      = new_node->map->pgoff;
> > -                     map->map_ip     = new_node->map->map_ip;
> > -                     map->unmap_ip   = new_node->map->unmap_ip;
> > +                     struct  map *updated;
> > +
> > +                     map->start = map__start(new_node->map);
> > +                     map->end   = map__end(new_node->map);
> > +                     map->pgoff = map__pgoff(new_node->map);
> > +                     map->map_ip = new_node->map->map_ip;
> > +                     map->unmap_ip = new_node->map->unmap_ip;
> >                       /* Ensure maps are correctly ordered */
> > -                     map__get(map);
> > +                     updated = map__get(map);
> >                       maps__remove(kmaps, map);
> > -                     err = maps__insert(kmaps, map);
> > -                     map__put(map);
> > +                     err = maps__insert(kmaps, updated);
> > +                     map__put(updated);
> >                       map__put(new_node->map);
> >                       if (err)
> >                               goto out_err;
> > @@ -1460,7 +1466,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >
> >       close(fd);
> >
> > -     if (map->prot & PROT_EXEC)
> > +     if (map__prot(map) & PROT_EXEC)
> >               pr_debug("Using %s for kernel object code\n", kcore_filename);
> >       else
> >               pr_debug("Using %s for kernel data\n", kcore_filename);
> > @@ -1995,13 +2001,13 @@ int dso__load(struct dso *dso, struct map *map)
> >  static int map__strcmp(const void *a, const void *b)
> >  {
> >       const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
> > -     return strcmp(ma->dso->short_name, mb->dso->short_name);
> > +     return strcmp(map__dso(ma)->short_name, map__dso(mb)->short_name);
> >  }
> >
> >  static int map__strcmp_name(const void *name, const void *b)
> >  {
> >       const struct map *map = *(const struct map **)b;
> > -     return strcmp(name, map->dso->short_name);
> > +     return strcmp(name, map__dso(map)->short_name);
> >  }
> >
> >  void __maps__sort_by_name(struct maps *maps)
> > @@ -2052,7 +2058,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
> >       down_read(maps__lock(maps));
> >
> >       if (maps->last_search_by_name &&
> > -         strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
> > +         strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
> >               map = maps->last_search_by_name;
> >               goto out_unlock;
> >       }
> > @@ -2068,7 +2074,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
> >       /* Fallback to traversing the rbtree... */
> >       maps__for_each_entry(maps, rb_node) {
> >               map = rb_node->map;
> > -             if (strcmp(map->dso->short_name, name) == 0) {
> > +             if (strcmp(map__dso(map)->short_name, name) == 0) {
> >                       maps->last_search_by_name = map;
> >                       goto out_unlock;
> >               }
> > diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
> > index 2664fb65e47a..d9e5ad040b6a 100644
> > --- a/tools/perf/util/symbol_fprintf.c
> > +++ b/tools/perf/util/symbol_fprintf.c
> > @@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
> >                       if (al->addr < sym->end)
> >                               offset = al->addr - sym->start;
> >                       else
> > -                             offset = al->addr - al->map->start - sym->start;
> > +                             offset = al->addr - map__start(al->map) - sym->start;
> >                       length += fprintf(fp, "+0x%lx", offset);
> >               }
> >               return length;
> > diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> > index ed2d55d224aa..437fd57c2084 100644
> > --- a/tools/perf/util/synthetic-events.c
> > +++ b/tools/perf/util/synthetic-events.c
> > @@ -668,33 +668,33 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
> >                       continue;
> >
> >               if (symbol_conf.buildid_mmap2) {
> > -                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> > +                     size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
> >                       event->mmap2.header.type = PERF_RECORD_MMAP2;
> >                       event->mmap2.header.size = (sizeof(event->mmap2) -
> >                                               (sizeof(event->mmap2.filename) - size));
> >                       memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
> >                       event->mmap2.header.size += machine->id_hdr_size;
> > -                     event->mmap2.start = map->start;
> > -                     event->mmap2.len   = map->end - map->start;
> > +                     event->mmap2.start = map__start(map);
> > +                     event->mmap2.len   = map__end(map) - map__start(map);
> >                       event->mmap2.pid   = machine->pid;
> >
> > -                     memcpy(event->mmap2.filename, map->dso->long_name,
> > -                            map->dso->long_name_len + 1);
> > +                     memcpy(event->mmap2.filename, map__dso(map)->long_name,
> > +                            map__dso(map)->long_name_len + 1);
> >
> >                       perf_record_mmap2__read_build_id(&event->mmap2, false);
> >               } else {
> > -                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> > +                     size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
> >                       event->mmap.header.type = PERF_RECORD_MMAP;
> >                       event->mmap.header.size = (sizeof(event->mmap) -
> >                                               (sizeof(event->mmap.filename) - size));
> >                       memset(event->mmap.filename + size, 0, machine->id_hdr_size);
> >                       event->mmap.header.size += machine->id_hdr_size;
> > -                     event->mmap.start = map->start;
> > -                     event->mmap.len   = map->end - map->start;
> > +                     event->mmap.start = map__start(map);
> > +                     event->mmap.len   = map__end(map) - map__start(map);
> >                       event->mmap.pid   = machine->pid;
> >
> > -                     memcpy(event->mmap.filename, map->dso->long_name,
> > -                            map->dso->long_name_len + 1);
> > +                     memcpy(event->mmap.filename, map__dso(map)->long_name,
> > +                            map__dso(map)->long_name_len + 1);
> >               }
> >
> >               if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> > @@ -1112,8 +1112,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
> >               event->mmap2.header.size = (sizeof(event->mmap2) -
> >                               (sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
> >               event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
> > -             event->mmap2.start = map->start;
> > -             event->mmap2.len   = map->end - event->mmap.start;
> > +             event->mmap2.start = map__start(map);
> > +             event->mmap2.len   = map__end(map) - event->mmap.start;
> >               event->mmap2.pid   = machine->pid;
> >
> >               perf_record_mmap2__read_build_id(&event->mmap2, true);
> > @@ -1125,8 +1125,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
> >               event->mmap.header.size = (sizeof(event->mmap) -
> >                               (sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
> >               event->mmap.pgoff = kmap->ref_reloc_sym->addr;
> > -             event->mmap.start = map->start;
> > -             event->mmap.len   = map->end - event->mmap.start;
> > +             event->mmap.start = map__start(map);
> > +             event->mmap.len   = map__end(map) - event->mmap.start;
> >               event->mmap.pid   = machine->pid;
> >       }
> >
> > diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> > index c2256777b813..6fbcc115cc6d 100644
> > --- a/tools/perf/util/thread.c
> > +++ b/tools/perf/util/thread.c
> > @@ -434,23 +434,23 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
> >  int thread__memcpy(struct thread *thread, struct machine *machine,
> >                  void *buf, u64 ip, int len, bool *is64bit)
> >  {
> > -       u8 cpumode = PERF_RECORD_MISC_USER;
> > -       struct addr_location al;
> > -       long offset;
> > +     u8 cpumode = PERF_RECORD_MISC_USER;
> > +     struct addr_location al;
> > +     long offset;
> >
> > -       if (machine__kernel_ip(machine, ip))
> > -               cpumode = PERF_RECORD_MISC_KERNEL;
> > +     if (machine__kernel_ip(machine, ip))
> > +             cpumode = PERF_RECORD_MISC_KERNEL;
> >
> > -       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
> > -        al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
> > -        map__load(al.map) < 0)
> > -               return -1;
> > +     if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
> > +             map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR ||
> > +             map__load(al.map) < 0)
> > +             return -1;
> >
> > -       offset = al.map->map_ip(al.map, ip);
> > -       if (is64bit)
> > -               *is64bit = al.map->dso->is_64_bit;
> > +     offset = map__map_ip(al.map, ip);
> > +     if (is64bit)
> > +             *is64bit = map__dso(al.map)->is_64_bit;
> >
> > -       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
> > +     return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
> >  }
> >
> >  void thread__free_stitch_list(struct thread *thread)
> > diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
> > index 7e6c59811292..841ac84a93ab 100644
> > --- a/tools/perf/util/unwind-libunwind-local.c
> > +++ b/tools/perf/util/unwind-libunwind-local.c
> > @@ -381,20 +381,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
> >       int ret = -EINVAL;
> >
> >       map = find_map(ip, ui);
> > -     if (!map || !map->dso)
> > +     if (!map || !map__dso(map))
> >               return -EINVAL;
> >
> > -     pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
> > +     pr_debug("unwind: %s dso %s\n", __func__, map__dso(map)->name);
> >
> >       /* Check the .eh_frame section for unwinding info */
> > -     if (!read_unwind_spec_eh_frame(map->dso, ui->machine,
> > +     if (!read_unwind_spec_eh_frame(map__dso(map), ui->machine,
> >                                      &table_data, &segbase, &fde_count)) {
> >               memset(&di, 0, sizeof(di));
> >               di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
> > -             di.start_ip = map->start;
> > -             di.end_ip   = map->end;
> > -             di.u.rti.segbase    = map->start + segbase - map->pgoff;
> > -             di.u.rti.table_data = map->start + table_data - map->pgoff;
> > +             di.start_ip = map__start(map);
> > +             di.end_ip   = map__end(map);
> > +             di.u.rti.segbase    = map__start(map) + segbase - map__pgoff(map);
> > +             di.u.rti.table_data = map__start(map) + table_data - map__pgoff(map);
> >               di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
> >                                     / sizeof(unw_word_t);
> >               ret = dwarf_search_unwind_table(as, ip, &di, pi,
> > @@ -404,20 +404,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
> >  #ifndef NO_LIBUNWIND_DEBUG_FRAME
> >       /* Check the .debug_frame section for unwinding info */
> >       if (ret < 0 &&
> > -         !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
> > -             int fd = dso__data_get_fd(map->dso, ui->machine);
> > -             int is_exec = elf_is_exec(fd, map->dso->name);
> > -             unw_word_t base = is_exec ? 0 : map->start;
> > +         !read_unwind_spec_debug_frame(map__dso(map), ui->machine, &segbase)) {
> > +             int fd = dso__data_get_fd(map__dso(map), ui->machine);
> > +             int is_exec = elf_is_exec(fd, map__dso(map)->name);
> > +             unw_word_t base = is_exec ? 0 : map__start(map);
> >               const char *symfile;
> >
> >               if (fd >= 0)
> > -                     dso__data_put_fd(map->dso);
> > +                     dso__data_put_fd(map__dso(map));
> >
> > -             symfile = map->dso->symsrc_filename ?: map->dso->name;
> > +             symfile = map__dso(map)->symsrc_filename ?: map__dso(map)->name;
> >
> >               memset(&di, 0, sizeof(di));
> >               if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
> > -                                        map->start, map->end))
> > +                                        map__start(map), map__end(map)))
> >                       return dwarf_search_unwind_table(as, ip, &di, pi,
> >                                                        need_unwind_info, arg);
> >       }
> > @@ -473,10 +473,10 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
> >               return -1;
> >       }
> >
> > -     if (!map->dso)
> > +     if (!map__dso(map))
> >               return -1;
> >
> > -     size = dso__data_read_addr(map->dso, map, ui->machine,
> > +     size = dso__data_read_addr(map__dso(map), map, ui->machine,
> >                                  addr, (u8 *) data, sizeof(*data));
> >
> >       return !(size == sizeof(*data));
> > @@ -583,7 +583,7 @@ static int entry(u64 ip, struct thread *thread,
> >       pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
> >                al.sym ? al.sym->name : "''",
> >                ip,
> > -              al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
> > +              al.map ? map__map_ip(al.map, ip) : (u64) 0);
> >
> >       return cb(&e, arg);
> >  }
> > diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
> > index 7b797ffadd19..cece1ee89031 100644
> > --- a/tools/perf/util/unwind-libunwind.c
> > +++ b/tools/perf/util/unwind-libunwind.c
> > @@ -30,7 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
> >
> >       if (maps__addr_space(maps)) {
> >               pr_debug("unwind: thread map already set, dso=%s\n",
> > -                      map->dso->name);
> > +                      map__dso(map)->name);
> >               if (initialized)
> >                       *initialized = true;
> >               return 0;
> > @@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
> >       if (!machine->env || !machine->env->arch)
> >               goto out_register;
> >
> > -     dso_type = dso__type(map->dso, machine);
> > +     dso_type = dso__type(map__dso(map), machine);
> >       if (dso_type == DSO__TYPE_UNKNOWN)
> >               return 0;
> >
> > diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> > index 835c39efb80d..ec777ee11493 100644
> > --- a/tools/perf/util/vdso.c
> > +++ b/tools/perf/util/vdso.c
> > @@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
> >       struct map_rb_node *rb_node;
> >
> >       maps__for_each_entry(thread->maps, rb_node) {
> > -             struct dso *dso = rb_node->map->dso;
> > +             struct dso *dso = map__dso(rb_node->map);
> >
> >               if (!dso || dso->long_name[0] != '/')
> >                       continue;
> > --
> > 2.35.1.265.g69c8d7142f-goog
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 17:43     ` Ian Rogers
@ 2022-02-11 19:21       ` Arnaldo Carvalho de Melo
  2022-02-11 19:35         ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 19:21 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 09:43:19AM -0800, Ian Rogers wrote:
> On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
> <acme@kernel.org> wrote:
> >
> > On Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers wrote:
> > > Make the pthread mutex on dso use the error-check type. This allows
> > > deadlock checking via the return value. Assert that the value returned
> > > from mutex lock is always 0.
> >
> > I think this is too blunt/pervasive source-code-wise. Perhaps we should
> > wrap this the way it's done with rwsem in tools/perf/util/rwsem.h, to get
> > away from pthreads primitives and make the source code look more like
> > kernel code, and then, taking advantage of that (so far needless) level
> > of indirection, add this BUG_ON only if we build with "DEBUG=1" or
> > something, wdyt?
> 

> My concern with semaphores is that they are a concurrency primitive

I'm not suggesting we switch over to semaphores, just to use the same
technique of wrapping pthread_mutex_t with some other API that then
allows us to add these BUG_ON() calls without polluting the source code
in many places.
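
Something along these lines is what I have in mind. This is just a
minimal sketch: the header and the struct/function names below are made
up for illustration, only the rwsem.h precedent is real. The point is
that the error-check attribute and the BUG_ON() live in one place and
the rest of the code never touches pthreads directly:

#include <pthread.h>
#include <stdlib.h>

#ifndef BUG_ON
#define BUG_ON(cond) do { if (cond) abort(); } while (0)
#endif

/* Hypothetical tools/perf/util/mutex.h, modelled on rwsem.h. */
struct mutex {
	pthread_mutex_t lock;
};

static inline void mutex_init(struct mutex *mtx)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
#ifdef DEBUG
	/* Error-check mutexes report relocking and unowned unlocks. */
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
#endif
	pthread_mutex_init(&mtx->lock, &attr);
	pthread_mutexattr_destroy(&attr);
}

static inline void mutex_lock(struct mutex *mtx)
{
	/*
	 * With ERRORCHECK a recursive lock fails with EDEADLK here, so
	 * we stop on the bug instead of silently deadlocking.
	 */
	BUG_ON(pthread_mutex_lock(&mtx->lock) != 0);
}

static inline void mutex_unlock(struct mutex *mtx)
{
	BUG_ON(pthread_mutex_unlock(&mtx->lock) != 0);
}

static inline void mutex_destroy(struct mutex *mtx)
{
	BUG_ON(pthread_mutex_destroy(&mtx->lock) != 0);
}

Then dso->lock becomes a 'struct mutex' and dso.c only ever calls
mutex_lock()/mutex_unlock().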

- Arnaldo

> that has more flexibility and power than a mutex. I like a mutex as it
> is quite obvious what is going on, and that is good from a tooling
> point of view. A deadlock with two mutexes is easy to understand. With
> a semaphore, were we using it like a condition variable? There's more
> to figure out. I also like the idea of compiling the perf command with
> emscripten; we could then generate, say, perf annotate output in your
> web browser. Emscripten has implementations of the standard POSIX
> libraries, including pthreads, but we may need two approaches in the
> perf code if we want to compile with emscripten and use semaphores when
> targeting Linux.
> 
> Where this change comes from is that I worried that extending the
> locked regions to cover the race that had been found would then expose
> the kind of recursive deadlock that pthread mutexes all too willingly
> allow. With this code we at least see the bug and don't just hang. I
> don't think the mutex change is strictly needed here, but we do need to
> extend the locked regions to fix the data race.
> 
> Let me know how you prefer it and I can roll it into a v4 version.
> 
> Thanks,
> Ian
> 
> > - Arnaldo
> >
> > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > ---
> > >  tools/perf/util/dso.c    | 12 +++++++++---
> > >  tools/perf/util/symbol.c |  2 +-
> > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > index 9cc8a1772b4b..6beccffeef7b 100644
> > > --- a/tools/perf/util/dso.c
> > > +++ b/tools/perf/util/dso.c
> > > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> > >       struct rb_root *root = &dso->data.cache;
> > >       struct rb_node *next = rb_first(root);
> > >
> > > -     pthread_mutex_lock(&dso->lock);
> > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > >       while (next) {
> > >               struct dso_cache *cache;
> > >
> > > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> > >       struct dso_cache *cache;
> > >       u64 offset = new->offset;
> > >
> > > -     pthread_mutex_lock(&dso->lock);
> > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > >       while (*p != NULL) {
> > >               u64 end;
> > >
> > > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> > >
> > >       if (dso != NULL) {
> > > +             pthread_mutexattr_t lock_attr;
> > > +
> > >               strcpy(dso->name, name);
> > >               if (id)
> > >                       dso->id = *id;
> > > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > >               dso->root = NULL;
> > >               INIT_LIST_HEAD(&dso->node);
> > >               INIT_LIST_HEAD(&dso->data.open_entry);
> > > -             pthread_mutex_init(&dso->lock, NULL);
> > > +             pthread_mutexattr_init(&lock_attr);
> > > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > > +             pthread_mutexattr_destroy(&lock_attr);
> > >               refcount_set(&dso->refcnt, 1);
> > > +
> > >       }
> > >
> > >       return dso;
> > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > index b2ed3140a1fa..43f47532696f 100644
> > > --- a/tools/perf/util/symbol.c
> > > +++ b/tools/perf/util/symbol.c
> > > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> > >       }
> > >
> > >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > > -     pthread_mutex_lock(&dso->lock);
> > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > >
> > >       /* check again under the dso->lock */
> > >       if (dso__loaded(dso)) {
> > > --
> > > 2.35.1.265.g69c8d7142f-goog
> >
> > --
> >
> > - Arnaldo

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 15/22] perf map: Use functions to access the variables in map
  2022-02-11 17:54     ` Ian Rogers
@ 2022-02-11 19:22       ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-11 19:22 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 09:54:40AM -0800, Ian Rogers wrote:
> On Fri, Feb 11, 2022 at 9:36 AM Arnaldo Carvalho de Melo <acme@kernel.org> wrote:
> > On Fri, Feb 11, 2022 at 02:34:08AM -0800, Ian Rogers wrote:
> > > +++ b/tools/perf/arch/s390/annotate/instructions.c
> > > @@ -39,7 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
> > >       target.addr = map__objdump_2mem(map, ops->target.addr);
> > >
> > >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > > +         map__rip_2objdump(target.ms.map,
> > > +                           map->map_ip(target.ms.map, target.addr)
> > > +                          ) == ops->target.addr)
> >
> >
> > This changes nothing, right? Please try not to do this in the v2 for
> > this patch.
> 
> Agreed. The original code here looks wrong to me. I would have

Then that merits a separate patch addressing that problem; my point here
was just that we shouldn't add mere reflowing to a patch series.

- Arnaldo

> translated this into map__map_ip, but that would have changed the
> function pointer from map->map_ip to target.ms.map->map_ip. The
> reformatting is so that when I add a reference count check here, the
> lines aren't reformatted again and that change stays minimal and
> obvious. I think the right thing is really to use map__map_ip, but that
> goes beyond what this change was trying to do, and I lack a means to
> test this code. Could you investigate? If I switch this to map__map_ip
> in v2 then it resolves this issue and is most likely the right thing;
> it's just that it's a behavioral change that I was trying to avoid in
> this change.
> 
> Thanks,
> Ian
> 
> > - Arnaldo
> >
> > >               ops->target.sym = target.ms.sym;
> > >
> > >       return 0;
> > > diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
> > > index a54dea7c112f..497593be80f2 100644
> > > --- a/tools/perf/arch/x86/tests/dwarf-unwind.c
> > > +++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
> > > @@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
> > >               return -1;
> > >       }
> > >
> > > -     stack_size = map->end - sp;
> > > +     stack_size = map__end(map) - sp;
> > >       stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
> > >
> > >       memcpy(buf, (void *) sp, stack_size);
> > > diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> > > index 7b6b0c98fb36..c790c682b76e 100644
> > > --- a/tools/perf/arch/x86/util/event.c
> > > +++ b/tools/perf/arch/x86/util/event.c
> > > @@ -57,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
> > >
> > >               event->mmap.header.size = size;
> > >
> > > -             event->mmap.start = map->start;
> > > -             event->mmap.len   = map->end - map->start;
> > > -             event->mmap.pgoff = map->pgoff;
> > > +             event->mmap.start = map__start(map);
> > > +             event->mmap.len   = map__size(map);
> > > +             event->mmap.pgoff = map__pgoff(map);
> > >               event->mmap.pid   = machine->pid;
> > >
> > >               strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> > > diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
> > > index 490bb9b8cf17..49d3ae36fd89 100644
> > > --- a/tools/perf/builtin-annotate.c
> > > +++ b/tools/perf/builtin-annotate.c
> > > @@ -199,7 +199,7 @@ static int process_branch_callback(struct evsel *evsel,
> > >               return 0;
> > >
> > >       if (a.map != NULL)
> > > -             a.map->dso->hit = 1;
> > > +             map__dso(a.map)->hit = 1;
> > >
> > >       hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
> > >
> > > @@ -231,9 +231,9 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
> > >                */
> > >               if (al->sym != NULL) {
> > >                       rb_erase_cached(&al->sym->rb_node,
> > > -                              &al->map->dso->symbols);
> > > +                                     &map__dso(al->map)->symbols);
> > >                       symbol__delete(al->sym);
> > > -                     dso__reset_find_symbol_cache(al->map->dso);
> > > +                     dso__reset_find_symbol_cache(map__dso(al->map));
> > >               }
> > >               return 0;
> > >       }
> > > @@ -315,7 +315,7 @@ static void hists__find_annotations(struct hists *hists,
> > >               struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
> > >               struct annotation *notes;
> > >
> > > -             if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
> > > +             if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
> > >                       goto find_next;
> > >
> > >               if (ann->sym_hist_filter &&
> > > diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> > > index f7917c390e96..92a9dbc3d4cd 100644
> > > --- a/tools/perf/builtin-inject.c
> > > +++ b/tools/perf/builtin-inject.c
> > > @@ -600,10 +600,10 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
> > >       }
> > >
> > >       if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
> > > -             if (!al.map->dso->hit) {
> > > -                     al.map->dso->hit = 1;
> > > -                     dso__inject_build_id(al.map->dso, tool, machine,
> > > -                                          sample->cpumode, al.map->flags);
> > > +             if (!map__dso(al.map)->hit) {
> > > +                     map__dso(al.map)->hit = 1;
> > > +                     dso__inject_build_id(map__dso(al.map), tool, machine,
> > > +                                          sample->cpumode, map__flags(al.map));
> > >               }
> > >       }
> > >
> > > diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
> > > index c08ee81529e8..d940b60ce812 100644
> > > --- a/tools/perf/builtin-kallsyms.c
> > > +++ b/tools/perf/builtin-kallsyms.c
> > > @@ -36,8 +36,10 @@ static int __cmd_kallsyms(int argc, const char **argv)
> > >               }
> > >
> > >               printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
> > > -                     symbol->name, map->dso->short_name, map->dso->long_name,
> > > -                     map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
> > > +                     symbol->name, map__dso(map)->short_name,
> > > +                     map__dso(map)->long_name,
> > > +                     map__unmap_ip(map, symbol->start),
> > > +                     map__unmap_ip(map, symbol->end),
> > >                       symbol->start, symbol->end);
> > >       }
> > >
> > > diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
> > > index 99d7ff9a8eff..d87d9c341a20 100644
> > > --- a/tools/perf/builtin-kmem.c
> > > +++ b/tools/perf/builtin-kmem.c
> > > @@ -410,7 +410,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
> > >               if (!caller) {
> > >                       /* found */
> > >                       if (node->ms.map)
> > > -                             addr = map__unmap_ip(node->ms.map, node->ip);
> > > +                             addr = map__dso_unmap_ip(node->ms.map, node->ip);
> > >                       else
> > >                               addr = node->ip;
> > >
> > > @@ -1012,7 +1012,7 @@ static void __print_slab_result(struct rb_root *root,
> > >
> > >               if (sym != NULL)
> > >                       snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
> > > -                              addr - map->unmap_ip(map, sym->start));
> > > +                              addr - map__unmap_ip(map, sym->start));
> > >               else
> > >                       snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
> > >               printf(" %-34s |", buf);
> > > diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
> > > index fcf65a59bea2..d18083f57303 100644
> > > --- a/tools/perf/builtin-mem.c
> > > +++ b/tools/perf/builtin-mem.c
> > > @@ -200,7 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
> > >               goto out_put;
> > >
> > >       if (al.map != NULL)
> > > -             al.map->dso->hit = 1;
> > > +             map__dso(al.map)->hit = 1;
> > >
> > >       field_sep = symbol_conf.field_sep;
> > >       if (field_sep) {
> > > @@ -241,7 +241,7 @@ dump_raw_samples(struct perf_tool *tool,
> > >               symbol_conf.field_sep,
> > >               sample->data_src,
> > >               symbol_conf.field_sep,
> > > -             al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
> > > +             al.map && map__dso(al.map) ? map__dso(al.map)->long_name : "???",
> > >               al.sym ? al.sym->name : "???");
> > >  out_put:
> > >       addr_location__put(&al);
> > > diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> > > index 57611ef725c3..9b92b2bbd7de 100644
> > > --- a/tools/perf/builtin-report.c
> > > +++ b/tools/perf/builtin-report.c
> > > @@ -304,7 +304,7 @@ static int process_sample_event(struct perf_tool *tool,
> > >       }
> > >
> > >       if (al.map != NULL)
> > > -             al.map->dso->hit = 1;
> > > +             map__dso(al.map)->hit = 1;
> > >
> > >       if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
> > >               hist__account_cycles(sample->branch_stack, &al, sample,
> > > @@ -579,7 +579,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
> > >               return;
> > >
> > >       if (kernel_map == NULL ||
> > > -         (kernel_map->dso->hit &&
> > > +         (map__dso(kernel_map)->hit &&
> > >            (kernel_kmap->ref_reloc_sym == NULL ||
> > >             kernel_kmap->ref_reloc_sym->addr == 0))) {
> > >               const char *desc =
> > > @@ -805,13 +805,15 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
> > >               struct map *map = rb_node->map;
> > >
> > >               printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
> > > -                                indent, "", map->start, map->end,
> > > -                                map->prot & PROT_READ ? 'r' : '-',
> > > -                                map->prot & PROT_WRITE ? 'w' : '-',
> > > -                                map->prot & PROT_EXEC ? 'x' : '-',
> > > -                                map->flags & MAP_SHARED ? 's' : 'p',
> > > -                                map->pgoff,
> > > -                                map->dso->id.ino, map->dso->name);
> > > +                                indent, "",
> > > +                                map__start(map), map__end(map),
> > > +                                map__prot(map) & PROT_READ ? 'r' : '-',
> > > +                                map__prot(map) & PROT_WRITE ? 'w' : '-',
> > > +                                map__prot(map) & PROT_EXEC ? 'x' : '-',
> > > +                                map__flags(map) & MAP_SHARED ? 's' : 'p',
> > > +                                map__pgoff(map),
> > > +                                map__dso(map)->id.ino,
> > > +                                map__dso(map)->name);
> > >       }
> > >
> > >       return printed;
> > > diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
> > > index abae8184e171..4edfce95e137 100644
> > > --- a/tools/perf/builtin-script.c
> > > +++ b/tools/perf/builtin-script.c
> > > @@ -972,12 +972,12 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
> > >               to   = entries[i].to;
> > >
> > >               if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
> > > -                 !alf.map->dso->adjust_symbols)
> > > -                     from = map__map_ip(alf.map, from);
> > > +                 !map__dso(alf.map)->adjust_symbols)
> > > +                     from = map__dso_map_ip(alf.map, from);
> > >
> > >               if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
> > > -                 !alt.map->dso->adjust_symbols)
> > > -                     to = map__map_ip(alt.map, to);
> > > +                 !map__dso(alt.map)->adjust_symbols)
> > > +                     to = map__dso_map_ip(alt.map, to);
> > >
> > >               printed += fprintf(fp, " 0x%"PRIx64, from);
> > >               if (PRINT_FIELD(DSO)) {
> > > @@ -1039,11 +1039,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
> > >               return 0;
> > >       }
> > >
> > > -     if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
> > > +     if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
> > >               pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
> > >               return 0;
> > >       }
> > > -     if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
> > > +     if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR) {
> > >               pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
> > >               return 0;
> > >       }
> > > @@ -1051,11 +1051,11 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
> > >       /* Load maps to ensure dso->is_64_bit has been updated */
> > >       map__load(al.map);
> > >
> > > -     offset = al.map->map_ip(al.map, start);
> > > -     len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
> > > -                                 end - start + MAXINSN);
> > > +     offset = map__map_ip(al.map, start);
> > > +     len = dso__data_read_offset(map__dso(al.map), machine, offset,
> > > +                                 (u8 *)buffer, end - start + MAXINSN);
> > >
> > > -     *is64bit = al.map->dso->is_64_bit;
> > > +     *is64bit = map__dso(al.map)->is_64_bit;
> > >       if (len <= 0)
> > >               pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
> > >                       start, end);
> > > @@ -1070,9 +1070,9 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
> > >       int len;
> > >       char *srccode;
> > >
> > > -     if (!map || !map->dso)
> > > +     if (!map || !map__dso(map))
> > >               return 0;
> > > -     srcfile = get_srcline_split(map->dso,
> > > +     srcfile = get_srcline_split(map__dso(map),
> > >                                   map__rip_2objdump(map, addr),
> > >                                   &line);
> > >       if (!srcfile)
> > > @@ -1164,7 +1164,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
> > >       if (al.addr < al.sym->end)
> > >               off = al.addr - al.sym->start;
> > >       else
> > > -             off = al.addr - al.map->start - al.sym->start;
> > > +             off = al.addr - map__start(al.map) - al.sym->start;
> > >       printed += fprintf(fp, "\t%s", al.sym->name);
> > >       if (off)
> > >               printed += fprintf(fp, "%+d", off);
> > > diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
> > > index 1fc390f136dd..8db1df7bdabe 100644
> > > --- a/tools/perf/builtin-top.c
> > > +++ b/tools/perf/builtin-top.c
> > > @@ -127,8 +127,8 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
> > >       /*
> > >        * We can't annotate with just /proc/kallsyms
> > >        */
> > > -     if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > > -         !dso__is_kcore(map->dso)) {
> > > +     if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > > +         !dso__is_kcore(map__dso(map))) {
> > >               pr_err("Can't annotate %s: No vmlinux file was found in the "
> > >                      "path\n", sym->name);
> > >               sleep(1);
> > > @@ -180,8 +180,9 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
> > >                   "Tools:  %s\n\n"
> > >                   "Not all samples will be on the annotation output.\n\n"
> > >                   "Please report to linux-kernel@vger.kernel.org\n",
> > > -                 ip, map->dso->long_name, dso__symtab_origin(map->dso),
> > > -                 map->start, map->end, sym->start, sym->end,
> > > +                 ip, map__dso(map)->long_name,
> > > +                 dso__symtab_origin(map__dso(map)),
> > > +                 map__start(map), map__end(map), sym->start, sym->end,
> > >                   sym->binding == STB_GLOBAL ? 'g' :
> > >                   sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
> > >                   err ? "[unknown]" : uts.machine,
> > > @@ -810,7 +811,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
> > >                   __map__is_kernel(al.map) && map__has_symbols(al.map)) {
> > >                       if (symbol_conf.vmlinux_name) {
> > >                               char serr[256];
> > > -                             dso__strerror_load(al.map->dso, serr, sizeof(serr));
> > > +                             dso__strerror_load(map__dso(al.map),
> > > +                                                serr, sizeof(serr));
> > >                               ui__warning("The %s file can't be used: %s\n%s",
> > >                                           symbol_conf.vmlinux_name, serr, msg);
> > >                       } else {
> > > diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
> > > index 32844d8a0ea5..0134f24da3e3 100644
> > > --- a/tools/perf/builtin-trace.c
> > > +++ b/tools/perf/builtin-trace.c
> > > @@ -2862,7 +2862,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
> > >  {
> > >
> > >       if ((verbose > 0 || print_dso) && al->map)
> > > -             fprintf(f, "%s@", al->map->dso->long_name);
> > > +             fprintf(f, "%s@", map__dso(al->map)->long_name);
> > >
> > >       if ((verbose > 0 || print_sym) && al->sym)
> > >               fprintf(f, "%s+0x%" PRIx64, al->sym->name,
> > > diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > > index b64013a87c54..b83b62d33945 100644
> > > --- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > > +++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
> > > @@ -152,9 +152,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
> > >       map = c->al->map;
> > >       addr = c->al->addr;
> > >
> > > -     if (map && map->dso)
> > > -             srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> > > -
> > > +     if (map && map__dso(map)) {
> > > +             srcfile = get_srcline_split(map__dso(map),
> > > +                                         map__rip_2objdump(map, addr), &line);
> > > +     }
> > >       if (get_srccode) {
> > >               if (srcfile)
> > >                       srccode = find_sourceline(srcfile, line, &len);
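
Note that the NULL test above ends up calling map__dso() twice, once in
the condition and once in the call, which is fine if the accessor is a
cheap getter. Where the surrounding code already keeps the dso in a
local, the local is simply initialised from the accessor instead (as in
the util/annotate.c hunks further down). Roughly, using the same
variables as the quoted code:

  struct dso *dso = map ? map__dso(map) : NULL;

  if (dso)
          srcfile = get_srcline_split(dso, map__rip_2objdump(map, addr), &line);
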
> > > diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
> > > index 6eafe36a8704..9cb7d3f577d7 100644
> > > --- a/tools/perf/tests/code-reading.c
> > > +++ b/tools/perf/tests/code-reading.c
> > > @@ -240,7 +240,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >
> > >       pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
> > >
> > > -     if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
> > > +     if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
> > >               if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
> > >                       pr_debug("Hypervisor address can not be resolved - skipping\n");
> > >                       return 0;
> > > @@ -250,10 +250,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >               return -1;
> > >       }
> > >
> > > -     pr_debug("File is: %s\n", al.map->dso->long_name);
> > > +     pr_debug("File is: %s\n", map__dso(al.map)->long_name);
> > >
> > > -     if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > > -         !dso__is_kcore(al.map->dso)) {
> > > +     if (map__dso(al.map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
> > > +         !dso__is_kcore(map__dso(al.map))) {
> > >               pr_debug("Unexpected kernel address - skipping\n");
> > >               return 0;
> > >       }
> > > @@ -264,11 +264,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >               len = BUFSZ;
> > >
> > >       /* Do not go off the map */
> > > -     if (addr + len > al.map->end)
> > > -             len = al.map->end - addr;
> > > +     if (addr + len > map__end(al.map))
> > > +             len = map__end(al.map) - addr;
> > >
> > >       /* Read the object code using perf */
> > > -     ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
> > > +     ret_len = dso__data_read_offset(map__dso(al.map), maps__machine(thread->maps),
> > >                                       al.addr, buf1, len);
> > >       if (ret_len != len) {
> > >               pr_debug("dso__data_read_offset failed\n");
> > > @@ -283,11 +283,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >               return -1;
> > >
> > >       /* objdump struggles with kcore - try each map only once */
> > > -     if (dso__is_kcore(al.map->dso)) {
> > > +     if (dso__is_kcore(map__dso(al.map))) {
> > >               size_t d;
> > >
> > >               for (d = 0; d < state->done_cnt; d++) {
> > > -                     if (state->done[d] == al.map->start) {
> > > +                     if (state->done[d] == map__start(al.map)) {
> > >                               pr_debug("kcore map tested already");
> > >                               pr_debug(" - skipping\n");
> > >                               return 0;
> > > @@ -297,12 +297,12 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >                       pr_debug("Too many kcore maps - skipping\n");
> > >                       return 0;
> > >               }
> > > -             state->done[state->done_cnt++] = al.map->start;
> > > +             state->done[state->done_cnt++] = map__start(al.map);
> > >       }
> > >
> > > -     objdump_name = al.map->dso->long_name;
> > > -     if (dso__needs_decompress(al.map->dso)) {
> > > -             if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
> > > +     objdump_name = map__dso(al.map)->long_name;
> > > +     if (dso__needs_decompress(map__dso(al.map))) {
> > > +             if (dso__decompress_kmodule_path(map__dso(al.map), objdump_name,
> > >                                                decomp_name,
> > >                                                sizeof(decomp_name)) < 0) {
> > >                       pr_debug("decompression failed\n");
> > > @@ -330,7 +330,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
> > >                       len -= ret;
> > >                       if (len) {
> > >                               pr_debug("Reducing len to %zu\n", len);
> > > -                     } else if (dso__is_kcore(al.map->dso)) {
> > > +                     } else if (dso__is_kcore(map__dso(al.map))) {
> > >                               /*
> > >                                * objdump cannot handle very large segments
> > >                                * that may be found in kcore.
> > > @@ -588,8 +588,8 @@ static int do_test_code_reading(bool try_kcore)
> > >               pr_debug("map__load failed\n");
> > >               goto out_err;
> > >       }
> > > -     have_vmlinux = dso__is_vmlinux(map->dso);
> > > -     have_kcore = dso__is_kcore(map->dso);
> > > +     have_vmlinux = dso__is_vmlinux(map__dso(map));
> > > +     have_kcore = dso__is_kcore(map__dso(map));
> > >
> > >       /* 2nd time through we just try kcore */
> > >       if (try_kcore && !have_kcore)
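
The object-code readers converted above (grab_bb, read_object_code, and
later dlfilter__object_code) all follow the same shape once the
accessors are in place: resolve the map, clamp the length to the map's
end, translate the address, then read from the dso. Schematically, as a
condensed sketch of the pattern rather than a verbatim copy of any one
call site:

  struct dso *dso = map__dso(al.map);
  u64 offset = map__map_ip(al.map, addr);        /* ip -> dso-relative */

  if (addr + len > map__end(al.map))
          len = map__end(al.map) - addr;          /* do not run off the map */

  ret_len = dso__data_read_offset(dso, machine, offset, buf, len);
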
> > > diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
> > > index 6f34d08b84e5..40eccc659767 100644
> > > --- a/tools/perf/tests/hists_common.c
> > > +++ b/tools/perf/tests/hists_common.c
> > > @@ -181,7 +181,7 @@ void print_hists_in(struct hists *hists)
> > >               if (!he->filtered) {
> > >                       pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
> > >                               i, thread__comm_str(he->thread),
> > > -                             he->ms.map->dso->short_name,
> > > +                             map__dso(he->ms.map)->short_name,
> > >                               he->ms.sym->name, he->stat.period);
> > >               }
> > >
> > > @@ -208,7 +208,7 @@ void print_hists_out(struct hists *hists)
> > >               if (!he->filtered) {
> > >                       pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
> > >                               i, thread__comm_str(he->thread), he->thread->tid,
> > > -                             he->ms.map->dso->short_name,
> > > +                             map__dso(he->ms.map)->short_name,
> > >                               he->ms.sym->name, he->stat.period,
> > >                               he->stat_acc ? he->stat_acc->period : 0);
> > >               }
> > > diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> > > index 11a230ee5894..5afab21455f1 100644
> > > --- a/tools/perf/tests/vmlinux-kallsyms.c
> > > +++ b/tools/perf/tests/vmlinux-kallsyms.c
> > > @@ -13,7 +13,7 @@
> > >  #include "debug.h"
> > >  #include "machine.h"
> > >
> > > -#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
> > > +#define UM(x) map__unmap_ip(kallsyms_map, (x))
> > >
> > >  static bool is_ignored_symbol(const char *name, char type)
> > >  {
> > > @@ -216,8 +216,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> > >               if (sym->start == sym->end)
> > >                       continue;
> > >
> > > -             mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
> > > -             mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
> > > +             mem_start = map__unmap_ip(vmlinux_map, sym->start);
> > > +             mem_end = map__unmap_ip(vmlinux_map, sym->end);
> > >
> > >               first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
> > >               pair = first_pair;
> > > @@ -262,7 +262,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> > >
> > >                               continue;
> > >                       }
> > > -             } else if (mem_start == kallsyms.vmlinux_map->end) {
> > > +             } else if (mem_start == map__end(kallsyms.vmlinux_map)) {
> > >                       /*
> > >                        * Ignore aliases to _etext, i.e. to the end of the kernel text area,
> > >                        * such as __indirect_thunk_end.
> > > @@ -294,9 +294,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> > >                * so use the short name, less descriptive but the same ("[kernel]" in
> > >                * both cases.
> > >                */
> > > -             struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> > > -                                                             map->dso->short_name :
> > > -                                                             map->dso->name));
> > > +             struct map *pair = maps__find_by_name(kallsyms.kmaps,
> > > +                                             map__dso(map)->kernel
> > > +                                             ? map__dso(map)->short_name
> > > +                                             : map__dso(map)->name);
> > >               if (pair) {
> > >                       pair->priv = 1;
> > >               } else {
> > > @@ -313,25 +314,27 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> > >       maps__for_each_entry(maps, rb_node) {
> > >               struct map *pair, *map = rb_node->map;
> > >
> > > -             mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
> > > -             mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> > > +             mem_start = map__unmap_ip(vmlinux_map, map__start(map));
> > > +             mem_end = map__unmap_ip(vmlinux_map, map__end(map));
> > >
> > >               pair = maps__find(kallsyms.kmaps, mem_start);
> > > -             if (pair == NULL || pair->priv)
> > > +             if (pair == NULL || map__priv(pair))
> > >                       continue;
> > >
> > > -             if (pair->start == mem_start) {
> > > +             if (map__start(pair) == mem_start) {
> > >                       if (!header_printed) {
> > >                               pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
> > >                               header_printed = true;
> > >                       }
> > >
> > >                       pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
> > > -                             map->start, map->end, map->pgoff, map->dso->name);
> > > -                     if (mem_end != pair->end)
> > > +                             map__start(map), map__end(map),
> > > +                             map__pgoff(map), map__dso(map)->name);
> > > +                     if (mem_end != map__end(pair))
> > >                               pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
> > > -                                     pair->start, pair->end, pair->pgoff);
> > > -                     pr_info(" %s\n", pair->dso->name);
> > > +                                     map__start(pair), map__end(pair),
> > > +                                     map__pgoff(pair));
> > > +                     pr_info(" %s\n", map__dso(pair)->name);
> > >                       pair->priv = 1;
> > >               }
> > >       }
> > > @@ -343,7 +346,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> > >       maps__for_each_entry(maps, rb_node) {
> > >               struct map *map = rb_node->map;
> > >
> > > -             if (!map->priv) {
> > > +             if (!map__priv(map)) {
> > >                       if (!header_printed) {
> > >                               pr_info("WARN: Maps only in kallsyms:\n");
> > >                               header_printed = true;
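
The kallsyms test also exercises the second family of accessors, the
plain scalar getters map__start(), map__end(), map__pgoff() and
map__priv(), plus map__unmap_ip(), the inverse of map__map_ip() that
turns a dso-relative address back into a memory address (hence the UM()
macro rewrite above). A sketch of what these presumably reduce to,
assuming straight getters and a behaviour-preserving wrapper around the
existing callback (for ordinary dso maps that callback does the usual
start/pgoff arithmetic; kernel identity maps return the address
unchanged):

  static inline u64 map__start(const struct map *map) { return map->start; }
  static inline u64 map__end(const struct map *map)   { return map->end; }
  static inline u64 map__pgoff(const struct map *map) { return map->pgoff; }

  static inline u64 map__unmap_ip(const struct map *map, u64 rip)
  {
          /* dso-relative address -> memory address */
          return map->unmap_ip(map, rip);
  }
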
> > > diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
> > > index 44ba900828f6..7d51d92302dc 100644
> > > --- a/tools/perf/ui/browsers/annotate.c
> > > +++ b/tools/perf/ui/browsers/annotate.c
> > > @@ -446,7 +446,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
> > >  static int sym_title(struct symbol *sym, struct map *map, char *title,
> > >                    size_t sz, int percent_type)
> > >  {
> > > -     return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
> > > +     return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
> > > +                     map__dso(map)->long_name,
> > >                       percent_type_str(percent_type));
> > >  }
> > >
> > > @@ -971,14 +972,14 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
> > >       if (sym == NULL)
> > >               return -1;
> > >
> > > -     if (ms->map->dso->annotate_warned)
> > > +     if (map__dso(ms->map)->annotate_warned)
> > >               return -1;
> > >
> > >       if (not_annotated) {
> > >               err = symbol__annotate2(ms, evsel, opts, &browser.arch);
> > >               if (err) {
> > >                       char msg[BUFSIZ];
> > > -                     ms->map->dso->annotate_warned = true;
> > > +                     map__dso(ms->map)->annotate_warned = true;
> > >                       symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
> > >                       ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
> > >                       goto out_free_offsets;
> > > diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
> > > index 572ff38ceb0f..2241447e9bfb 100644
> > > --- a/tools/perf/ui/browsers/hists.c
> > > +++ b/tools/perf/ui/browsers/hists.c
> > > @@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
> > >                       return NULL;
> > >               }
> > >
> > > -             dso__insert_symbol(map->dso, sym);
> > > +             dso__insert_symbol(map__dso(map), sym);
> > >       }
> > >
> > >       return sym;
> > > @@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
> > >                struct map_symbol *ms,
> > >                u64 addr)
> > >  {
> > > -     if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
> > > +     if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
> > >               return 0;
> > >
> > >       if (!ms->sym)
> > > @@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
> > >               ui_helpline__pop();
> > >       } else {
> > >               ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
> > > -                                __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
> > > -             browser->hists->dso_filter = map->dso;
> > > +                                __map__is_kernel(map)
> > > +                                ? "the Kernel"
> > > +                                : map__dso(map)->short_name);
> > > +             browser->hists->dso_filter = map__dso(map);
> > >               perf_hpp__set_elide(HISTC_DSO, true);
> > >               pstack__push(browser->pstack, &browser->hists->dso_filter);
> > >       }
> > > @@ -2616,7 +2618,9 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
> > >
> > >       if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
> > >                    browser->hists->dso_filter ? "out of" : "into",
> > > -                  __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
> > > +                  __map__is_kernel(map)
> > > +                  ? "the Kernel"
> > > +                  : map__dso(map)->short_name) < 0)
> > >               return 0;
> > >
> > >       act->ms.map = map;
> > > @@ -3091,8 +3095,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
> > >
> > >                       if (!browser->selection ||
> > >                           !browser->selection->map ||
> > > -                         !browser->selection->map->dso ||
> > > -                         browser->selection->map->dso->annotate_warned) {
> > > +                         !map__dso(browser->selection->map) ||
> > > +                         map__dso(browser->selection->map)->annotate_warned) {
> > >                               continue;
> > >                       }
> > >
> > > diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
> > > index 3d49b916c9e4..3d1b958d8832 100644
> > > --- a/tools/perf/ui/browsers/map.c
> > > +++ b/tools/perf/ui/browsers/map.c
> > > @@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
> > >  {
> > >       int key;
> > >
> > > -     if (ui_browser__show(&browser->b, browser->map->dso->long_name,
> > > +     if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
> > >                            "Press ESC to exit, %s / to search",
> > >                            verbose > 0 ? "" : "restart with -v to use") < 0)
> > >               return -1;
> > > @@ -106,7 +106,7 @@ int map__browse(struct map *map)
> > >  {
> > >       struct map_browser mb = {
> > >               .b = {
> > > -                     .entries = &map->dso->symbols,
> > > +                     .entries = &map__dso(map)->symbols,
> > >                       .refresh = ui_browser__rb_tree_refresh,
> > >                       .seek    = ui_browser__rb_tree_seek,
> > >                       .write   = map_browser__write,
> > > diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
> > > index 01900689dc00..3a7433d3e48a 100644
> > > --- a/tools/perf/util/annotate.c
> > > +++ b/tools/perf/util/annotate.c
> > > @@ -280,7 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> > >       target.addr = map__objdump_2mem(map, ops->target.addr);
> > >
> > >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > > +         map__rip_2objdump(target.ms.map,
> > > +                           map->map_ip(target.ms.map, target.addr)
> > > +                           ) == ops->target.addr)
> > >               ops->target.sym = target.ms.sym;
> > >
> > >       return 0;
> > > @@ -384,8 +386,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> > >       }
> > >
> > >       target.addr = map__objdump_2mem(map, ops->target.addr);
> > > -     start = map->unmap_ip(map, sym->start),
> > > -     end = map->unmap_ip(map, sym->end);
> > > +     start = map__unmap_ip(map, sym->start),
> > > +     end = map__unmap_ip(map, sym->end);
> > >
> > >       ops->target.outside = target.addr < start || target.addr > end;
> > >
> > > @@ -408,7 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
> > >        * the symbol searching and disassembly should be done.
> > >        */
> > >       if (maps__find_ams(ms->maps, &target) == 0 &&
> > > -         map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
> > > +         map__rip_2objdump(target.ms.map,
> > > +                           map->map_ip(target.ms.map, target.addr)
> > > +                           ) == ops->target.addr)
> > >               ops->target.sym = target.ms.sym;
> > >
> > >       if (!ops->target.outside) {
> > > @@ -889,7 +893,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
> > >       unsigned offset;
> > >       struct sym_hist *h;
> > >
> > > -     pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
> > > +     pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
> > >
> > >       if ((addr < sym->start || addr >= sym->end) &&
> > >           (addr != sym->end || sym->start != sym->end)) {
> > > @@ -1016,13 +1020,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
> > >       if (start &&
> > >               (start->ms.sym == ams->ms.sym ||
> > >                (ams->ms.sym &&
> > > -                start->addr == ams->ms.sym->start + ams->ms.map->start)))
> > > +               start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
> > >               saddr = start->al_addr;
> > >       if (saddr == 0)
> > >               pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
> > >                       ams->addr,
> > >                       start ? start->addr : 0,
> > > -                     ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
> > > +                     ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
> > >                       saddr);
> > >       err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
> > >       if (err)
> > > @@ -1593,7 +1597,7 @@ static void delete_last_nop(struct symbol *sym)
> > >
> > >  int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
> > >  {
> > > -     struct dso *dso = ms->map->dso;
> > > +     struct dso *dso = map__dso(ms->map);
> > >
> > >       BUG_ON(buflen == 0);
> > >
> > > @@ -1723,7 +1727,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
> > >       struct map *map = args->ms.map;
> > >       struct perf_bpil *info_linear;
> > >       struct disassemble_info info;
> > > -     struct dso *dso = map->dso;
> > > +     struct dso *dso = map__dso(map);
> > >       int pc = 0, count, sub_id;
> > >       struct btf *btf = NULL;
> > >       char tpath[PATH_MAX];
> > > @@ -1946,7 +1950,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
> > >  {
> > >       struct annotation_options *opts = args->options;
> > >       struct map *map = args->ms.map;
> > > -     struct dso *dso = map->dso;
> > > +     struct dso *dso = map__dso(map);
> > >       char *command;
> > >       FILE *file;
> > >       char symfs_filename[PATH_MAX];
> > > @@ -1973,8 +1977,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
> > >               return err;
> > >
> > >       pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
> > > -              symfs_filename, sym->name, map->unmap_ip(map, sym->start),
> > > -              map->unmap_ip(map, sym->end));
> > > +              symfs_filename, sym->name, map__unmap_ip(map, sym->start),
> > > +              map__unmap_ip(map, sym->end));
> > >
> > >       pr_debug("annotating [%p] %30s : [%p] %30s\n",
> > >                dso, dso->long_name, sym, sym->name);
> > > @@ -2386,7 +2390,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
> > >  {
> > >       struct map *map = ms->map;
> > >       struct symbol *sym = ms->sym;
> > > -     struct dso *dso = map->dso;
> > > +     struct dso *dso = map__dso(map);
> > >       char *filename;
> > >       const char *d_filename;
> > >       const char *evsel_name = evsel__name(evsel);
> > > @@ -2569,7 +2573,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
> > >       }
> > >
> > >       fprintf(fp, "%s() %s\nEvent: %s\n\n",
> > > -             ms->sym->name, ms->map->dso->long_name, ev_name);
> > > +             ms->sym->name, map__dso(ms->map)->long_name, ev_name);
> > >       symbol__annotate_fprintf2(ms->sym, fp, opts);
> > >
> > >       fclose(fp);
> > > @@ -2781,7 +2785,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
> > >               if (percent_max <= 0.5)
> > >                       continue;
> > >
> > > -             al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
> > > +             al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
> > >                                      false, true, notes->start + al->offset);
> > >               insert_source_line(&tmp_root, al, opts);
> > >       }
> > > @@ -2800,7 +2804,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
> > >  int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
> > >                         struct annotation_options *opts)
> > >  {
> > > -     struct dso *dso = ms->map->dso;
> > > +     struct dso *dso = map__dso(ms->map);
> > >       struct symbol *sym = ms->sym;
> > >       struct rb_root source_line = RB_ROOT;
> > >       struct hists *hists = evsel__hists(evsel);
> > > @@ -2836,7 +2840,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
> > >  int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
> > >                        struct annotation_options *opts)
> > >  {
> > > -     struct dso *dso = ms->map->dso;
> > > +     struct dso *dso = map__dso(ms->map);
> > >       struct symbol *sym = ms->sym;
> > >       struct rb_root source_line = RB_ROOT;
> > >       int err;
> > > diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
> > > index 825336304a37..2e864c9bdef3 100644
> > > --- a/tools/perf/util/auxtrace.c
> > > +++ b/tools/perf/util/auxtrace.c
> > > @@ -2478,7 +2478,7 @@ static struct dso *load_dso(const char *name)
> > >       if (map__load(map) < 0)
> > >               pr_err("File '%s' not found or has no symbols.\n", name);
> > >
> > > -     dso = dso__get(map->dso);
> > > +     dso = dso__get(map__dso(map));
> > >
> > >       map__put(map);
> > >
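
One detail worth calling out in load_dso() above: the dso reference is
taken with dso__get() before the map is dropped with map__put(), so the
returned dso stays valid even though it was reached through a map the
caller no longer holds. The same idiom, roughly:

  struct dso *dso = dso__get(map__dso(map));      /* take our own dso reference */

  map__put(map);          /* drop the map; dso remains valid */
  /* ... use dso ... */
  dso__put(dso);          /* caller releases it when done */
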
> > > diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
> > > index 5ecd4f401f32..16a7b4adcf18 100644
> > > --- a/tools/perf/util/block-info.c
> > > +++ b/tools/perf/util/block-info.c
> > > @@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
> > >       struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
> > >       struct map *map = he->ms.map;
> > >
> > > -     if (map && map->dso) {
> > > +     if (map && map__dso(map)) {
> > >               return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> > > -                              map->dso->short_name);
> > > +                              map__dso(map)->short_name);
> > >       }
> > >
> > >       return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
> > > diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
> > > index 33257b594a71..5717933be116 100644
> > > --- a/tools/perf/util/bpf-event.c
> > > +++ b/tools/perf/util/bpf-event.c
> > > @@ -95,10 +95,10 @@ static int machine__process_bpf_event_load(struct machine *machine,
> > >               struct map *map = maps__find(machine__kernel_maps(machine), addr);
> > >
> > >               if (map) {
> > > -                     map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> > > -                     map->dso->bpf_prog.id = id;
> > > -                     map->dso->bpf_prog.sub_id = i;
> > > -                     map->dso->bpf_prog.env = env;
> > > +                     map__dso(map)->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
> > > +                     map__dso(map)->bpf_prog.id = id;
> > > +                     map__dso(map)->bpf_prog.sub_id = i;
> > > +                     map__dso(map)->bpf_prog.env = env;
> > >               }
> > >       }
> > >       return 0;
> > > diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
> > > index 7a5821c87f94..274b705dd941 100644
> > > --- a/tools/perf/util/build-id.c
> > > +++ b/tools/perf/util/build-id.c
> > > @@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
> > >       }
> > >
> > >       if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
> > > -             al.map->dso->hit = 1;
> > > +             map__dso(al.map)->hit = 1;
> > >
> > >       thread__put(thread);
> > >       return 0;
> > > diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> > > index 61bb3fb2107a..a8cfd31a3ff0 100644
> > > --- a/tools/perf/util/callchain.c
> > > +++ b/tools/perf/util/callchain.c
> > > @@ -695,8 +695,8 @@ static enum match_result match_chain_strings(const char *left,
> > >  static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
> > >                                                  struct map *right_map, u64 right_ip)
> > >  {
> > > -     struct dso *left_dso = left_map ? left_map->dso : NULL;
> > > -     struct dso *right_dso = right_map ? right_map->dso : NULL;
> > > +     struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
> > > +     struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
> > >
> > >       if (left_dso != right_dso)
> > >               return left_dso < right_dso ? MATCH_LT : MATCH_GT;
> > > @@ -1167,9 +1167,9 @@ char *callchain_list__sym_name(struct callchain_list *cl,
> > >
> > >       if (show_dso)
> > >               scnprintf(bf + printed, bfsize - printed, " %s",
> > > -                       cl->ms.map ?
> > > -                       cl->ms.map->dso->short_name :
> > > -                       "unknown");
> > > +                       cl->ms.map
> > > +                       ? map__dso(cl->ms.map)->short_name
> > > +                       : "unknown");
> > >
> > >       return bf;
> > >  }
> > > diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
> > > index f1ab6edba446..9c83228bb9f1 100644
> > > --- a/tools/perf/util/data-convert-json.c
> > > +++ b/tools/perf/util/data-convert-json.c
> > > @@ -127,8 +127,8 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
> > >               fputc(',', out);
> > >               output_json_key_string(out, false, 5, "symbol", al->sym->name);
> > >
> > > -             if (al->map && al->map->dso) {
> > > -                     const char *dso = al->map->dso->short_name;
> > > +             if (al->map && map__dso(al->map)) {
> > > +                     const char *dso = map__dso(al->map)->short_name;
> > >
> > >                       if (dso && strlen(dso) > 0) {
> > >                               fputc(',', out);
> > > diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
> > > index 1cfcfdd3cf52..84c970c11794 100644
> > > --- a/tools/perf/util/db-export.c
> > > +++ b/tools/perf/util/db-export.c
> > > @@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
> > >       int err;
> > >
> > >       if (al->map) {
> > > -             struct dso *dso = al->map->dso;
> > > +             struct dso *dso = map__dso(al->map);
> > >
> > >               err = db_export__dso(dbe, dso, maps__machine(al->maps));
> > >               if (err)
> > > @@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
> > >               al.addr = node->ip;
> > >
> > >               if (al.map && !al.sym)
> > > -                     al.sym = dso__find_symbol(al.map->dso, al.addr);
> > > +                     al.sym = dso__find_symbol(map__dso(al.map), al.addr);
> > >
> > >               db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
> > >
> > > diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
> > > index d59462af15f1..f1d9dd7065e6 100644
> > > --- a/tools/perf/util/dlfilter.c
> > > +++ b/tools/perf/util/dlfilter.c
> > > @@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
> > >
> > >       d_al->size = sizeof(*d_al);
> > >       if (al->map) {
> > > -             struct dso *dso = al->map->dso;
> > > +             struct dso *dso = map__dso(al->map);
> > >
> > >               if (symbol_conf.show_kernel_path && dso->long_name)
> > >                       d_al->dso = dso->long_name;
> > > @@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
> > >               if (al->addr < sym->end)
> > >                       d_al->symoff = al->addr - sym->start;
> > >               else
> > > -                     d_al->symoff = al->addr - al->map->start - sym->start;
> > > +                     d_al->symoff = al->addr - map__start(al->map) - sym->start;
> > >               d_al->sym_binding = sym->binding;
> > >       } else {
> > >               d_al->sym = NULL;
> > > @@ -232,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
> > >       map = al->map;
> > >       addr = al->addr;
> > >
> > > -     if (map && map->dso)
> > > -             srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
> > > -
> > > +     if (map && map__dso(map)) {
> > > +             srcfile = get_srcline_split(map__dso(map),
> > > +                                         map__rip_2objdump(map, addr), &line);
> > > +     }
> > >       *line_no = line;
> > >       return srcfile;
> > >  }
> > > @@ -266,7 +267,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
> > >
> > >       map = al->map;
> > >
> > > -     if (map && ip >= map->start && ip < map->end &&
> > > +     if (map && ip >= map__start(map) && ip < map__end(map) &&
> > >           machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
> > >               goto have_map;
> > >
> > > @@ -276,10 +277,10 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
> > >
> > >       map = a.map;
> > >  have_map:
> > > -     offset = map->map_ip(map, ip);
> > > -     if (ip + len >= map->end)
> > > -             len = map->end - ip;
> > > -     return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
> > > +     offset = map__map_ip(map, ip);
> > > +     if (ip + len >= map__end(map))
> > > +             len = map__end(map) - ip;
> > > +     return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
> > >  }
> > >
> > >  static const struct perf_dlfilter_fns perf_dlfilter_fns = {
> > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > index b2f570adba35..1115bc51a261 100644
> > > --- a/tools/perf/util/dso.c
> > > +++ b/tools/perf/util/dso.c
> > > @@ -1109,7 +1109,7 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
> > >                           struct machine *machine, u64 addr,
> > >                           u8 *data, ssize_t size)
> > >  {
> > > -     u64 offset = map->map_ip(map, addr);
> > > +     u64 offset = map__map_ip(map, addr);
> > >       return dso__data_read_offset(dso, machine, offset, data, size);
> > >  }
> > >
> > > @@ -1149,7 +1149,7 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
> > >                                  struct machine *machine, u64 addr,
> > >                                  const u8 *data, ssize_t size)
> > >  {
> > > -     u64 offset = map->map_ip(map, addr);
> > > +     u64 offset = map__map_ip(map, addr);
> > >       return dso__data_write_cache_offs(dso, machine, offset, data, size);
> > >  }
> > >
> > > diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> > > index 40a3b1a35613..54a1d4df5f70 100644
> > > --- a/tools/perf/util/event.c
> > > +++ b/tools/perf/util/event.c
> > > @@ -486,7 +486,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
> > >
> > >               al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> > >               if (al.map && map__load(al.map) >= 0) {
> > > -                     al.addr = al.map->map_ip(al.map, tp->addr);
> > > +                     al.addr = map__map_ip(al.map, tp->addr);
> > >                       al.sym = map__find_symbol(al.map, al.addr);
> > >                       if (al.sym)
> > >                               ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> > > @@ -621,7 +621,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >                */
> > >               if (load_map)
> > >                       map__load(al->map);
> > > -             al->addr = al->map->map_ip(al->map, al->addr);
> > > +             al->addr = map__map_ip(al->map, al->addr);
> > >       }
> > >
> > >       return al->map;
> > > @@ -692,8 +692,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> > >       dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
> > >       thread__find_map(thread, sample->cpumode, sample->ip, al);
> > >       dump_printf(" ...... dso: %s\n",
> > > -                 al->map ? al->map->dso->long_name :
> > > -                     al->level == 'H' ? "[hypervisor]" : "<not found>");
> > > +                 al->map ? map__dso(al->map)->long_name
> > > +                         : al->level == 'H' ? "[hypervisor]" : "<not found>");
> > >
> > >       if (thread__is_filtered(thread))
> > >               al->filtered |= (1 << HIST_FILTER__THREAD);
> > > @@ -711,7 +711,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> > >       }
> > >
> > >       if (al->map) {
> > > -             struct dso *dso = al->map->dso;
> > > +             struct dso *dso = map__dso(al->map);
> > >
> > >               if (symbol_conf.dso_list &&
> > >                   (!dso || !(strlist__has_entry(symbol_conf.dso_list,
> > > @@ -738,12 +738,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> > >               }
> > >               if (!ret && al->sym) {
> > >                       snprintf(al_addr_str, sz, "0x%"PRIx64,
> > > -                             al->map->unmap_ip(al->map, al->sym->start));
> > > +                              map__unmap_ip(al->map, al->sym->start));
> > >                       ret = strlist__has_entry(symbol_conf.sym_list,
> > >                                               al_addr_str);
> > >               }
> > >               if (!ret && symbol_conf.addr_list && al->map) {
> > > -                     unsigned long addr = al->map->unmap_ip(al->map, al->addr);
> > > +                     unsigned long addr = map__unmap_ip(al->map, al->addr);
> > >
> > >                       ret = intlist__has_entry(symbol_conf.addr_list, addr);
> > >                       if (!ret && symbol_conf.addr_range) {
> > > diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
> > > index 8c2ea8001329..ac6fef9d8906 100644
> > > --- a/tools/perf/util/evsel_fprintf.c
> > > +++ b/tools/perf/util/evsel_fprintf.c
> > > @@ -146,11 +146,11 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
> > >                               printed += fprintf(fp, " <-");
> > >
> > >                       if (map)
> > > -                             addr = map->map_ip(map, node->ip);
> > > +                             addr = map__map_ip(map, node->ip);
> > >
> > >                       if (print_ip) {
> > >                               /* Show binary offset for userspace addr */
> > > -                             if (map && !map->dso->kernel)
> > > +                             if (map && !map__dso(map)->kernel)
> > >                                       printed += fprintf(fp, "%c%16" PRIx64, s, addr);
> > >                               else
> > >                                       printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
> > > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > > index 78f9fbb925a7..f19ac6eb4775 100644
> > > --- a/tools/perf/util/hist.c
> > > +++ b/tools/perf/util/hist.c
> > > @@ -105,7 +105,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> > >               hists__set_col_len(hists, HISTC_THREAD, len + 8);
> > >
> > >       if (h->ms.map) {
> > > -             len = dso__name_len(h->ms.map->dso);
> > > +             len = dso__name_len(map__dso(h->ms.map));
> > >               hists__new_col_len(hists, HISTC_DSO, len);
> > >       }
> > >
> > > @@ -119,7 +119,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> > >                               symlen += BITS_PER_LONG / 4 + 2 + 3;
> > >                       hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
> > >
> > > -                     symlen = dso__name_len(h->branch_info->from.ms.map->dso);
> > > +                     symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
> > >                       hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
> > >               } else {
> > >                       symlen = unresolved_col_width + 4 + 2;
> > > @@ -133,7 +133,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> > >                               symlen += BITS_PER_LONG / 4 + 2 + 3;
> > >                       hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
> > >
> > > -                     symlen = dso__name_len(h->branch_info->to.ms.map->dso);
> > > +                     symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
> > >                       hists__new_col_len(hists, HISTC_DSO_TO, symlen);
> > >               } else {
> > >                       symlen = unresolved_col_width + 4 + 2;
> > > @@ -177,7 +177,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
> > >               }
> > >
> > >               if (h->mem_info->daddr.ms.map) {
> > > -                     symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
> > > +                     symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
> > >                       hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
> > >                                          symlen);
> > >               } else {
> > > @@ -2096,7 +2096,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
> > >                                      struct hist_entry *he)
> > >  {
> > >       if (hists->dso_filter != NULL &&
> > > -         (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
> > > +         (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
> > >               he->filtered |= (1 << HIST_FILTER__DSO);
> > >               return true;
> > >       }
> > > diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
> > > index e8613cbda331..c88f112c0a06 100644
> > > --- a/tools/perf/util/intel-pt.c
> > > +++ b/tools/perf/util/intel-pt.c
> > > @@ -731,20 +731,20 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> > >       }
> > >
> > >       while (1) {
> > > -             if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso)
> > > +             if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map))
> > >                       return -EINVAL;
> > >
> > > -             if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
> > > -                 dso__data_status_seen(al.map->dso,
> > > +             if (map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR &&
> > > +                 dso__data_status_seen(map__dso(al.map),
> > >                                         DSO_DATA_STATUS_SEEN_ITRACE))
> > >                       return -ENOENT;
> > >
> > > -             offset = al.map->map_ip(al.map, *ip);
> > > +             offset = map__map_ip(al.map, *ip);
> > >
> > >               if (!to_ip && one_map) {
> > >                       struct intel_pt_cache_entry *e;
> > >
> > > -                     e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> > > +                     e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
> > >                       if (e &&
> > >                           (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
> > >                               *insn_cnt_ptr = e->insn_cnt;
> > > @@ -766,10 +766,10 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> > >               /* Load maps to ensure dso->is_64_bit has been updated */
> > >               map__load(al.map);
> > >
> > > -             x86_64 = al.map->dso->is_64_bit;
> > > +             x86_64 = map__dso(al.map)->is_64_bit;
> > >
> > >               while (1) {
> > > -                     len = dso__data_read_offset(al.map->dso, machine,
> > > +                     len = dso__data_read_offset(map__dso(al.map), machine,
> > >                                                   offset, buf,
> > >                                                   INTEL_PT_INSN_BUF_SZ);
> > >                       if (len <= 0)
> > > @@ -795,7 +795,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> > >                               goto out_no_cache;
> > >                       }
> > >
> > > -                     if (*ip >= al.map->end)
> > > +                     if (*ip >= map__end(al.map))
> > >                               break;
> > >
> > >                       offset += intel_pt_insn->length;
> > > @@ -815,13 +815,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
> > >       if (to_ip) {
> > >               struct intel_pt_cache_entry *e;
> > >
> > > -             e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
> > > +             e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
> > >               if (e)
> > >                       return 0;
> > >       }
> > >
> > >       /* Ignore cache errors */
> > > -     intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
> > > +     intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
> > >                          *ip - start_ip, intel_pt_insn);
> > >
> > >       return 0;
> > > @@ -892,13 +892,13 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
> > >       if (!thread)
> > >               return -EINVAL;
> > >
> > > -     if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
> > > +     if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
> > >               return -EINVAL;
> > >
> > > -     offset = al.map->map_ip(al.map, ip);
> > > +     offset = map__map_ip(al.map, ip);
> > >
> > >       return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
> > > -                                  al.map->dso->long_name);
> > > +                                  map__dso(al.map)->long_name);
> > >  }
> > >
> > >  static bool intel_pt_pgd_ip(uint64_t ip, void *data)
> > > @@ -2406,13 +2406,13 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
> > >       if (map__load(map))
> > >               return 0;
> > >
> > > -     start = dso__first_symbol(map->dso);
> > > +     start = dso__first_symbol(map__dso(map));
> > >
> > >       for (sym = start; sym; sym = dso__next_symbol(sym)) {
> > >               if (sym->binding == STB_GLOBAL &&
> > >                   !strcmp(sym->name, "__switch_to")) {
> > > -                     ip = map->unmap_ip(map, sym->start);
> > > -                     if (ip >= map->start && ip < map->end) {
> > > +                     ip = map__unmap_ip(map, sym->start);
> > > +                     if (ip >= map__start(map) && ip < map__end(map)) {
> > >                               switch_ip = ip;
> > >                               break;
> > >                       }
> > > @@ -2429,8 +2429,8 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
> > >
> > >       for (sym = start; sym; sym = dso__next_symbol(sym)) {
> > >               if (!strcmp(sym->name, ptss)) {
> > > -                     ip = map->unmap_ip(map, sym->start);
> > > -                     if (ip >= map->start && ip < map->end) {
> > > +                     ip = map__unmap_ip(map, sym->start);
> > > +                     if (ip >= map__start(map) && ip < map__end(map)) {
> > >                               *ptss_ip = ip;
> > >                               break;
> > >                       }
> > > @@ -2965,7 +2965,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
> > >  static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >                            struct addr_location *al)
> > >  {
> > > -     if (!al->map || addr < al->map->start || addr >= al->map->end) {
> > > +     if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
> > >               if (!thread__find_map(thread, cpumode, addr, al))
> > >                       return -1;
> > >       }
> > > @@ -2996,12 +2996,12 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
> > >                       continue;
> > >               }
> > >
> > > -             if (!al.map->dso || !al.map->dso->auxtrace_cache)
> > > +             if (!map__dso(al.map) || !map__dso(al.map)->auxtrace_cache)
> > >                       continue;
> > >
> > > -             offset = al.map->map_ip(al.map, addr);
> > > +             offset = map__map_ip(al.map, addr);
> > >
> > > -             e = intel_pt_cache_lookup(al.map->dso, machine, offset);
> > > +             e = intel_pt_cache_lookup(map__dso(al.map), machine, offset);
> > >               if (!e)
> > >                       continue;
> > >
> > > @@ -3014,9 +3014,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
> > >                       if (e->branch != INTEL_PT_BR_NO_BRANCH)
> > >                               return 0;
> > >               } else {
> > > -                     intel_pt_cache_invalidate(al.map->dso, machine, offset);
> > > +                     intel_pt_cache_invalidate(map__dso(al.map), machine, offset);
> > >                       intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
> > > -                                  al.map->dso->long_name, addr);
> > > +                                  map__dso(al.map)->long_name, addr);
> > >               }
> > >       }
> > >
> > > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > > index 88279008e761..940fb2a50dfd 100644
> > > --- a/tools/perf/util/machine.c
> > > +++ b/tools/perf/util/machine.c
> > > @@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
> > >
> > >  static struct dso *machine__kernel_dso(struct machine *machine)
> > >  {
> > > -     return machine->vmlinux_map->dso;
> > > +     return map__dso(machine->vmlinux_map);
> > >  }
> > >
> > >  static void dsos__init(struct dsos *dsos)
> > > @@ -842,9 +842,10 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
> > >       if (map != machine->vmlinux_map)
> > >               maps__remove(machine__kernel_maps(machine), map);
> > >       else {
> > > -             sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
> > > +             sym = dso__find_symbol(map__dso(map),
> > > +                             map__map_ip(map, map__start(map)));
> > >               if (sym)
> > > -                     dso__delete_symbol(map->dso, sym);
> > > +                     dso__delete_symbol(map__dso(map), sym);
> > >       }
> > >
> > >       return 0;
> > > @@ -880,7 +881,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
> > >               return 0;
> > >       }
> > >
> > > -     if (map && map->dso) {
> > > +     if (map && map__dso(map)) {
> > >               u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
> > >               int ret;
> > >
> > > @@ -889,7 +890,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
> > >                * must be done prior to using kernel maps.
> > >                */
> > >               map__load(map);
> > > -             ret = dso__data_write_cache_addr(map->dso, map, machine,
> > > +             ret = dso__data_write_cache_addr(map__dso(map), map, machine,
> > >                                                event->text_poke.addr,
> > >                                                new_bytes,
> > >                                                event->text_poke.new_len);
> > > @@ -931,6 +932,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> > >       /* If maps__insert failed, return NULL. */
> > >       if (err)
> > >               map = NULL;
> > > +
> > >  out:
> > >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> > >       dso__put(dso);
> > > @@ -1118,7 +1120,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
> > >
> > >       if (!err) {
> > >               pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> > > -                     kmap->name, map->start, map->end);
> > > +                     kmap->name, map__start(map), map__end(map));
> > >       }
> > >
> > >       map__put(map);
> > > @@ -1178,9 +1180,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
> > >               if (!kmap || !is_entry_trampoline(kmap->name))
> > >                       continue;
> > >
> > > -             dest_map = maps__find(kmaps, map->pgoff);
> > > +             dest_map = maps__find(kmaps, map__pgoff(map));
> > >               if (dest_map != map)
> > > -                     map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
> > > +                     map->pgoff = map__map_ip(dest_map, map__pgoff(map));
> > >               found = true;
> > >       }
> > >       if (found || machine->trampolines_mapped)
> > > @@ -1230,7 +1232,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> > >       if (machine->vmlinux_map == NULL)
> > >               return -ENOMEM;
> > >
> > > -     machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> > > +     machine->vmlinux_map->map_ip = map__identity_ip;
> > > +     machine->vmlinux_map->unmap_ip = map__identity_ip;
> > >       return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
> > >  }
> > >
> > > @@ -1329,10 +1332,10 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
> > >  int machine__load_kallsyms(struct machine *machine, const char *filename)
> > >  {
> > >       struct map *map = machine__kernel_map(machine);
> > > -     int ret = __dso__load_kallsyms(map->dso, filename, map, true);
> > > +     int ret = __dso__load_kallsyms(map__dso(map), filename, map, true);
> > >
> > >       if (ret > 0) {
> > > -             dso__set_loaded(map->dso);
> > > +             dso__set_loaded(map__dso(map));
> > >               /*
> > >                * Since /proc/kallsyms will have multiple sessions for the
> > >                * kernel, with modules between them, fixup the end of all
> > > @@ -1347,10 +1350,10 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
> > >  int machine__load_vmlinux_path(struct machine *machine)
> > >  {
> > >       struct map *map = machine__kernel_map(machine);
> > > -     int ret = dso__load_vmlinux_path(map->dso, map);
> > > +     int ret = dso__load_vmlinux_path(map__dso(map), map);
> > >
> > >       if (ret > 0)
> > > -             dso__set_loaded(map->dso);
> > > +             dso__set_loaded(map__dso(map));
> > >
> > >       return ret;
> > >  }
> > > @@ -1401,16 +1404,16 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
> > >       if (long_name == NULL)
> > >               return -ENOMEM;
> > >
> > > -     dso__set_long_name(map->dso, long_name, true);
> > > -     dso__kernel_module_get_build_id(map->dso, "");
> > > +     dso__set_long_name(map__dso(map), long_name, true);
> > > +     dso__kernel_module_get_build_id(map__dso(map), "");
> > >
> > >       /*
> > >        * Full name could reveal us kmod compression, so
> > >        * we need to update the symtab_type if needed.
> > >        */
> > > -     if (m->comp && is_kmod_dso(map->dso)) {
> > > -             map->dso->symtab_type++;
> > > -             map->dso->comp = m->comp;
> > > +     if (m->comp && is_kmod_dso(map__dso(map))) {
> > > +             map__dso(map)->symtab_type++;
> > > +             map__dso(map)->comp = m->comp;
> > >       }
> > >
> > >       return 0;
> > > @@ -1509,8 +1512,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
> > >               return -1;
> > >       map->end = start + size;
> > >
> > > -     dso__kernel_module_get_build_id(map->dso, machine->root_dir);
> > > -
> > > +     dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> > >       return 0;
> > >  }
> > >
> > > @@ -1619,7 +1621,7 @@ int machine__create_kernel_maps(struct machine *machine)
> > >               struct map_rb_node *next = map_rb_node__next(rb_node);
> > >
> > >               if (next)
> > > -                     machine__set_kernel_mmap(machine, start, next->map->start);
> > > +                     machine__set_kernel_mmap(machine, start, map__start(next->map));
> > >       }
> > >
> > >  out_put:
> > > @@ -1683,10 +1685,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
> > >               if (map == NULL)
> > >                       goto out_problem;
> > >
> > > -             map->end = map->start + xm->end - xm->start;
> > > +             map->end = map__start(map) + xm->end - xm->start;
> > >
> > >               if (build_id__is_defined(bid))
> > > -                     dso__set_build_id(map->dso, bid);
> > > +                     dso__set_build_id(map__dso(map), bid);
> > >
> > >       } else if (is_kernel_mmap) {
> > >               const char *symbol_name = (xm->name + strlen(machine->mmap_name));
> > > @@ -2148,14 +2150,14 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
> > >       if (!map || callchain_param.key == CCKEY_FUNCTION)
> > >               return srcline;
> > >
> > > -     srcline = srcline__tree_find(&map->dso->srclines, ip);
> > > +     srcline = srcline__tree_find(&map__dso(map)->srclines, ip);
> > >       if (!srcline) {
> > >               bool show_sym = false;
> > >               bool show_addr = callchain_param.key == CCKEY_ADDRESS;
> > >
> > > -             srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
> > > +             srcline = get_srcline(map__dso(map), map__rip_2objdump(map, ip),
> > >                                     ms->sym, show_sym, show_addr, ip);
> > > -             srcline__tree_insert(&map->dso->srclines, ip, srcline);
> > > +             srcline__tree_insert(&map__dso(map)->srclines, ip, srcline);
> > >       }
> > >
> > >       return srcline;
> > > @@ -2179,7 +2181,7 @@ static int add_callchain_ip(struct thread *thread,
> > >  {
> > >       struct map_symbol ms;
> > >       struct addr_location al;
> > > -     int nr_loop_iter = 0;
> > > +     int nr_loop_iter = 0, err;
> > >       u64 iter_cycles = 0;
> > >       const char *srcline = NULL;
> > >
> > > @@ -2228,9 +2230,10 @@ static int add_callchain_ip(struct thread *thread,
> > >               }
> > >       }
> > >
> > > -     if (symbol_conf.hide_unresolved && al.sym == NULL)
> > > +     if (symbol_conf.hide_unresolved && al.sym == NULL) {
> > > +             addr_location__put(&al);
> > >               return 0;
> > > -
> > > +     }
> > >       if (iter) {
> > >               nr_loop_iter = iter->nr_loop_iter;
> > >               iter_cycles = iter->cycles;
> > > @@ -2240,9 +2243,10 @@ static int add_callchain_ip(struct thread *thread,
> > >       ms.map = al.map;
> > >       ms.sym = al.sym;
> > >       srcline = callchain_srcline(&ms, al.addr);
> > > -     return callchain_cursor_append(cursor, ip, &ms,
> > > -                                    branch, flags, nr_loop_iter,
> > > -                                    iter_cycles, branch_from, srcline);
> > > +     err = callchain_cursor_append(cursor, ip, &ms,
> > > +                                   branch, flags, nr_loop_iter,
> > > +                                   iter_cycles, branch_from, srcline);
> > > +     return err;
> > >  }
> > >
> > >  struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
> > > @@ -2937,15 +2941,15 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
> > >       if (!symbol_conf.inline_name || !map || !sym)
> > >               return ret;
> > >
> > > -     addr = map__map_ip(map, ip);
> > > +     addr = map__dso_map_ip(map, ip);
> > >       addr = map__rip_2objdump(map, addr);
> > >
> > > -     inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
> > > +     inline_node = inlines__tree_find(&map__dso(map)->inlined_nodes, addr);
> > >       if (!inline_node) {
> > > -             inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
> > > +             inline_node = dso__parse_addr_inlines(map__dso(map), addr, sym);
> > >               if (!inline_node)
> > >                       return ret;
> > > -             inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
> > > +             inlines__tree_insert(&map__dso(map)->inlined_nodes, inline_node);
> > >       }
> > >
> > >       list_for_each_entry(ilist, &inline_node->val, list) {
> > > @@ -2981,7 +2985,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
> > >        * its corresponding binary.
> > >        */
> > >       if (entry->ms.map)
> > > -             addr = map__map_ip(entry->ms.map, entry->ip);
> > > +             addr = map__dso_map_ip(entry->ms.map, entry->ip);
> > >
> > >       srcline = callchain_srcline(&entry->ms, addr);
> > >       return callchain_cursor_append(cursor, entry->ip, &entry->ms,
> > > @@ -3183,7 +3187,7 @@ int machine__get_kernel_start(struct machine *machine)
> > >                * kernel_start = 1ULL << 63 for x86_64.
> > >                */
> > >               if (!err && !machine__is(machine, "x86_64"))
> > > -                     machine->kernel_start = map->start;
> > > +                     machine->kernel_start = map__start(map);
> > >       }
> > >       return err;
> > >  }
> > > @@ -3234,8 +3238,8 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
> > >       if (sym == NULL)
> > >               return NULL;
> > >
> > > -     *modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
> > > -     *addrp = map->unmap_ip(map, sym->start);
> > > +     *modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
> > > +     *addrp = map__unmap_ip(map, sym->start);
> > >       return sym->name;
> > >  }
> > >
> > > diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> > > index 57e926ce115f..47d81e361e29 100644
> > > --- a/tools/perf/util/map.c
> > > +++ b/tools/perf/util/map.c
> > > @@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
> > >       map->pgoff    = pgoff;
> > >       map->reloc    = 0;
> > >       map->dso      = dso__get(dso);
> > > -     map->map_ip   = map__map_ip;
> > > -     map->unmap_ip = map__unmap_ip;
> > > +     map->map_ip   = map__dso_map_ip;
> > > +     map->unmap_ip = map__dso_unmap_ip;
> > >       map->erange_warned = false;
> > >       refcount_set(&map->refcnt, 1);
> > >  }
> > > @@ -120,10 +120,11 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> > >                    u32 prot, u32 flags, struct build_id *bid,
> > >                    char *filename, struct thread *thread)
> > >  {
> > > -     struct map *map = malloc(sizeof(*map));
> > > +     struct map *map;
> > >       struct nsinfo *nsi = NULL;
> > >       struct nsinfo *nnsi;
> > >
> > > +     map = malloc(sizeof(*map));
> > >       if (map != NULL) {
> > >               char newfilename[PATH_MAX];
> > >               struct dso *dso;
> > > @@ -170,7 +171,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> > >               map__init(map, start, start + len, pgoff, dso);
> > >
> > >               if (anon || no_dso) {
> > > -                     map->map_ip = map->unmap_ip = identity__map_ip;
> > > +                     map->map_ip = map->unmap_ip = map__identity_ip;
> > >
> > >                       /*
> > >                        * Set memory without DSO as loaded. All map__find_*
> > > @@ -204,8 +205,9 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
> > >   */
> > >  struct map *map__new2(u64 start, struct dso *dso)
> > >  {
> > > -     struct map *map = calloc(1, (sizeof(*map) +
> > > -                                  (dso->kernel ? sizeof(struct kmap) : 0)));
> > > +     struct map *map;
> > > +
> > > +     map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
> > >       if (map != NULL) {
> > >               /*
> > >                * ->end will be filled after we load all the symbols
> > > @@ -218,7 +220,7 @@ struct map *map__new2(u64 start, struct dso *dso)
> > >
> > >  bool __map__is_kernel(const struct map *map)
> > >  {
> > > -     if (!map->dso->kernel)
> > > +     if (!map__dso(map)->kernel)
> > >               return false;
> > >       return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
> > >  }
> > > @@ -234,7 +236,7 @@ bool __map__is_bpf_prog(const struct map *map)
> > >  {
> > >       const char *name;
> > >
> > > -     if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> > > +     if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
> > >               return true;
> > >
> > >       /*
> > > @@ -242,7 +244,7 @@ bool __map__is_bpf_prog(const struct map *map)
> > >        * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
> > >        * guess the type based on name.
> > >        */
> > > -     name = map->dso->short_name;
> > > +     name = map__dso(map)->short_name;
> > >       return name && (strstr(name, "bpf_prog_") == name);
> > >  }
> > >
> > > @@ -250,7 +252,7 @@ bool __map__is_bpf_image(const struct map *map)
> > >  {
> > >       const char *name;
> > >
> > > -     if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> > > +     if (map__dso(map)->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
> > >               return true;
> > >
> > >       /*
> > > @@ -258,18 +260,19 @@ bool __map__is_bpf_image(const struct map *map)
> > >        * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
> > >        * guess the type based on name.
> > >        */
> > > -     name = map->dso->short_name;
> > > +     name = map__dso(map)->short_name;
> > >       return name && is_bpf_image(name);
> > >  }
> > >
> > >  bool __map__is_ool(const struct map *map)
> > >  {
> > > -     return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
> > > +     return map__dso(map) &&
> > > +            map__dso(map)->binary_type == DSO_BINARY_TYPE__OOL;
> > >  }
> > >
> > >  bool map__has_symbols(const struct map *map)
> > >  {
> > > -     return dso__has_symbols(map->dso);
> > > +     return dso__has_symbols(map__dso(map));
> > >  }
> > >
> > >  static void map__exit(struct map *map)
> > > @@ -292,7 +295,7 @@ void map__put(struct map *map)
> > >
> > >  void map__fixup_start(struct map *map)
> > >  {
> > > -     struct rb_root_cached *symbols = &map->dso->symbols;
> > > +     struct rb_root_cached *symbols = &map__dso(map)->symbols;
> > >       struct rb_node *nd = rb_first_cached(symbols);
> > >       if (nd != NULL) {
> > >               struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> > > @@ -302,7 +305,7 @@ void map__fixup_start(struct map *map)
> > >
> > >  void map__fixup_end(struct map *map)
> > >  {
> > > -     struct rb_root_cached *symbols = &map->dso->symbols;
> > > +     struct rb_root_cached *symbols = &map__dso(map)->symbols;
> > >       struct rb_node *nd = rb_last(&symbols->rb_root);
> > >       if (nd != NULL) {
> > >               struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
> > > @@ -314,18 +317,18 @@ void map__fixup_end(struct map *map)
> > >
> > >  int map__load(struct map *map)
> > >  {
> > > -     const char *name = map->dso->long_name;
> > > +     const char *name = map__dso(map)->long_name;
> > >       int nr;
> > >
> > > -     if (dso__loaded(map->dso))
> > > +     if (dso__loaded(map__dso(map)))
> > >               return 0;
> > >
> > > -     nr = dso__load(map->dso, map);
> > > +     nr = dso__load(map__dso(map), map);
> > >       if (nr < 0) {
> > > -             if (map->dso->has_build_id) {
> > > +             if (map__dso(map)->has_build_id) {
> > >                       char sbuild_id[SBUILD_ID_SIZE];
> > >
> > > -                     build_id__sprintf(&map->dso->bid, sbuild_id);
> > > +                     build_id__sprintf(&map__dso(map)->bid, sbuild_id);
> > >                       pr_debug("%s with build id %s not found", name, sbuild_id);
> > >               } else
> > >                       pr_debug("Failed to open %s", name);
> > > @@ -357,7 +360,7 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
> > >       if (map__load(map) < 0)
> > >               return NULL;
> > >
> > > -     return dso__find_symbol(map->dso, addr);
> > > +     return dso__find_symbol(map__dso(map), addr);
> > >  }
> > >
> > >  struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> > > @@ -365,24 +368,24 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
> > >       if (map__load(map) < 0)
> > >               return NULL;
> > >
> > > -     if (!dso__sorted_by_name(map->dso))
> > > -             dso__sort_by_name(map->dso);
> > > +     if (!dso__sorted_by_name(map__dso(map)))
> > > +             dso__sort_by_name(map__dso(map));
> > >
> > > -     return dso__find_symbol_by_name(map->dso, name);
> > > +     return dso__find_symbol_by_name(map__dso(map), name);
> > >  }
> > >
> > >  struct map *map__clone(struct map *from)
> > >  {
> > > -     size_t size = sizeof(struct map);
> > >       struct map *map;
> > > +     size_t size = sizeof(struct map);
> > >
> > > -     if (from->dso && from->dso->kernel)
> > > +     if (map__dso(from) && map__dso(from)->kernel)
> > >               size += sizeof(struct kmap);
> > >
> > >       map = memdup(from, size);
> > >       if (map != NULL) {
> > >               refcount_set(&map->refcnt, 1);
> > > -             dso__get(map->dso);
> > > +             map->dso = dso__get(map->dso);
> > >       }
> > >
> > >       return map;
> > > @@ -391,7 +394,8 @@ struct map *map__clone(struct map *from)
> > >  size_t map__fprintf(struct map *map, FILE *fp)
> > >  {
> > >       return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
> > > -                    map->start, map->end, map->pgoff, map->dso->name);
> > > +                    map__start(map), map__end(map),
> > > +                    map__pgoff(map), map__dso(map)->name);
> > >  }
> > >
> > >  size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> > > @@ -399,11 +403,11 @@ size_t map__fprintf_dsoname(struct map *map, FILE *fp)
> > >       char buf[symbol_conf.pad_output_len_dso + 1];
> > >       const char *dsoname = "[unknown]";
> > >
> > > -     if (map && map->dso) {
> > > -             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > > -                     dsoname = map->dso->long_name;
> > > +     if (map && map__dso(map)) {
> > > +             if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> > > +                     dsoname = map__dso(map)->long_name;
> > >               else
> > > -                     dsoname = map->dso->name;
> > > +                     dsoname = map__dso(map)->name;
> > >       }
> > >
> > >       if (symbol_conf.pad_output_len_dso) {
> > > @@ -418,7 +422,8 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
> > >  {
> > >       if (map == NULL)
> > >               return SRCLINE_UNKNOWN;
> > > -     return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
> > > +     return get_srcline(map__dso(map), map__rip_2objdump(map, addr),
> > > +                        sym, true, true, addr);
> > >  }
> > >
> > >  int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> > > @@ -426,7 +431,7 @@ int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
> > >  {
> > >       int ret = 0;
> > >
> > > -     if (map && map->dso) {
> > > +     if (map && map__dso(map)) {
> > >               char *srcline = map__srcline(map, addr, NULL);
> > >               if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
> > >                       ret = fprintf(fp, "%s%s", prefix, srcline);
> > > @@ -472,20 +477,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
> > >               }
> > >       }
> > >
> > > -     if (!map->dso->adjust_symbols)
> > > +     if (!map__dso(map)->adjust_symbols)
> > >               return rip;
> > >
> > > -     if (map->dso->rel)
> > > -             return rip - map->pgoff;
> > > +     if (map__dso(map)->rel)
> > > +             return rip - map__pgoff(map);
> > >
> > >       /*
> > >        * kernel modules also have DSO_TYPE_USER in dso->kernel,
> > >        * but all kernel modules are ET_REL, so won't get here.
> > >        */
> > > -     if (map->dso->kernel == DSO_SPACE__USER)
> > > -             return rip + map->dso->text_offset;
> > > +     if (map__dso(map)->kernel == DSO_SPACE__USER)
> > > +             return rip + map__dso(map)->text_offset;
> > >
> > > -     return map->unmap_ip(map, rip) - map->reloc;
> > > +     return map__unmap_ip(map, rip) - map__reloc(map);
> > >  }
> > >
> > >  /**
> > > @@ -502,34 +507,34 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
> > >   */
> > >  u64 map__objdump_2mem(struct map *map, u64 ip)
> > >  {
> > > -     if (!map->dso->adjust_symbols)
> > > -             return map->unmap_ip(map, ip);
> > > +     if (!map__dso(map)->adjust_symbols)
> > > +             return map__unmap_ip(map, ip);
> > >
> > > -     if (map->dso->rel)
> > > -             return map->unmap_ip(map, ip + map->pgoff);
> > > +     if (map__dso(map)->rel)
> > > +             return map__unmap_ip(map, ip + map__pgoff(map));
> > >
> > >       /*
> > >        * kernel modules also have DSO_TYPE_USER in dso->kernel,
> > >        * but all kernel modules are ET_REL, so won't get here.
> > >        */
> > > -     if (map->dso->kernel == DSO_SPACE__USER)
> > > -             return map->unmap_ip(map, ip - map->dso->text_offset);
> > > +     if (map__dso(map)->kernel == DSO_SPACE__USER)
> > > +             return map__unmap_ip(map, ip - map__dso(map)->text_offset);
> > >
> > > -     return ip + map->reloc;
> > > +     return ip + map__reloc(map);
> > >  }
> > >
> > >  bool map__contains_symbol(const struct map *map, const struct symbol *sym)
> > >  {
> > > -     u64 ip = map->unmap_ip(map, sym->start);
> > > +     u64 ip = map__unmap_ip(map, sym->start);
> > >
> > > -     return ip >= map->start && ip < map->end;
> > > +     return ip >= map__start(map) && ip < map__end(map);
> > >  }
> > >
> > >  struct kmap *__map__kmap(struct map *map)
> > >  {
> > > -     if (!map->dso || !map->dso->kernel)
> > > +     if (!map__dso(map) || !map__dso(map)->kernel)
> > >               return NULL;
> > > -     return (struct kmap *)(map + 1);
> > > +     return (struct kmap *)(&map[1]);
> > >  }
> > >
> > >  struct kmap *map__kmap(struct map *map)
> > > @@ -552,17 +557,17 @@ struct maps *map__kmaps(struct map *map)
> > >       return kmap->kmaps;
> > >  }
> > >
> > > -u64 map__map_ip(const struct map *map, u64 ip)
> > > +u64 map__dso_map_ip(const struct map *map, u64 ip)
> > >  {
> > > -     return ip - map->start + map->pgoff;
> > > +     return ip - map__start(map) + map__pgoff(map);
> > >  }
> > >
> > > -u64 map__unmap_ip(const struct map *map, u64 ip)
> > > +u64 map__dso_unmap_ip(const struct map *map, u64 ip)
> > >  {
> > > -     return ip + map->start - map->pgoff;
> > > +     return ip + map__start(map) - map__pgoff(map);
> > >  }
> > >
> > > -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
> > > +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip)
> > >  {
> > >       return ip;
> > >  }
> > > diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> > > index d1a6f85fd31d..99ef0464a357 100644
> > > --- a/tools/perf/util/map.h
> > > +++ b/tools/perf/util/map.h
> > > @@ -41,15 +41,65 @@ struct kmap *map__kmap(struct map *map);
> > >  struct maps *map__kmaps(struct map *map);
> > >
> > >  /* ip -> dso rip */
> > > -u64 map__map_ip(const struct map *map, u64 ip);
> > > +u64 map__dso_map_ip(const struct map *map, u64 ip);
> > >  /* dso rip -> ip */
> > > -u64 map__unmap_ip(const struct map *map, u64 ip);
> > > +u64 map__dso_unmap_ip(const struct map *map, u64 ip);
> > >  /* Returns ip */
> > > -u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
> > > +u64 map__identity_ip(const struct map *map __maybe_unused, u64 ip);
> > > +
> > > +static inline struct dso *map__dso(const struct map *map)
> > > +{
> > > +     return map->dso;
> > > +}
> > > +
> > > +static inline u64 map__map_ip(const struct map *map, u64 ip)
> > > +{
> > > +     return map->map_ip(map, ip);
> > > +}
> > > +
> > > +static inline u64 map__unmap_ip(const struct map *map, u64 ip)
> > > +{
> > > +     return map->unmap_ip(map, ip);
> > > +}
> > > +
> > > +static inline u64 map__start(const struct map *map)
> > > +{
> > > +     return map->start;
> > > +}
> > > +
> > > +static inline u64 map__end(const struct map *map)
> > > +{
> > > +     return map->end;
> > > +}
> > > +
> > > +static inline u64 map__pgoff(const struct map *map)
> > > +{
> > > +     return map->pgoff;
> > > +}
> > > +
> > > +static inline u64 map__reloc(const struct map *map)
> > > +{
> > > +     return map->reloc;
> > > +}
> > > +
> > > +static inline u32 map__flags(const struct map *map)
> > > +{
> > > +     return map->flags;
> > > +}
> > > +
> > > +static inline u32 map__prot(const struct map *map)
> > > +{
> > > +     return map->prot;
> > > +}
> > > +
> > > +static inline bool map__priv(const struct map *map)
> > > +{
> > > +     return map->priv;
> > > +}
> > >
> > >  static inline size_t map__size(const struct map *map)
> > >  {
> > > -     return map->end - map->start;
> > > +     return map__end(map) - map__start(map);
> > >  }
> > >
> > >  /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
> > > diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> > > index 9fc3e7186b8e..6efbcb79131c 100644
> > > --- a/tools/perf/util/maps.c
> > > +++ b/tools/perf/util/maps.c
> > > @@ -30,24 +30,24 @@ static void __maps__free_maps_by_name(struct maps *maps)
> > >       maps->nr_maps_allocated = 0;
> > >  }
> > >
> > > -static int __maps__insert(struct maps *maps, struct map *map)
> > > +static struct map *__maps__insert(struct maps *maps, struct map *map)
> > >  {
> > >       struct rb_node **p = &maps__entries(maps)->rb_node;
> > >       struct rb_node *parent = NULL;
> > > -     const u64 ip = map->start;
> > > +     const u64 ip = map__start(map);
> > >       struct map_rb_node *m, *new_rb_node;
> > >
> > >       new_rb_node = malloc(sizeof(*new_rb_node));
> > >       if (!new_rb_node)
> > > -             return -ENOMEM;
> > > +             return NULL;
> > >
> > >       RB_CLEAR_NODE(&new_rb_node->rb_node);
> > > -     new_rb_node->map = map;
> > > +     new_rb_node->map = map__get(map);
> > >
> > >       while (*p != NULL) {
> > >               parent = *p;
> > >               m = rb_entry(parent, struct map_rb_node, rb_node);
> > > -             if (ip < m->map->start)
> > > +             if (ip < map__start(m->map))
> > >                       p = &(*p)->rb_left;
> > >               else
> > >                       p = &(*p)->rb_right;
> > > @@ -55,22 +55,23 @@ static int __maps__insert(struct maps *maps, struct map *map)
> > >
> > >       rb_link_node(&new_rb_node->rb_node, parent, p);
> > >       rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
> > > -     map__get(map);
> > > -     return 0;
> > > +     return new_rb_node->map;
> > >  }
> > >
> > >  int maps__insert(struct maps *maps, struct map *map)
> > >  {
> > > -     int err;
> > > +     int err = 0;
> > >
> > >       down_write(maps__lock(maps));
> > > -     err = __maps__insert(maps, map);
> > > -     if (err)
> > > +     map = __maps__insert(maps, map);
> > > +     if (!map) {
> > > +             err = -ENOMEM;
> > >               goto out;
> > > +     }
> > >
> > >       ++maps->nr_maps;
> > >
> > > -     if (map->dso && map->dso->kernel) {
> > > +     if (map__dso(map) && map__dso(map)->kernel) {
> > >               struct kmap *kmap = map__kmap(map);
> > >
> > >               if (kmap)
> > > @@ -193,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> > >       if (map != NULL && map__load(map) >= 0) {
> > >               if (mapp != NULL)
> > >                       *mapp = map;
> > > -             return map__find_symbol(map, map->map_ip(map, addr));
> > > +             return map__find_symbol(map, map__map_ip(map, addr));
> > >       }
> > >
> > >       return NULL;
> > > @@ -228,7 +229,8 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
> > >
> > >  int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> > >  {
> > > -     if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> > > +     if (ams->addr < map__start(ams->ms.map) ||
> > > +         ams->addr >= map__end(ams->ms.map)) {
> > >               if (maps == NULL)
> > >                       return -1;
> > >               ams->ms.map = maps__find(maps, ams->addr);
> > > @@ -236,7 +238,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> > >                       return -1;
> > >       }
> > >
> > > -     ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> > > +     ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
> > >       ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> > >
> > >       return ams->ms.sym ? 0 : -1;
> > > @@ -253,7 +255,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
> > >               printed += fprintf(fp, "Map:");
> > >               printed += map__fprintf(pos->map, fp);
> > >               if (verbose > 2) {
> > > -                     printed += dso__fprintf(pos->map->dso, fp);
> > > +                     printed += dso__fprintf(map__dso(pos->map), fp);
> > >                       printed += fprintf(fp, "--\n");
> > >               }
> > >       }
> > > @@ -282,9 +284,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >       while (next) {
> > >               struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
> > >
> > > -             if (pos->map->end > map->start) {
> > > +             if (map__end(pos->map) > map__start(map)) {
> > >                       first = next;
> > > -                     if (pos->map->start <= map->start)
> > > +                     if (map__start(pos->map) <= map__start(map))
> > >                               break;
> > >                       next = next->rb_left;
> > >               } else
> > > @@ -300,14 +302,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >                * Stop if current map starts after map->end.
> > >                * Maps are ordered by start: next will not overlap for sure.
> > >                */
> > > -             if (pos->map->start >= map->end)
> > > +             if (map__start(pos->map) >= map__end(map))
> > >                       break;
> > >
> > >               if (verbose >= 2) {
> > >
> > >                       if (use_browser) {
> > >                               pr_debug("overlapping maps in %s (disable tui for more info)\n",
> > > -                                        map->dso->name);
> > > +                                        map__dso(map)->name);
> > >                       } else {
> > >                               fputs("overlapping maps:\n", fp);
> > >                               map__fprintf(map, fp);
> > > @@ -320,7 +322,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >                * Now check if we need to create new maps for areas not
> > >                * overlapped by the new map:
> > >                */
> > > -             if (map->start > pos->map->start) {
> > > +             if (map__start(map) > map__start(pos->map)) {
> > >                       struct map *before = map__clone(pos->map);
> > >
> > >                       if (before == NULL) {
> > > @@ -328,17 +330,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >                               goto put_map;
> > >                       }
> > >
> > > -                     before->end = map->start;
> > > -                     err = __maps__insert(maps, before);
> > > -                     if (err)
> > > +                     before->end = map__start(map);
> > > +                     if (!__maps__insert(maps, before)) {
> > > +                             map__put(before);
> > > +                             err = -ENOMEM;
> > >                               goto put_map;
> > > +                     }
> > >
> > >                       if (verbose >= 2 && !use_browser)
> > >                               map__fprintf(before, fp);
> > >                       map__put(before);
> > >               }
> > >
> > > -             if (map->end < pos->map->end) {
> > > +             if (map__end(map) < map__end(pos->map)) {
> > >                       struct map *after = map__clone(pos->map);
> > >
> > >                       if (after == NULL) {
> > > @@ -346,14 +350,15 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >                               goto put_map;
> > >                       }
> > >
> > > -                     after->start = map->end;
> > > -                     after->pgoff += map->end - pos->map->start;
> > > -                     assert(pos->map->map_ip(pos->map, map->end) ==
> > > -                             after->map_ip(after, map->end));
> > > -                     err = __maps__insert(maps, after);
> > > -                     if (err)
> > > +                     after->start = map__end(map);
> > > +                     after->pgoff += map__end(map) - map__start(pos->map);
> > > +                     assert(map__map_ip(pos->map, map__end(map)) ==
> > > +                             map__map_ip(after, map__end(map)));
> > > +                     if (!__maps__insert(maps, after)) {
> > > +                             map__put(after);
> > > +                             err = -ENOMEM;
> > >                               goto put_map;
> > > -
> > > +                     }
> > >                       if (verbose >= 2 && !use_browser)
> > >                               map__fprintf(after, fp);
> > >                       map__put(after);
> > > @@ -377,7 +382,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> > >  int maps__clone(struct thread *thread, struct maps *parent)
> > >  {
> > >       struct maps *maps = thread->maps;
> > > -     int err;
> > > +     int err = 0;
> > >       struct map_rb_node *rb_node;
> > >
> > >       down_read(maps__lock(parent));
> > > @@ -391,17 +396,13 @@ int maps__clone(struct thread *thread, struct maps *parent)
> > >               }
> > >
> > >               err = unwind__prepare_access(maps, new, NULL);
> > > -             if (err)
> > > -                     goto out_unlock;
> > > +             if (!err)
> > > +                     err = maps__insert(maps, new);
> > >
> > > -             err = maps__insert(maps, new);
> > > +             map__put(new);
> > >               if (err)
> > >                       goto out_unlock;
> > > -
> > > -             map__put(new);
> > >       }
> > > -
> > > -     err = 0;
> > >  out_unlock:
> > >       up_read(maps__lock(parent));
> > >       return err;
> > > @@ -428,9 +429,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
> > >       p = maps__entries(maps)->rb_node;
> > >       while (p != NULL) {
> > >               m = rb_entry(p, struct map_rb_node, rb_node);
> > > -             if (ip < m->map->start)
> > > +             if (ip < map__start(m->map))
> > >                       p = p->rb_left;
> > > -             else if (ip >= m->map->end)
> > > +             else if (ip >= map__end(m->map))
> > >                       p = p->rb_right;
> > >               else
> > >                       goto out;
> > > diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> > > index f9fbf611f2bf..1a93dca50a4c 100644
> > > --- a/tools/perf/util/probe-event.c
> > > +++ b/tools/perf/util/probe-event.c
> > > @@ -134,15 +134,15 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
> > >       /* ref_reloc_sym is just a label. Need a special fix*/
> > >       reloc_sym = kernel_get_ref_reloc_sym(&map);
> > >       if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
> > > -             *addr = (!map->reloc || reloc) ? reloc_sym->addr :
> > > +             *addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
> > >                       reloc_sym->unrelocated_addr;
> > >       else {
> > >               sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
> > >               if (!sym)
> > >                       return -ENOENT;
> > > -             *addr = map->unmap_ip(map, sym->start) -
> > > -                     ((reloc) ? 0 : map->reloc) -
> > > -                     ((reladdr) ? map->start : 0);
> > > +             *addr = map__unmap_ip(map, sym->start) -
> > > +                     ((reloc) ? 0 : map__reloc(map)) -
> > > +                     ((reladdr) ? map__start(map) : 0);
> > >       }
> > >       return 0;
> > >  }
> > > @@ -164,8 +164,8 @@ static struct map *kernel_get_module_map(const char *module)
> > >
> > >       maps__for_each_entry(maps, pos) {
> > >               /* short_name is "[module]" */
> > > -             const char *short_name = pos->map->dso->short_name;
> > > -             u16 short_name_len =  pos->map->dso->short_name_len;
> > > +             const char *short_name = map__dso(pos->map)->short_name;
> > > +             u16 short_name_len =  map__dso(pos->map)->short_name_len;
> > >
> > >               if (strncmp(short_name + 1, module,
> > >                           short_name_len - 2) == 0 &&
> > > @@ -183,11 +183,11 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
> > >               struct map *map;
> > >
> > >               map = dso__new_map(target);
> > > -             if (map && map->dso) {
> > > -                     BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
> > > -                     nsinfo__put(map->dso->nsinfo);
> > > -                     map->dso->nsinfo = nsinfo__get(nsi);
> > > -                     pthread_mutex_unlock(&map->dso->lock);
> > > +             if (map && map__dso(map)) {
> > > +                     BUG_ON(pthread_mutex_lock(&map__dso(map)->lock) != 0);
> > > +                     nsinfo__put(map__dso(map)->nsinfo);
> > > +                     map__dso(map)->nsinfo = nsinfo__get(nsi);
> > > +                     pthread_mutex_unlock(&map__dso(map)->lock);
> > >               }
> > >               return map;
> > >       } else {
> > > @@ -253,7 +253,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
> > >
> > >       map = kernel_get_module_map(NULL);
> > >       if (map) {
> > > -             ret = address <= map->start || map->end < address;
> > > +             ret = address <= map__start(map) || map__end(map) < address;
> > >               if (ret)
> > >                       pr_warning("%s is out of .text, skip it.\n", symbol);
> > >               map__put(map);
> > > @@ -340,7 +340,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
> > >               snprintf(module_name, sizeof(module_name), "[%s]", module);
> > >               map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
> > >               if (map) {
> > > -                     dso = map->dso;
> > > +                     dso = map__dso(map);
> > >                       goto found;
> > >               }
> > >               pr_debug("Failed to find module %s.\n", module);
> > > @@ -348,7 +348,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
> > >       }
> > >
> > >       map = machine__kernel_map(host_machine);
> > > -     dso = map->dso;
> > > +     dso = map__dso(map);
> > >       if (!dso->has_build_id)
> > >               dso__read_running_kernel_build_id(dso, host_machine);
> > >
> > > @@ -396,7 +396,8 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
> > >                                          "Consider identifying the final function used at run time and set the probe directly on that.\n",
> > >                                          pp->function);
> > >               } else
> > > -                     address = map->unmap_ip(map, sym->start) - map->reloc;
> > > +                     address = map__unmap_ip(map, sym->start) -
> > > +                               map__reloc(map);
> > >               break;
> > >       }
> > >       if (!address) {
> > > @@ -862,8 +863,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
> > >                       free(tevs[i].point.symbol);
> > >               tevs[i].point.symbol = tmp;
> > >               tevs[i].point.offset = tevs[i].point.address -
> > > -                     (map->reloc ? reloc_sym->unrelocated_addr :
> > > -                                   reloc_sym->addr);
> > > +                     (map__reloc(map) ? reloc_sym->unrelocated_addr : reloc_sym->addr);
> > >       }
> > >       return skipped;
> > >  }
> > > @@ -2243,7 +2243,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
> > >               goto out;
> > >
> > >       pp->retprobe = tp->retprobe;
> > > -     pp->offset = addr - map->unmap_ip(map, sym->start);
> > > +     pp->offset = addr - map__unmap_ip(map, sym->start);
> > >       pp->function = strdup(sym->name);
> > >       ret = pp->function ? 0 : -ENOMEM;
> > >
> > > @@ -3117,7 +3117,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
> > >                       goto err_out;
> > >               }
> > >               /* Add one probe point */
> > > -             tp->address = map->unmap_ip(map, sym->start) + pp->offset;
> > > +             tp->address = map__unmap_ip(map, sym->start) + pp->offset;
> > >
> > >               /* Check the kprobe (not in module) is within .text  */
> > >               if (!pev->uprobes && !pev->target &&
> > > @@ -3759,13 +3759,13 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
> > >                              (target) ? : "kernel");
> > >               goto end;
> > >       }
> > > -     if (!dso__sorted_by_name(map->dso))
> > > -             dso__sort_by_name(map->dso);
> > > +     if (!dso__sorted_by_name(map__dso(map)))
> > > +             dso__sort_by_name(map__dso(map));
> > >
> > >       /* Show all (filtered) symbols */
> > >       setup_pager();
> > >
> > > -     for (nd = rb_first_cached(&map->dso->symbol_names); nd;
> > > +     for (nd = rb_first_cached(&map__dso(map)->symbol_names); nd;
> > >            nd = rb_next(nd)) {
> > >               struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
> > >
> > > diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
> > > index a5d945415bbc..1282fb9b45e1 100644
> > > --- a/tools/perf/util/scripting-engines/trace-event-perl.c
> > > +++ b/tools/perf/util/scripting-engines/trace-event-perl.c
> > > @@ -315,11 +315,12 @@ static SV *perl_process_callchain(struct perf_sample *sample,
> > >               if (node->ms.map) {
> > >                       struct map *map = node->ms.map;
> > >                       const char *dsoname = "[unknown]";
> > > -                     if (map && map->dso) {
> > > -                             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > > -                                     dsoname = map->dso->long_name;
> > > +                     if (map && map__dso(map)) {
> > > +                             if (symbol_conf.show_kernel_path &&
> > > +                                 map__dso(map)->long_name)
> > > +                                     dsoname = map__dso(map)->long_name;
> > >                               else
> > > -                                     dsoname = map->dso->name;
> > > +                                     dsoname = map__dso(map)->name;
> > >                       }
> > >                       if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
> > >                               hv_undef(elem);
> > > diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
> > > index 0290dc3a6258..559b2ac5cac3 100644
> > > --- a/tools/perf/util/scripting-engines/trace-event-python.c
> > > +++ b/tools/perf/util/scripting-engines/trace-event-python.c
> > > @@ -382,11 +382,11 @@ static const char *get_dsoname(struct map *map)
> > >  {
> > >       const char *dsoname = "[unknown]";
> > >
> > > -     if (map && map->dso) {
> > > -             if (symbol_conf.show_kernel_path && map->dso->long_name)
> > > -                     dsoname = map->dso->long_name;
> > > +     if (map && map__dso(map)) {
> > > +             if (symbol_conf.show_kernel_path && map__dso(map)->long_name)
> > > +                     dsoname = map__dso(map)->long_name;
> > >               else
> > > -                     dsoname = map->dso->name;
> > > +                     dsoname = map__dso(map)->name;
> > >       }
> > >
> > >       return dsoname;
> > > @@ -527,7 +527,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
> > >       if (al->addr < sym->end)
> > >               offset = al->addr - sym->start;
> > >       else
> > > -             offset = al->addr - al->map->start - sym->start;
> > > +             offset = al->addr - map__start(al->map) - sym->start;
> > >
> > >       return offset;
> > >  }
> > > @@ -741,7 +741,7 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
> > >  {
> > >       if (al->map) {
> > >               pydict_set_item_string_decref(dict, dso_field,
> > > -                     _PyUnicode_FromString(al->map->dso->name));
> > > +                     _PyUnicode_FromString(map__dso(al->map)->name));
> > >       }
> > >       if (al->sym) {
> > >               pydict_set_item_string_decref(dict, sym_field,
> > > diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
> > > index 25686d67ee6f..6d19bbcd30df 100644
> > > --- a/tools/perf/util/sort.c
> > > +++ b/tools/perf/util/sort.c
> > > @@ -173,8 +173,8 @@ struct sort_entry sort_comm = {
> > >
> > >  static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
> > >  {
> > > -     struct dso *dso_l = map_l ? map_l->dso : NULL;
> > > -     struct dso *dso_r = map_r ? map_r->dso : NULL;
> > > +     struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
> > > +     struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
> > >       const char *dso_name_l, *dso_name_r;
> > >
> > >       if (!dso_l || !dso_r)
> > > @@ -200,9 +200,9 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
> > >  static int _hist_entry__dso_snprintf(struct map *map, char *bf,
> > >                                    size_t size, unsigned int width)
> > >  {
> > > -     if (map && map->dso) {
> > > -             const char *dso_name = verbose > 0 ? map->dso->long_name :
> > > -                     map->dso->short_name;
> > > +     if (map && map__dso(map)) {
> > > +             const char *dso_name = verbose > 0 ? map__dso(map)->long_name :
> > > +                     map__dso(map)->short_name;
> > >               return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
> > >       }
> > >
> > > @@ -222,7 +222,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
> > >       if (type != HIST_FILTER__DSO)
> > >               return -1;
> > >
> > > -     return dso && (!he->ms.map || he->ms.map->dso != dso);
> > > +     return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
> > >  }
> > >
> > >  struct sort_entry sort_dso = {
> > > @@ -302,12 +302,12 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
> > >       size_t ret = 0;
> > >
> > >       if (verbose > 0) {
> > > -             char o = map ? dso__symtab_origin(map->dso) : '!';
> > > +             char o = map ? dso__symtab_origin(map__dso(map)) : '!';
> > >               u64 rip = ip;
> > >
> > > -             if (map && map->dso && map->dso->kernel
> > > -                 && map->dso->adjust_symbols)
> > > -                     rip = map->unmap_ip(map, ip);
> > > +             if (map && map__dso(map) && map__dso(map)->kernel
> > > +                 && map__dso(map)->adjust_symbols)
> > > +                     rip = map__unmap_ip(map, ip);
> > >
> > >               ret += repsep_snprintf(bf, size, "%-#*llx %c ",
> > >                                      BITS_PER_LONG / 4 + 2, rip, o);
> > > @@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
> > >               if (sym->type == STT_OBJECT) {
> > >                       ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
> > >                       ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
> > > -                                     ip - map->unmap_ip(map, sym->start));
> > > +                                     ip - map__unmap_ip(map, sym->start));
> > >               } else {
> > >                       ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
> > >                                              width - ret,
> > > @@ -517,7 +517,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
> > >       if (!map)
> > >               return no_srcfile;
> > >
> > > -     sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
> > > +     sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
> > >                        e->ms.sym, false, true, true, e->ip);
> > >       if (!strcmp(sf, SRCLINE_UNKNOWN))
> > >               return no_srcfile;
> > > @@ -838,7 +838,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
> > >               return -1;
> > >
> > >       return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
> > > -                    he->branch_info->from.ms.map->dso != dso);
> > > +             map__dso(he->branch_info->from.ms.map) != dso);
> > >  }
> > >
> > >  static int64_t
> > > @@ -870,7 +870,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
> > >               return -1;
> > >
> > >       return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
> > > -                    he->branch_info->to.ms.map->dso != dso);
> > > +             map__dso(he->branch_info->to.ms.map) != dso);
> > >  }
> > >
> > >  static int64_t
> > > @@ -1259,7 +1259,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
> > >       if (!l_map) return -1;
> > >       if (!r_map) return 1;
> > >
> > > -     rc = dso__cmp_id(l_map->dso, r_map->dso);
> > > +     rc = dso__cmp_id(map__dso(l_map), map__dso(r_map));
> > >       if (rc)
> > >               return rc;
> > >       /*
> > > @@ -1271,9 +1271,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
> > >        */
> > >
> > >       if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
> > > -         (!(l_map->flags & MAP_SHARED)) &&
> > > -         !l_map->dso->id.maj && !l_map->dso->id.min &&
> > > -         !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
> > > +         (!(map__flags(l_map) & MAP_SHARED)) &&
> > > +         !map__dso(l_map)->id.maj && !map__dso(l_map)->id.min &&
> > > +         !map__dso(l_map)->id.ino && !map__dso(l_map)->id.ino_generation) {
> > >               /* userspace anonymous */
> > >
> > >               if (left->thread->pid_ > right->thread->pid_) return -1;
> > > @@ -1307,10 +1307,10 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
> > >
> > >               /* print [s] for shared data mmaps */
> > >               if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
> > > -                  map && !(map->prot & PROT_EXEC) &&
> > > -                 (map->flags & MAP_SHARED) &&
> > > -                 (map->dso->id.maj || map->dso->id.min ||
> > > -                  map->dso->id.ino || map->dso->id.ino_generation))
> > > +                 map && !(map__prot(map) & PROT_EXEC) &&
> > > +                 (map__flags(map) & MAP_SHARED) &&
> > > +                 (map__dso(map)->id.maj || map__dso(map)->id.min ||
> > > +                  map__dso(map)->id.ino || map__dso(map)->id.ino_generation))
> > >                       level = 's';
> > >               else if (!map)
> > >                       level = 'X';
> > > @@ -1806,7 +1806,7 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
> > >  static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
> > >                                         size_t bf_size, unsigned int width)
> > >  {
> > > -     if (map && map->dso)
> > > +     if (map && map__dso(map))
> > >               return repsep_snprintf(bf, bf_size, "%*d", width,
> > >                                      map__size(map));
> > >
> > > diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> > > index 3ca9a0968345..056405d3d655 100644
> > > --- a/tools/perf/util/symbol-elf.c
> > > +++ b/tools/perf/util/symbol-elf.c
> > > @@ -970,7 +970,7 @@ void __weak arch__sym_update(struct symbol *s __maybe_unused,
> > >  static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >                                     GElf_Sym *sym, GElf_Shdr *shdr,
> > >                                     struct maps *kmaps, struct kmap *kmap,
> > > -                                   struct dso **curr_dsop, struct map **curr_mapp,
> > > +                                   struct dso **curr_dsop,
> > >                                     const char *section_name,
> > >                                     bool adjust_kernel_syms, bool kmodule, bool *remap_kernel)
> > >  {
> > > @@ -994,18 +994,18 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >               if (*remap_kernel && dso->kernel && !kmodule) {
> > >                       *remap_kernel = false;
> > >                       map->start = shdr->sh_addr + ref_reloc(kmap);
> > > -                     map->end = map->start + shdr->sh_size;
> > > +                     map->end = map__start(map) + shdr->sh_size;
> > >                       map->pgoff = shdr->sh_offset;
> > > -                     map->map_ip = map__map_ip;
> > > -                     map->unmap_ip = map__unmap_ip;
> > > +                     map->map_ip = map__dso_map_ip;
> > > +                     map->unmap_ip = map__dso_unmap_ip;
> > >                       /* Ensure maps are correctly ordered */
> > >                       if (kmaps) {
> > >                               int err;
> > > +                             struct map *updated = map__get(map);
> > >
> > > -                             map__get(map);
> > >                               maps__remove(kmaps, map);
> > > -                             err = maps__insert(kmaps, map);
> > > -                             map__put(map);
> > > +                             err = maps__insert(kmaps, updated);
> > > +                             map__put(updated);
> > >                               if (err)
> > >                                       return err;
> > >                       }
> > > @@ -1021,7 +1021,6 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >                       map->pgoff = shdr->sh_offset;
> > >               }
> > >
> > > -             *curr_mapp = map;
> > >               *curr_dsop = dso;
> > >               return 0;
> > >       }
> > > @@ -1036,7 +1035,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >               u64 start = sym->st_value;
> > >
> > >               if (kmodule)
> > > -                     start += map->start + shdr->sh_offset;
> > > +                     start += map__start(map) + shdr->sh_offset;
> > >
> > >               curr_dso = dso__new(dso_name);
> > >               if (curr_dso == NULL)
> > > @@ -1054,10 +1053,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >
> > >               if (adjust_kernel_syms) {
> > >                       curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
> > > -                     curr_map->end    = curr_map->start + shdr->sh_size;
> > > -                     curr_map->pgoff  = shdr->sh_offset;
> > > +                     curr_map->end   = map__start(curr_map) + shdr->sh_size;
> > > +                     curr_map->pgoff = shdr->sh_offset;
> > >               } else {
> > > -                     curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> > > +                     curr_map->map_ip = map__identity_ip;
> > > +                     curr_map->unmap_ip = map__identity_ip;
> > >               }
> > >               curr_dso->symtab_type = dso->symtab_type;
> > >               if (maps__insert(kmaps, curr_map))
> > > @@ -1068,13 +1068,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> > >                * *curr_map->dso.
> > >                */
> > >               dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
> > > -             /* kmaps already got it */
> > > -             map__put(curr_map);
> > >               dso__set_loaded(curr_dso);
> > > -             *curr_mapp = curr_map;
> > >               *curr_dsop = curr_dso;
> > > +             map__put(curr_map);
> > >       } else
> > > -             *curr_dsop = curr_map->dso;
> > > +             *curr_dsop = map__dso(curr_map);
> > >
> > >       return 0;
> > >  }
> > > @@ -1085,7 +1083,6 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> > >  {
> > >       struct kmap *kmap = dso->kernel ? map__kmap(map) : NULL;
> > >       struct maps *kmaps = kmap ? map__kmaps(map) : NULL;
> > > -     struct map *curr_map = map;
> > >       struct dso *curr_dso = dso;
> > >       Elf_Data *symstrs, *secstrs, *secstrs_run, *secstrs_sym;
> > >       uint32_t nr_syms;
> > > @@ -1175,7 +1172,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> > >        * attempted to prelink vdso to its virtual address.
> > >        */
> > >       if (dso__is_vdso(dso))
> > > -             map->reloc = map->start - dso->text_offset;
> > > +             map->reloc = map__start(map) - dso->text_offset;
> > >
> > >       dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
> > >       /*
> > > @@ -1262,8 +1259,10 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
> > >                       --sym.st_value;
> > >
> > >               if (dso->kernel) {
> > > -                     if (dso__process_kernel_symbol(dso, map, &sym, &shdr, kmaps, kmap, &curr_dso, &curr_map,
> > > -                                                    section_name, adjust_kernel_syms, kmodule, &remap_kernel))
> > > +                     if (dso__process_kernel_symbol(dso, map, &sym, &shdr,
> > > +                                                    kmaps, kmap, &curr_dso,
> > > +                                                    section_name, adjust_kernel_syms,
> > > +                                                    kmodule, &remap_kernel))
> > >                               goto out_elf_end;
> > >               } else if ((used_opd && runtime_ss->adjust_symbols) ||
> > >                          (!used_opd && syms_ss->adjust_symbols)) {
> > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > index 9b51e669a722..6289b3028b91 100644
> > > --- a/tools/perf/util/symbol.c
> > > +++ b/tools/perf/util/symbol.c
> > > @@ -252,8 +252,8 @@ void maps__fixup_end(struct maps *maps)
> > >       down_write(maps__lock(maps));
> > >
> > >       maps__for_each_entry(maps, curr) {
> > > -             if (prev != NULL && !prev->map->end)
> > > -                     prev->map->end = curr->map->start;
> > > +             if (prev != NULL && !map__end(prev->map))
> > > +                     prev->map->end = map__start(curr->map);
> > >
> > >               prev = curr;
> > >       }
> > > @@ -262,7 +262,7 @@ void maps__fixup_end(struct maps *maps)
> > >        * We still haven't the actual symbols, so guess the
> > >        * last map final address.
> > >        */
> > > -     if (curr && !curr->map->end)
> > > +     if (curr && !map__end(curr->map))
> > >               curr->map->end = ~0ULL;
> > >
> > >       up_write(maps__lock(maps));
> > > @@ -778,12 +778,12 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
> > >                       continue;
> > >               }
> > >
> > > -             pos->start -= curr_map->start - curr_map->pgoff;
> > > -             if (pos->end > curr_map->end)
> > > -                     pos->end = curr_map->end;
> > > +             pos->start -= map__start(curr_map) - map__pgoff(curr_map);
> > > +             if (pos->end > map__end(curr_map))
> > > +                     pos->end = map__end(curr_map);
> > >               if (pos->end)
> > > -                     pos->end -= curr_map->start - curr_map->pgoff;
> > > -             symbols__insert(&curr_map->dso->symbols, pos);
> > > +                     pos->end -= map__start(curr_map) - map__pgoff(curr_map);
> > > +             symbols__insert(&map__dso(curr_map)->symbols, pos);
> > >               ++count;
> > >       }
> > >
> > > @@ -830,7 +830,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >
> > >                       *module++ = '\0';
> > >
> > > -                     if (strcmp(curr_map->dso->short_name, module)) {
> > > +                     if (strcmp(map__dso(curr_map)->short_name, module)) {
> > >                               if (curr_map != initial_map &&
> > >                                   dso->kernel == DSO_SPACE__KERNEL_GUEST &&
> > >                                   machine__is_default_guest(machine)) {
> > > @@ -841,7 +841,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >                                        * symbols are in its kmap. Mark it as
> > >                                        * loaded.
> > >                                        */
> > > -                                     dso__set_loaded(curr_map->dso);
> > > +                                     dso__set_loaded(map__dso(curr_map));
> > >                               }
> > >
> > >                               curr_map = maps__find_by_name(kmaps, module);
> > > @@ -854,7 +854,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >                                       goto discard_symbol;
> > >                               }
> > >
> > > -                             if (curr_map->dso->loaded &&
> > > +                             if (map__dso(curr_map)->loaded &&
> > >                                   !machine__is_default_guest(machine))
> > >                                       goto discard_symbol;
> > >                       }
> > > @@ -862,8 +862,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >                        * So that we look just like we get from .ko files,
> > >                        * i.e. not prelinked, relative to initial_map->start.
> > >                        */
> > > -                     pos->start = curr_map->map_ip(curr_map, pos->start);
> > > -                     pos->end   = curr_map->map_ip(curr_map, pos->end);
> > > +                     pos->start = map__map_ip(curr_map, pos->start);
> > > +                     pos->end   = map__map_ip(curr_map, pos->end);
> > >               } else if (x86_64 && is_entry_trampoline(pos->name)) {
> > >                       /*
> > >                        * These symbols are not needed anymore since the
> > > @@ -910,7 +910,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >                               return -1;
> > >                       }
> > >
> > > -                     curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> > > +                     curr_map->map_ip = map__identity_ip;
> > > +                     curr_map->unmap_ip = map__identity_ip;
> > >                       if (maps__insert(kmaps, curr_map)) {
> > >                               dso__put(ndso);
> > >                               return -1;
> > > @@ -924,7 +925,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >  add_symbol:
> > >               if (curr_map != initial_map) {
> > >                       rb_erase_cached(&pos->rb_node, root);
> > > -                     symbols__insert(&curr_map->dso->symbols, pos);
> > > +                     symbols__insert(&map__dso(curr_map)->symbols, pos);
> > >                       ++moved;
> > >               } else
> > >                       ++count;
> > > @@ -938,7 +939,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> > >       if (curr_map != initial_map &&
> > >           dso->kernel == DSO_SPACE__KERNEL_GUEST &&
> > >           machine__is_default_guest(maps__machine(kmaps))) {
> > > -             dso__set_loaded(curr_map->dso);
> > > +             dso__set_loaded(map__dso(curr_map));
> > >       }
> > >
> > >       return count + moved;
> > > @@ -1118,8 +1119,8 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
> > >               }
> > >
> > >               /* Module must be in memory at the same address */
> > > -             mi = find_module(old_map->dso->short_name, &modules);
> > > -             if (!mi || mi->start != old_map->start) {
> > > +             mi = find_module(map__dso(old_map)->short_name, &modules);
> > > +             if (!mi || mi->start != map__start(old_map)) {
> > >                       err = -EINVAL;
> > >                       goto out;
> > >               }
> > > @@ -1214,7 +1215,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
> > >               return -ENOMEM;
> > >       }
> > >
> > > -     list_node->map->end = list_node->map->start + len;
> > > +     list_node->map->end = map__start(list_node->map) + len;
> > >       list_node->map->pgoff = pgoff;
> > >
> > >       list_add(&list_node->node, &md->maps);
> > > @@ -1236,21 +1237,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> > >               struct map *old_map = rb_node->map;
> > >
> > >               /* no overload with this one */
> > > -             if (new_map->end < old_map->start ||
> > > -                 new_map->start >= old_map->end)
> > > +             if (map__end(new_map) < map__start(old_map) ||
> > > +                 map__start(new_map) >= map__end(old_map))
> > >                       continue;
> > >
> > > -             if (new_map->start < old_map->start) {
> > > +             if (map__start(new_map) < map__start(old_map)) {
> > >                       /*
> > >                        * |new......
> > >                        *       |old....
> > >                        */
> > > -                     if (new_map->end < old_map->end) {
> > > +                     if (map__end(new_map) < map__end(old_map)) {
> > >                               /*
> > >                                * |new......|     -> |new..|
> > >                                *       |old....| ->       |old....|
> > >                                */
> > > -                             new_map->end = old_map->start;
> > > +                             new_map->end = map__start(old_map);
> > >                       } else {
> > >                               /*
> > >                                * |new.............| -> |new..|       |new..|
> > > @@ -1271,17 +1272,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> > >                                       goto out;
> > >                               }
> > >
> > > -                             m->map->end = old_map->start;
> > > +                             m->map->end = map__start(old_map);
> > >                               list_add_tail(&m->node, &merged);
> > > -                             new_map->pgoff += old_map->end - new_map->start;
> > > -                             new_map->start = old_map->end;
> > > +                             new_map->pgoff +=
> > > +                                     map__end(old_map) - map__start(new_map);
> > > +                             new_map->start = map__end(old_map);
> > >                       }
> > >               } else {
> > >                       /*
> > >                        *      |new......
> > >                        * |old....
> > >                        */
> > > -                     if (new_map->end < old_map->end) {
> > > +                     if (map__end(new_map) < map__end(old_map)) {
> > >                               /*
> > >                                *      |new..|   -> x
> > >                                * |old.........| -> |old.........|
> > > @@ -1294,8 +1296,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> > >                                *      |new......| ->         |new...|
> > >                                * |old....|        -> |old....|
> > >                                */
> > > -                             new_map->pgoff += old_map->end - new_map->start;
> > > -                             new_map->start = old_map->end;
> > > +                             new_map->pgoff +=
> > > +                                     map__end(old_map) - map__start(new_map);
> > > +                             new_map->start = map__end(old_map);
> > >                       }
> > >               }
> > >       }
> > > @@ -1361,7 +1364,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> > >       }
> > >
> > >       /* Read new maps into temporary lists */
> > > -     err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
> > > +     err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
> > >                             &is_64_bit);
> > >       if (err)
> > >               goto out_err;
> > > @@ -1391,7 +1394,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> > >               struct map_list_node *new_node;
> > >
> > >               list_for_each_entry(new_node, &md.maps, node) {
> > > -                     if (stext >= new_node->map->start && stext < new_node->map->end) {
> > > +                     if (stext >= map__start(new_node->map) &&
> > > +                         stext < map__end(new_node->map)) {
> > >                               replacement_map = new_node->map;
> > >                               break;
> > >                       }
> > > @@ -1408,16 +1412,18 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> > >               new_node = list_entry(md.maps.next, struct map_list_node, node);
> > >               list_del_init(&new_node->node);
> > >               if (new_node->map == replacement_map) {
> > > -                     map->start      = new_node->map->start;
> > > -                     map->end        = new_node->map->end;
> > > -                     map->pgoff      = new_node->map->pgoff;
> > > -                     map->map_ip     = new_node->map->map_ip;
> > > -                     map->unmap_ip   = new_node->map->unmap_ip;
> > > +                     struct  map *updated;
> > > +
> > > +                     map->start = map__start(new_node->map);
> > > +                     map->end   = map__end(new_node->map);
> > > +                     map->pgoff = map__pgoff(new_node->map);
> > > +                     map->map_ip = new_node->map->map_ip;
> > > +                     map->unmap_ip = new_node->map->unmap_ip;
> > >                       /* Ensure maps are correctly ordered */
> > > -                     map__get(map);
> > > +                     updated = map__get(map);
> > >                       maps__remove(kmaps, map);
> > > -                     err = maps__insert(kmaps, map);
> > > -                     map__put(map);
> > > +                     err = maps__insert(kmaps, updated);
> > > +                     map__put(updated);
> > >                       map__put(new_node->map);
> > >                       if (err)
> > >                               goto out_err;
> > > @@ -1460,7 +1466,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> > >
> > >       close(fd);
> > >
> > > -     if (map->prot & PROT_EXEC)
> > > +     if (map__prot(map) & PROT_EXEC)
> > >               pr_debug("Using %s for kernel object code\n", kcore_filename);
> > >       else
> > >               pr_debug("Using %s for kernel data\n", kcore_filename);
> > > @@ -1995,13 +2001,13 @@ int dso__load(struct dso *dso, struct map *map)
> > >  static int map__strcmp(const void *a, const void *b)
> > >  {
> > >       const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
> > > -     return strcmp(ma->dso->short_name, mb->dso->short_name);
> > > +     return strcmp(map__dso(ma)->short_name, map__dso(mb)->short_name);
> > >  }
> > >
> > >  static int map__strcmp_name(const void *name, const void *b)
> > >  {
> > >       const struct map *map = *(const struct map **)b;
> > > -     return strcmp(name, map->dso->short_name);
> > > +     return strcmp(name, map__dso(map)->short_name);
> > >  }
> > >
> > >  void __maps__sort_by_name(struct maps *maps)
> > > @@ -2052,7 +2058,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
> > >       down_read(maps__lock(maps));
> > >
> > >       if (maps->last_search_by_name &&
> > > -         strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
> > > +         strcmp(map__dso(maps->last_search_by_name)->short_name, name) == 0) {
> > >               map = maps->last_search_by_name;
> > >               goto out_unlock;
> > >       }
> > > @@ -2068,7 +2074,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
> > >       /* Fallback to traversing the rbtree... */
> > >       maps__for_each_entry(maps, rb_node) {
> > >               map = rb_node->map;
> > > -             if (strcmp(map->dso->short_name, name) == 0) {
> > > +             if (strcmp(map__dso(map)->short_name, name) == 0) {
> > >                       maps->last_search_by_name = map;
> > >                       goto out_unlock;
> > >               }
> > > diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
> > > index 2664fb65e47a..d9e5ad040b6a 100644
> > > --- a/tools/perf/util/symbol_fprintf.c
> > > +++ b/tools/perf/util/symbol_fprintf.c
> > > @@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
> > >                       if (al->addr < sym->end)
> > >                               offset = al->addr - sym->start;
> > >                       else
> > > -                             offset = al->addr - al->map->start - sym->start;
> > > +                             offset = al->addr - map__start(al->map) - sym->start;
> > >                       length += fprintf(fp, "+0x%lx", offset);
> > >               }
> > >               return length;
> > > diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> > > index ed2d55d224aa..437fd57c2084 100644
> > > --- a/tools/perf/util/synthetic-events.c
> > > +++ b/tools/perf/util/synthetic-events.c
> > > @@ -668,33 +668,33 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
> > >                       continue;
> > >
> > >               if (symbol_conf.buildid_mmap2) {
> > > -                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> > > +                     size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
> > >                       event->mmap2.header.type = PERF_RECORD_MMAP2;
> > >                       event->mmap2.header.size = (sizeof(event->mmap2) -
> > >                                               (sizeof(event->mmap2.filename) - size));
> > >                       memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
> > >                       event->mmap2.header.size += machine->id_hdr_size;
> > > -                     event->mmap2.start = map->start;
> > > -                     event->mmap2.len   = map->end - map->start;
> > > +                     event->mmap2.start = map__start(map);
> > > +                     event->mmap2.len   = map__end(map) - map__start(map);
> > >                       event->mmap2.pid   = machine->pid;
> > >
> > > -                     memcpy(event->mmap2.filename, map->dso->long_name,
> > > -                            map->dso->long_name_len + 1);
> > > +                     memcpy(event->mmap2.filename, map__dso(map)->long_name,
> > > +                            map__dso(map)->long_name_len + 1);
> > >
> > >                       perf_record_mmap2__read_build_id(&event->mmap2, false);
> > >               } else {
> > > -                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> > > +                     size = PERF_ALIGN(map__dso(map)->long_name_len + 1, sizeof(u64));
> > >                       event->mmap.header.type = PERF_RECORD_MMAP;
> > >                       event->mmap.header.size = (sizeof(event->mmap) -
> > >                                               (sizeof(event->mmap.filename) - size));
> > >                       memset(event->mmap.filename + size, 0, machine->id_hdr_size);
> > >                       event->mmap.header.size += machine->id_hdr_size;
> > > -                     event->mmap.start = map->start;
> > > -                     event->mmap.len   = map->end - map->start;
> > > +                     event->mmap.start = map__start(map);
> > > +                     event->mmap.len   = map__end(map) - map__start(map);
> > >                       event->mmap.pid   = machine->pid;
> > >
> > > -                     memcpy(event->mmap.filename, map->dso->long_name,
> > > -                            map->dso->long_name_len + 1);
> > > +                     memcpy(event->mmap.filename, map__dso(map)->long_name,
> > > +                            map__dso(map)->long_name_len + 1);
> > >               }
> > >
> > >               if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> > > @@ -1112,8 +1112,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
> > >               event->mmap2.header.size = (sizeof(event->mmap2) -
> > >                               (sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
> > >               event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
> > > -             event->mmap2.start = map->start;
> > > -             event->mmap2.len   = map->end - event->mmap.start;
> > > +             event->mmap2.start = map__start(map);
> > > +             event->mmap2.len   = map__end(map) - event->mmap.start;
> > >               event->mmap2.pid   = machine->pid;
> > >
> > >               perf_record_mmap2__read_build_id(&event->mmap2, true);
> > > @@ -1125,8 +1125,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
> > >               event->mmap.header.size = (sizeof(event->mmap) -
> > >                               (sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
> > >               event->mmap.pgoff = kmap->ref_reloc_sym->addr;
> > > -             event->mmap.start = map->start;
> > > -             event->mmap.len   = map->end - event->mmap.start;
> > > +             event->mmap.start = map__start(map);
> > > +             event->mmap.len   = map__end(map) - event->mmap.start;
> > >               event->mmap.pid   = machine->pid;
> > >       }
> > >
> > > diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> > > index c2256777b813..6fbcc115cc6d 100644
> > > --- a/tools/perf/util/thread.c
> > > +++ b/tools/perf/util/thread.c
> > > @@ -434,23 +434,23 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
> > >  int thread__memcpy(struct thread *thread, struct machine *machine,
> > >                  void *buf, u64 ip, int len, bool *is64bit)
> > >  {
> > > -       u8 cpumode = PERF_RECORD_MISC_USER;
> > > -       struct addr_location al;
> > > -       long offset;
> > > +     u8 cpumode = PERF_RECORD_MISC_USER;
> > > +     struct addr_location al;
> > > +     long offset;
> > >
> > > -       if (machine__kernel_ip(machine, ip))
> > > -               cpumode = PERF_RECORD_MISC_KERNEL;
> > > +     if (machine__kernel_ip(machine, ip))
> > > +             cpumode = PERF_RECORD_MISC_KERNEL;
> > >
> > > -       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
> > > -        al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
> > > -        map__load(al.map) < 0)
> > > -               return -1;
> > > +     if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
> > > +             map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR ||
> > > +             map__load(al.map) < 0)
> > > +             return -1;
> > >
> > > -       offset = al.map->map_ip(al.map, ip);
> > > -       if (is64bit)
> > > -               *is64bit = al.map->dso->is_64_bit;
> > > +     offset = map__map_ip(al.map, ip);
> > > +     if (is64bit)
> > > +             *is64bit = map__dso(al.map)->is_64_bit;
> > >
> > > -       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
> > > +     return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
> > >  }
> > >
> > >  void thread__free_stitch_list(struct thread *thread)
> > > diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
> > > index 7e6c59811292..841ac84a93ab 100644
> > > --- a/tools/perf/util/unwind-libunwind-local.c
> > > +++ b/tools/perf/util/unwind-libunwind-local.c
> > > @@ -381,20 +381,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
> > >       int ret = -EINVAL;
> > >
> > >       map = find_map(ip, ui);
> > > -     if (!map || !map->dso)
> > > +     if (!map || !map__dso(map))
> > >               return -EINVAL;
> > >
> > > -     pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
> > > +     pr_debug("unwind: %s dso %s\n", __func__, map__dso(map)->name);
> > >
> > >       /* Check the .eh_frame section for unwinding info */
> > > -     if (!read_unwind_spec_eh_frame(map->dso, ui->machine,
> > > +     if (!read_unwind_spec_eh_frame(map__dso(map), ui->machine,
> > >                                      &table_data, &segbase, &fde_count)) {
> > >               memset(&di, 0, sizeof(di));
> > >               di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
> > > -             di.start_ip = map->start;
> > > -             di.end_ip   = map->end;
> > > -             di.u.rti.segbase    = map->start + segbase - map->pgoff;
> > > -             di.u.rti.table_data = map->start + table_data - map->pgoff;
> > > +             di.start_ip = map__start(map);
> > > +             di.end_ip   = map__end(map);
> > > +             di.u.rti.segbase    = map__start(map) + segbase - map__pgoff(map);
> > > +             di.u.rti.table_data = map__start(map) + table_data - map__pgoff(map);
> > >               di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
> > >                                     / sizeof(unw_word_t);
> > >               ret = dwarf_search_unwind_table(as, ip, &di, pi,
> > > @@ -404,20 +404,20 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
> > >  #ifndef NO_LIBUNWIND_DEBUG_FRAME
> > >       /* Check the .debug_frame section for unwinding info */
> > >       if (ret < 0 &&
> > > -         !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
> > > -             int fd = dso__data_get_fd(map->dso, ui->machine);
> > > -             int is_exec = elf_is_exec(fd, map->dso->name);
> > > -             unw_word_t base = is_exec ? 0 : map->start;
> > > +         !read_unwind_spec_debug_frame(map__dso(map), ui->machine, &segbase)) {
> > > +             int fd = dso__data_get_fd(map__dso(map), ui->machine);
> > > +             int is_exec = elf_is_exec(fd, map__dso(map)->name);
> > > +             unw_word_t base = is_exec ? 0 : map__start(map);
> > >               const char *symfile;
> > >
> > >               if (fd >= 0)
> > > -                     dso__data_put_fd(map->dso);
> > > +                     dso__data_put_fd(map__dso(map));
> > >
> > > -             symfile = map->dso->symsrc_filename ?: map->dso->name;
> > > +             symfile = map__dso(map)->symsrc_filename ?: map__dso(map)->name;
> > >
> > >               memset(&di, 0, sizeof(di));
> > >               if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
> > > -                                        map->start, map->end))
> > > +                                        map__start(map), map__end(map)))
> > >                       return dwarf_search_unwind_table(as, ip, &di, pi,
> > >                                                        need_unwind_info, arg);
> > >       }
> > > @@ -473,10 +473,10 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
> > >               return -1;
> > >       }
> > >
> > > -     if (!map->dso)
> > > +     if (!map__dso(map))
> > >               return -1;
> > >
> > > -     size = dso__data_read_addr(map->dso, map, ui->machine,
> > > +     size = dso__data_read_addr(map__dso(map), map, ui->machine,
> > >                                  addr, (u8 *) data, sizeof(*data));
> > >
> > >       return !(size == sizeof(*data));
> > > @@ -583,7 +583,7 @@ static int entry(u64 ip, struct thread *thread,
> > >       pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
> > >                al.sym ? al.sym->name : "''",
> > >                ip,
> > > -              al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
> > > +              al.map ? map__map_ip(al.map, ip) : (u64) 0);
> > >
> > >       return cb(&e, arg);
> > >  }
> > > diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
> > > index 7b797ffadd19..cece1ee89031 100644
> > > --- a/tools/perf/util/unwind-libunwind.c
> > > +++ b/tools/perf/util/unwind-libunwind.c
> > > @@ -30,7 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
> > >
> > >       if (maps__addr_space(maps)) {
> > >               pr_debug("unwind: thread map already set, dso=%s\n",
> > > -                      map->dso->name);
> > > +                      map__dso(map)->name);
> > >               if (initialized)
> > >                       *initialized = true;
> > >               return 0;
> > > @@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
> > >       if (!machine->env || !machine->env->arch)
> > >               goto out_register;
> > >
> > > -     dso_type = dso__type(map->dso, machine);
> > > +     dso_type = dso__type(map__dso(map), machine);
> > >       if (dso_type == DSO__TYPE_UNKNOWN)
> > >               return 0;
> > >
> > > diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> > > index 835c39efb80d..ec777ee11493 100644
> > > --- a/tools/perf/util/vdso.c
> > > +++ b/tools/perf/util/vdso.c
> > > @@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
> > >       struct map_rb_node *rb_node;
> > >
> > >       maps__for_each_entry(thread->maps, rb_node) {
> > > -             struct dso *dso = rb_node->map->dso;
> > > +             struct dso *dso = map__dso(rb_node->map);
> > >
> > >               if (!dso || dso->long_name[0] != '/')
> > >                       continue;
> > > --
> > > 2.35.1.265.g69c8d7142f-goog
> >
> > --
> >
> > - Arnaldo

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 19:21       ` Arnaldo Carvalho de Melo
@ 2022-02-11 19:35         ` Ian Rogers
  2022-02-12 15:48           ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-11 19:35 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 11:21 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Fri, Feb 11, 2022 at 09:43:19AM -0800, Ian Rogers escreveu:
> > On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
> > <acme@kernel.org> wrote:
> > >
> > > Em Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers escreveu:
> > > > Make the pthread mutex on dso use the error check type. This allows
> > > > deadlock checking via the return value. Assert that the value returned
> > > > by mutex lock is always 0.
> > >
> > > I think this is too blunt/pervasive source-code-wise. Perhaps we should
> > > wrap this like it's done with rwsem in tools/perf/util/rwsem.h, to get
> > > away from pthreads primitives and make the source code look more like
> > > kernel code. Then, taking advantage of that (so far only ideological)
> > > extra level of indirection, we could add this BUG_ON if we build with
> > > "DEBUG=1" or something, wdyt?
> >
>
> > My concern with semaphores is that they are a concurrency primitive
>
> I'm not suggesting we switch over to semaphores, just to use the same
> technique of wrapping pthread_mutex_t with some other API that then
> allows us to add these BUG_ON() calls without polluting the source code
> in many places.

Sounds simple enough and would ensure consistency too. I can add it to
the front of this set of changes. A different approach would be to
take what's here and then refactor and clean up in a follow-on patch
set. I'd prefer that, as the size of this set of changes is already
larger than I'd like - albeit that most of it is just introducing the
use of functions to access struct variables. Perhaps I should just drop
the BUG_ON and pthread changes here, we work to get this landed, and
then a separate set of patches cleans up the pthread mutex code to have
better bug checking.
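
For reference, the sort of wrapper being discussed might look roughly
like the sketch below. This is only an illustration under assumptions:
a hypothetical tools/perf/util/mutex.h modelled on rwsem.h, with the
error-check type tied to however a DEBUG=1 build ends up being flagged.
The real API, header name and build wiring would be settled in that
follow-on series.

/* Sketch only: error-checking mutex wrapper in the spirit of rwsem.h. */
#include <pthread.h>
#include <linux/kernel.h>       /* BUG_ON() from tools/include */

struct mutex {
        pthread_mutex_t lock;
};

static inline void mutex_init(struct mutex *mtx)
{
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
#ifdef DEBUG    /* assumption: DEBUG=1 defines this */
        /* Error-check mutexes fail a recursive lock instead of hanging. */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
#endif
        pthread_mutex_init(&mtx->lock, &attr);
        pthread_mutexattr_destroy(&attr);
}

static inline void mutex_lock(struct mutex *mtx)
{
        BUG_ON(pthread_mutex_lock(&mtx->lock) != 0);
}

static inline void mutex_unlock(struct mutex *mtx)
{
        BUG_ON(pthread_mutex_unlock(&mtx->lock) != 0);
}

That would keep the ERRORCHECK and BUG_ON() details in one place
instead of repeating them at every lock site.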

Thanks,
Ian

> - Arnaldo
>
> > that has more flexibility and power than a mutex. I like a mutex as it
> > is quite obvious what is going on and that is good from a tooling
> > point of view. A deadlock with two mutexes is easy to understand. On a
> > semaphore, were we using it like a condition variable? There's more to
> > figure out. I also like the idea of compiling the perf command with
> > Emscripten; we could then generate, say, perf annotate output in your
> > web browser. Emscripten has implementations of standard POSIX
> > libraries, including pthreads, but we may need two approaches in the
> > perf code if we want to compile with Emscripten and use semaphores
> > when targeting Linux.
> >
> > Where this change comes from is that I worried that extending the
> > locked regions to cover the race that'd been found would then expose
> > the kind of recursive deadlock that pthread mutexes all too willingly
> > allow. With this code we at least see the bug and don't just hang. I
> > don't think we need the change to the mutexes for this change, but we
> > do need to extend the regions to fix the data race.
> >
> > Let me know how you prefer it and I can roll it into a v4 version.
> >
> > Thanks,
> > Ian
> >
> > > - Arnaldo
> > >
> > > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > > ---
> > > >  tools/perf/util/dso.c    | 12 +++++++++---
> > > >  tools/perf/util/symbol.c |  2 +-
> > > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > > index 9cc8a1772b4b..6beccffeef7b 100644
> > > > --- a/tools/perf/util/dso.c
> > > > +++ b/tools/perf/util/dso.c
> > > > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> > > >       struct rb_root *root = &dso->data.cache;
> > > >       struct rb_node *next = rb_first(root);
> > > >
> > > > -     pthread_mutex_lock(&dso->lock);
> > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > >       while (next) {
> > > >               struct dso_cache *cache;
> > > >
> > > > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> > > >       struct dso_cache *cache;
> > > >       u64 offset = new->offset;
> > > >
> > > > -     pthread_mutex_lock(&dso->lock);
> > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > >       while (*p != NULL) {
> > > >               u64 end;
> > > >
> > > > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> > > >
> > > >       if (dso != NULL) {
> > > > +             pthread_mutexattr_t lock_attr;
> > > > +
> > > >               strcpy(dso->name, name);
> > > >               if (id)
> > > >                       dso->id = *id;
> > > > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > >               dso->root = NULL;
> > > >               INIT_LIST_HEAD(&dso->node);
> > > >               INIT_LIST_HEAD(&dso->data.open_entry);
> > > > -             pthread_mutex_init(&dso->lock, NULL);
> > > > +             pthread_mutexattr_init(&lock_attr);
> > > > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > > > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > > > +             pthread_mutexattr_destroy(&lock_attr);
> > > >               refcount_set(&dso->refcnt, 1);
> > > > +
> > > >       }
> > > >
> > > >       return dso;
> > > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > > index b2ed3140a1fa..43f47532696f 100644
> > > > --- a/tools/perf/util/symbol.c
> > > > +++ b/tools/perf/util/symbol.c
> > > > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> > > >       }
> > > >
> > > >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > > > -     pthread_mutex_lock(&dso->lock);
> > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > >
> > > >       /* check again under the dso->lock */
> > > >       if (dso__loaded(dso)) {
> > > > --
> > > > 2.35.1.265.g69c8d7142f-goog
> > >
> > > --
> > >
> > > - Arnaldo
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 17/22] perf map: Changes to reference counting
  2022-02-11 10:34 ` [PATCH v3 17/22] perf map: Changes to reference counting Ian Rogers
@ 2022-02-12  8:45   ` Masami Hiramatsu
  2022-02-12 20:48     ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Masami Hiramatsu @ 2022-02-12  8:45 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	"André Almeida",
	James Clark, John Garry, Riccardo Mancini, Yury Norov,
	Andy Shevchenko, Andrew Morton, Jin Yao, Adrian Hunter, Leo Yan,
	Andi Kleen, Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Steven Rostedt, Miaoqian Lin,
	Stephen Brennan, Kajol Jain, Alexey Bayduraev, German Gomez,
	linux-perf-users, linux-kernel, Eric Dumazet, Dmitry Vyukov,
	Hao Luo, eranian

Hi Ian,

On Fri, 11 Feb 2022 02:34:10 -0800
Ian Rogers <irogers@google.com> wrote:

> When a pointer to a map exists do a get; when that pointer is
> overwritten or freed, put the map. This avoids issues with gets and
> puts being used inconsistently, causing use-after-puts, etc. Reference
> count checking and address sanitizer were used to identify issues.

OK, and please add comments in the code describing what should actually
be done, so that others can understand it correctly, since this changes
the map object handling model.

Previously:

  map__get(map);
  map_operations(map);
  map__put(map);

Now we have to use the object returned from the get() operation.
This is more like the memdup()/free() pattern:

  new = map__get(map);
  map_operations(new);
  map__put(new);

To update the object embedded in another object (e.g. machine__update_kernel_mmap()),
the original one must be put because it holds the old copy.

Previous:

  map__get(parent_obj->map);
  update_operation(parent_obj->map);
  map__put(parent_obj->map);

Is now:

  orig = parent_obj->map;
  new = map__get(orig);
  update_operation(new);
  parent_obj->map = new;
  map__put(orig);

I think this change also should be documented with some concrete example
patterns so that someone can program it correctly. :-)

(This is the reason why I asked you to introduce an object token instead
 of modifying the object pointer itself.)
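
As a concrete illustration of why callers have to hold on to the
pointer returned by the get, a checking get/put could be sketched
roughly as below. This is only a sketch with made-up names (real_map,
the malloc based wrapper, error handling elided); it is not the
checking layer the series actually adds, just the shape that makes
"use the returned pointer" necessary.

/*
 * Sketch only. With checking enabled, a "map" handle is a small
 * wrapper, and every get allocates a new one: a missing put shows up
 * as a leak of the wrapper, and a second put on the same handle shows
 * up as a double free.
 */
#include <stdlib.h>
#include <linux/refcount.h>        /* refcount_t from tools/include */

struct real_map {                  /* stand-in for the underlying object */
        refcount_t refcnt;
        /* start, end, dso, ... */
};

struct map {                       /* what callers actually pass around */
        struct real_map *orig;
};

static inline struct map *map__get(struct map *map)
{
        struct map *result;

        if (map == NULL)
                return NULL;
        result = malloc(sizeof(*result));
        result->orig = map->orig;
        refcount_inc(&result->orig->refcnt);
        return result;             /* callers must use this, not 'map' */
}

static inline void map__put(struct map *map)
{
        if (map == NULL)
                return;
        if (refcount_dec_and_test(&map->orig->refcnt))
                free(map->orig);   /* placeholder for the real destructor */
        free(map);
}

Under that shape, the orig/new pattern above falls out naturally: the
handle stored in the parent object is replaced by the one returned
from the get, and the old handle is put.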

Thank you,

> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/tests/hists_cumulate.c     | 14 ++++-
>  tools/perf/tests/hists_filter.c       | 14 ++++-
>  tools/perf/tests/hists_link.c         | 18 +++++-
>  tools/perf/tests/hists_output.c       | 12 +++-
>  tools/perf/tests/mmap-thread-lookup.c |  3 +-
>  tools/perf/util/callchain.c           |  9 +--
>  tools/perf/util/event.c               |  8 ++-
>  tools/perf/util/hist.c                | 10 ++--
>  tools/perf/util/machine.c             | 80 ++++++++++++++++-----------
>  9 files changed, 118 insertions(+), 50 deletions(-)
> 
> diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
> index 17f4fcd6bdce..28f5eb41eed9 100644
> --- a/tools/perf/tests/hists_cumulate.c
> +++ b/tools/perf/tests/hists_cumulate.c
> @@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
>  		}
>  
>  		fake_samples[i].thread = al.thread;
> +		map__put(fake_samples[i].map);
>  		fake_samples[i].map = al.map;
>  		fake_samples[i].sym = al.sym;
>  	}
> @@ -147,15 +148,23 @@ static void del_hist_entries(struct hists *hists)
>  	}
>  }
>  
> +static void put_fake_samples(void)
> +{
> +	size_t i;
> +
> +	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> +		map__put(fake_samples[i].map);
> +}
> +
>  typedef int (*test_fn_t)(struct evsel *, struct machine *);
>  
>  #define COMM(he)  (thread__comm_str(he->thread))
> -#define DSO(he)   (he->ms.map->dso->short_name)
> +#define DSO(he)   (map__dso(he->ms.map)->short_name)
>  #define SYM(he)   (he->ms.sym->name)
>  #define CPU(he)   (he->cpu)
>  #define PID(he)   (he->thread->tid)
>  #define DEPTH(he) (he->callchain->max_depth)
> -#define CDSO(cl)  (cl->ms.map->dso->short_name)
> +#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
>  #define CSYM(cl)  (cl->ms.sym->name)
>  
>  struct result {
> @@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
>  	/* tear down everything */
>  	evlist__delete(evlist);
>  	machines__exit(&machines);
> +	put_fake_samples();
>  
>  	return err;
>  }
> diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
> index 08cbeb9e39ae..bcd46244182a 100644
> --- a/tools/perf/tests/hists_filter.c
> +++ b/tools/perf/tests/hists_filter.c
> @@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
>  			}
>  
>  			fake_samples[i].thread = al.thread;
> +			map__put(fake_samples[i].map);
>  			fake_samples[i].map = al.map;
>  			fake_samples[i].sym = al.sym;
>  		}
> @@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
>  	return TEST_FAIL;
>  }
>  
> +static void put_fake_samples(void)
> +{
> +	size_t i;
> +
> +	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> +		map__put(fake_samples[i].map);
> +}
> +
>  static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
>  {
>  	int err = TEST_FAIL;
> @@ -194,7 +203,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
>  		hists__filter_by_thread(hists);
>  
>  		/* now applying dso filter for 'kernel' */
> -		hists->dso_filter = fake_samples[0].map->dso;
> +		hists->dso_filter = map__dso(fake_samples[0].map);
>  		hists__filter_by_dso(hists);
>  
>  		if (verbose > 2) {
> @@ -288,7 +297,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
>  
>  		/* now applying all filters at once. */
>  		hists->thread_filter = fake_samples[1].thread;
> -		hists->dso_filter = fake_samples[1].map->dso;
> +		hists->dso_filter = map__dso(fake_samples[1].map);
>  		hists__filter_by_thread(hists);
>  		hists__filter_by_dso(hists);
>  
> @@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
>  	evlist__delete(evlist);
>  	reset_output_field();
>  	machines__exit(&machines);
> +	put_fake_samples();
>  
>  	return err;
>  }
> diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
> index c575e13a850d..060e8731feff 100644
> --- a/tools/perf/tests/hists_link.c
> +++ b/tools/perf/tests/hists_link.c
> @@ -6,6 +6,7 @@
>  #include "evsel.h"
>  #include "evlist.h"
>  #include "machine.h"
> +#include "map.h"
>  #include "parse-events.h"
>  #include "hists_common.h"
>  #include "util/mmap.h"
> @@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
>  			}
>  
>  			fake_common_samples[k].thread = al.thread;
> +			map__put(fake_common_samples[k].map);
>  			fake_common_samples[k].map = al.map;
>  			fake_common_samples[k].sym = al.sym;
>  		}
> @@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
>  	return -1;
>  }
>  
> +static void put_fake_samples(void)
> +{
> +	size_t i, j;
> +
> +	for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
> +		map__put(fake_common_samples[i].map);
> +	for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
> +		for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
> +			map__put(fake_samples[i][j].map);
> +	}
> +}
> +
>  static int find_sample(struct sample *samples, size_t nr_samples,
>  		       struct thread *t, struct map *m, struct symbol *s)
>  {
>  	while (nr_samples--) {
> -		if (samples->thread == t && samples->map == m &&
> +		if (samples->thread == t &&
> +		    samples->map == m &&
>  		    samples->sym == s)
>  			return 1;
>  		samples++;
> @@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
>  	evlist__delete(evlist);
>  	reset_output_field();
>  	machines__exit(&machines);
> +	put_fake_samples();
>  
>  	return err;
>  }
> diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
> index 0bde4a768c15..4af6916491e5 100644
> --- a/tools/perf/tests/hists_output.c
> +++ b/tools/perf/tests/hists_output.c
> @@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
>  		}
>  
>  		fake_samples[i].thread = al.thread;
> +		map__put(fake_samples[i].map);
>  		fake_samples[i].map = al.map;
>  		fake_samples[i].sym = al.sym;
>  	}
> @@ -113,10 +114,18 @@ static void del_hist_entries(struct hists *hists)
>  	}
>  }
>  
> +static void put_fake_samples(void)
> +{
> +	size_t i;
> +
> +	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> +		map__put(fake_samples[i].map);
> +}
> +
>  typedef int (*test_fn_t)(struct evsel *, struct machine *);
>  
>  #define COMM(he)  (thread__comm_str(he->thread))
> -#define DSO(he)   (he->ms.map->dso->short_name)
> +#define DSO(he)   (map__dso(he->ms.map)->short_name)
>  #define SYM(he)   (he->ms.sym->name)
>  #define CPU(he)   (he->cpu)
>  #define PID(he)   (he->thread->tid)
> @@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
>  	/* tear down everything */
>  	evlist__delete(evlist);
>  	machines__exit(&machines);
> +	put_fake_samples();
>  
>  	return err;
>  }
> diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
> index a4301fc7b770..898eda55b7a8 100644
> --- a/tools/perf/tests/mmap-thread-lookup.c
> +++ b/tools/perf/tests/mmap-thread-lookup.c
> @@ -202,7 +202,8 @@ static int mmap_events(synth_cb synth)
>  			break;
>  		}
>  
> -		pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
> +		pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
> +		map__put(al.map);
>  	}
>  
>  	machine__delete_threads(machine);
> diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> index a8cfd31a3ff0..ae65b7bc9ab7 100644
> --- a/tools/perf/util/callchain.c
> +++ b/tools/perf/util/callchain.c
> @@ -583,7 +583,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
>  		}
>  		call->ip = cursor_node->ip;
>  		call->ms = cursor_node->ms;
> -		map__get(call->ms.map);
> +		call->ms.map = map__get(call->ms.map);
>  		call->srcline = cursor_node->srcline;
>  
>  		if (cursor_node->branch) {
> @@ -1061,7 +1061,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
>  	node->ip = ip;
>  	map__zput(node->ms.map);
>  	node->ms = *ms;
> -	map__get(node->ms.map);
> +	node->ms.map = map__get(node->ms.map);
>  	node->branch = branch;
>  	node->nr_loop_iter = nr_loop_iter;
>  	node->iter_cycles = iter_cycles;
> @@ -1109,7 +1109,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
>  	struct machine *machine = maps__machine(node->ms.maps);
>  
>  	al->maps = node->ms.maps;
> -	al->map = node->ms.map;
> +	map__put(al->map);
> +	al->map = map__get(node->ms.map);
>  	al->sym = node->ms.sym;
>  	al->srcline = node->srcline;
>  	al->addr = node->ip;
> @@ -1530,7 +1531,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
>  				goto out;
>  			*new = *chain;
>  			new->has_children = false;
> -			map__get(new->ms.map);
> +			new->ms.map = map__get(new->ms.map);
>  			list_add_tail(&new->list, &head);
>  		}
>  		parent = parent->parent;
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index 54a1d4df5f70..266318d5d006 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -484,13 +484,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
>  	if (machine) {
>  		struct addr_location al;
>  
> -		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> +		al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
>  		if (al.map && map__load(al.map) >= 0) {
>  			al.addr = map__map_ip(al.map, tp->addr);
>  			al.sym = map__find_symbol(al.map, al.addr);
>  			if (al.sym)
>  				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
>  		}
> +		map__put(al.map);
>  	}
>  	ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
>  	old = true;
> @@ -581,6 +582,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  	al->filtered = 0;
>  
>  	if (machine == NULL) {
> +		map__put(al->map);
>  		al->map = NULL;
>  		return NULL;
>  	}
> @@ -599,6 +601,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  		al->level = 'u';
>  	} else {
>  		al->level = 'H';
> +		map__put(al->map);
>  		al->map = NULL;
>  
>  		if ((cpumode == PERF_RECORD_MISC_GUEST_USER ||
> @@ -613,7 +616,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
>  		return NULL;
>  	}
>  
> -	al->map = maps__find(maps, al->addr);
> +	al->map = map__get(maps__find(maps, al->addr));
>  	if (al->map != NULL) {
>  		/*
>  		 * Kernel maps might be changed when loading symbols so loading
> @@ -768,6 +771,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
>   */
>  void addr_location__put(struct addr_location *al)
>  {
> +	map__zput(al->map);
>  	thread__zput(al->thread);
>  }
>  
> diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> index f19ac6eb4775..4dbb1dbf3679 100644
> --- a/tools/perf/util/hist.c
> +++ b/tools/perf/util/hist.c
> @@ -446,7 +446,7 @@ static int hist_entry__init(struct hist_entry *he,
>  			memset(&he->stat, 0, sizeof(he->stat));
>  	}
>  
> -	map__get(he->ms.map);
> +	he->ms.map = map__get(he->ms.map);
>  
>  	if (he->branch_info) {
>  		/*
> @@ -461,13 +461,13 @@ static int hist_entry__init(struct hist_entry *he,
>  		memcpy(he->branch_info, template->branch_info,
>  		       sizeof(*he->branch_info));
>  
> -		map__get(he->branch_info->from.ms.map);
> -		map__get(he->branch_info->to.ms.map);
> +		he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
> +		he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
>  	}
>  
>  	if (he->mem_info) {
> -		map__get(he->mem_info->iaddr.ms.map);
> -		map__get(he->mem_info->daddr.ms.map);
> +		he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
> +		he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
>  	}
>  
>  	if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 940fb2a50dfd..49e4891e92b7 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -783,33 +783,42 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  {
>  	struct symbol *sym;
>  	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
> +	bool put_map = false;
> +	int err = 0;
>  
>  	if (!map) {
>  		struct dso *dso = dso__new(event->ksymbol.name);
> -		int err;
>  
> -		if (dso) {
> -			dso->kernel = DSO_SPACE__KERNEL;
> -			map = map__new2(0, dso);
> -			dso__put(dso);
> +		if (!dso) {
> +			err = -ENOMEM;
> +			goto out;
>  		}
> -
> -		if (!dso || !map) {
> -			return -ENOMEM;
> +		dso->kernel = DSO_SPACE__KERNEL;
> +		map = map__new2(0, dso);
> +		dso__put(dso);
> +		if (!map) {
> +			err = -ENOMEM;
> +			goto out;
>  		}
> -
> +		/*
> +		 * The inserted map has a get on it, we need to put to release
> +		 * the reference count here, but do it after all accesses are
> +		 * done.
> +		 */
> +		put_map = true;
>  		if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
> -			map->dso->binary_type = DSO_BINARY_TYPE__OOL;
> -			map->dso->data.file_size = event->ksymbol.len;
> -			dso__set_loaded(map->dso);
> +			map__dso(map)->binary_type = DSO_BINARY_TYPE__OOL;
> +			map__dso(map)->data.file_size = event->ksymbol.len;
> +			dso__set_loaded(map__dso(map));
>  		}
>  
>  		map->start = event->ksymbol.addr;
> -		map->end = map->start + event->ksymbol.len;
> +		map->end = map__start(map) + event->ksymbol.len;
>  		err = maps__insert(machine__kernel_maps(machine), map);
> -		map__put(map);
> -		if (err)
> -			return err;
> +		if (err) {
> +			err = -ENOMEM;
> +			goto out;
> +		}
>  
>  		dso__set_loaded(dso);
>  
> @@ -819,13 +828,18 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  		}
>  	}
>  
> -	sym = symbol__new(map->map_ip(map, map->start),
> +	sym = symbol__new(map__map_ip(map, map__start(map)),
>  			  event->ksymbol.len,
>  			  0, 0, event->ksymbol.name);
> -	if (!sym)
> -		return -ENOMEM;
> -	dso__insert_symbol(map->dso, sym);
> -	return 0;
> +	if (!sym) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +	dso__insert_symbol(map__dso(map), sym);
> +out:
> +	if (put_map)
> +		map__put(map);
> +	return err;
>  }
>  
>  static int machine__process_ksymbol_unregister(struct machine *machine,
> @@ -925,14 +939,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  		goto out;
>  
>  	err = maps__insert(machine__kernel_maps(machine), map);
> -
> -	/* Put the map here because maps__insert already got it */
> -	map__put(map);
> -
>  	/* If maps__insert failed, return NULL. */
> -	if (err)
> +	if (err) {
> +		map__put(map);
>  		map = NULL;
> -
> +	}
>  out:
>  	/* put the dso here, corresponding to  machine__findnew_module_dso */
>  	dso__put(dso);
> @@ -1228,6 +1239,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  	/* In case of renewal the kernel map, destroy previous one */
>  	machine__destroy_kernel_maps(machine);
>  
> +	map__put(machine->vmlinux_map);
>  	machine->vmlinux_map = map__new2(0, kernel);
>  	if (machine->vmlinux_map == NULL)
>  		return -ENOMEM;
> @@ -1513,6 +1525,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
>  	map->end = start + size;
>  
>  	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> +	map__put(map);
>  	return 0;
>  }
>  
> @@ -1558,16 +1571,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
>  static int machine__update_kernel_mmap(struct machine *machine,
>  				     u64 start, u64 end)
>  {
> -	struct map *map = machine__kernel_map(machine);
> +	struct map *orig, *updated;
>  	int err;
>  
> -	map__get(map);
> -	maps__remove(machine__kernel_maps(machine), map);
> +	orig = machine->vmlinux_map;
> +	updated = map__get(orig);
>  
> +	machine->vmlinux_map = updated;
>  	machine__set_kernel_mmap(machine, start, end);
> +	maps__remove(machine__kernel_maps(machine), orig);
> +	err = maps__insert(machine__kernel_maps(machine), updated);
> +	map__put(orig);
>  
> -	err = maps__insert(machine__kernel_maps(machine), map);
> -	map__put(map);
>  	return err;
>  }
>  
> @@ -2246,6 +2261,7 @@ static int add_callchain_ip(struct thread *thread,
>  	err = callchain_cursor_append(cursor, ip, &ms,
>  				      branch, flags, nr_loop_iter,
>  				      iter_cycles, branch_from, srcline);
> +	map__put(al.map);
>  	return err;
>  }
>  
> -- 
> 2.35.1.265.g69c8d7142f-goog
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo
  2022-02-11 10:33 ` [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo Ian Rogers
  2022-02-11 17:14   ` Arnaldo Carvalho de Melo
@ 2022-02-12 11:30   ` Jiri Olsa
  1 sibling, 0 replies; 58+ messages in thread
From: Jiri Olsa @ 2022-02-12 11:30 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:33:57AM -0800, Ian Rogers wrote:
> There may be threads racing to update dso->nsinfo:
> https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
> Holding the dso->lock avoids use-after-free, memory leaks and other
> such bugs. Apply the fix from:
> https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
> for the missing nsinfo__put, now that the accesses are data-race
> free.
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/builtin-inject.c   | 4 ++++
>  tools/perf/util/dso.c         | 5 ++++-
>  tools/perf/util/map.c         | 3 +++
>  tools/perf/util/probe-event.c | 2 ++
>  tools/perf/util/symbol.c      | 2 +-
>  5 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
> index fbf43a454cba..bede332bf0e2 100644
> --- a/tools/perf/builtin-inject.c
> +++ b/tools/perf/builtin-inject.c
> @@ -363,8 +363,10 @@ static struct dso *findnew_dso(int pid, int tid, const char *filename,
>  	}
>  
>  	if (dso) {
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  		nsinfo__put(dso->nsinfo);
>  		dso->nsinfo = nsi;
> +		pthread_mutex_unlock(&dso->lock);
>  	} else
>  		nsinfo__put(nsi);
>  
> @@ -547,7 +549,9 @@ static int dso__read_build_id(struct dso *dso)
>  	if (dso->has_build_id)
>  		return 0;
>  
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	nsinfo__mountns_enter(dso->nsinfo, &nsc);
> +	pthread_mutex_unlock(&dso->lock);

so this separates nsinfo__mountns_enter and nsinfo__put; should we
also care about nsinfo__mountns_exit?
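
Roughly, the pattern this hunk leaves us with (just a sketch, names as
in the diff above):

  BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
  nsinfo__mountns_enter(dso->nsinfo, &nsc);  /* dso->nsinfo read under the lock */
  pthread_mutex_unlock(&dso->lock);

  if (filename__read_build_id(dso->long_name, &dso->bid) > 0)
          dso->has_build_id = true;

  nsinfo__mountns_exit(&nsc);                /* uses the local cookie, no lock held */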

jirka

>  	if (filename__read_build_id(dso->long_name, &dso->bid) > 0)
>  		dso->has_build_id = true;
>  	nsinfo__mountns_exit(&nsc);
> diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> index 6beccffeef7b..b2f570adba35 100644
> --- a/tools/perf/util/dso.c
> +++ b/tools/perf/util/dso.c
> @@ -548,8 +548,11 @@ static int open_dso(struct dso *dso, struct machine *machine)
>  	int fd;
>  	struct nscookie nsc;
>  
> -	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
> +	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE) {
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  		nsinfo__mountns_enter(dso->nsinfo, &nsc);
> +		pthread_mutex_unlock(&dso->lock);
> +	}
>  	fd = __open_dso(dso, machine);
>  	if (dso->binary_type != DSO_BINARY_TYPE__BUILD_ID_CACHE)
>  		nsinfo__mountns_exit(&nsc);
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 8af693d9678c..ae99b52502d5 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -192,7 +192,10 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
>  			if (!(prot & PROT_EXEC))
>  				dso__set_loaded(dso);
>  		}
> +		BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> +		nsinfo__put(dso->nsinfo);
>  		dso->nsinfo = nsi;
> +		pthread_mutex_unlock(&dso->lock);
>  
>  		if (build_id__is_defined(bid))
>  			dso__set_build_id(dso, bid);
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index a834918a0a0d..7444e689ece7 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -180,8 +180,10 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
>  
>  		map = dso__new_map(target);
>  		if (map && map->dso) {
> +			BUG_ON(pthread_mutex_lock(&map->dso->lock) != 0);
>  			nsinfo__put(map->dso->nsinfo);
>  			map->dso->nsinfo = nsinfo__get(nsi);
> +			pthread_mutex_unlock(&map->dso->lock);
>  		}
>  		return map;
>  	} else {
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 43f47532696f..a504346feb05 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1774,6 +1774,7 @@ int dso__load(struct dso *dso, struct map *map)
>  	char newmapname[PATH_MAX];
>  	const char *map_path = dso->long_name;
>  
> +	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  	perfmap = strncmp(dso->name, "/tmp/perf-", 10) == 0;
>  	if (perfmap) {
>  		if (dso->nsinfo && (dso__find_perf_map(newmapname,
> @@ -1783,7 +1784,6 @@ int dso__load(struct dso *dso, struct map *map)
>  	}
>  
>  	nsinfo__mountns_enter(dso->nsinfo, &nsc);
> -	BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
>  
>  	/* check again under the dso->lock */
>  	if (dso__loaded(dso)) {
> -- 
> 2.35.1.265.g69c8d7142f-goog
> 

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-11 19:35         ` Ian Rogers
@ 2022-02-12 15:48           ` Arnaldo Carvalho de Melo
  2022-02-12 15:49             ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-12 15:48 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Fri, Feb 11, 2022 at 11:35:05AM -0800, Ian Rogers escreveu:
> On Fri, Feb 11, 2022 at 11:21 AM Arnaldo Carvalho de Melo
> <acme@kernel.org> wrote:
> >
> > Em Fri, Feb 11, 2022 at 09:43:19AM -0800, Ian Rogers escreveu:
> > > On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
> > > <acme@kernel.org> wrote:
> > > >
> > > > Em Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers escreveu:
> > > > > Make the pthread mutex on dso use the error check type. This allows
> > > > > deadlock checking via the return type. Assert the returned value from
> > > > > mutex lock is always 0.
> > > >
> > > > I think this is too blunt/pervasive source code wise, perhaps we should
> > > > wrap this like its done with rwsem in tools/perf/util/rwsem.h to get
> > > > away from pthreads primitives and make the source code look more like
> > > > a kernel one and then, taking advantage of the so far ideologic
> > > > needless indirection, add this BUG_ON if we build with "DEBUG=1" or
> > > > something, wdyt?
> > >
> >
> > > My concern with semaphores is that they are a concurrency primitive
> >
> > I'm not suggesting we switch over to semaphores, just to use the same
> > technique of wrapping pthread_mutex_t with some other API that then
> > allows us to add these BUG_ON() calls without polluting the source code
> > in many places.
> 
> Sounds simple enough and would ensure consistency too. I can add it to
> the front of this set of changes. A different approach would be to
> take what's here and then refactor and cleanup as a follow on patch
> set. I'd prefer that as the size of this set of changes is already
> larger than I like - albeit that most of it is just introducing the

So, the first 4 patches in this series were already merged, as they are
just prep work that doesn't add clutter; having those in the front of the
patchkit helps with picking up the low hanging fruit.

I usually try to pick even if it comes later, to make progress, I'll
recheck the rest of the patchkit to see what more I can pick to reduce
its size.

- Arnaldo

> use of functions to access struct variables. Perhaps I just remove the
> BUG_ON and pthread changes here, we work to get this landed and in a
> separate set of patches clean up the pthread mutex code to have better
> bug checking.
> 
> Thanks,
> Ian
> 
> > - Arnaldo
> >
> > > that has more flexibility and power than a mutex. I like a mutex as it
> > > is quite obvious what is going on and that is good from a tooling
> > > point of view. A deadlock with two mutexes is easy to understand. On a
> > > semaphore, were we using it like a condition variable? There's more to
> > > figure out. I also like the idea of compiling the perf command with
> > > emscripten, we could then generate say perf annotate output in your
> > > web browser. Emscripten has implementations of standard posix
> > > libraries including pthreads, but we may need to have two approaches
> > > in the perf code if we want to compile with emscripten and use
> > > semaphores when targeting linux.
> > >
> > > Where this change comes from is that I worried that extending the
> > > locked regions to cover the race that'd been found would then expose
> > > the kind of recursive deadlock that pthread mutexes all too willingly
> > > allow. With this code we at least see the bug and don't just hang. I
> > > don't think we need the change to the mutexes for this change, but we
> > > do need to extend the regions to fix the data race.
> > >
> > > Let me know how you prefer it and I can roll it into a v4 version.
> > >
> > > Thanks,
> > > Ian
> > >
> > > > - Arnaldo
> > > >
> > > > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > > > ---
> > > > >  tools/perf/util/dso.c    | 12 +++++++++---
> > > > >  tools/perf/util/symbol.c |  2 +-
> > > > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > > > >
> > > > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > > > index 9cc8a1772b4b..6beccffeef7b 100644
> > > > > --- a/tools/perf/util/dso.c
> > > > > +++ b/tools/perf/util/dso.c
> > > > > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> > > > >       struct rb_root *root = &dso->data.cache;
> > > > >       struct rb_node *next = rb_first(root);
> > > > >
> > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > >       while (next) {
> > > > >               struct dso_cache *cache;
> > > > >
> > > > > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> > > > >       struct dso_cache *cache;
> > > > >       u64 offset = new->offset;
> > > > >
> > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > >       while (*p != NULL) {
> > > > >               u64 end;
> > > > >
> > > > > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> > > > >
> > > > >       if (dso != NULL) {
> > > > > +             pthread_mutexattr_t lock_attr;
> > > > > +
> > > > >               strcpy(dso->name, name);
> > > > >               if (id)
> > > > >                       dso->id = *id;
> > > > > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > >               dso->root = NULL;
> > > > >               INIT_LIST_HEAD(&dso->node);
> > > > >               INIT_LIST_HEAD(&dso->data.open_entry);
> > > > > -             pthread_mutex_init(&dso->lock, NULL);
> > > > > +             pthread_mutexattr_init(&lock_attr);
> > > > > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > > > > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > > > > +             pthread_mutexattr_destroy(&lock_attr);
> > > > >               refcount_set(&dso->refcnt, 1);
> > > > > +
> > > > >       }
> > > > >
> > > > >       return dso;
> > > > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > > > index b2ed3140a1fa..43f47532696f 100644
> > > > > --- a/tools/perf/util/symbol.c
> > > > > +++ b/tools/perf/util/symbol.c
> > > > > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> > > > >       }
> > > > >
> > > > >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > >
> > > > >       /* check again under the dso->lock */
> > > > >       if (dso__loaded(dso)) {
> > > > > --
> > > > > 2.35.1.265.g69c8d7142f-goog
> > > >
> > > > --
> > > >
> > > > - Arnaldo
> >
> > --
> >
> > - Arnaldo

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-12 15:48           ` Arnaldo Carvalho de Melo
@ 2022-02-12 15:49             ` Arnaldo Carvalho de Melo
  2022-02-12 20:59               ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-12 15:49 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Sat, Feb 12, 2022 at 12:48:37PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Fri, Feb 11, 2022 at 11:35:05AM -0800, Ian Rogers escreveu:
> > On Fri, Feb 11, 2022 at 11:21 AM Arnaldo Carvalho de Melo
> > <acme@kernel.org> wrote:
> > >
> > > Em Fri, Feb 11, 2022 at 09:43:19AM -0800, Ian Rogers escreveu:
> > > > On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
> > > > <acme@kernel.org> wrote:
> > > > >
> > > > > Em Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers escreveu:
> > > > > > Make the pthread mutex on dso use the error check type. This allows
> > > > > > deadlock checking via the return type. Assert the returned value from
> > > > > > mutex lock is always 0.
> > > > >
> > > > > I think this is too blunt/pervasive source code wise, perhaps we should
> > > > > wrap this like its done with rwsem in tools/perf/util/rwsem.h to get
> > > > > away from pthreads primitives and make the source code look more like
> > > > > a kernel one and then, taking advantage of the so far ideologic
> > > > > needless indirection, add this BUG_ON if we build with "DEBUG=1" or
> > > > > something, wdyt?
> > > >
> > >
> > > > My concern with semaphores is that they are a concurrency primitive
> > >
> > > I'm not suggesting we switch over to semaphores, just to use the same
> > > technique of wrapping pthread_mutex_t with some other API that then
> > > allows us to add these BUG_ON() calls without polluting the source code
> > > in many places.
> > 
> > Sounds simple enough and would ensure consistency too. I can add it to
> > the front of this set of changes. A different approach would be to
> > take what's here and then refactor and cleanup as a follow on patch
> > set. I'd prefer that as the size of this set of changes is already
> > larger than I like - albeit that most of it is just introducing the
> 
> > So, the first 4 patches in this series were already merged, as they are
> > just prep work that doesn't add clutter; having those in the front of the
> > patchkit helps with picking up the low hanging fruit.

Forgot to mention, I merged, tested and already published it in
perf/core, i.e. no more rebases for that lot; that is how it will get
into 5.18.

Alexey's threaded record patchkit is there as well, BTW, so it should
help reduce the possibility of clashes with your (and others') work.

- Arnaldo
 
> I usually try to pick even if it comes later, to make progress, I'll
> recheck the rest of the patchkit to see what more I can pick to reduce
> its size.
> 
> - Arnaldo
> 
> > use of functions to access struct variables. Perhaps I just remove the
> > BUG_ON and pthread changes here, we work to get this landed and in a
> > separate set of patches clean up the pthread mutex code to have better
> > bug checking.
> > 
> > Thanks,
> > Ian
> > 
> > > - Arnaldo
> > >
> > > > that has more flexibility and power than a mutex. I like a mutex as it
> > > > is quite obvious what is going on and that is good from a tooling
> > > > point of view. A deadlock with two mutexes is easy to understand. On a
> > > > semaphore, were we using it like a condition variable? There's more to
> > > > figure out. I also like the idea of compiling the perf command with
> > > > emscripten, we could then generate say perf annotate output in your
> > > > web browser. Emscripten has implementations of standard posix
> > > > libraries including pthreads, but we may need to have two approaches
> > > > in the perf code if we want to compile with emscripten and use
> > > > semaphores when targeting linux.
> > > >
> > > > Where this change comes from is that I worried that extending the
> > > > locked regions to cover the race that'd been found would then expose
> > > > the kind of recursive deadlock that pthread mutexes all too willingly
> > > > allow. With this code we at least see the bug and don't just hang. I
> > > > don't think we need the change to the mutexes for this change, but we
> > > > do need to extend the regions to fix the data race.
> > > >
> > > > Let me know how you prefer it and I can roll it into a v4 version.
> > > >
> > > > Thanks,
> > > > Ian
> > > >
> > > > > - Arnaldo
> > > > >
> > > > > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > > > > ---
> > > > > >  tools/perf/util/dso.c    | 12 +++++++++---
> > > > > >  tools/perf/util/symbol.c |  2 +-
> > > > > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > > > > >
> > > > > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > > > > index 9cc8a1772b4b..6beccffeef7b 100644
> > > > > > --- a/tools/perf/util/dso.c
> > > > > > +++ b/tools/perf/util/dso.c
> > > > > > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> > > > > >       struct rb_root *root = &dso->data.cache;
> > > > > >       struct rb_node *next = rb_first(root);
> > > > > >
> > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > >       while (next) {
> > > > > >               struct dso_cache *cache;
> > > > > >
> > > > > > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> > > > > >       struct dso_cache *cache;
> > > > > >       u64 offset = new->offset;
> > > > > >
> > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > >       while (*p != NULL) {
> > > > > >               u64 end;
> > > > > >
> > > > > > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > > >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> > > > > >
> > > > > >       if (dso != NULL) {
> > > > > > +             pthread_mutexattr_t lock_attr;
> > > > > > +
> > > > > >               strcpy(dso->name, name);
> > > > > >               if (id)
> > > > > >                       dso->id = *id;
> > > > > > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > > >               dso->root = NULL;
> > > > > >               INIT_LIST_HEAD(&dso->node);
> > > > > >               INIT_LIST_HEAD(&dso->data.open_entry);
> > > > > > -             pthread_mutex_init(&dso->lock, NULL);
> > > > > > +             pthread_mutexattr_init(&lock_attr);
> > > > > > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > > > > > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > > > > > +             pthread_mutexattr_destroy(&lock_attr);
> > > > > >               refcount_set(&dso->refcnt, 1);
> > > > > > +
> > > > > >       }
> > > > > >
> > > > > >       return dso;
> > > > > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > > > > index b2ed3140a1fa..43f47532696f 100644
> > > > > > --- a/tools/perf/util/symbol.c
> > > > > > +++ b/tools/perf/util/symbol.c
> > > > > > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> > > > > >       }
> > > > > >
> > > > > >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > >
> > > > > >       /* check again under the dso->lock */
> > > > > >       if (dso__loaded(dso)) {
> > > > > > --
> > > > > > 2.35.1.265.g69c8d7142f-goog
> > > > >
> > > > > --
> > > > >
> > > > > - Arnaldo
> > >
> > > --
> > >
> > > - Arnaldo
> 
> -- 
> 
> - Arnaldo

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 17/22] perf map: Changes to reference counting
  2022-02-12  8:45   ` Masami Hiramatsu
@ 2022-02-12 20:48     ` Ian Rogers
  2022-02-14  2:00       ` Masami Hiramatsu
  2022-02-14 18:56       ` Arnaldo Carvalho de Melo
  0 siblings, 2 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-12 20:48 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo, eranian

On Sat, Feb 12, 2022 at 12:46 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Hi Ian,
>
> On Fri, 11 Feb 2022 02:34:10 -0800
> Ian Rogers <irogers@google.com> wrote:
>
> > When a pointer to a map exists do a get, when that pointer is
> > overwritten or freed, put the map. This avoids issues with gets and
> > puts being inconsistently used causing, use after puts, etc. Reference
> > count checking and address sanitizer were used to identify issues.
>
> OK, and please add comments in the code what should be actually done
> so the others can understand it correctly, since this changes the
> map object handling model.
>
> Previously;
>
>   map__get(map);
>   map_operations(map);
>   map__put(map);
>
> Now, we have to use the object returned from get() ops.
> This is more likely to the memdup()/free().
>
>   new = map__get(map);
>   map_operations(new);
>   map__put(new);
>
> To update the object in the other object (e.g. machine__update_kernel_mmap())
> The original one must be put because it has the old copy.
>
> Previous;
>
>   map__get(parent_obj->map);
>   update_operation(parent_obj->map);
>   map__put(parent_obj->map;
>
> Is now;
>
>   orig = parent_obj->map;
>   new = map__get(orig);
>   update_operation(new);
>   parent_obj->map = new;
>   map__put(orig);

Hi Masami,

Thanks as always for the input! This is a top post and so I lack the
context for what you are describing. The model should always be get,
operation, put, but the map code does not follow this model. I suspect
the map code at some point in time didn't have reference counts;
someone added them for one use case and then didn't add gets and puts
elsewhere. Because a crash is worse than a memory leak, extra gets were
added or puts were missed, and the code is pretty much spaghetti today.
We now need to be able to pair gets with puts, hence the model that
this change introduces. There are cases where the new code needs to
distinguish between a reference that is put and a reference that will
be kept alive and, say, returned; this is fiddly and adds extra state -
this seems to be what you're describing. I don't see how having a
concept of a token clears this matter up. We have one pointer that
can't be used after a function has consumed it, so we introduce another
pointer via a get to keep the value for future use. In C++ we'd have
two smart pointers for this case.
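
To make that concrete, a minimal sketch of the two-pointer pattern
('struct cache', cache__set_map() and consume_map() are made-up names,
not from this patch set; map__get()/map__put() are the real calls):

  /* Keep our own reference before handing 'map' to a consumer. */
  static void cache__set_map(struct cache *cache, struct map *map)
  {
          cache->map = map__get(map);     /* reference we keep */

          consume_map(map);               /* may put 'map'; 'map' must not
                                             be used after this call */
  }

  static void cache__exit(struct cache *cache)
  {
          map__put(cache->map);           /* pairs with the get above */
          cache->map = NULL;
  }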

> I think this change also should be documented with some concrete example
> patterns so that someone can program it correctly. :-)

So the example change was the cpumap one (i.e. patch set v1). It is a
literal patch set and so I'm not sure why it isn't a concrete example.
In terms of motivation, the map code is about the worst thing one
could try to fix, and that's why I tackled it. It'd have been much
easier to implement the approach in cpumap and ask someone else to do
the map case :-) The problem I face is that in introducing the
refactors and the approach, perf now crashes because the technique is
working. That's why I've also had to fix the bugs in map that the
approach identifies - the code passes most of 'perf test' with
sanitizers enabled.  There are more issues. To be specific on one example,
addr_location references a map but lacks any kind of init/exit
protocol. The exit should put any map the addr_location has. There are
191 uses of addr_location that need to be looked at and a proper
get/put, init/exit approach introduced. I've fixed the ones that are
crashes and show stoppers. With this approach enabled we can catch
more and avoid human error.
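
As a rough sketch of what such an init/exit protocol could look like
(hypothetical helpers, not part of this series; the existing
addr_location__put() already does the corresponding puts):

  static inline void addr_location__init(struct addr_location *al)
  {
          memset(al, 0, sizeof(*al));     /* no references held yet */
  }

  static inline void addr_location__exit(struct addr_location *al)
  {
          map__zput(al->map);             /* put the map reference, if any */
          thread__zput(al->thread);       /* likewise for the thread */
  }

Each of those ~191 users would then bracket its addr_location with
these two calls.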

Thanks,
Ian

> (This is the reason why I asked you to introduce object-token instead
>  of modifying object pointer itself.)
>
> Thank you,
>
> >
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/tests/hists_cumulate.c     | 14 ++++-
> >  tools/perf/tests/hists_filter.c       | 14 ++++-
> >  tools/perf/tests/hists_link.c         | 18 +++++-
> >  tools/perf/tests/hists_output.c       | 12 +++-
> >  tools/perf/tests/mmap-thread-lookup.c |  3 +-
> >  tools/perf/util/callchain.c           |  9 +--
> >  tools/perf/util/event.c               |  8 ++-
> >  tools/perf/util/hist.c                | 10 ++--
> >  tools/perf/util/machine.c             | 80 ++++++++++++++++-----------
> >  9 files changed, 118 insertions(+), 50 deletions(-)
> >
> > diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
> > index 17f4fcd6bdce..28f5eb41eed9 100644
> > --- a/tools/perf/tests/hists_cumulate.c
> > +++ b/tools/perf/tests/hists_cumulate.c
> > @@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> >               }
> >
> >               fake_samples[i].thread = al.thread;
> > +             map__put(fake_samples[i].map);
> >               fake_samples[i].map = al.map;
> >               fake_samples[i].sym = al.sym;
> >       }
> > @@ -147,15 +148,23 @@ static void del_hist_entries(struct hists *hists)
> >       }
> >  }
> >
> > +static void put_fake_samples(void)
> > +{
> > +     size_t i;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > +             map__put(fake_samples[i].map);
> > +}
> > +
> >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> >
> >  #define COMM(he)  (thread__comm_str(he->thread))
> > -#define DSO(he)   (he->ms.map->dso->short_name)
> > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> >  #define SYM(he)   (he->ms.sym->name)
> >  #define CPU(he)   (he->cpu)
> >  #define PID(he)   (he->thread->tid)
> >  #define DEPTH(he) (he->callchain->max_depth)
> > -#define CDSO(cl)  (cl->ms.map->dso->short_name)
> > +#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
> >  #define CSYM(cl)  (cl->ms.sym->name)
> >
> >  struct result {
> > @@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
> >       /* tear down everything */
> >       evlist__delete(evlist);
> >       machines__exit(&machines);
> > +     put_fake_samples();
> >
> >       return err;
> >  }
> > diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
> > index 08cbeb9e39ae..bcd46244182a 100644
> > --- a/tools/perf/tests/hists_filter.c
> > +++ b/tools/perf/tests/hists_filter.c
> > @@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
> >                       }
> >
> >                       fake_samples[i].thread = al.thread;
> > +                     map__put(fake_samples[i].map);
> >                       fake_samples[i].map = al.map;
> >                       fake_samples[i].sym = al.sym;
> >               }
> > @@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
> >       return TEST_FAIL;
> >  }
> >
> > +static void put_fake_samples(void)
> > +{
> > +     size_t i;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > +             map__put(fake_samples[i].map);
> > +}
> > +
> >  static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
> >  {
> >       int err = TEST_FAIL;
> > @@ -194,7 +203,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> >               hists__filter_by_thread(hists);
> >
> >               /* now applying dso filter for 'kernel' */
> > -             hists->dso_filter = fake_samples[0].map->dso;
> > +             hists->dso_filter = map__dso(fake_samples[0].map);
> >               hists__filter_by_dso(hists);
> >
> >               if (verbose > 2) {
> > @@ -288,7 +297,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> >
> >               /* now applying all filters at once. */
> >               hists->thread_filter = fake_samples[1].thread;
> > -             hists->dso_filter = fake_samples[1].map->dso;
> > +             hists->dso_filter = map__dso(fake_samples[1].map);
> >               hists__filter_by_thread(hists);
> >               hists__filter_by_dso(hists);
> >
> > @@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> >       evlist__delete(evlist);
> >       reset_output_field();
> >       machines__exit(&machines);
> > +     put_fake_samples();
> >
> >       return err;
> >  }
> > diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
> > index c575e13a850d..060e8731feff 100644
> > --- a/tools/perf/tests/hists_link.c
> > +++ b/tools/perf/tests/hists_link.c
> > @@ -6,6 +6,7 @@
> >  #include "evsel.h"
> >  #include "evlist.h"
> >  #include "machine.h"
> > +#include "map.h"
> >  #include "parse-events.h"
> >  #include "hists_common.h"
> >  #include "util/mmap.h"
> > @@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> >                       }
> >
> >                       fake_common_samples[k].thread = al.thread;
> > +                     map__put(fake_common_samples[k].map);
> >                       fake_common_samples[k].map = al.map;
> >                       fake_common_samples[k].sym = al.sym;
> >               }
> > @@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> >       return -1;
> >  }
> >
> > +static void put_fake_samples(void)
> > +{
> > +     size_t i, j;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
> > +             map__put(fake_common_samples[i].map);
> > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
> > +             for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
> > +                     map__put(fake_samples[i][j].map);
> > +     }
> > +}
> > +
> >  static int find_sample(struct sample *samples, size_t nr_samples,
> >                      struct thread *t, struct map *m, struct symbol *s)
> >  {
> >       while (nr_samples--) {
> > -             if (samples->thread == t && samples->map == m &&
> > +             if (samples->thread == t &&
> > +                 samples->map == m &&
> >                   samples->sym == s)
> >                       return 1;
> >               samples++;
> > @@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
> >       evlist__delete(evlist);
> >       reset_output_field();
> >       machines__exit(&machines);
> > +     put_fake_samples();
> >
> >       return err;
> >  }
> > diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
> > index 0bde4a768c15..4af6916491e5 100644
> > --- a/tools/perf/tests/hists_output.c
> > +++ b/tools/perf/tests/hists_output.c
> > @@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> >               }
> >
> >               fake_samples[i].thread = al.thread;
> > +             map__put(fake_samples[i].map);
> >               fake_samples[i].map = al.map;
> >               fake_samples[i].sym = al.sym;
> >       }
> > @@ -113,10 +114,18 @@ static void del_hist_entries(struct hists *hists)
> >       }
> >  }
> >
> > +static void put_fake_samples(void)
> > +{
> > +     size_t i;
> > +
> > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > +             map__put(fake_samples[i].map);
> > +}
> > +
> >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> >
> >  #define COMM(he)  (thread__comm_str(he->thread))
> > -#define DSO(he)   (he->ms.map->dso->short_name)
> > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> >  #define SYM(he)   (he->ms.sym->name)
> >  #define CPU(he)   (he->cpu)
> >  #define PID(he)   (he->thread->tid)
> > @@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
> >       /* tear down everything */
> >       evlist__delete(evlist);
> >       machines__exit(&machines);
> > +     put_fake_samples();
> >
> >       return err;
> >  }
> > diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
> > index a4301fc7b770..898eda55b7a8 100644
> > --- a/tools/perf/tests/mmap-thread-lookup.c
> > +++ b/tools/perf/tests/mmap-thread-lookup.c
> > @@ -202,7 +202,8 @@ static int mmap_events(synth_cb synth)
> >                       break;
> >               }
> >
> > -             pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
> > +             pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
> > +             map__put(al.map);
> >       }
> >
> >       machine__delete_threads(machine);
> > diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> > index a8cfd31a3ff0..ae65b7bc9ab7 100644
> > --- a/tools/perf/util/callchain.c
> > +++ b/tools/perf/util/callchain.c
> > @@ -583,7 +583,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
> >               }
> >               call->ip = cursor_node->ip;
> >               call->ms = cursor_node->ms;
> > -             map__get(call->ms.map);
> > +             call->ms.map = map__get(call->ms.map);
> >               call->srcline = cursor_node->srcline;
> >
> >               if (cursor_node->branch) {
> > @@ -1061,7 +1061,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
> >       node->ip = ip;
> >       map__zput(node->ms.map);
> >       node->ms = *ms;
> > -     map__get(node->ms.map);
> > +     node->ms.map = map__get(node->ms.map);
> >       node->branch = branch;
> >       node->nr_loop_iter = nr_loop_iter;
> >       node->iter_cycles = iter_cycles;
> > @@ -1109,7 +1109,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
> >       struct machine *machine = maps__machine(node->ms.maps);
> >
> >       al->maps = node->ms.maps;
> > -     al->map = node->ms.map;
> > +     map__put(al->map);
> > +     al->map = map__get(node->ms.map);
> >       al->sym = node->ms.sym;
> >       al->srcline = node->srcline;
> >       al->addr = node->ip;
> > @@ -1530,7 +1531,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
> >                               goto out;
> >                       *new = *chain;
> >                       new->has_children = false;
> > -                     map__get(new->ms.map);
> > +                     new->ms.map = map__get(new->ms.map);
> >                       list_add_tail(&new->list, &head);
> >               }
> >               parent = parent->parent;
> > diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> > index 54a1d4df5f70..266318d5d006 100644
> > --- a/tools/perf/util/event.c
> > +++ b/tools/perf/util/event.c
> > @@ -484,13 +484,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
> >       if (machine) {
> >               struct addr_location al;
> >
> > -             al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> > +             al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
> >               if (al.map && map__load(al.map) >= 0) {
> >                       al.addr = map__map_ip(al.map, tp->addr);
> >                       al.sym = map__find_symbol(al.map, al.addr);
> >                       if (al.sym)
> >                               ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> >               }
> > +             map__put(al.map);
> >       }
> >       ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
> >       old = true;
> > @@ -581,6 +582,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> >       al->filtered = 0;
> >
> >       if (machine == NULL) {
> > +             map__put(al->map);
> >               al->map = NULL;
> >               return NULL;
> >       }
> > @@ -599,6 +601,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> >               al->level = 'u';
> >       } else {
> >               al->level = 'H';
> > +             map__put(al->map);
> >               al->map = NULL;
> >
> >               if ((cpumode == PERF_RECORD_MISC_GUEST_USER ||
> > @@ -613,7 +616,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> >               return NULL;
> >       }
> >
> > -     al->map = maps__find(maps, al->addr);
> > +     al->map = map__get(maps__find(maps, al->addr));
> >       if (al->map != NULL) {
> >               /*
> >                * Kernel maps might be changed when loading symbols so loading
> > @@ -768,6 +771,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> >   */
> >  void addr_location__put(struct addr_location *al)
> >  {
> > +     map__zput(al->map);
> >       thread__zput(al->thread);
> >  }
> >
> > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > index f19ac6eb4775..4dbb1dbf3679 100644
> > --- a/tools/perf/util/hist.c
> > +++ b/tools/perf/util/hist.c
> > @@ -446,7 +446,7 @@ static int hist_entry__init(struct hist_entry *he,
> >                       memset(&he->stat, 0, sizeof(he->stat));
> >       }
> >
> > -     map__get(he->ms.map);
> > +     he->ms.map = map__get(he->ms.map);
> >
> >       if (he->branch_info) {
> >               /*
> > @@ -461,13 +461,13 @@ static int hist_entry__init(struct hist_entry *he,
> >               memcpy(he->branch_info, template->branch_info,
> >                      sizeof(*he->branch_info));
> >
> > -             map__get(he->branch_info->from.ms.map);
> > -             map__get(he->branch_info->to.ms.map);
> > +             he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
> > +             he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
> >       }
> >
> >       if (he->mem_info) {
> > -             map__get(he->mem_info->iaddr.ms.map);
> > -             map__get(he->mem_info->daddr.ms.map);
> > +             he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
> > +             he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
> >       }
> >
> >       if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 940fb2a50dfd..49e4891e92b7 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -783,33 +783,42 @@ static int machine__process_ksymbol_register(struct machine *machine,
> >  {
> >       struct symbol *sym;
> >       struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
> > +     bool put_map = false;
> > +     int err = 0;
> >
> >       if (!map) {
> >               struct dso *dso = dso__new(event->ksymbol.name);
> > -             int err;
> >
> > -             if (dso) {
> > -                     dso->kernel = DSO_SPACE__KERNEL;
> > -                     map = map__new2(0, dso);
> > -                     dso__put(dso);
> > +             if (!dso) {
> > +                     err = -ENOMEM;
> > +                     goto out;
> >               }
> > -
> > -             if (!dso || !map) {
> > -                     return -ENOMEM;
> > +             dso->kernel = DSO_SPACE__KERNEL;
> > +             map = map__new2(0, dso);
> > +             dso__put(dso);
> > +             if (!map) {
> > +                     err = -ENOMEM;
> > +                     goto out;
> >               }
> > -
> > +             /*
> > +              * The inserted map has a get on it, we need to put to release
> > +              * the reference count here, but do it after all accesses are
> > +              * done.
> > +              */
> > +             put_map = true;
> >               if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
> > -                     map->dso->binary_type = DSO_BINARY_TYPE__OOL;
> > -                     map->dso->data.file_size = event->ksymbol.len;
> > -                     dso__set_loaded(map->dso);
> > +                     map__dso(map)->binary_type = DSO_BINARY_TYPE__OOL;
> > +                     map__dso(map)->data.file_size = event->ksymbol.len;
> > +                     dso__set_loaded(map__dso(map));
> >               }
> >
> >               map->start = event->ksymbol.addr;
> > -             map->end = map->start + event->ksymbol.len;
> > +             map->end = map__start(map) + event->ksymbol.len;
> >               err = maps__insert(machine__kernel_maps(machine), map);
> > -             map__put(map);
> > -             if (err)
> > -                     return err;
> > +             if (err) {
> > +                     err = -ENOMEM;
> > +                     goto out;
> > +             }
> >
> >               dso__set_loaded(dso);
> >
> > @@ -819,13 +828,18 @@ static int machine__process_ksymbol_register(struct machine *machine,
> >               }
> >       }
> >
> > -     sym = symbol__new(map->map_ip(map, map->start),
> > +     sym = symbol__new(map__map_ip(map, map__start(map)),
> >                         event->ksymbol.len,
> >                         0, 0, event->ksymbol.name);
> > -     if (!sym)
> > -             return -ENOMEM;
> > -     dso__insert_symbol(map->dso, sym);
> > -     return 0;
> > +     if (!sym) {
> > +             err = -ENOMEM;
> > +             goto out;
> > +     }
> > +     dso__insert_symbol(map__dso(map), sym);
> > +out:
> > +     if (put_map)
> > +             map__put(map);
> > +     return err;
> >  }
> >
> >  static int machine__process_ksymbol_unregister(struct machine *machine,
> > @@ -925,14 +939,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> >               goto out;
> >
> >       err = maps__insert(machine__kernel_maps(machine), map);
> > -
> > -     /* Put the map here because maps__insert already got it */
> > -     map__put(map);
> > -
> >       /* If maps__insert failed, return NULL. */
> > -     if (err)
> > +     if (err) {
> > +             map__put(map);
> >               map = NULL;
> > -
> > +     }
> >  out:
> >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> >       dso__put(dso);
> > @@ -1228,6 +1239,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> >       /* In case of renewal the kernel map, destroy previous one */
> >       machine__destroy_kernel_maps(machine);
> >
> > +     map__put(machine->vmlinux_map);
> >       machine->vmlinux_map = map__new2(0, kernel);
> >       if (machine->vmlinux_map == NULL)
> >               return -ENOMEM;
> > @@ -1513,6 +1525,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
> >       map->end = start + size;
> >
> >       dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> > +     map__put(map);
> >       return 0;
> >  }
> >
> > @@ -1558,16 +1571,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
> >  static int machine__update_kernel_mmap(struct machine *machine,
> >                                    u64 start, u64 end)
> >  {
> > -     struct map *map = machine__kernel_map(machine);
> > +     struct map *orig, *updated;
> >       int err;
> >
> > -     map__get(map);
> > -     maps__remove(machine__kernel_maps(machine), map);
> > +     orig = machine->vmlinux_map;
> > +     updated = map__get(orig);
> >
> > +     machine->vmlinux_map = updated;
> >       machine__set_kernel_mmap(machine, start, end);
> > +     maps__remove(machine__kernel_maps(machine), orig);
> > +     err = maps__insert(machine__kernel_maps(machine), updated);
> > +     map__put(orig);
> >
> > -     err = maps__insert(machine__kernel_maps(machine), map);
> > -     map__put(map);
> >       return err;
> >  }
> >
> > @@ -2246,6 +2261,7 @@ static int add_callchain_ip(struct thread *thread,
> >       err = callchain_cursor_append(cursor, ip, &ms,
> >                                     branch, flags, nr_loop_iter,
> >                                     iter_cycles, branch_from, srcline);
> > +     map__put(al.map);
> >       return err;
> >  }
> >
> > --
> > 2.35.1.265.g69c8d7142f-goog
> >
>
>
> --
> Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs
  2022-02-12 15:49             ` Arnaldo Carvalho de Melo
@ 2022-02-12 20:59               ` Ian Rogers
  0 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-12 20:59 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Sat, Feb 12, 2022 at 7:50 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Sat, Feb 12, 2022 at 12:48:37PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Fri, Feb 11, 2022 at 11:35:05AM -0800, Ian Rogers escreveu:
> > > On Fri, Feb 11, 2022 at 11:21 AM Arnaldo Carvalho de Melo
> > > <acme@kernel.org> wrote:
> > > >
> > > > Em Fri, Feb 11, 2022 at 09:43:19AM -0800, Ian Rogers escreveu:
> > > > > On Fri, Feb 11, 2022 at 9:13 AM Arnaldo Carvalho de Melo
> > > > > <acme@kernel.org> wrote:
> > > > > >
> > > > > > Em Fri, Feb 11, 2022 at 02:33:56AM -0800, Ian Rogers escreveu:
> > > > > > > Make the pthread mutex on dso use the error check type. This allows
> > > > > > > deadlock checking via the return type. Assert the returned value from
> > > > > > > mutex lock is always 0.
> > > > > >
> > > > > > I think this is too blunt/pervasive source code wise, perhaps we should
> > > > > > wrap this like its done with rwsem in tools/perf/util/rwsem.h to get
> > > > > > away from pthreads primitives and make the source code look more like
> > > > > > a kernel one and then, taking advantage of the so far ideologic
> > > > > > needless indirection, add this BUG_ON if we build with "DEBUG=1" or
> > > > > > something, wdyt?
> > > > >
> > > >
> > > > > My concern with semaphores is that they are a concurrency primitive
> > > >
> > > > I'm not suggesting we switch over to semaphores, just to use the same
> > > > technique of wrapping pthread_mutex_t with some other API that then
> > > > allows us to add these BUG_ON() calls without polluting the source code
> > > > in many places.
> > >
> > > Sounds simple enough and would ensure consistency too. I can add it to
> > > the front of this set of changes. A different approach would be to
> > > take what's here and then refactor and cleanup as a follow on patch
> > > set. I'd prefer that as the size of this set of changes is already
> > > larger than I like - albeit that most of it is just introducing the
> >
> > So, the first 4 patches in this series were already merged, as they are
> > just prep work that doesn't add clutter; having those in the front of the
> > patchkit helps with picking up the low hanging fruit.
>
> Forgot to mention, I merged, tested and already published it in
> perf/core, i.e. no more rebases for that lot; that is how it will get
> into 5.18.
>
> Alexey's threaded record patchkit is there as well, BTW, so it should
> help reduce the possibility of clashes with your (and others') work.

Thanks Arnaldo,

I'm working off of your perf/core branch, and pushing things there will
mean they disappear from the v4 patch set when I rebase. For v4 I have
separated out the pthread BUG_ONs, and for the map__map_ip changes I'll
do the obvious fix in its own patch so that we can easily blame it and
keep the code looking sane; I'll also try to make sure I'm addressing
all the feedback on the changes.
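
For reference, a rough sketch of the kind of wrapper being discussed
(modeled on tools/perf/util/rwsem.h; the struct mutex / mutex_lock()
names are placeholders, and I'm assuming DEBUG=1 would end up defining
a DEBUG cpp symbol for this):

  struct mutex {
          pthread_mutex_t lock;
  };

  static inline void mutex_lock(struct mutex *mtx)
  {
  #ifdef DEBUG
          BUG_ON(pthread_mutex_lock(&mtx->lock) != 0);  /* catch deadlocks/errors */
  #else
          pthread_mutex_lock(&mtx->lock);
  #endif
  }

  static inline void mutex_unlock(struct mutex *mtx)
  {
  #ifdef DEBUG
          BUG_ON(pthread_mutex_unlock(&mtx->lock) != 0);
  #else
          pthread_mutex_unlock(&mtx->lock);
  #endif
  }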

I've been really happy to be getting feedback; it is coming fast, so it
is easy for me to act on it. I feel bad this
change is so large and quite a lot to look through. This does mean it
is more than just a toy. The approach solves the same problem that
would motivate the use of Rust and C++ for perf. That's not to say we
should never do those things, but any transition will be helped by
having the best C code base we can get together. If one day we're
running perf top in less than 1GB of RAM with stability, then the
churn here will have been worthwhile.

Thanks,
Ian

> - Arnaldo
>
> > I usually try to pick even if it comes later, to make progress, I'll
> > recheck the rest of the patchkit to see what more I can pick to reduce
> > its size.
> >
> > - Arnaldo
> >
> > > use of functions to access struct variables. Perhaps I just remove the
> > > BUG_ON and pthread changes here, we work to get this landed and in a
> > > separate set of patches clean up the pthread mutex code to have better
> > > bug checking.
> > >
> > > Thanks,
> > > Ian
> > >
> > > > - Arnaldo
> > > >
> > > > > that has more flexibility and power than a mutex. I like a mutex as it
> > > > > is quite obvious what is going on and that is good from a tooling
> > > > > point of view. A deadlock with two mutexes is easy to understand. On a
> > > > > semaphore, were we using it like a condition variable? There's more to
> > > > > figure out. I also like the idea of compiling the perf command with
> > > > > emscripten, we could then generate say perf annotate output in your
> > > > > web browser. Emscripten has implementations of standard posix
> > > > > libraries including pthreads, but we may need to have two approaches
> > > > > in the perf code if we want to compile with emscripten and use
> > > > > semaphores when targeting linux.
> > > > >
> > > > > Where this change comes from is that I worried that extending the
> > > > > locked regions to cover the race that'd been found would then expose
> > > > > the kind of recursive deadlock that pthread mutexes all too willingly
> > > > > allow. With this code we at least see the bug and don't just hang. I
> > > > > don't think we need the change to the mutexes for this change, but we
> > > > > do need to extend the regions to fix the data race.
> > > > >
> > > > > Let me know how you prefer it and I can roll it into a v4 version.
> > > > >
> > > > > Thanks,
> > > > > Ian
> > > > >
> > > > > > - Arnaldo
> > > > > >
> > > > > > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > > > > > ---
> > > > > > >  tools/perf/util/dso.c    | 12 +++++++++---
> > > > > > >  tools/perf/util/symbol.c |  2 +-
> > > > > > >  2 files changed, 10 insertions(+), 4 deletions(-)
> > > > > > >
> > > > > > > diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
> > > > > > > index 9cc8a1772b4b..6beccffeef7b 100644
> > > > > > > --- a/tools/perf/util/dso.c
> > > > > > > +++ b/tools/perf/util/dso.c
> > > > > > > @@ -784,7 +784,7 @@ dso_cache__free(struct dso *dso)
> > > > > > >       struct rb_root *root = &dso->data.cache;
> > > > > > >       struct rb_node *next = rb_first(root);
> > > > > > >
> > > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > > >       while (next) {
> > > > > > >               struct dso_cache *cache;
> > > > > > >
> > > > > > > @@ -830,7 +830,7 @@ dso_cache__insert(struct dso *dso, struct dso_cache *new)
> > > > > > >       struct dso_cache *cache;
> > > > > > >       u64 offset = new->offset;
> > > > > > >
> > > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > > >       while (*p != NULL) {
> > > > > > >               u64 end;
> > > > > > >
> > > > > > > @@ -1259,6 +1259,8 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > > > >       struct dso *dso = calloc(1, sizeof(*dso) + strlen(name) + 1);
> > > > > > >
> > > > > > >       if (dso != NULL) {
> > > > > > > +             pthread_mutexattr_t lock_attr;
> > > > > > > +
> > > > > > >               strcpy(dso->name, name);
> > > > > > >               if (id)
> > > > > > >                       dso->id = *id;
> > > > > > > @@ -1286,8 +1288,12 @@ struct dso *dso__new_id(const char *name, struct dso_id *id)
> > > > > > >               dso->root = NULL;
> > > > > > >               INIT_LIST_HEAD(&dso->node);
> > > > > > >               INIT_LIST_HEAD(&dso->data.open_entry);
> > > > > > > -             pthread_mutex_init(&dso->lock, NULL);
> > > > > > > +             pthread_mutexattr_init(&lock_attr);
> > > > > > > +             pthread_mutexattr_settype(&lock_attr, PTHREAD_MUTEX_ERRORCHECK);
> > > > > > > +             pthread_mutex_init(&dso->lock, &lock_attr);
> > > > > > > +             pthread_mutexattr_destroy(&lock_attr);
> > > > > > >               refcount_set(&dso->refcnt, 1);
> > > > > > > +
> > > > > > >       }
> > > > > > >
> > > > > > >       return dso;
> > > > > > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > > > > > > index b2ed3140a1fa..43f47532696f 100644
> > > > > > > --- a/tools/perf/util/symbol.c
> > > > > > > +++ b/tools/perf/util/symbol.c
> > > > > > > @@ -1783,7 +1783,7 @@ int dso__load(struct dso *dso, struct map *map)
> > > > > > >       }
> > > > > > >
> > > > > > >       nsinfo__mountns_enter(dso->nsinfo, &nsc);
> > > > > > > -     pthread_mutex_lock(&dso->lock);
> > > > > > > +     BUG_ON(pthread_mutex_lock(&dso->lock) != 0);
> > > > > > >
> > > > > > >       /* check again under the dso->lock */
> > > > > > >       if (dso__loaded(dso)) {
> > > > > > > --
> > > > > > > 2.35.1.265.g69c8d7142f-goog
> > > > > >
> > > > > > --
> > > > > >
> > > > > > - Arnaldo
> > > >
> > > > --
> > > >
> > > > - Arnaldo
> >
> > --
> >
> > - Arnaldo
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 17/22] perf map: Changes to reference counting
  2022-02-12 20:48     ` Ian Rogers
@ 2022-02-14  2:00       ` Masami Hiramatsu
  2022-02-14 18:56       ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Masami Hiramatsu @ 2022-02-14  2:00 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso,
	André Almeida, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Jin Yao,
	Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo, eranian

On Sat, 12 Feb 2022 12:48:45 -0800
Ian Rogers <irogers@google.com> wrote:

> On Sat, Feb 12, 2022 at 12:46 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > Hi Ian,
> >
> > On Fri, 11 Feb 2022 02:34:10 -0800
> > Ian Rogers <irogers@google.com> wrote:
> >
> > > When a pointer to a map exists do a get, when that pointer is
> > > overwritten or freed, put the map. This avoids issues with gets and
> > > puts being inconsistently used causing, use after puts, etc. Reference
> > > count checking and address sanitizer were used to identify issues.
> >
> > OK, and please add comments in the code what should be actually done
> > so the others can understand it correctly, since this changes the
> > map object handling model.
> >
> > Previously;
> >
> >   map__get(map);
> >   map_operations(map);
> >   map__put(map);
> >
> > Now, we have to use the object returned from get() ops.
> > This is more likely to the memdup()/free().
> >
> >   new = map__get(map);
> >   map_operations(new);
> >   map__put(new);
> >
> > To update the object in the other object (e.g. machine__update_kernel_mmap())
> > The original one must be put because it has the old copy.
> >
> > Previous;
> >
> >   map__get(parent_obj->map);
> >   update_operation(parent_obj->map);
> >   map__put(parent_obj->map;
> >
> > Is now;
> >
> >   orig = parent_obj->map;
> >   new = map__get(orig);
> >   update_operation(new);
> >   parent_obj->map = new;
> >   map__put(orig);
> 
> Hi Masami,
> 
> Thanks as always for the input! This is a top post and so I lack the
> context for what you are describing. The model should always be get,
> operation, put, but the map code does not follow this model. I suspect
> the map code at some point in time didn't have reference counts;
> someone added them for one use case and then didn't add gets and puts
> elsewhere. Because a crash is worse than a memory leak, extra gets
> were added or puts were missed, and the code is pretty much spaghetti
> today. We now need to be able to pair gets with puts, hence the model
> that this change introduces.

Hi Ian,

Thanks for the reply.
And I agree there are some issues in the current code. I thought those
would be fixed in the patches before this one, since they are bugs.

> There are cases where the new code needs to
> distinguish between a reference that is put and a reference that will
> be kept alive and say returned, this is fiddly and adds extra state -
> this seems to be what you're describing. I don't see how having a
> concept of a token is clearing this matter up. We have one pointer
> that can't be used after a function has consumed it, we introduce
> another pointer via a get so that we can keep the value for the sake
> of future use. In C++ we'd have two smart pointers for this case.

Yeah, C++ has the smart pointer concept, but C doesn't. So I would
suggest that leaving some comments will help followers. Imagine seeing
such a smart pointer without any prerequisite knowledge.
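
For example, a comment near map__get() along these lines (just a
sketch restating the patterns above, not necessarily the final
wording) would make the convention explicit:

   /*
    * Use the pointer returned by map__get() for the subsequent
    * operations and the matching map__put():
    *
    *   new = map__get(map);
    *   map_operations(new);
    *   map__put(new);
    *
    * When replacing a map held by another object, install the new
    * reference before putting the old one:
    *
    *   orig = parent_obj->map;
    *   new = map__get(orig);
    *   update_operation(new);
    *   parent_obj->map = new;
    *   map__put(orig);
    */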

> > I think this change also should be documented with some concrete example
> > patterns so that someone can program it correctly. :-)
> 
> So the example change was the cpumap one (i.e. patch set v1). It is a
> literal patch set and so I'm not sure why it isn't a concrete example.

Of course these are good examples, but I cannot see any comment in the
code or documentation explaining why this is done. If this programming
model were general enough you wouldn't need it, but the first person
needs to leave a note on why and how.

> In terms of motivation, the map code is about the worst thing that
> could try to be fixed and that's why I tackled it. It'd have been much
> easier to implement the approach in cpumap and ask someone else to do
> the map case :-)

Indeed, this approach can help others find the issues, and don't you
think that with more documentation and comments, followers could do
it more easily?

> The problem I face is that in introducing the
> refactors and the approach, perf now crashes because of the technique
> working. That's why I've also had to fix the bugs in map the approach
> identifies - the code passes most of perf test enabled and with
> sanitizers.  There are more issues. To be specific on one example,
> addr_location references a map but lacks any kind of init/exit
> protocol. The exit should put any map the addr_location has. There are
> 191 uses of addr_location that need to be looked at and a proper
> get/put, init/exit approach introduced.

Yeah, I also felt that there is too little API documentation in perf
in general.

> I've fixed the ones that are
> crashes and show stoppers. With this approach enabled we can catch
> more and avoid human error.

I think just crashing, without telling the user what the correct way
to handle it is, is not kind (and may waste someone's time until they
find the correct way).

BTW, your approach can crash on the below pattern (but this is standard
refcounted object handling in C):

   map = map__new_and_get();
...
   map__get(map);
   map_operations(map);
   map__put(map);
...
   map__get(map); <- crash! because "map" is already freed.

This must now be written like the below:

   map = map__new_and_get();
...
   cur = map__get(map);
   map_operations(cur);
   map__put(cur);
...
   cur2 = map__get(map);

Can this find the below case?

   map = map__new_and_get();
...
   /* Do not get map */
   map_operations(map);


Thank you,

> 
> Thanks,
> Ian
> 
> > (This is the reason why I asked you to introduce object-token instead
> >  of modifying object pointer itself.)
> >
> > Thank you,
> >
> > >
> > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > ---
> > >  tools/perf/tests/hists_cumulate.c     | 14 ++++-
> > >  tools/perf/tests/hists_filter.c       | 14 ++++-
> > >  tools/perf/tests/hists_link.c         | 18 +++++-
> > >  tools/perf/tests/hists_output.c       | 12 +++-
> > >  tools/perf/tests/mmap-thread-lookup.c |  3 +-
> > >  tools/perf/util/callchain.c           |  9 +--
> > >  tools/perf/util/event.c               |  8 ++-
> > >  tools/perf/util/hist.c                | 10 ++--
> > >  tools/perf/util/machine.c             | 80 ++++++++++++++++-----------
> > >  9 files changed, 118 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
> > > index 17f4fcd6bdce..28f5eb41eed9 100644
> > > --- a/tools/perf/tests/hists_cumulate.c
> > > +++ b/tools/perf/tests/hists_cumulate.c
> > > @@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> > >               }
> > >
> > >               fake_samples[i].thread = al.thread;
> > > +             map__put(fake_samples[i].map);
> > >               fake_samples[i].map = al.map;
> > >               fake_samples[i].sym = al.sym;
> > >       }
> > > @@ -147,15 +148,23 @@ static void del_hist_entries(struct hists *hists)
> > >       }
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> > >
> > >  #define COMM(he)  (thread__comm_str(he->thread))
> > > -#define DSO(he)   (he->ms.map->dso->short_name)
> > > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> > >  #define SYM(he)   (he->ms.sym->name)
> > >  #define CPU(he)   (he->cpu)
> > >  #define PID(he)   (he->thread->tid)
> > >  #define DEPTH(he) (he->callchain->max_depth)
> > > -#define CDSO(cl)  (cl->ms.map->dso->short_name)
> > > +#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
> > >  #define CSYM(cl)  (cl->ms.sym->name)
> > >
> > >  struct result {
> > > @@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
> > >       /* tear down everything */
> > >       evlist__delete(evlist);
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
> > > index 08cbeb9e39ae..bcd46244182a 100644
> > > --- a/tools/perf/tests/hists_filter.c
> > > +++ b/tools/perf/tests/hists_filter.c
> > > @@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
> > >                       }
> > >
> > >                       fake_samples[i].thread = al.thread;
> > > +                     map__put(fake_samples[i].map);
> > >                       fake_samples[i].map = al.map;
> > >                       fake_samples[i].sym = al.sym;
> > >               }
> > > @@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
> > >       return TEST_FAIL;
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
> > >  {
> > >       int err = TEST_FAIL;
> > > @@ -194,7 +203,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >               hists__filter_by_thread(hists);
> > >
> > >               /* now applying dso filter for 'kernel' */
> > > -             hists->dso_filter = fake_samples[0].map->dso;
> > > +             hists->dso_filter = map__dso(fake_samples[0].map);
> > >               hists__filter_by_dso(hists);
> > >
> > >               if (verbose > 2) {
> > > @@ -288,7 +297,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >
> > >               /* now applying all filters at once. */
> > >               hists->thread_filter = fake_samples[1].thread;
> > > -             hists->dso_filter = fake_samples[1].map->dso;
> > > +             hists->dso_filter = map__dso(fake_samples[1].map);
> > >               hists__filter_by_thread(hists);
> > >               hists__filter_by_dso(hists);
> > >
> > > @@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >       evlist__delete(evlist);
> > >       reset_output_field();
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
> > > index c575e13a850d..060e8731feff 100644
> > > --- a/tools/perf/tests/hists_link.c
> > > +++ b/tools/perf/tests/hists_link.c
> > > @@ -6,6 +6,7 @@
> > >  #include "evsel.h"
> > >  #include "evlist.h"
> > >  #include "machine.h"
> > > +#include "map.h"
> > >  #include "parse-events.h"
> > >  #include "hists_common.h"
> > >  #include "util/mmap.h"
> > > @@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> > >                       }
> > >
> > >                       fake_common_samples[k].thread = al.thread;
> > > +                     map__put(fake_common_samples[k].map);
> > >                       fake_common_samples[k].map = al.map;
> > >                       fake_common_samples[k].sym = al.sym;
> > >               }
> > > @@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> > >       return -1;
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i, j;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
> > > +             map__put(fake_common_samples[i].map);
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
> > > +             for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
> > > +                     map__put(fake_samples[i][j].map);
> > > +     }
> > > +}
> > > +
> > >  static int find_sample(struct sample *samples, size_t nr_samples,
> > >                      struct thread *t, struct map *m, struct symbol *s)
> > >  {
> > >       while (nr_samples--) {
> > > -             if (samples->thread == t && samples->map == m &&
> > > +             if (samples->thread == t &&
> > > +                 samples->map == m &&
> > >                   samples->sym == s)
> > >                       return 1;
> > >               samples++;
> > > @@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
> > >       evlist__delete(evlist);
> > >       reset_output_field();
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
> > > index 0bde4a768c15..4af6916491e5 100644
> > > --- a/tools/perf/tests/hists_output.c
> > > +++ b/tools/perf/tests/hists_output.c
> > > @@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> > >               }
> > >
> > >               fake_samples[i].thread = al.thread;
> > > +             map__put(fake_samples[i].map);
> > >               fake_samples[i].map = al.map;
> > >               fake_samples[i].sym = al.sym;
> > >       }
> > > @@ -113,10 +114,18 @@ static void del_hist_entries(struct hists *hists)
> > >       }
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> > >
> > >  #define COMM(he)  (thread__comm_str(he->thread))
> > > -#define DSO(he)   (he->ms.map->dso->short_name)
> > > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> > >  #define SYM(he)   (he->ms.sym->name)
> > >  #define CPU(he)   (he->cpu)
> > >  #define PID(he)   (he->thread->tid)
> > > @@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
> > >       /* tear down everything */
> > >       evlist__delete(evlist);
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
> > > index a4301fc7b770..898eda55b7a8 100644
> > > --- a/tools/perf/tests/mmap-thread-lookup.c
> > > +++ b/tools/perf/tests/mmap-thread-lookup.c
> > > @@ -202,7 +202,8 @@ static int mmap_events(synth_cb synth)
> > >                       break;
> > >               }
> > >
> > > -             pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
> > > +             pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
> > > +             map__put(al.map);
> > >       }
> > >
> > >       machine__delete_threads(machine);
> > > diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> > > index a8cfd31a3ff0..ae65b7bc9ab7 100644
> > > --- a/tools/perf/util/callchain.c
> > > +++ b/tools/perf/util/callchain.c
> > > @@ -583,7 +583,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
> > >               }
> > >               call->ip = cursor_node->ip;
> > >               call->ms = cursor_node->ms;
> > > -             map__get(call->ms.map);
> > > +             call->ms.map = map__get(call->ms.map);
> > >               call->srcline = cursor_node->srcline;
> > >
> > >               if (cursor_node->branch) {
> > > @@ -1061,7 +1061,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
> > >       node->ip = ip;
> > >       map__zput(node->ms.map);
> > >       node->ms = *ms;
> > > -     map__get(node->ms.map);
> > > +     node->ms.map = map__get(node->ms.map);
> > >       node->branch = branch;
> > >       node->nr_loop_iter = nr_loop_iter;
> > >       node->iter_cycles = iter_cycles;
> > > @@ -1109,7 +1109,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
> > >       struct machine *machine = maps__machine(node->ms.maps);
> > >
> > >       al->maps = node->ms.maps;
> > > -     al->map = node->ms.map;
> > > +     map__put(al->map);
> > > +     al->map = map__get(node->ms.map);
> > >       al->sym = node->ms.sym;
> > >       al->srcline = node->srcline;
> > >       al->addr = node->ip;
> > > @@ -1530,7 +1531,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
> > >                               goto out;
> > >                       *new = *chain;
> > >                       new->has_children = false;
> > > -                     map__get(new->ms.map);
> > > +                     new->ms.map = map__get(new->ms.map);
> > >                       list_add_tail(&new->list, &head);
> > >               }
> > >               parent = parent->parent;
> > > diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> > > index 54a1d4df5f70..266318d5d006 100644
> > > --- a/tools/perf/util/event.c
> > > +++ b/tools/perf/util/event.c
> > > @@ -484,13 +484,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
> > >       if (machine) {
> > >               struct addr_location al;
> > >
> > > -             al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> > > +             al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
> > >               if (al.map && map__load(al.map) >= 0) {
> > >                       al.addr = map__map_ip(al.map, tp->addr);
> > >                       al.sym = map__find_symbol(al.map, al.addr);
> > >                       if (al.sym)
> > >                               ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> > >               }
> > > +             map__put(al.map);
> > >       }
> > >       ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
> > >       old = true;
> > > @@ -581,6 +582,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >       al->filtered = 0;
> > >
> > >       if (machine == NULL) {
> > > +             map__put(al->map);
> > >               al->map = NULL;
> > >               return NULL;
> > >       }
> > > @@ -599,6 +601,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >               al->level = 'u';
> > >       } else {
> > >               al->level = 'H';
> > > +             map__put(al->map);
> > >               al->map = NULL;
> > >
> > >               if ((cpumode == PERF_RECORD_MISC_GUEST_USER ||
> > > @@ -613,7 +616,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >               return NULL;
> > >       }
> > >
> > > -     al->map = maps__find(maps, al->addr);
> > > +     al->map = map__get(maps__find(maps, al->addr));
> > >       if (al->map != NULL) {
> > >               /*
> > >                * Kernel maps might be changed when loading symbols so loading
> > > @@ -768,6 +771,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> > >   */
> > >  void addr_location__put(struct addr_location *al)
> > >  {
> > > +     map__zput(al->map);
> > >       thread__zput(al->thread);
> > >  }
> > >
> > > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > > index f19ac6eb4775..4dbb1dbf3679 100644
> > > --- a/tools/perf/util/hist.c
> > > +++ b/tools/perf/util/hist.c
> > > @@ -446,7 +446,7 @@ static int hist_entry__init(struct hist_entry *he,
> > >                       memset(&he->stat, 0, sizeof(he->stat));
> > >       }
> > >
> > > -     map__get(he->ms.map);
> > > +     he->ms.map = map__get(he->ms.map);
> > >
> > >       if (he->branch_info) {
> > >               /*
> > > @@ -461,13 +461,13 @@ static int hist_entry__init(struct hist_entry *he,
> > >               memcpy(he->branch_info, template->branch_info,
> > >                      sizeof(*he->branch_info));
> > >
> > > -             map__get(he->branch_info->from.ms.map);
> > > -             map__get(he->branch_info->to.ms.map);
> > > +             he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
> > > +             he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
> > >       }
> > >
> > >       if (he->mem_info) {
> > > -             map__get(he->mem_info->iaddr.ms.map);
> > > -             map__get(he->mem_info->daddr.ms.map);
> > > +             he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
> > > +             he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
> > >       }
> > >
> > >       if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
> > > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > > index 940fb2a50dfd..49e4891e92b7 100644
> > > --- a/tools/perf/util/machine.c
> > > +++ b/tools/perf/util/machine.c
> > > @@ -783,33 +783,42 @@ static int machine__process_ksymbol_register(struct machine *machine,
> > >  {
> > >       struct symbol *sym;
> > >       struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
> > > +     bool put_map = false;
> > > +     int err = 0;
> > >
> > >       if (!map) {
> > >               struct dso *dso = dso__new(event->ksymbol.name);
> > > -             int err;
> > >
> > > -             if (dso) {
> > > -                     dso->kernel = DSO_SPACE__KERNEL;
> > > -                     map = map__new2(0, dso);
> > > -                     dso__put(dso);
> > > +             if (!dso) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > >               }
> > > -
> > > -             if (!dso || !map) {
> > > -                     return -ENOMEM;
> > > +             dso->kernel = DSO_SPACE__KERNEL;
> > > +             map = map__new2(0, dso);
> > > +             dso__put(dso);
> > > +             if (!map) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > >               }
> > > -
> > > +             /*
> > > +              * The inserted map has a get on it, we need to put to release
> > > +              * the reference count here, but do it after all accesses are
> > > +              * done.
> > > +              */
> > > +             put_map = true;
> > >               if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
> > > -                     map->dso->binary_type = DSO_BINARY_TYPE__OOL;
> > > -                     map->dso->data.file_size = event->ksymbol.len;
> > > -                     dso__set_loaded(map->dso);
> > > +                     map__dso(map)->binary_type = DSO_BINARY_TYPE__OOL;
> > > +                     map__dso(map)->data.file_size = event->ksymbol.len;
> > > +                     dso__set_loaded(map__dso(map));
> > >               }
> > >
> > >               map->start = event->ksymbol.addr;
> > > -             map->end = map->start + event->ksymbol.len;
> > > +             map->end = map__start(map) + event->ksymbol.len;
> > >               err = maps__insert(machine__kernel_maps(machine), map);
> > > -             map__put(map);
> > > -             if (err)
> > > -                     return err;
> > > +             if (err) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > > +             }
> > >
> > >               dso__set_loaded(dso);
> > >
> > > @@ -819,13 +828,18 @@ static int machine__process_ksymbol_register(struct machine *machine,
> > >               }
> > >       }
> > >
> > > -     sym = symbol__new(map->map_ip(map, map->start),
> > > +     sym = symbol__new(map__map_ip(map, map__start(map)),
> > >                         event->ksymbol.len,
> > >                         0, 0, event->ksymbol.name);
> > > -     if (!sym)
> > > -             return -ENOMEM;
> > > -     dso__insert_symbol(map->dso, sym);
> > > -     return 0;
> > > +     if (!sym) {
> > > +             err = -ENOMEM;
> > > +             goto out;
> > > +     }
> > > +     dso__insert_symbol(map__dso(map), sym);
> > > +out:
> > > +     if (put_map)
> > > +             map__put(map);
> > > +     return err;
> > >  }
> > >
> > >  static int machine__process_ksymbol_unregister(struct machine *machine,
> > > @@ -925,14 +939,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> > >               goto out;
> > >
> > >       err = maps__insert(machine__kernel_maps(machine), map);
> > > -
> > > -     /* Put the map here because maps__insert already got it */
> > > -     map__put(map);
> > > -
> > >       /* If maps__insert failed, return NULL. */
> > > -     if (err)
> > > +     if (err) {
> > > +             map__put(map);
> > >               map = NULL;
> > > -
> > > +     }
> > >  out:
> > >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> > >       dso__put(dso);
> > > @@ -1228,6 +1239,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> > >       /* In case of renewal the kernel map, destroy previous one */
> > >       machine__destroy_kernel_maps(machine);
> > >
> > > +     map__put(machine->vmlinux_map);
> > >       machine->vmlinux_map = map__new2(0, kernel);
> > >       if (machine->vmlinux_map == NULL)
> > >               return -ENOMEM;
> > > @@ -1513,6 +1525,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
> > >       map->end = start + size;
> > >
> > >       dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> > > +     map__put(map);
> > >       return 0;
> > >  }
> > >
> > > @@ -1558,16 +1571,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
> > >  static int machine__update_kernel_mmap(struct machine *machine,
> > >                                    u64 start, u64 end)
> > >  {
> > > -     struct map *map = machine__kernel_map(machine);
> > > +     struct map *orig, *updated;
> > >       int err;
> > >
> > > -     map__get(map);
> > > -     maps__remove(machine__kernel_maps(machine), map);
> > > +     orig = machine->vmlinux_map;
> > > +     updated = map__get(orig);
> > >
> > > +     machine->vmlinux_map = updated;
> > >       machine__set_kernel_mmap(machine, start, end);
> > > +     maps__remove(machine__kernel_maps(machine), orig);
> > > +     err = maps__insert(machine__kernel_maps(machine), updated);
> > > +     map__put(orig);
> > >
> > > -     err = maps__insert(machine__kernel_maps(machine), map);
> > > -     map__put(map);
> > >       return err;
> > >  }
> > >
> > > @@ -2246,6 +2261,7 @@ static int add_callchain_ip(struct thread *thread,
> > >       err = callchain_cursor_append(cursor, ip, &ms,
> > >                                     branch, flags, nr_loop_iter,
> > >                                     iter_cycles, branch_from, srcline);
> > > +     map__put(al.map);
> > >       return err;
> > >  }
> > >
> > > --
> > > 2.35.1.265.g69c8d7142f-goog
> > >
> >
> >
> > --
> > Masami Hiramatsu <mhiramat@kernel.org>


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 17/22] perf map: Changes to reference counting
  2022-02-12 20:48     ` Ian Rogers
  2022-02-14  2:00       ` Masami Hiramatsu
@ 2022-02-14 18:56       ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-14 18:56 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Masami Hiramatsu, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, André Almeida, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Steven Rostedt, Miaoqian Lin,
	Stephen Brennan, Kajol Jain, Alexey Bayduraev, German Gomez,
	linux-perf-users, linux-kernel, Eric Dumazet, Dmitry Vyukov,
	Hao Luo, eranian

On Sat, Feb 12, 2022 at 12:48:45PM -0800, Ian Rogers wrote:
> On Sat, Feb 12, 2022 at 12:46 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > Hi Ian,
> >
> > On Fri, 11 Feb 2022 02:34:10 -0800
> > Ian Rogers <irogers@google.com> wrote:
> >
> > > When a pointer to a map exists do a get, when that pointer is
> > > overwritten or freed, put the map. This avoids issues with gets and
> > > puts being inconsistently used causing, use after puts, etc. Reference
> > > count checking and address sanitizer were used to identify issues.
> >
> > OK, and please add comments in the code what should be actually done
> > so the others can understand it correctly, since this changes the
> > map object handling model.
> >
> > Previously;
> >
> >   map__get(map);
> >   map_operations(map);
> >   map__put(map);
> >
> > Now, we have to use the object returned from get() ops.
> > This is more likely to the memdup()/free().
> >
> >   new = map__get(map);
> >   map_operations(new);
> >   map__put(new);
> >
> > To update the object in the other object (e.g. machine__update_kernel_mmap())
> > The original one must be put because it has the old copy.
> >
> > Previous;
> >
> >   map__get(parent_obj->map);
> >   update_operation(parent_obj->map);
> >   map__put(parent_obj->map;
> >
> > Is now;
> >
> >   orig = parent_obj->map;
> >   new = map__get(orig);
> >   update_operation(new);
> >   parent_obj->map = new;
> >   map__put(orig);
> 
> Hi Masami,
> 
> Thanks as always for the input! This is a top post and so I lack the
> context for what you are describing. The model should always be get,
> operation, put, but the map code does not follow this model.

It should.

> I suspect the map code at some point in time didn't have reference
> counts,

That is right.

> someone added them for one use case and then didn't add gets and puts
> elsewhere.

That is the case, but then the missing gets and puts are bugs and should
be fixed as we find problems or do audits, like you're doing and Masami
did once.

> Because a crash is worse than a memory leak extra gets were
> added, or puts missed, and the code is pretty much spaghetti today.

That is not the kind of pasta I like, indeed :-\

> We now need to be able to pair gets with puts and hence the model that
> this change introduces.

> There are cases where the new code needs to distinguish between a
> reference that is put and a reference that will be kept alive and say
> returned, this is fiddly and adds extra state - this seems to be what
> you're describing. I don't see how having a concept of a token is
> clearing this matter up. We have one pointer that can't be used after
> a function has consumed it, we introduce another pointer via a get so
> that we can keep the value for the sake of future use. In C++ we'd
> have two smart pointers for this case.
 
> > I think this change also should be documented with some concrete example
> > patterns so that someone can program it correctly. :-)
> 
> So the example change was the cpumap one (i.e. patch set v1). It is a
> literal patch set and so I'm not sure why it isn't a concrete example.
> In terms of motivation, the map code is about the worst thing that
> could try to be fixed and that's why I tackled it. It'd have been much
> easier to implement the approach in cpumap and ask someone else to do
> the map case :-) The problem I face is that in introducing the
> refactors and the approach, perf now crashes because of the technique
> working.

And those are the things I want to cherry pick, i.e. the fixes, while
the mechanism for helping to find the problems gets discussed and
polished so that it doesn't pollute the source code too much.

> That's why I've also had to fix the bugs in map the approach
> identifies - the code passes most of perf test enabled and with
> sanitizers.  There are more issues. To be specific on one example,
> addr_location references a map but lacks any kind of init/exit
> protocol. The exit should put any map the addr_location has. There are
> 191 uses of addr_location that need to be looked at and a proper
> get/put, init/exit approach introduced. I've fixed the ones that are
> crashes and show stoppers.

Cool, let's try and land those first.

- Arnaldo

> With this approach enabled we can catch more and avoid human error.
 
> Thanks,
> Ian
> 
> > (This is the reason why I asked you to introduce object-token instead
> >  of modifying object pointer itself.)
> >
> > Thank you,
> >
> > >
> > > Signed-off-by: Ian Rogers <irogers@google.com>
> > > ---
> > >  tools/perf/tests/hists_cumulate.c     | 14 ++++-
> > >  tools/perf/tests/hists_filter.c       | 14 ++++-
> > >  tools/perf/tests/hists_link.c         | 18 +++++-
> > >  tools/perf/tests/hists_output.c       | 12 +++-
> > >  tools/perf/tests/mmap-thread-lookup.c |  3 +-
> > >  tools/perf/util/callchain.c           |  9 +--
> > >  tools/perf/util/event.c               |  8 ++-
> > >  tools/perf/util/hist.c                | 10 ++--
> > >  tools/perf/util/machine.c             | 80 ++++++++++++++++-----------
> > >  9 files changed, 118 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
> > > index 17f4fcd6bdce..28f5eb41eed9 100644
> > > --- a/tools/perf/tests/hists_cumulate.c
> > > +++ b/tools/perf/tests/hists_cumulate.c
> > > @@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> > >               }
> > >
> > >               fake_samples[i].thread = al.thread;
> > > +             map__put(fake_samples[i].map);
> > >               fake_samples[i].map = al.map;
> > >               fake_samples[i].sym = al.sym;
> > >       }
> > > @@ -147,15 +148,23 @@ static void del_hist_entries(struct hists *hists)
> > >       }
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> > >
> > >  #define COMM(he)  (thread__comm_str(he->thread))
> > > -#define DSO(he)   (he->ms.map->dso->short_name)
> > > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> > >  #define SYM(he)   (he->ms.sym->name)
> > >  #define CPU(he)   (he->cpu)
> > >  #define PID(he)   (he->thread->tid)
> > >  #define DEPTH(he) (he->callchain->max_depth)
> > > -#define CDSO(cl)  (cl->ms.map->dso->short_name)
> > > +#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
> > >  #define CSYM(cl)  (cl->ms.sym->name)
> > >
> > >  struct result {
> > > @@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
> > >       /* tear down everything */
> > >       evlist__delete(evlist);
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
> > > index 08cbeb9e39ae..bcd46244182a 100644
> > > --- a/tools/perf/tests/hists_filter.c
> > > +++ b/tools/perf/tests/hists_filter.c
> > > @@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
> > >                       }
> > >
> > >                       fake_samples[i].thread = al.thread;
> > > +                     map__put(fake_samples[i].map);
> > >                       fake_samples[i].map = al.map;
> > >                       fake_samples[i].sym = al.sym;
> > >               }
> > > @@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
> > >       return TEST_FAIL;
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
> > >  {
> > >       int err = TEST_FAIL;
> > > @@ -194,7 +203,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >               hists__filter_by_thread(hists);
> > >
> > >               /* now applying dso filter for 'kernel' */
> > > -             hists->dso_filter = fake_samples[0].map->dso;
> > > +             hists->dso_filter = map__dso(fake_samples[0].map);
> > >               hists__filter_by_dso(hists);
> > >
> > >               if (verbose > 2) {
> > > @@ -288,7 +297,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >
> > >               /* now applying all filters at once. */
> > >               hists->thread_filter = fake_samples[1].thread;
> > > -             hists->dso_filter = fake_samples[1].map->dso;
> > > +             hists->dso_filter = map__dso(fake_samples[1].map);
> > >               hists__filter_by_thread(hists);
> > >               hists__filter_by_dso(hists);
> > >
> > > @@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
> > >       evlist__delete(evlist);
> > >       reset_output_field();
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
> > > index c575e13a850d..060e8731feff 100644
> > > --- a/tools/perf/tests/hists_link.c
> > > +++ b/tools/perf/tests/hists_link.c
> > > @@ -6,6 +6,7 @@
> > >  #include "evsel.h"
> > >  #include "evlist.h"
> > >  #include "machine.h"
> > > +#include "map.h"
> > >  #include "parse-events.h"
> > >  #include "hists_common.h"
> > >  #include "util/mmap.h"
> > > @@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> > >                       }
> > >
> > >                       fake_common_samples[k].thread = al.thread;
> > > +                     map__put(fake_common_samples[k].map);
> > >                       fake_common_samples[k].map = al.map;
> > >                       fake_common_samples[k].sym = al.sym;
> > >               }
> > > @@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
> > >       return -1;
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i, j;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
> > > +             map__put(fake_common_samples[i].map);
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
> > > +             for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
> > > +                     map__put(fake_samples[i][j].map);
> > > +     }
> > > +}
> > > +
> > >  static int find_sample(struct sample *samples, size_t nr_samples,
> > >                      struct thread *t, struct map *m, struct symbol *s)
> > >  {
> > >       while (nr_samples--) {
> > > -             if (samples->thread == t && samples->map == m &&
> > > +             if (samples->thread == t &&
> > > +                 samples->map == m &&
> > >                   samples->sym == s)
> > >                       return 1;
> > >               samples++;
> > > @@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
> > >       evlist__delete(evlist);
> > >       reset_output_field();
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
> > > index 0bde4a768c15..4af6916491e5 100644
> > > --- a/tools/perf/tests/hists_output.c
> > > +++ b/tools/perf/tests/hists_output.c
> > > @@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
> > >               }
> > >
> > >               fake_samples[i].thread = al.thread;
> > > +             map__put(fake_samples[i].map);
> > >               fake_samples[i].map = al.map;
> > >               fake_samples[i].sym = al.sym;
> > >       }
> > > @@ -113,10 +114,18 @@ static void del_hist_entries(struct hists *hists)
> > >       }
> > >  }
> > >
> > > +static void put_fake_samples(void)
> > > +{
> > > +     size_t i;
> > > +
> > > +     for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
> > > +             map__put(fake_samples[i].map);
> > > +}
> > > +
> > >  typedef int (*test_fn_t)(struct evsel *, struct machine *);
> > >
> > >  #define COMM(he)  (thread__comm_str(he->thread))
> > > -#define DSO(he)   (he->ms.map->dso->short_name)
> > > +#define DSO(he)   (map__dso(he->ms.map)->short_name)
> > >  #define SYM(he)   (he->ms.sym->name)
> > >  #define CPU(he)   (he->cpu)
> > >  #define PID(he)   (he->thread->tid)
> > > @@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
> > >       /* tear down everything */
> > >       evlist__delete(evlist);
> > >       machines__exit(&machines);
> > > +     put_fake_samples();
> > >
> > >       return err;
> > >  }
> > > diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
> > > index a4301fc7b770..898eda55b7a8 100644
> > > --- a/tools/perf/tests/mmap-thread-lookup.c
> > > +++ b/tools/perf/tests/mmap-thread-lookup.c
> > > @@ -202,7 +202,8 @@ static int mmap_events(synth_cb synth)
> > >                       break;
> > >               }
> > >
> > > -             pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
> > > +             pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
> > > +             map__put(al.map);
> > >       }
> > >
> > >       machine__delete_threads(machine);
> > > diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
> > > index a8cfd31a3ff0..ae65b7bc9ab7 100644
> > > --- a/tools/perf/util/callchain.c
> > > +++ b/tools/perf/util/callchain.c
> > > @@ -583,7 +583,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
> > >               }
> > >               call->ip = cursor_node->ip;
> > >               call->ms = cursor_node->ms;
> > > -             map__get(call->ms.map);
> > > +             call->ms.map = map__get(call->ms.map);
> > >               call->srcline = cursor_node->srcline;
> > >
> > >               if (cursor_node->branch) {
> > > @@ -1061,7 +1061,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
> > >       node->ip = ip;
> > >       map__zput(node->ms.map);
> > >       node->ms = *ms;
> > > -     map__get(node->ms.map);
> > > +     node->ms.map = map__get(node->ms.map);
> > >       node->branch = branch;
> > >       node->nr_loop_iter = nr_loop_iter;
> > >       node->iter_cycles = iter_cycles;
> > > @@ -1109,7 +1109,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
> > >       struct machine *machine = maps__machine(node->ms.maps);
> > >
> > >       al->maps = node->ms.maps;
> > > -     al->map = node->ms.map;
> > > +     map__put(al->map);
> > > +     al->map = map__get(node->ms.map);
> > >       al->sym = node->ms.sym;
> > >       al->srcline = node->srcline;
> > >       al->addr = node->ip;
> > > @@ -1530,7 +1531,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
> > >                               goto out;
> > >                       *new = *chain;
> > >                       new->has_children = false;
> > > -                     map__get(new->ms.map);
> > > +                     new->ms.map = map__get(new->ms.map);
> > >                       list_add_tail(&new->list, &head);
> > >               }
> > >               parent = parent->parent;
> > > diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> > > index 54a1d4df5f70..266318d5d006 100644
> > > --- a/tools/perf/util/event.c
> > > +++ b/tools/perf/util/event.c
> > > @@ -484,13 +484,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
> > >       if (machine) {
> > >               struct addr_location al;
> > >
> > > -             al.map = maps__find(machine__kernel_maps(machine), tp->addr);
> > > +             al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
> > >               if (al.map && map__load(al.map) >= 0) {
> > >                       al.addr = map__map_ip(al.map, tp->addr);
> > >                       al.sym = map__find_symbol(al.map, al.addr);
> > >                       if (al.sym)
> > >                               ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
> > >               }
> > > +             map__put(al.map);
> > >       }
> > >       ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
> > >       old = true;
> > > @@ -581,6 +582,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >       al->filtered = 0;
> > >
> > >       if (machine == NULL) {
> > > +             map__put(al->map);
> > >               al->map = NULL;
> > >               return NULL;
> > >       }
> > > @@ -599,6 +601,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >               al->level = 'u';
> > >       } else {
> > >               al->level = 'H';
> > > +             map__put(al->map);
> > >               al->map = NULL;
> > >
> > >               if ((cpumode == PERF_RECORD_MISC_GUEST_USER ||
> > > @@ -613,7 +616,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
> > >               return NULL;
> > >       }
> > >
> > > -     al->map = maps__find(maps, al->addr);
> > > +     al->map = map__get(maps__find(maps, al->addr));
> > >       if (al->map != NULL) {
> > >               /*
> > >                * Kernel maps might be changed when loading symbols so loading
> > > @@ -768,6 +771,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
> > >   */
> > >  void addr_location__put(struct addr_location *al)
> > >  {
> > > +     map__zput(al->map);
> > >       thread__zput(al->thread);
> > >  }
> > >
> > > diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
> > > index f19ac6eb4775..4dbb1dbf3679 100644
> > > --- a/tools/perf/util/hist.c
> > > +++ b/tools/perf/util/hist.c
> > > @@ -446,7 +446,7 @@ static int hist_entry__init(struct hist_entry *he,
> > >                       memset(&he->stat, 0, sizeof(he->stat));
> > >       }
> > >
> > > -     map__get(he->ms.map);
> > > +     he->ms.map = map__get(he->ms.map);
> > >
> > >       if (he->branch_info) {
> > >               /*
> > > @@ -461,13 +461,13 @@ static int hist_entry__init(struct hist_entry *he,
> > >               memcpy(he->branch_info, template->branch_info,
> > >                      sizeof(*he->branch_info));
> > >
> > > -             map__get(he->branch_info->from.ms.map);
> > > -             map__get(he->branch_info->to.ms.map);
> > > +             he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
> > > +             he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
> > >       }
> > >
> > >       if (he->mem_info) {
> > > -             map__get(he->mem_info->iaddr.ms.map);
> > > -             map__get(he->mem_info->daddr.ms.map);
> > > +             he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
> > > +             he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
> > >       }
> > >
> > >       if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
> > > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > > index 940fb2a50dfd..49e4891e92b7 100644
> > > --- a/tools/perf/util/machine.c
> > > +++ b/tools/perf/util/machine.c
> > > @@ -783,33 +783,42 @@ static int machine__process_ksymbol_register(struct machine *machine,
> > >  {
> > >       struct symbol *sym;
> > >       struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
> > > +     bool put_map = false;
> > > +     int err = 0;
> > >
> > >       if (!map) {
> > >               struct dso *dso = dso__new(event->ksymbol.name);
> > > -             int err;
> > >
> > > -             if (dso) {
> > > -                     dso->kernel = DSO_SPACE__KERNEL;
> > > -                     map = map__new2(0, dso);
> > > -                     dso__put(dso);
> > > +             if (!dso) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > >               }
> > > -
> > > -             if (!dso || !map) {
> > > -                     return -ENOMEM;
> > > +             dso->kernel = DSO_SPACE__KERNEL;
> > > +             map = map__new2(0, dso);
> > > +             dso__put(dso);
> > > +             if (!map) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > >               }
> > > -
> > > +             /*
> > > +              * The inserted map has a get on it, we need to put to release
> > > +              * the reference count here, but do it after all accesses are
> > > +              * done.
> > > +              */
> > > +             put_map = true;
> > >               if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
> > > -                     map->dso->binary_type = DSO_BINARY_TYPE__OOL;
> > > -                     map->dso->data.file_size = event->ksymbol.len;
> > > -                     dso__set_loaded(map->dso);
> > > +                     map__dso(map)->binary_type = DSO_BINARY_TYPE__OOL;
> > > +                     map__dso(map)->data.file_size = event->ksymbol.len;
> > > +                     dso__set_loaded(map__dso(map));
> > >               }
> > >
> > >               map->start = event->ksymbol.addr;
> > > -             map->end = map->start + event->ksymbol.len;
> > > +             map->end = map__start(map) + event->ksymbol.len;
> > >               err = maps__insert(machine__kernel_maps(machine), map);
> > > -             map__put(map);
> > > -             if (err)
> > > -                     return err;
> > > +             if (err) {
> > > +                     err = -ENOMEM;
> > > +                     goto out;
> > > +             }
> > >
> > >               dso__set_loaded(dso);
> > >
> > > @@ -819,13 +828,18 @@ static int machine__process_ksymbol_register(struct machine *machine,
> > >               }
> > >       }
> > >
> > > -     sym = symbol__new(map->map_ip(map, map->start),
> > > +     sym = symbol__new(map__map_ip(map, map__start(map)),
> > >                         event->ksymbol.len,
> > >                         0, 0, event->ksymbol.name);
> > > -     if (!sym)
> > > -             return -ENOMEM;
> > > -     dso__insert_symbol(map->dso, sym);
> > > -     return 0;
> > > +     if (!sym) {
> > > +             err = -ENOMEM;
> > > +             goto out;
> > > +     }
> > > +     dso__insert_symbol(map__dso(map), sym);
> > > +out:
> > > +     if (put_map)
> > > +             map__put(map);
> > > +     return err;
> > >  }
> > >
> > >  static int machine__process_ksymbol_unregister(struct machine *machine,
> > > @@ -925,14 +939,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> > >               goto out;
> > >
> > >       err = maps__insert(machine__kernel_maps(machine), map);
> > > -
> > > -     /* Put the map here because maps__insert already got it */
> > > -     map__put(map);
> > > -
> > >       /* If maps__insert failed, return NULL. */
> > > -     if (err)
> > > +     if (err) {
> > > +             map__put(map);
> > >               map = NULL;
> > > -
> > > +     }
> > >  out:
> > >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> > >       dso__put(dso);
> > > @@ -1228,6 +1239,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> > >       /* In case of renewal the kernel map, destroy previous one */
> > >       machine__destroy_kernel_maps(machine);
> > >
> > > +     map__put(machine->vmlinux_map);
> > >       machine->vmlinux_map = map__new2(0, kernel);
> > >       if (machine->vmlinux_map == NULL)
> > >               return -ENOMEM;
> > > @@ -1513,6 +1525,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
> > >       map->end = start + size;
> > >
> > >       dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
> > > +     map__put(map);
> > >       return 0;
> > >  }
> > >
> > > @@ -1558,16 +1571,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
> > >  static int machine__update_kernel_mmap(struct machine *machine,
> > >                                    u64 start, u64 end)
> > >  {
> > > -     struct map *map = machine__kernel_map(machine);
> > > +     struct map *orig, *updated;
> > >       int err;
> > >
> > > -     map__get(map);
> > > -     maps__remove(machine__kernel_maps(machine), map);
> > > +     orig = machine->vmlinux_map;
> > > +     updated = map__get(orig);
> > >
> > > +     machine->vmlinux_map = updated;
> > >       machine__set_kernel_mmap(machine, start, end);
> > > +     maps__remove(machine__kernel_maps(machine), orig);
> > > +     err = maps__insert(machine__kernel_maps(machine), updated);
> > > +     map__put(orig);
> > >
> > > -     err = maps__insert(machine__kernel_maps(machine), map);
> > > -     map__put(map);
> > >       return err;
> > >  }
> > >
> > > @@ -2246,6 +2261,7 @@ static int add_callchain_ip(struct thread *thread,
> > >       err = callchain_cursor_append(cursor, ip, &ms,
> > >                                     branch, flags, nr_loop_iter,
> > >                                     iter_cycles, branch_from, srcline);
> > > +     map__put(al.map);
> > >       return err;
> > >  }
> > >
> > > --
> > > 2.35.1.265.g69c8d7142f-goog
> > >
> >
> >
> > --
> > Masami Hiramatsu <mhiramat@kernel.org>

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 05/22] perf maps: Use a pointer for kmaps
  2022-02-11 17:23   ` Arnaldo Carvalho de Melo
@ 2022-02-14 19:45     ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-14 19:45 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:23:03PM -0300, Arnaldo Carvalho de Melo wrote:
> On Fri, Feb 11, 2022 at 02:33:58AM -0800, Ian Rogers wrote:
> > struct maps is reference counted, using a pointer is more idiomatic.
> 
> So, I tried to apply this after adding this to the cset commit log to
> make sure reviewers know that this is just a clarifying comment, no code
> change:
> 
> Committer notes:
> 
> Definition of machine__kernel_maps(machine), the replacement of &machine->kmaps
> 
> static inline
> struct maps *machine__kernel_maps(struct machine *machine)
> {
>         return machine->kmaps;
> }
> 
> but then when building on a f34 system I got:
> 
>   CC      /tmp/build/perf/bench/inject-buildid.o
> In file included from /var/home/acme/git/perf/tools/perf/util/build-id.h:10,
>                  from /var/home/acme/git/perf/tools/perf/util/dso.h:13,
>                  from tests/vmlinux-kallsyms.c:8:
> In function ‘machine__kernel_maps’,
>     inlined from ‘test__vmlinux_matches_kallsyms’ at tests/vmlinux-kallsyms.c:122:22:
> /var/home/acme/git/perf/tools/perf/util/machine.h:86:23: error: ‘vmlinux.kmaps’ is used uninitialized [-Werror=uninitialized]
>    86 |         return machine->kmaps;
>       |                ~~~~~~~^~~~~~~
> tests/vmlinux-kallsyms.c: In function ‘test__vmlinux_matches_kallsyms’:
> tests/vmlinux-kallsyms.c:121:34: note: ‘vmlinux’ declared here
>   121 |         struct machine kallsyms, vmlinux;
>       |                                  ^~~~~~~
> cc1: all warnings being treated as errors
> make[4]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:96: /tmp/build/perf/tests/vmlinux-kallsyms.o] Error 1
> make[4]: *** Waiting for unfinished jobs....
>   CC      /tmp/build/perf/util/config.o
>   CC      /tmp/build/perf/arch/x86/util/archinsn.o
>   CC      /tmp/build/perf/arch/x86/util/intel-pt.o
>   CC      /tmp/build/perf/arch/x86/util/intel-bts.o
>   CC      /tmp/build/perf/util/db-export.o
>   CC      /tmp/build/perf/util/event.o
> make[3]: *** [/var/home/acme/git/perf/tools/build/Makefile.build:139: tests] Error 2
> make[3]: *** Waiting for unfinished jobs....
> 
> Can you please  take a look at that?

I'm applying this on top:

diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 84bf5f64006560f5..93dee542a177ed1d 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -119,7 +119,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	struct symbol *sym;
 	struct map *kallsyms_map, *vmlinux_map, *map;
 	struct machine kallsyms, vmlinux;
-	struct maps *maps = machine__kernel_maps(&vmlinux);
+	struct maps *maps;
 	u64 mem_start, mem_end;
 	bool header_printed;
 
@@ -132,6 +132,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	machine__init(&kallsyms, "", HOST_KERNEL_ID);
 	machine__init(&vmlinux, "", HOST_KERNEL_ID);
 
+	maps = machine__kernel_maps(&vmlinux);
+
 	/*
 	 * Step 2:
 	 *

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 06/22] perf test: Use pointer for maps
  2022-02-11 10:33 ` [PATCH v3 06/22] perf test: Use pointer for maps Ian Rogers
  2022-02-11 17:24   ` Arnaldo Carvalho de Melo
@ 2022-02-14 19:48   ` Arnaldo Carvalho de Melo
  2022-02-14 19:50     ` Arnaldo Carvalho de Melo
  1 sibling, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-14 19:48 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:33:59AM -0800, Ian Rogers wrote:
> struct maps is reference counted, using a pointer is more idiomatic.
> 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/tests/maps.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
> index e308a3296cef..6f53f17f788e 100644
> --- a/tools/perf/tests/maps.c
> +++ b/tools/perf/tests/maps.c
> @@ -35,7 +35,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
>  
>  static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
>  {
> -	struct maps maps;
> +	struct maps *maps;
>  	unsigned int i;
>  	struct map_def bpf_progs[] = {
>  		{ "bpf_prog_1", 200, 300 },
> @@ -64,7 +64,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  	struct map *map_kcore1, *map_kcore2, *map_kcore3;
>  	int ret;
>  
> -	maps__init(&maps, NULL);
> +	maps = maps__new(NULL);

Now that it is dynamically allocated we need to check the constructor
result, I'm fixing this up.

- Arnaldo
  
>  	for (i = 0; i < ARRAY_SIZE(bpf_progs); i++) {
>  		struct map *map;
> @@ -74,7 +74,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  
>  		map->start = bpf_progs[i].start;
>  		map->end   = bpf_progs[i].end;
> -		maps__insert(&maps, map);
> +		maps__insert(maps, map);
>  		map__put(map);
>  	}
>  
> @@ -99,25 +99,25 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  	map_kcore3->start = 880;
>  	map_kcore3->end   = 1100;
>  
> -	ret = maps__merge_in(&maps, map_kcore1);
> +	ret = maps__merge_in(maps, map_kcore1);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
> +	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	ret = maps__merge_in(&maps, map_kcore2);
> +	ret = maps__merge_in(maps, map_kcore2);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged12, ARRAY_SIZE(merged12), &maps);
> +	ret = check_maps(merged12, ARRAY_SIZE(merged12), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	ret = maps__merge_in(&maps, map_kcore3);
> +	ret = maps__merge_in(maps, map_kcore3);
>  	TEST_ASSERT_VAL("failed to merge map", !ret);
>  
> -	ret = check_maps(merged3, ARRAY_SIZE(merged3), &maps);
> +	ret = check_maps(merged3, ARRAY_SIZE(merged3), maps);
>  	TEST_ASSERT_VAL("merge check failed", !ret);
>  
> -	maps__exit(&maps);
> +	maps__delete(maps);
>  	return TEST_OK;
>  }
>  
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 06/22] perf test: Use pointer for maps
  2022-02-14 19:48   ` Arnaldo Carvalho de Melo
@ 2022-02-14 19:50     ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-14 19:50 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Mon, Feb 14, 2022 at 04:48:35PM -0300, Arnaldo Carvalho de Melo wrote:
> On Fri, Feb 11, 2022 at 02:33:59AM -0800, Ian Rogers wrote:
> > struct maps is reference counted, using a pointer is more idiomatic.
> > 
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/tests/maps.c | 20 ++++++++++----------
> >  1 file changed, 10 insertions(+), 10 deletions(-)
> > 
> > diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
> > index e308a3296cef..6f53f17f788e 100644
> > --- a/tools/perf/tests/maps.c
> > +++ b/tools/perf/tests/maps.c
> > @@ -35,7 +35,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
> >  
> >  static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
> >  {
> > -	struct maps maps;
> > +	struct maps *maps;
> >  	unsigned int i;
> >  	struct map_def bpf_progs[] = {
> >  		{ "bpf_prog_1", 200, 300 },
> > @@ -64,7 +64,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
> >  	struct map *map_kcore1, *map_kcore2, *map_kcore3;
> >  	int ret;
> >  
> > -	maps__init(&maps, NULL);
> > +	maps = maps__new(NULL);
> 
> Now that it is dynamically allocated we need to check the constructor
> result, I'm fixing this up.

I.e. added this:

diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index 6f53f17f788e7dd7..a69988a89d265211 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -35,7 +35,6 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 
 static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
 {
-	struct maps *maps;
 	unsigned int i;
 	struct map_def bpf_progs[] = {
 		{ "bpf_prog_1", 200, 300 },
@@ -63,8 +62,9 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 	};
 	struct map *map_kcore1, *map_kcore2, *map_kcore3;
 	int ret;
+	struct maps *maps = maps__new(NULL);
 
-	maps = maps__new(NULL);
+	TEST_ASSERT_VAL("failed to create maps", maps);
 
 	for (i = 0; i < ARRAY_SIZE(bpf_progs); i++) {
 		struct map *map;

^ permalink raw reply related	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 08/22] perf maps: Move maps code to own C file
  2022-02-11 10:34 ` [PATCH v3 08/22] perf maps: Move maps code to own C file Ian Rogers
  2022-02-11 17:27   ` Arnaldo Carvalho de Melo
@ 2022-02-14 19:58   ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-14 19:58 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:01AM -0800, Ian Rogers wrote:
> The maps code has its own header, move the corresponding C function
> definitions to their own C file. In the process tidy and minimize
> includes.

You removed the 'static' in front of maps__init() and maps__exit() that
you had just added in the previous patch:

  CC      /tmp/build/perf/util/auxtrace.o
util/maps.c:15:6: error: no previous prototype for ‘maps__init’ [-Werror=missing-prototypes]
   15 | void maps__init(struct maps *maps, struct machine *machine)
      |      ^~~~~~~~~~
util/maps.c:104:6: error: no previous prototype for ‘maps__exit’ [-Werror=missing-prototypes]
  104 | void maps__exit(struct maps *maps)
      |      ^~~~~~~~~~
cc1: all warnings being treated as errors
  CC      /tmp/build/perf/util/scripting-engines/trace-event-perl.o

I'm fixing this up.
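
A minimal sketch of one such fixup, assuming the two functions are meant
to stay non-static and only need prototypes next to the other maps__*()
declarations in tools/perf/util/maps.h (exact placement is illustrative):

	void maps__init(struct maps *maps, struct machine *machine);
	void maps__exit(struct maps *maps);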
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/util/Build  |   1 +
>  tools/perf/util/map.c  | 417 +----------------------------------------
>  tools/perf/util/map.h  |   2 +
>  tools/perf/util/maps.c | 403 +++++++++++++++++++++++++++++++++++++++
>  4 files changed, 414 insertions(+), 409 deletions(-)
>  create mode 100644 tools/perf/util/maps.c
> 
> diff --git a/tools/perf/util/Build b/tools/perf/util/Build
> index 2a403cefcaf2..9a7209a99e16 100644
> --- a/tools/perf/util/Build
> +++ b/tools/perf/util/Build
> @@ -56,6 +56,7 @@ perf-y += debug.o
>  perf-y += fncache.o
>  perf-y += machine.o
>  perf-y += map.o
> +perf-y += maps.o
>  perf-y += pstack.o
>  perf-y += session.o
>  perf-y += sample-raw.o
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 4d1de363c19a..2cfe5744b86c 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -1,31 +1,20 @@
>  // SPDX-License-Identifier: GPL-2.0
> -#include "symbol.h"
> -#include <assert.h>
> -#include <errno.h>
>  #include <inttypes.h>
>  #include <limits.h>
> +#include <stdio.h>
>  #include <stdlib.h>
>  #include <string.h>
> -#include <stdio.h>
> -#include <unistd.h>
> +#include <linux/string.h>
> +#include <linux/zalloc.h>
>  #include <uapi/linux/mman.h> /* To get things like MAP_HUGETLB even on older libc headers */
> +#include "debug.h"
>  #include "dso.h"
>  #include "map.h"
> -#include "map_symbol.h"
> +#include "namespaces.h"
> +#include "srcline.h"
> +#include "symbol.h"
>  #include "thread.h"
>  #include "vdso.h"
> -#include "build-id.h"
> -#include "debug.h"
> -#include "machine.h"
> -#include <linux/string.h>
> -#include <linux/zalloc.h>
> -#include "srcline.h"
> -#include "namespaces.h"
> -#include "unwind.h"
> -#include "srccode.h"
> -#include "ui/ui.h"
> -
> -static void __maps__insert(struct maps *maps, struct map *map);
>  
>  static inline int is_android_lib(const char *filename)
>  {
> @@ -527,403 +516,13 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
>  	return ip + map->reloc;
>  }
>  
> -static void maps__init(struct maps *maps, struct machine *machine)
> -{
> -	maps->entries = RB_ROOT;
> -	init_rwsem(&maps->lock);
> -	maps->machine = machine;
> -	maps->last_search_by_name = NULL;
> -	maps->nr_maps = 0;
> -	maps->maps_by_name = NULL;
> -	refcount_set(&maps->refcnt, 1);
> -}
> -
> -static void __maps__free_maps_by_name(struct maps *maps)
> -{
> -	/*
> -	 * Free everything to try to do it from the rbtree in the next search
> -	 */
> -	zfree(&maps->maps_by_name);
> -	maps->nr_maps_allocated = 0;
> -}
> -
> -void maps__insert(struct maps *maps, struct map *map)
> -{
> -	down_write(&maps->lock);
> -	__maps__insert(maps, map);
> -	++maps->nr_maps;
> -
> -	if (map->dso && map->dso->kernel) {
> -		struct kmap *kmap = map__kmap(map);
> -
> -		if (kmap)
> -			kmap->kmaps = maps;
> -		else
> -			pr_err("Internal error: kernel dso with non kernel map\n");
> -	}
> -
> -
> -	/*
> -	 * If we already performed some search by name, then we need to add the just
> -	 * inserted map and resort.
> -	 */
> -	if (maps->maps_by_name) {
> -		if (maps->nr_maps > maps->nr_maps_allocated) {
> -			int nr_allocate = maps->nr_maps * 2;
> -			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
> -
> -			if (maps_by_name == NULL) {
> -				__maps__free_maps_by_name(maps);
> -				up_write(&maps->lock);
> -				return;
> -			}
> -
> -			maps->maps_by_name = maps_by_name;
> -			maps->nr_maps_allocated = nr_allocate;
> -		}
> -		maps->maps_by_name[maps->nr_maps - 1] = map;
> -		__maps__sort_by_name(maps);
> -	}
> -	up_write(&maps->lock);
> -}
> -
> -static void __maps__remove(struct maps *maps, struct map *map)
> -{
> -	rb_erase_init(&map->rb_node, &maps->entries);
> -	map__put(map);
> -}
> -
> -void maps__remove(struct maps *maps, struct map *map)
> -{
> -	down_write(&maps->lock);
> -	if (maps->last_search_by_name == map)
> -		maps->last_search_by_name = NULL;
> -
> -	__maps__remove(maps, map);
> -	--maps->nr_maps;
> -	if (maps->maps_by_name)
> -		__maps__free_maps_by_name(maps);
> -	up_write(&maps->lock);
> -}
> -
> -static void __maps__purge(struct maps *maps)
> -{
> -	struct map *pos, *next;
> -
> -	maps__for_each_entry_safe(maps, pos, next) {
> -		rb_erase_init(&pos->rb_node,  &maps->entries);
> -		map__put(pos);
> -	}
> -}
> -
> -static void maps__exit(struct maps *maps)
> -{
> -	down_write(&maps->lock);
> -	__maps__purge(maps);
> -	up_write(&maps->lock);
> -}
> -
> -bool maps__empty(struct maps *maps)
> -{
> -	return !maps__first(maps);
> -}
> -
> -struct maps *maps__new(struct machine *machine)
> -{
> -	struct maps *maps = zalloc(sizeof(*maps));
> -
> -	if (maps != NULL)
> -		maps__init(maps, machine);
> -
> -	return maps;
> -}
> -
> -void maps__delete(struct maps *maps)
> -{
> -	maps__exit(maps);
> -	unwind__finish_access(maps);
> -	free(maps);
> -}
> -
> -void maps__put(struct maps *maps)
> -{
> -	if (maps && refcount_dec_and_test(&maps->refcnt))
> -		maps__delete(maps);
> -}
> -
> -struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> -{
> -	struct map *map = maps__find(maps, addr);
> -
> -	/* Ensure map is loaded before using map->map_ip */
> -	if (map != NULL && map__load(map) >= 0) {
> -		if (mapp != NULL)
> -			*mapp = map;
> -		return map__find_symbol(map, map->map_ip(map, addr));
> -	}
> -
> -	return NULL;
> -}
> -
> -static bool map__contains_symbol(struct map *map, struct symbol *sym)
> +bool map__contains_symbol(struct map *map, struct symbol *sym)
>  {
>  	u64 ip = map->unmap_ip(map, sym->start);
>  
>  	return ip >= map->start && ip < map->end;
>  }
>  
> -struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
> -{
> -	struct symbol *sym;
> -	struct map *pos;
> -
> -	down_read(&maps->lock);
> -
> -	maps__for_each_entry(maps, pos) {
> -		sym = map__find_symbol_by_name(pos, name);
> -
> -		if (sym == NULL)
> -			continue;
> -		if (!map__contains_symbol(pos, sym)) {
> -			sym = NULL;
> -			continue;
> -		}
> -		if (mapp != NULL)
> -			*mapp = pos;
> -		goto out;
> -	}
> -
> -	sym = NULL;
> -out:
> -	up_read(&maps->lock);
> -	return sym;
> -}
> -
> -int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> -{
> -	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> -		if (maps == NULL)
> -			return -1;
> -		ams->ms.map = maps__find(maps, ams->addr);
> -		if (ams->ms.map == NULL)
> -			return -1;
> -	}
> -
> -	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> -	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> -
> -	return ams->ms.sym ? 0 : -1;
> -}
> -
> -size_t maps__fprintf(struct maps *maps, FILE *fp)
> -{
> -	size_t printed = 0;
> -	struct map *pos;
> -
> -	down_read(&maps->lock);
> -
> -	maps__for_each_entry(maps, pos) {
> -		printed += fprintf(fp, "Map:");
> -		printed += map__fprintf(pos, fp);
> -		if (verbose > 2) {
> -			printed += dso__fprintf(pos->dso, fp);
> -			printed += fprintf(fp, "--\n");
> -		}
> -	}
> -
> -	up_read(&maps->lock);
> -
> -	return printed;
> -}
> -
> -int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> -{
> -	struct rb_root *root;
> -	struct rb_node *next, *first;
> -	int err = 0;
> -
> -	down_write(&maps->lock);
> -
> -	root = &maps->entries;
> -
> -	/*
> -	 * Find first map where end > map->start.
> -	 * Same as find_vma() in kernel.
> -	 */
> -	next = root->rb_node;
> -	first = NULL;
> -	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> -
> -		if (pos->end > map->start) {
> -			first = next;
> -			if (pos->start <= map->start)
> -				break;
> -			next = next->rb_left;
> -		} else
> -			next = next->rb_right;
> -	}
> -
> -	next = first;
> -	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> -		next = rb_next(&pos->rb_node);
> -
> -		/*
> -		 * Stop if current map starts after map->end.
> -		 * Maps are ordered by start: next will not overlap for sure.
> -		 */
> -		if (pos->start >= map->end)
> -			break;
> -
> -		if (verbose >= 2) {
> -
> -			if (use_browser) {
> -				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> -					   map->dso->name);
> -			} else {
> -				fputs("overlapping maps:\n", fp);
> -				map__fprintf(map, fp);
> -				map__fprintf(pos, fp);
> -			}
> -		}
> -
> -		rb_erase_init(&pos->rb_node, root);
> -		/*
> -		 * Now check if we need to create new maps for areas not
> -		 * overlapped by the new map:
> -		 */
> -		if (map->start > pos->start) {
> -			struct map *before = map__clone(pos);
> -
> -			if (before == NULL) {
> -				err = -ENOMEM;
> -				goto put_map;
> -			}
> -
> -			before->end = map->start;
> -			__maps__insert(maps, before);
> -			if (verbose >= 2 && !use_browser)
> -				map__fprintf(before, fp);
> -			map__put(before);
> -		}
> -
> -		if (map->end < pos->end) {
> -			struct map *after = map__clone(pos);
> -
> -			if (after == NULL) {
> -				err = -ENOMEM;
> -				goto put_map;
> -			}
> -
> -			after->start = map->end;
> -			after->pgoff += map->end - pos->start;
> -			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> -			__maps__insert(maps, after);
> -			if (verbose >= 2 && !use_browser)
> -				map__fprintf(after, fp);
> -			map__put(after);
> -		}
> -put_map:
> -		map__put(pos);
> -
> -		if (err)
> -			goto out;
> -	}
> -
> -	err = 0;
> -out:
> -	up_write(&maps->lock);
> -	return err;
> -}
> -
> -/*
> - * XXX This should not really _copy_ te maps, but refcount them.
> - */
> -int maps__clone(struct thread *thread, struct maps *parent)
> -{
> -	struct maps *maps = thread->maps;
> -	int err;
> -	struct map *map;
> -
> -	down_read(&parent->lock);
> -
> -	maps__for_each_entry(parent, map) {
> -		struct map *new = map__clone(map);
> -
> -		if (new == NULL) {
> -			err = -ENOMEM;
> -			goto out_unlock;
> -		}
> -
> -		err = unwind__prepare_access(maps, new, NULL);
> -		if (err)
> -			goto out_unlock;
> -
> -		maps__insert(maps, new);
> -		map__put(new);
> -	}
> -
> -	err = 0;
> -out_unlock:
> -	up_read(&parent->lock);
> -	return err;
> -}
> -
> -static void __maps__insert(struct maps *maps, struct map *map)
> -{
> -	struct rb_node **p = &maps->entries.rb_node;
> -	struct rb_node *parent = NULL;
> -	const u64 ip = map->start;
> -	struct map *m;
> -
> -	while (*p != NULL) {
> -		parent = *p;
> -		m = rb_entry(parent, struct map, rb_node);
> -		if (ip < m->start)
> -			p = &(*p)->rb_left;
> -		else
> -			p = &(*p)->rb_right;
> -	}
> -
> -	rb_link_node(&map->rb_node, parent, p);
> -	rb_insert_color(&map->rb_node, &maps->entries);
> -	map__get(map);
> -}
> -
> -struct map *maps__find(struct maps *maps, u64 ip)
> -{
> -	struct rb_node *p;
> -	struct map *m;
> -
> -	down_read(&maps->lock);
> -
> -	p = maps->entries.rb_node;
> -	while (p != NULL) {
> -		m = rb_entry(p, struct map, rb_node);
> -		if (ip < m->start)
> -			p = p->rb_left;
> -		else if (ip >= m->end)
> -			p = p->rb_right;
> -		else
> -			goto out;
> -	}
> -
> -	m = NULL;
> -out:
> -	up_read(&maps->lock);
> -	return m;
> -}
> -
> -struct map *maps__first(struct maps *maps)
> -{
> -	struct rb_node *first = rb_first(&maps->entries);
> -
> -	if (first)
> -		return rb_entry(first, struct map, rb_node);
> -	return NULL;
> -}
> -
>  static struct map *__map__next(struct map *map)
>  {
>  	struct rb_node *next = rb_next(&map->rb_node);
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index d32f5b28c1fb..973dce27b253 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -160,6 +160,8 @@ static inline bool __map__is_kmodule(const struct map *map)
>  
>  bool map__has_symbols(const struct map *map);
>  
> +bool map__contains_symbol(struct map *map, struct symbol *sym);
> +
>  #define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
>  
>  static inline bool is_entry_trampoline(const char *name)
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> new file mode 100644
> index 000000000000..ededabf0a230
> --- /dev/null
> +++ b/tools/perf/util/maps.c
> @@ -0,0 +1,403 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <errno.h>
> +#include <stdlib.h>
> +#include <linux/zalloc.h>
> +#include "debug.h"
> +#include "dso.h"
> +#include "map.h"
> +#include "maps.h"
> +#include "thread.h"
> +#include "ui/ui.h"
> +#include "unwind.h"
> +
> +static void __maps__insert(struct maps *maps, struct map *map);
> +
> +void maps__init(struct maps *maps, struct machine *machine)
> +{
> +	maps->entries = RB_ROOT;
> +	init_rwsem(&maps->lock);
> +	maps->machine = machine;
> +	maps->last_search_by_name = NULL;
> +	maps->nr_maps = 0;
> +	maps->maps_by_name = NULL;
> +	refcount_set(&maps->refcnt, 1);
> +}
> +
> +static void __maps__free_maps_by_name(struct maps *maps)
> +{
> +	/*
> +	 * Free everything to try to do it from the rbtree in the next search
> +	 */
> +	zfree(&maps->maps_by_name);
> +	maps->nr_maps_allocated = 0;
> +}
> +
> +void maps__insert(struct maps *maps, struct map *map)
> +{
> +	down_write(&maps->lock);
> +	__maps__insert(maps, map);
> +	++maps->nr_maps;
> +
> +	if (map->dso && map->dso->kernel) {
> +		struct kmap *kmap = map__kmap(map);
> +
> +		if (kmap)
> +			kmap->kmaps = maps;
> +		else
> +			pr_err("Internal error: kernel dso with non kernel map\n");
> +	}
> +
> +
> +	/*
> +	 * If we already performed some search by name, then we need to add the just
> +	 * inserted map and resort.
> +	 */
> +	if (maps->maps_by_name) {
> +		if (maps->nr_maps > maps->nr_maps_allocated) {
> +			int nr_allocate = maps->nr_maps * 2;
> +			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
> +
> +			if (maps_by_name == NULL) {
> +				__maps__free_maps_by_name(maps);
> +				up_write(&maps->lock);
> +				return;
> +			}
> +
> +			maps->maps_by_name = maps_by_name;
> +			maps->nr_maps_allocated = nr_allocate;
> +		}
> +		maps->maps_by_name[maps->nr_maps - 1] = map;
> +		__maps__sort_by_name(maps);
> +	}
> +	up_write(&maps->lock);
> +}
> +
> +static void __maps__remove(struct maps *maps, struct map *map)
> +{
> +	rb_erase_init(&map->rb_node, &maps->entries);
> +	map__put(map);
> +}
> +
> +void maps__remove(struct maps *maps, struct map *map)
> +{
> +	down_write(&maps->lock);
> +	if (maps->last_search_by_name == map)
> +		maps->last_search_by_name = NULL;
> +
> +	__maps__remove(maps, map);
> +	--maps->nr_maps;
> +	if (maps->maps_by_name)
> +		__maps__free_maps_by_name(maps);
> +	up_write(&maps->lock);
> +}
> +
> +static void __maps__purge(struct maps *maps)
> +{
> +	struct map *pos, *next;
> +
> +	maps__for_each_entry_safe(maps, pos, next) {
> +		rb_erase_init(&pos->rb_node,  &maps->entries);
> +		map__put(pos);
> +	}
> +}
> +
> +void maps__exit(struct maps *maps)
> +{
> +	down_write(&maps->lock);
> +	__maps__purge(maps);
> +	up_write(&maps->lock);
> +}
> +
> +bool maps__empty(struct maps *maps)
> +{
> +	return !maps__first(maps);
> +}
> +
> +struct maps *maps__new(struct machine *machine)
> +{
> +	struct maps *maps = zalloc(sizeof(*maps));
> +
> +	if (maps != NULL)
> +		maps__init(maps, machine);
> +
> +	return maps;
> +}
> +
> +void maps__delete(struct maps *maps)
> +{
> +	maps__exit(maps);
> +	unwind__finish_access(maps);
> +	free(maps);
> +}
> +
> +void maps__put(struct maps *maps)
> +{
> +	if (maps && refcount_dec_and_test(&maps->refcnt))
> +		maps__delete(maps);
> +}
> +
> +struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> +{
> +	struct map *map = maps__find(maps, addr);
> +
> +	/* Ensure map is loaded before using map->map_ip */
> +	if (map != NULL && map__load(map) >= 0) {
> +		if (mapp != NULL)
> +			*mapp = map;
> +		return map__find_symbol(map, map->map_ip(map, addr));
> +	}
> +
> +	return NULL;
> +}
> +
> +struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
> +{
> +	struct symbol *sym;
> +	struct map *pos;
> +
> +	down_read(&maps->lock);
> +
> +	maps__for_each_entry(maps, pos) {
> +		sym = map__find_symbol_by_name(pos, name);
> +
> +		if (sym == NULL)
> +			continue;
> +		if (!map__contains_symbol(pos, sym)) {
> +			sym = NULL;
> +			continue;
> +		}
> +		if (mapp != NULL)
> +			*mapp = pos;
> +		goto out;
> +	}
> +
> +	sym = NULL;
> +out:
> +	up_read(&maps->lock);
> +	return sym;
> +}
> +
> +int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> +{
> +	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
> +		if (maps == NULL)
> +			return -1;
> +		ams->ms.map = maps__find(maps, ams->addr);
> +		if (ams->ms.map == NULL)
> +			return -1;
> +	}
> +
> +	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
> +	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
> +
> +	return ams->ms.sym ? 0 : -1;
> +}
> +
> +size_t maps__fprintf(struct maps *maps, FILE *fp)
> +{
> +	size_t printed = 0;
> +	struct map *pos;
> +
> +	down_read(&maps->lock);
> +
> +	maps__for_each_entry(maps, pos) {
> +		printed += fprintf(fp, "Map:");
> +		printed += map__fprintf(pos, fp);
> +		if (verbose > 2) {
> +			printed += dso__fprintf(pos->dso, fp);
> +			printed += fprintf(fp, "--\n");
> +		}
> +	}
> +
> +	up_read(&maps->lock);
> +
> +	return printed;
> +}
> +
> +int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> +{
> +	struct rb_root *root;
> +	struct rb_node *next, *first;
> +	int err = 0;
> +
> +	down_write(&maps->lock);
> +
> +	root = &maps->entries;
> +
> +	/*
> +	 * Find first map where end > map->start.
> +	 * Same as find_vma() in kernel.
> +	 */
> +	next = root->rb_node;
> +	first = NULL;
> +	while (next) {
> +		struct map *pos = rb_entry(next, struct map, rb_node);
> +
> +		if (pos->end > map->start) {
> +			first = next;
> +			if (pos->start <= map->start)
> +				break;
> +			next = next->rb_left;
> +		} else
> +			next = next->rb_right;
> +	}
> +
> +	next = first;
> +	while (next) {
> +		struct map *pos = rb_entry(next, struct map, rb_node);
> +		next = rb_next(&pos->rb_node);
> +
> +		/*
> +		 * Stop if current map starts after map->end.
> +		 * Maps are ordered by start: next will not overlap for sure.
> +		 */
> +		if (pos->start >= map->end)
> +			break;
> +
> +		if (verbose >= 2) {
> +
> +			if (use_browser) {
> +				pr_debug("overlapping maps in %s (disable tui for more info)\n",
> +					   map->dso->name);
> +			} else {
> +				fputs("overlapping maps:\n", fp);
> +				map__fprintf(map, fp);
> +				map__fprintf(pos, fp);
> +			}
> +		}
> +
> +		rb_erase_init(&pos->rb_node, root);
> +		/*
> +		 * Now check if we need to create new maps for areas not
> +		 * overlapped by the new map:
> +		 */
> +		if (map->start > pos->start) {
> +			struct map *before = map__clone(pos);
> +
> +			if (before == NULL) {
> +				err = -ENOMEM;
> +				goto put_map;
> +			}
> +
> +			before->end = map->start;
> +			__maps__insert(maps, before);
> +			if (verbose >= 2 && !use_browser)
> +				map__fprintf(before, fp);
> +			map__put(before);
> +		}
> +
> +		if (map->end < pos->end) {
> +			struct map *after = map__clone(pos);
> +
> +			if (after == NULL) {
> +				err = -ENOMEM;
> +				goto put_map;
> +			}
> +
> +			after->start = map->end;
> +			after->pgoff += map->end - pos->start;
> +			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> +			__maps__insert(maps, after);
> +			if (verbose >= 2 && !use_browser)
> +				map__fprintf(after, fp);
> +			map__put(after);
> +		}
> +put_map:
> +		map__put(pos);
> +
> +		if (err)
> +			goto out;
> +	}
> +
> +	err = 0;
> +out:
> +	up_write(&maps->lock);
> +	return err;
> +}
> +
> +/*
> + * XXX This should not really _copy_ te maps, but refcount them.
> + */
> +int maps__clone(struct thread *thread, struct maps *parent)
> +{
> +	struct maps *maps = thread->maps;
> +	int err;
> +	struct map *map;
> +
> +	down_read(&parent->lock);
> +
> +	maps__for_each_entry(parent, map) {
> +		struct map *new = map__clone(map);
> +
> +		if (new == NULL) {
> +			err = -ENOMEM;
> +			goto out_unlock;
> +		}
> +
> +		err = unwind__prepare_access(maps, new, NULL);
> +		if (err)
> +			goto out_unlock;
> +
> +		maps__insert(maps, new);
> +		map__put(new);
> +	}
> +
> +	err = 0;
> +out_unlock:
> +	up_read(&parent->lock);
> +	return err;
> +}
> +
> +static void __maps__insert(struct maps *maps, struct map *map)
> +{
> +	struct rb_node **p = &maps->entries.rb_node;
> +	struct rb_node *parent = NULL;
> +	const u64 ip = map->start;
> +	struct map *m;
> +
> +	while (*p != NULL) {
> +		parent = *p;
> +		m = rb_entry(parent, struct map, rb_node);
> +		if (ip < m->start)
> +			p = &(*p)->rb_left;
> +		else
> +			p = &(*p)->rb_right;
> +	}
> +
> +	rb_link_node(&map->rb_node, parent, p);
> +	rb_insert_color(&map->rb_node, &maps->entries);
> +	map__get(map);
> +}
> +
> +struct map *maps__find(struct maps *maps, u64 ip)
> +{
> +	struct rb_node *p;
> +	struct map *m;
> +
> +	down_read(&maps->lock);
> +
> +	p = maps->entries.rb_node;
> +	while (p != NULL) {
> +		m = rb_entry(p, struct map, rb_node);
> +		if (ip < m->start)
> +			p = p->rb_left;
> +		else if (ip >= m->end)
> +			p = p->rb_right;
> +		else
> +			goto out;
> +	}
> +
> +	m = NULL;
> +out:
> +	up_read(&maps->lock);
> +	return m;
> +}
> +
> +struct map *maps__first(struct maps *maps)
> +{
> +	struct rb_node *first = rb_first(&maps->entries);
> +
> +	if (first)
> +		return rb_entry(first, struct map, rb_node);
> +	return NULL;
> +}
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 12/22] perf maps: Remove rb_node from struct map
  2022-02-11 10:34 ` [PATCH v3 12/22] perf maps: Remove rb_node from struct map Ian Rogers
@ 2022-02-16 14:08   ` Arnaldo Carvalho de Melo
  2022-02-16 17:36     ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-16 14:08 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Fri, Feb 11, 2022 at 02:34:05AM -0800, Ian Rogers wrote:
> struct map is reference counted, having it also be a node in a
> red-black tree complicates the reference counting.

In what way?

If I have some refcounted data structure and I want to add it to some
container (an rb_tree, a list, etc.), all I have to do is grab a
reference when adding it and drop that reference after removing it from
the container.

IOW, it is refcounted precisely so that we can add it to a
red-black tree, amongst other uses.
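
A minimal sketch of that pattern, reusing the existing map__get()/map__put()
helpers; some_container, do_insert() and do_remove() below are placeholders
for whatever structure actually holds the maps:

	/* The container owns one reference per map it holds. */
	static void container_add(struct some_container *c, struct map *map)
	{
		do_insert(c, map);	/* e.g. rb_link_node() + rb_insert_color() */
		map__get(map);		/* take the container's reference */
	}

	static void container_remove(struct some_container *c, struct map *map)
	{
		do_remove(c, map);	/* e.g. rb_erase_init() */
		map__put(map);		/* drop the container's reference */
	}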

> Switch to having a map_rb_node which is a red-black tree node but
> points at the reference counted struct map. This reference is
> responsible for a single reference count.

This makes every insertion incur an allocation that has to be checked,
etc., when we know that maps will live in rb_trees, so having the node
structure allocated at the same time as the map is advantageous.

We don't have to check whether adding a data structure to an rbtree works, as
all that is needed is already preallocated.
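
For contrast, the two layouts being discussed, abridged from this patch
(the first is the pre-patch struct map with the embedded node, the second
is the map_rb_node the patch introduces):

	/* embedded node: inserting into the rb-tree needs no extra allocation */
	struct map {
		struct rb_node	rb_node;
		u64		start;
		u64		end;
		/* ... */
	};

	/* separate node: every insertion allocates one of these and must check it */
	struct map_rb_node {
		struct rb_node	rb_node;
		struct map	*map;
	};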

- Arnaldo
 
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
>  tools/perf/arch/x86/util/event.c    |  13 +-
>  tools/perf/builtin-report.c         |   6 +-
>  tools/perf/tests/maps.c             |   8 +-
>  tools/perf/tests/vmlinux-kallsyms.c |  17 +--
>  tools/perf/util/machine.c           |  62 ++++++----
>  tools/perf/util/map.c               |  16 ---
>  tools/perf/util/map.h               |   1 -
>  tools/perf/util/maps.c              | 182 ++++++++++++++++++----------
>  tools/perf/util/maps.h              |  17 ++-
>  tools/perf/util/probe-event.c       |  18 +--
>  tools/perf/util/symbol-elf.c        |   9 +-
>  tools/perf/util/symbol.c            |  77 +++++++-----
>  tools/perf/util/synthetic-events.c  |  26 ++--
>  tools/perf/util/thread.c            |  10 +-
>  tools/perf/util/vdso.c              |   7 +-
>  15 files changed, 288 insertions(+), 181 deletions(-)
> 
> diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> index e670f3547581..7b6b0c98fb36 100644
> --- a/tools/perf/arch/x86/util/event.c
> +++ b/tools/perf/arch/x86/util/event.c
> @@ -17,7 +17,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  				       struct machine *machine)
>  {
>  	int rc = 0;
> -	struct map *pos;
> +	struct map_rb_node *pos;
>  	struct maps *kmaps = machine__kernel_maps(machine);
>  	union perf_event *event = zalloc(sizeof(event->mmap) +
>  					 machine->id_hdr_size);
> @@ -31,11 +31,12 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  	maps__for_each_entry(kmaps, pos) {
>  		struct kmap *kmap;
>  		size_t size;
> +		struct map *map = pos->map;
>  
> -		if (!__map__is_extra_kernel_map(pos))
> +		if (!__map__is_extra_kernel_map(map))
>  			continue;
>  
> -		kmap = map__kmap(pos);
> +		kmap = map__kmap(map);
>  
>  		size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
>  		       PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
> @@ -56,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
>  
>  		event->mmap.header.size = size;
>  
> -		event->mmap.start = pos->start;
> -		event->mmap.len   = pos->end - pos->start;
> -		event->mmap.pgoff = pos->pgoff;
> +		event->mmap.start = map->start;
> +		event->mmap.len   = map->end - map->start;
> +		event->mmap.pgoff = map->pgoff;
>  		event->mmap.pid   = machine->pid;
>  
>  		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> index 1dd92d8c9279..57611ef725c3 100644
> --- a/tools/perf/builtin-report.c
> +++ b/tools/perf/builtin-report.c
> @@ -799,9 +799,11 @@ static struct task *tasks_list(struct task *task, struct machine *machine)
>  static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
>  {
>  	size_t printed = 0;
> -	struct map *map;
> +	struct map_rb_node *rb_node;
> +
> +	maps__for_each_entry(maps, rb_node) {
> +		struct map *map = rb_node->map;
>  
> -	maps__for_each_entry(maps, map) {
>  		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
>  				   indent, "", map->start, map->end,
>  				   map->prot & PROT_READ ? 'r' : '-',
> diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
> index 6f53f17f788e..a58274598587 100644
> --- a/tools/perf/tests/maps.c
> +++ b/tools/perf/tests/maps.c
> @@ -15,10 +15,12 @@ struct map_def {
>  
>  static int check_maps(struct map_def *merged, unsigned int size, struct maps *maps)
>  {
> -	struct map *map;
> +	struct map_rb_node *rb_node;
>  	unsigned int i = 0;
>  
> -	maps__for_each_entry(maps, map) {
> +	maps__for_each_entry(maps, rb_node) {
> +		struct map *map = rb_node->map;
> +
>  		if (i > 0)
>  			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
>  
> @@ -74,7 +76,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
>  
>  		map->start = bpf_progs[i].start;
>  		map->end   = bpf_progs[i].end;
> -		maps__insert(maps, map);
> +		TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
>  		map__put(map);
>  	}
>  
> diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> index 84bf5f640065..11a230ee5894 100644
> --- a/tools/perf/tests/vmlinux-kallsyms.c
> +++ b/tools/perf/tests/vmlinux-kallsyms.c
> @@ -117,7 +117,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  	int err = -1;
>  	struct rb_node *nd;
>  	struct symbol *sym;
> -	struct map *kallsyms_map, *vmlinux_map, *map;
> +	struct map *kallsyms_map, *vmlinux_map;
> +	struct map_rb_node *rb_node;
>  	struct machine kallsyms, vmlinux;
>  	struct maps *maps = machine__kernel_maps(&vmlinux);
>  	u64 mem_start, mem_end;
> @@ -285,15 +286,15 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  
>  	header_printed = false;
>  
> -	maps__for_each_entry(maps, map) {
> -		struct map *
> +	maps__for_each_entry(maps, rb_node) {
> +		struct map *map = rb_node->map;
>  		/*
>  		 * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
>  		 * the kernel will have the path for the vmlinux file being used,
>  		 * so use the short name, less descriptive but the same ("[kernel]" in
>  		 * both cases.
>  		 */
> -		pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> +		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
>  								map->dso->short_name :
>  								map->dso->name));
>  		if (pair) {
> @@ -309,8 +310,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  
>  	header_printed = false;
>  
> -	maps__for_each_entry(maps, map) {
> -		struct map *pair;
> +	maps__for_each_entry(maps, rb_node) {
> +		struct map *pair, *map = rb_node->map;
>  
>  		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
>  		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> @@ -339,7 +340,9 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
>  
>  	maps = machine__kernel_maps(&kallsyms);
>  
> -	maps__for_each_entry(maps, map) {
> +	maps__for_each_entry(maps, rb_node) {
> +		struct map *map = rb_node->map;
> +
>  		if (!map->priv) {
>  			if (!header_printed) {
>  				pr_info("WARN: Maps only in kallsyms:\n");
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 57fbdba66425..fa25174cabf7 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -786,6 +786,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  
>  	if (!map) {
>  		struct dso *dso = dso__new(event->ksymbol.name);
> +		int err;
>  
>  		if (dso) {
>  			dso->kernel = DSO_SPACE__KERNEL;
> @@ -805,8 +806,11 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  
>  		map->start = event->ksymbol.addr;
>  		map->end = map->start + event->ksymbol.len;
> -		maps__insert(machine__kernel_maps(machine), map);
> +		err = maps__insert(machine__kernel_maps(machine), map);
>  		map__put(map);
> +		if (err)
> +			return err;
> +
>  		dso__set_loaded(dso);
>  
>  		if (is_bpf_image(event->ksymbol.name)) {
> @@ -906,6 +910,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  	struct map *map = NULL;
>  	struct kmod_path m;
>  	struct dso *dso;
> +	int err;
>  
>  	if (kmod_path__parse_name(&m, filename))
>  		return NULL;
> @@ -918,10 +923,14 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
>  	if (map == NULL)
>  		goto out;
>  
> -	maps__insert(machine__kernel_maps(machine), map);
> +	err = maps__insert(machine__kernel_maps(machine), map);
>  
>  	/* Put the map here because maps__insert already got it */
>  	map__put(map);
> +
> +	/* If maps__insert failed, return NULL. */
> +	if (err)
> +		map = NULL;
>  out:
>  	/* put the dso here, corresponding to  machine__findnew_module_dso */
>  	dso__put(dso);
> @@ -1092,10 +1101,11 @@ int machine__create_extra_kernel_map(struct machine *machine,
>  {
>  	struct kmap *kmap;
>  	struct map *map;
> +	int err;
>  
>  	map = map__new2(xm->start, kernel);
>  	if (!map)
> -		return -1;
> +		return -ENOMEM;
>  
>  	map->end   = xm->end;
>  	map->pgoff = xm->pgoff;
> @@ -1104,14 +1114,16 @@ int machine__create_extra_kernel_map(struct machine *machine,
>  
>  	strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
>  
> -	maps__insert(machine__kernel_maps(machine), map);
> +	err = maps__insert(machine__kernel_maps(machine), map);
>  
> -	pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> -		  kmap->name, map->start, map->end);
> +	if (!err) {
> +		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> +			kmap->name, map->start, map->end);
> +	}
>  
>  	map__put(map);
>  
> -	return 0;
> +	return err;
>  }
>  
>  static u64 find_entry_trampoline(struct dso *dso)
> @@ -1152,16 +1164,16 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
>  	struct maps *kmaps = machine__kernel_maps(machine);
>  	int nr_cpus_avail, cpu;
>  	bool found = false;
> -	struct map *map;
> +	struct map_rb_node *rb_node;
>  	u64 pgoff;
>  
>  	/*
>  	 * In the vmlinux case, pgoff is a virtual address which must now be
>  	 * mapped to a vmlinux offset.
>  	 */
> -	maps__for_each_entry(kmaps, map) {
> +	maps__for_each_entry(kmaps, rb_node) {
> +		struct map *dest_map, *map = rb_node->map;
>  		struct kmap *kmap = __map__kmap(map);
> -		struct map *dest_map;
>  
>  		if (!kmap || !is_entry_trampoline(kmap->name))
>  			continue;
> @@ -1216,11 +1228,10 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  
>  	machine->vmlinux_map = map__new2(0, kernel);
>  	if (machine->vmlinux_map == NULL)
> -		return -1;
> +		return -ENOMEM;
>  
>  	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> -	maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
> -	return 0;
> +	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
>  }
>  
>  void machine__destroy_kernel_maps(struct machine *machine)
> @@ -1542,25 +1553,26 @@ static void machine__set_kernel_mmap(struct machine *machine,
>  		machine->vmlinux_map->end = ~0ULL;
>  }
>  
> -static void machine__update_kernel_mmap(struct machine *machine,
> +static int machine__update_kernel_mmap(struct machine *machine,
>  				     u64 start, u64 end)
>  {
>  	struct map *map = machine__kernel_map(machine);
> +	int err;
>  
>  	map__get(map);
>  	maps__remove(machine__kernel_maps(machine), map);
>  
>  	machine__set_kernel_mmap(machine, start, end);
>  
> -	maps__insert(machine__kernel_maps(machine), map);
> +	err = maps__insert(machine__kernel_maps(machine), map);
>  	map__put(map);
> +	return err;
>  }
>  
>  int machine__create_kernel_maps(struct machine *machine)
>  {
>  	struct dso *kernel = machine__get_kernel(machine);
>  	const char *name = NULL;
> -	struct map *map;
>  	u64 start = 0, end = ~0ULL;
>  	int ret;
>  
> @@ -1592,7 +1604,9 @@ int machine__create_kernel_maps(struct machine *machine)
>  		 * we have a real start address now, so re-order the kmaps
>  		 * assume it's the last in the kmaps
>  		 */
> -		machine__update_kernel_mmap(machine, start, end);
> +		ret = machine__update_kernel_mmap(machine, start, end);
> +		if (ret < 0)
> +			goto out_put;
>  	}
>  
>  	if (machine__create_extra_kernel_maps(machine, kernel))
> @@ -1600,9 +1614,12 @@ int machine__create_kernel_maps(struct machine *machine)
>  
>  	if (end == ~0ULL) {
>  		/* update end address of the kernel map using adjacent module address */
> -		map = map__next(machine__kernel_map(machine));
> -		if (map)
> -			machine__set_kernel_mmap(machine, start, map->start);
> +		struct map_rb_node *rb_node = maps__find_node(machine__kernel_maps(machine),
> +							machine__kernel_map(machine));
> +		struct map_rb_node *next = map_rb_node__next(rb_node);
> +
> +		if (next)
> +			machine__set_kernel_mmap(machine, start, next->map->start);
>  	}
>  
>  out_put:
> @@ -1726,7 +1743,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
>  		if (strstr(kernel->long_name, "vmlinux"))
>  			dso__set_short_name(kernel, "[kernel.vmlinux]", false);
>  
> -		machine__update_kernel_mmap(machine, xm->start, xm->end);
> +		if (machine__update_kernel_mmap(machine, xm->start, xm->end) < 0) {
> +			dso__put(kernel);
> +			goto out_problem;
> +		}
>  
>  		if (build_id__is_defined(bid))
>  			dso__set_build_id(kernel, bid);
> diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> index 8bbf9246a3cf..dfa5f6b7381f 100644
> --- a/tools/perf/util/map.c
> +++ b/tools/perf/util/map.c
> @@ -111,7 +111,6 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
>  	map->dso      = dso__get(dso);
>  	map->map_ip   = map__map_ip;
>  	map->unmap_ip = map__unmap_ip;
> -	RB_CLEAR_NODE(&map->rb_node);
>  	map->erange_warned = false;
>  	refcount_set(&map->refcnt, 1);
>  }
> @@ -383,7 +382,6 @@ struct map *map__clone(struct map *from)
>  	map = memdup(from, size);
>  	if (map != NULL) {
>  		refcount_set(&map->refcnt, 1);
> -		RB_CLEAR_NODE(&map->rb_node);
>  		dso__get(map->dso);
>  	}
>  
> @@ -523,20 +521,6 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
>  	return ip >= map->start && ip < map->end;
>  }
>  
> -static struct map *__map__next(struct map *map)
> -{
> -	struct rb_node *next = rb_next(&map->rb_node);
> -
> -	if (next)
> -		return rb_entry(next, struct map, rb_node);
> -	return NULL;
> -}
> -
> -struct map *map__next(struct map *map)
> -{
> -	return map ? __map__next(map) : NULL;
> -}
> -
>  struct kmap *__map__kmap(struct map *map)
>  {
>  	if (!map->dso || !map->dso->kernel)
> diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> index 2879cae05ee0..d1a6f85fd31d 100644
> --- a/tools/perf/util/map.h
> +++ b/tools/perf/util/map.h
> @@ -16,7 +16,6 @@ struct maps;
>  struct machine;
>  
>  struct map {
> -	struct rb_node		rb_node;
>  	u64			start;
>  	u64			end;
>  	bool			erange_warned:1;
> diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> index ededabf0a230..beb09b9a122c 100644
> --- a/tools/perf/util/maps.c
> +++ b/tools/perf/util/maps.c
> @@ -10,9 +10,7 @@
>  #include "ui/ui.h"
>  #include "unwind.h"
>  
> -static void __maps__insert(struct maps *maps, struct map *map);
> -
> -void maps__init(struct maps *maps, struct machine *machine)
> +static void maps__init(struct maps *maps, struct machine *machine)
>  {
>  	maps->entries = RB_ROOT;
>  	init_rwsem(&maps->lock);
> @@ -32,10 +30,44 @@ static void __maps__free_maps_by_name(struct maps *maps)
>  	maps->nr_maps_allocated = 0;
>  }
>  
> -void maps__insert(struct maps *maps, struct map *map)
> +static int __maps__insert(struct maps *maps, struct map *map)
> +{
> +	struct rb_node **p = &maps->entries.rb_node;
> +	struct rb_node *parent = NULL;
> +	const u64 ip = map->start;
> +	struct map_rb_node *m, *new_rb_node;
> +
> +	new_rb_node = malloc(sizeof(*new_rb_node));
> +	if (!new_rb_node)
> +		return -ENOMEM;
> +
> +	RB_CLEAR_NODE(&new_rb_node->rb_node);
> +	new_rb_node->map = map;
> +
> +	while (*p != NULL) {
> +		parent = *p;
> +		m = rb_entry(parent, struct map_rb_node, rb_node);
> +		if (ip < m->map->start)
> +			p = &(*p)->rb_left;
> +		else
> +			p = &(*p)->rb_right;
> +	}
> +
> +	rb_link_node(&new_rb_node->rb_node, parent, p);
> +	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
> +	map__get(map);
> +	return 0;
> +}
> +
> +int maps__insert(struct maps *maps, struct map *map)
>  {
> +	int err;
> +
>  	down_write(&maps->lock);
> -	__maps__insert(maps, map);
> +	err = __maps__insert(maps, map);
> +	if (err)
> +		goto out;
> +
>  	++maps->nr_maps;
>  
>  	if (map->dso && map->dso->kernel) {
> @@ -59,8 +91,8 @@ void maps__insert(struct maps *maps, struct map *map)
>  
>  			if (maps_by_name == NULL) {
>  				__maps__free_maps_by_name(maps);
> -				up_write(&maps->lock);
> -				return;
> +				err = -ENOMEM;
> +				goto out;
>  			}
>  
>  			maps->maps_by_name = maps_by_name;
> @@ -69,22 +101,29 @@ void maps__insert(struct maps *maps, struct map *map)
>  		maps->maps_by_name[maps->nr_maps - 1] = map;
>  		__maps__sort_by_name(maps);
>  	}
> +out:
>  	up_write(&maps->lock);
> +	return err;
>  }
>  
> -static void __maps__remove(struct maps *maps, struct map *map)
> +static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
>  {
> -	rb_erase_init(&map->rb_node, &maps->entries);
> -	map__put(map);
> +	rb_erase_init(&rb_node->rb_node, &maps->entries);
> +	map__put(rb_node->map);
> +	free(rb_node);
>  }
>  
>  void maps__remove(struct maps *maps, struct map *map)
>  {
> +	struct map_rb_node *rb_node;
> +
>  	down_write(&maps->lock);
>  	if (maps->last_search_by_name == map)
>  		maps->last_search_by_name = NULL;
>  
> -	__maps__remove(maps, map);
> +	rb_node = maps__find_node(maps, map);
> +	assert(rb_node->map == map);
> +	__maps__remove(maps, rb_node);
>  	--maps->nr_maps;
>  	if (maps->maps_by_name)
>  		__maps__free_maps_by_name(maps);
> @@ -93,15 +132,16 @@ void maps__remove(struct maps *maps, struct map *map)
>  
>  static void __maps__purge(struct maps *maps)
>  {
> -	struct map *pos, *next;
> +	struct map_rb_node *pos, *next;
>  
>  	maps__for_each_entry_safe(maps, pos, next) {
>  		rb_erase_init(&pos->rb_node,  &maps->entries);
> -		map__put(pos);
> +		map__put(pos->map);
> +		free(pos);
>  	}
>  }
>  
> -void maps__exit(struct maps *maps)
> +static void maps__exit(struct maps *maps)
>  {
>  	down_write(&maps->lock);
>  	__maps__purge(maps);
> @@ -153,21 +193,21 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
>  struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
>  {
>  	struct symbol *sym;
> -	struct map *pos;
> +	struct map_rb_node *pos;
>  
>  	down_read(&maps->lock);
>  
>  	maps__for_each_entry(maps, pos) {
> -		sym = map__find_symbol_by_name(pos, name);
> +		sym = map__find_symbol_by_name(pos->map, name);
>  
>  		if (sym == NULL)
>  			continue;
> -		if (!map__contains_symbol(pos, sym)) {
> +		if (!map__contains_symbol(pos->map, sym)) {
>  			sym = NULL;
>  			continue;
>  		}
>  		if (mapp != NULL)
> -			*mapp = pos;
> +			*mapp = pos->map;
>  		goto out;
>  	}
>  
> @@ -196,15 +236,15 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
>  size_t maps__fprintf(struct maps *maps, FILE *fp)
>  {
>  	size_t printed = 0;
> -	struct map *pos;
> +	struct map_rb_node *pos;
>  
>  	down_read(&maps->lock);
>  
>  	maps__for_each_entry(maps, pos) {
>  		printed += fprintf(fp, "Map:");
> -		printed += map__fprintf(pos, fp);
> +		printed += map__fprintf(pos->map, fp);
>  		if (verbose > 2) {
> -			printed += dso__fprintf(pos->dso, fp);
> +			printed += dso__fprintf(pos->map->dso, fp);
>  			printed += fprintf(fp, "--\n");
>  		}
>  	}
> @@ -231,11 +271,11 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  	next = root->rb_node;
>  	first = NULL;
>  	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> +		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
>  
> -		if (pos->end > map->start) {
> +		if (pos->map->end > map->start) {
>  			first = next;
> -			if (pos->start <= map->start)
> +			if (pos->map->start <= map->start)
>  				break;
>  			next = next->rb_left;
>  		} else
> @@ -244,14 +284,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  
>  	next = first;
>  	while (next) {
> -		struct map *pos = rb_entry(next, struct map, rb_node);
> +		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
>  		next = rb_next(&pos->rb_node);
>  
>  		/*
>  		 * Stop if current map starts after map->end.
>  		 * Maps are ordered by start: next will not overlap for sure.
>  		 */
> -		if (pos->start >= map->end)
> +		if (pos->map->start >= map->end)
>  			break;
>  
>  		if (verbose >= 2) {
> @@ -262,7 +302,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  			} else {
>  				fputs("overlapping maps:\n", fp);
>  				map__fprintf(map, fp);
> -				map__fprintf(pos, fp);
> +				map__fprintf(pos->map, fp);
>  			}
>  		}
>  
> @@ -271,8 +311,8 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  		 * Now check if we need to create new maps for areas not
>  		 * overlapped by the new map:
>  		 */
> -		if (map->start > pos->start) {
> -			struct map *before = map__clone(pos);
> +		if (map->start > pos->map->start) {
> +			struct map *before = map__clone(pos->map);
>  
>  			if (before == NULL) {
>  				err = -ENOMEM;
> @@ -280,14 +320,17 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  			}
>  
>  			before->end = map->start;
> -			__maps__insert(maps, before);
> +			err = __maps__insert(maps, before);
> +			if (err)
> +				goto put_map;
> +
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(before, fp);
>  			map__put(before);
>  		}
>  
> -		if (map->end < pos->end) {
> -			struct map *after = map__clone(pos);
> +		if (map->end < pos->map->end) {
> +			struct map *after = map__clone(pos->map);
>  
>  			if (after == NULL) {
>  				err = -ENOMEM;
> @@ -295,15 +338,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
>  			}
>  
>  			after->start = map->end;
> -			after->pgoff += map->end - pos->start;
> -			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> -			__maps__insert(maps, after);
> +			after->pgoff += map->end - pos->map->start;
> +			assert(pos->map->map_ip(pos->map, map->end) ==
> +				after->map_ip(after, map->end));
> +			err = __maps__insert(maps, after);
> +			if (err)
> +				goto put_map;
> +
>  			if (verbose >= 2 && !use_browser)
>  				map__fprintf(after, fp);
>  			map__put(after);
>  		}
>  put_map:
> -		map__put(pos);
> +		map__put(pos->map);
>  
>  		if (err)
>  			goto out;
> @@ -322,12 +369,12 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  {
>  	struct maps *maps = thread->maps;
>  	int err;
> -	struct map *map;
> +	struct map_rb_node *rb_node;
>  
>  	down_read(&parent->lock);
>  
> -	maps__for_each_entry(parent, map) {
> -		struct map *new = map__clone(map);
> +	maps__for_each_entry(parent, rb_node) {
> +		struct map *new = map__clone(rb_node->map);
>  
>  		if (new == NULL) {
>  			err = -ENOMEM;
> @@ -338,7 +385,10 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  		if (err)
>  			goto out_unlock;
>  
> -		maps__insert(maps, new);
> +		err = maps__insert(maps, new);
> +		if (err)
> +			goto out_unlock;
> +
>  		map__put(new);
>  	}
>  
> @@ -348,40 +398,30 @@ int maps__clone(struct thread *thread, struct maps *parent)
>  	return err;
>  }
>  
> -static void __maps__insert(struct maps *maps, struct map *map)
> +struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
>  {
> -	struct rb_node **p = &maps->entries.rb_node;
> -	struct rb_node *parent = NULL;
> -	const u64 ip = map->start;
> -	struct map *m;
> +	struct map_rb_node *rb_node;
>  
> -	while (*p != NULL) {
> -		parent = *p;
> -		m = rb_entry(parent, struct map, rb_node);
> -		if (ip < m->start)
> -			p = &(*p)->rb_left;
> -		else
> -			p = &(*p)->rb_right;
> +	maps__for_each_entry(maps, rb_node) {
> +		if (rb_node->map == map)
> +			return rb_node;
>  	}
> -
> -	rb_link_node(&map->rb_node, parent, p);
> -	rb_insert_color(&map->rb_node, &maps->entries);
> -	map__get(map);
> +	return NULL;
>  }
>  
>  struct map *maps__find(struct maps *maps, u64 ip)
>  {
>  	struct rb_node *p;
> -	struct map *m;
> +	struct map_rb_node *m;
>  
>  	down_read(&maps->lock);
>  
>  	p = maps->entries.rb_node;
>  	while (p != NULL) {
> -		m = rb_entry(p, struct map, rb_node);
> -		if (ip < m->start)
> +		m = rb_entry(p, struct map_rb_node, rb_node);
> +		if (ip < m->map->start)
>  			p = p->rb_left;
> -		else if (ip >= m->end)
> +		else if (ip >= m->map->end)
>  			p = p->rb_right;
>  		else
>  			goto out;
> @@ -390,14 +430,30 @@ struct map *maps__find(struct maps *maps, u64 ip)
>  	m = NULL;
>  out:
>  	up_read(&maps->lock);
> -	return m;
> +
> +	return m ? m->map : NULL;
>  }
>  
> -struct map *maps__first(struct maps *maps)
> +struct map_rb_node *maps__first(struct maps *maps)
>  {
>  	struct rb_node *first = rb_first(&maps->entries);
>  
>  	if (first)
> -		return rb_entry(first, struct map, rb_node);
> +		return rb_entry(first, struct map_rb_node, rb_node);
>  	return NULL;
>  }
> +
> +struct map_rb_node *map_rb_node__next(struct map_rb_node *node)
> +{
> +	struct rb_node *next;
> +
> +	if (!node)
> +		return NULL;
> +
> +	next = rb_next(&node->rb_node);
> +
> +	if (!next)
> +		return NULL;
> +
> +	return rb_entry(next, struct map_rb_node, rb_node);
> +}
> diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
> index 7e729ff42749..512746ec0f9a 100644
> --- a/tools/perf/util/maps.h
> +++ b/tools/perf/util/maps.h
> @@ -15,15 +15,22 @@ struct map;
>  struct maps;
>  struct thread;
>  
> +struct map_rb_node {
> +	struct rb_node rb_node;
> +	struct map *map;
> +};
> +
> +struct map_rb_node *maps__first(struct maps *maps);
> +struct map_rb_node *map_rb_node__next(struct map_rb_node *node);
> +struct map_rb_node *maps__find_node(struct maps *maps, struct map *map);
>  struct map *maps__find(struct maps *maps, u64 addr);
> -struct map *maps__first(struct maps *maps);
> -struct map *map__next(struct map *map);
>  
>  #define maps__for_each_entry(maps, map) \
> -	for (map = maps__first(maps); map; map = map__next(map))
> +	for (map = maps__first(maps); map; map = map_rb_node__next(map))
>  
>  #define maps__for_each_entry_safe(maps, map, next) \
> -	for (map = maps__first(maps), next = map__next(map); map; map = next, next = map__next(map))
> +	for (map = maps__first(maps), next = map_rb_node__next(map); map; \
> +	     map = next, next = map_rb_node__next(map))
>  
>  struct maps {
>  	struct rb_root      entries;
> @@ -63,7 +70,7 @@ void maps__put(struct maps *maps);
>  int maps__clone(struct thread *thread, struct maps *parent);
>  size_t maps__fprintf(struct maps *maps, FILE *fp);
>  
> -void maps__insert(struct maps *maps, struct map *map);
> +int maps__insert(struct maps *maps, struct map *map);
>  
>  void maps__remove(struct maps *maps, struct map *map);
>  
> diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> index bc5ab782ace5..f9fbf611f2bf 100644
> --- a/tools/perf/util/probe-event.c
> +++ b/tools/perf/util/probe-event.c
> @@ -150,23 +150,27 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
>  static struct map *kernel_get_module_map(const char *module)
>  {
>  	struct maps *maps = machine__kernel_maps(host_machine);
> -	struct map *pos;
> +	struct map_rb_node *pos;
>  
>  	/* A file path -- this is an offline module */
>  	if (module && strchr(module, '/'))
>  		return dso__new_map(module);
>  
>  	if (!module) {
> -		pos = machine__kernel_map(host_machine);
> -		return map__get(pos);
> +		struct map *map = machine__kernel_map(host_machine);
> +
> +		return map__get(map);
>  	}
>  
>  	maps__for_each_entry(maps, pos) {
>  		/* short_name is "[module]" */
> -		if (strncmp(pos->dso->short_name + 1, module,
> -			    pos->dso->short_name_len - 2) == 0 &&
> -		    module[pos->dso->short_name_len - 2] == '\0') {
> -			return map__get(pos);
> +		const char *short_name = pos->map->dso->short_name;
> +		u16 short_name_len =  pos->map->dso->short_name_len;
> +
> +		if (strncmp(short_name + 1, module,
> +			    short_name_len - 2) == 0 &&
> +		    module[short_name_len - 2] == '\0') {
> +			return map__get(pos->map);
>  		}
>  	}
>  	return NULL;
> diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> index 31cd59a2b66e..4607c9438866 100644
> --- a/tools/perf/util/symbol-elf.c
> +++ b/tools/perf/util/symbol-elf.c
> @@ -1000,10 +1000,14 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  			map->unmap_ip = map__unmap_ip;
>  			/* Ensure maps are correctly ordered */
>  			if (kmaps) {
> +				int err;
> +
>  				map__get(map);
>  				maps__remove(kmaps, map);
> -				maps__insert(kmaps, map);
> +				err = maps__insert(kmaps, map);
>  				map__put(map);
> +				if (err)
> +					return err;
>  			}
>  		}
>  
> @@ -1056,7 +1060,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
>  			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
>  		}
>  		curr_dso->symtab_type = dso->symtab_type;
> -		maps__insert(kmaps, curr_map);
> +		if (maps__insert(kmaps, curr_map))
> +			return -1;
>  		/*
>  		 * Add it before we drop the reference to curr_map, i.e. while
>  		 * we still are sure to have a reference to this DSO via
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 99accae7d3b8..266c65bb8bbb 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -247,13 +247,13 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
>  
>  void maps__fixup_end(struct maps *maps)
>  {
> -	struct map *prev = NULL, *curr;
> +	struct map_rb_node *prev = NULL, *curr;
>  
>  	down_write(&maps->lock);
>  
>  	maps__for_each_entry(maps, curr) {
> -		if (prev != NULL && !prev->end)
> -			prev->end = curr->start;
> +		if (prev != NULL && !prev->map->end)
> +			prev->map->end = curr->map->start;
>  
>  		prev = curr;
>  	}
> @@ -262,8 +262,8 @@ void maps__fixup_end(struct maps *maps)
>  	 * We still haven't the actual symbols, so guess the
>  	 * last map final address.
>  	 */
> -	if (curr && !curr->end)
> -		curr->end = ~0ULL;
> +	if (curr && !curr->map->end)
> +		curr->map->end = ~0ULL;
>  
>  	up_write(&maps->lock);
>  }
> @@ -911,7 +911,10 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
>  			}
>  
>  			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> -			maps__insert(kmaps, curr_map);
> +			if (maps__insert(kmaps, curr_map)) {
> +				dso__put(ndso);
> +				return -1;
> +			}
>  			++kernel_range;
>  		} else if (delta) {
>  			/* Kernel was relocated at boot time */
> @@ -1099,14 +1102,15 @@ int compare_proc_modules(const char *from, const char *to)
>  static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
>  {
>  	struct rb_root modules = RB_ROOT;
> -	struct map *old_map;
> +	struct map_rb_node *old_node;
>  	int err;
>  
>  	err = read_proc_modules(filename, &modules);
>  	if (err)
>  		return err;
>  
> -	maps__for_each_entry(kmaps, old_map) {
> +	maps__for_each_entry(kmaps, old_node) {
> +		struct map *old_map = old_node->map;
>  		struct module_info *mi;
>  
>  		if (!__map__is_kmodule(old_map)) {
> @@ -1224,10 +1228,13 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
>   */
>  int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  {
> -	struct map *old_map;
> +	struct map_rb_node *rb_node;
>  	LIST_HEAD(merged);
> +	int err = 0;
> +
> +	maps__for_each_entry(kmaps, rb_node) {
> +		struct map *old_map = rb_node->map;
>  
> -	maps__for_each_entry(kmaps, old_map) {
>  		/* no overload with this one */
>  		if (new_map->end < old_map->start ||
>  		    new_map->start >= old_map->end)
> @@ -1252,13 +1259,16 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  				struct map_list_node *m;
>  
>  				m = malloc(sizeof(*m));
> -				if (!m)
> -					return -ENOMEM;
> +				if (!m) {
> +					err = -ENOMEM;
> +					goto out;
> +				}
>  
>  				m->map = map__clone(new_map);
>  				if (!m->map) {
>  					free(m);
> -					return -ENOMEM;
> +					err = -ENOMEM;
> +					goto out;
>  				}
>  
>  				m->map->end = old_map->start;
> @@ -1290,21 +1300,24 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
>  		}
>  	}
>  
> +out:
>  	while (!list_empty(&merged)) {
>  		struct map_list_node *old_node;
>  
>  		old_node = list_entry(merged.next, struct map_list_node, node);
>  		list_del_init(&old_node->node);
> -		maps__insert(kmaps, old_node->map);
> +		if (!err)
> +			err = maps__insert(kmaps, old_node->map);
>  		map__put(old_node->map);
>  		free(old_node);
>  	}
>  
>  	if (new_map) {
> -		maps__insert(kmaps, new_map);
> +		if (!err)
> +			err = maps__insert(kmaps, new_map);
>  		map__put(new_map);
>  	}
> -	return 0;
> +	return err;
>  }
>  
>  static int dso__load_kcore(struct dso *dso, struct map *map,
> @@ -1312,7 +1325,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  {
>  	struct maps *kmaps = map__kmaps(map);
>  	struct kcore_mapfn_data md;
> -	struct map *old_map, *replacement_map = NULL, *next;
> +	struct map *replacement_map = NULL;
> +	struct map_rb_node *old_node, *next;
>  	struct machine *machine;
>  	bool is_64_bit;
>  	int err, fd;
> @@ -1359,7 +1373,9 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  	}
>  
>  	/* Remove old maps */
> -	maps__for_each_entry_safe(kmaps, old_map, next) {
> +	maps__for_each_entry_safe(kmaps, old_node, next) {
> +		struct map *old_map = old_node->map;
> +
>  		/*
>  		 * We need to preserve eBPF maps even if they are
>  		 * covered by kcore, because we need to access
> @@ -1400,17 +1416,21 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  			/* Ensure maps are correctly ordered */
>  			map__get(map);
>  			maps__remove(kmaps, map);
> -			maps__insert(kmaps, map);
> +			err = maps__insert(kmaps, map);
>  			map__put(map);
>  			map__put(new_node->map);
> +			if (err)
> +				goto out_err;
>  		} else {
>  			/*
>  			 * Merge kcore map into existing maps,
>  			 * and ensure that current maps (eBPF)
>  			 * stay intact.
>  			 */
> -			if (maps__merge_in(kmaps, new_node->map))
> +			if (maps__merge_in(kmaps, new_node->map)) {
> +				err = -EINVAL;
>  				goto out_err;
> +			}
>  		}
>  		free(new_node);
>  	}
> @@ -1457,7 +1477,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
>  		free(list_node);
>  	}
>  	close(fd);
> -	return -EINVAL;
> +	return err;
>  }
>  
>  /*
> @@ -1991,8 +2011,9 @@ void __maps__sort_by_name(struct maps *maps)
>  
>  static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
>  {
> -	struct map *map;
> -	struct map **maps_by_name = realloc(maps->maps_by_name, maps->nr_maps * sizeof(map));
> +	struct map_rb_node *rb_node;
> +	struct map **maps_by_name = realloc(maps->maps_by_name,
> +					    maps->nr_maps * sizeof(struct map *));
>  	int i = 0;
>  
>  	if (maps_by_name == NULL)
> @@ -2001,8 +2022,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
>  	maps->maps_by_name = maps_by_name;
>  	maps->nr_maps_allocated = maps->nr_maps;
>  
> -	maps__for_each_entry(maps, map)
> -		maps_by_name[i++] = map;
> +	maps__for_each_entry(maps, rb_node)
> +		maps_by_name[i++] = rb_node->map;
>  
>  	__maps__sort_by_name(maps);
>  	return 0;
> @@ -2024,6 +2045,7 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
>  
>  struct map *maps__find_by_name(struct maps *maps, const char *name)
>  {
> +	struct map_rb_node *rb_node;
>  	struct map *map;
>  
>  	down_read(&maps->lock);
> @@ -2042,12 +2064,13 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
>  		goto out_unlock;
>  
>  	/* Fallback to traversing the rbtree... */
> -	maps__for_each_entry(maps, map)
> +	maps__for_each_entry(maps, rb_node) {
> +		map = rb_node->map;
>  		if (strcmp(map->dso->short_name, name) == 0) {
>  			maps->last_search_by_name = map;
>  			goto out_unlock;
>  		}
> -
> +	}
>  	map = NULL;
>  
>  out_unlock:
> diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> index 70f095624a0b..ed2d55d224aa 100644
> --- a/tools/perf/util/synthetic-events.c
> +++ b/tools/perf/util/synthetic-events.c
> @@ -639,7 +639,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
>  				   struct machine *machine)
>  {
>  	int rc = 0;
> -	struct map *pos;
> +	struct map_rb_node *pos;
>  	struct maps *maps = machine__kernel_maps(machine);
>  	union perf_event *event;
>  	size_t size = symbol_conf.buildid_mmap2 ?
> @@ -662,37 +662,39 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
>  		event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
>  
>  	maps__for_each_entry(maps, pos) {
> -		if (!__map__is_kmodule(pos))
> +		struct map *map = pos->map;
> +
> +		if (!__map__is_kmodule(map))
>  			continue;
>  
>  		if (symbol_conf.buildid_mmap2) {
> -			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
>  			event->mmap2.header.type = PERF_RECORD_MMAP2;
>  			event->mmap2.header.size = (sizeof(event->mmap2) -
>  						(sizeof(event->mmap2.filename) - size));
>  			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
>  			event->mmap2.header.size += machine->id_hdr_size;
> -			event->mmap2.start = pos->start;
> -			event->mmap2.len   = pos->end - pos->start;
> +			event->mmap2.start = map->start;
> +			event->mmap2.len   = map->end - map->start;
>  			event->mmap2.pid   = machine->pid;
>  
> -			memcpy(event->mmap2.filename, pos->dso->long_name,
> -			       pos->dso->long_name_len + 1);
> +			memcpy(event->mmap2.filename, map->dso->long_name,
> +			       map->dso->long_name_len + 1);
>  
>  			perf_record_mmap2__read_build_id(&event->mmap2, false);
>  		} else {
> -			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
> +			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
>  			event->mmap.header.type = PERF_RECORD_MMAP;
>  			event->mmap.header.size = (sizeof(event->mmap) -
>  						(sizeof(event->mmap.filename) - size));
>  			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
>  			event->mmap.header.size += machine->id_hdr_size;
> -			event->mmap.start = pos->start;
> -			event->mmap.len   = pos->end - pos->start;
> +			event->mmap.start = map->start;
> +			event->mmap.len   = map->end - map->start;
>  			event->mmap.pid   = machine->pid;
>  
> -			memcpy(event->mmap.filename, pos->dso->long_name,
> -			       pos->dso->long_name_len + 1);
> +			memcpy(event->mmap.filename, map->dso->long_name,
> +			       map->dso->long_name_len + 1);
>  		}
>  
>  		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> index 665e5c0618ed..4baf4db8af65 100644
> --- a/tools/perf/util/thread.c
> +++ b/tools/perf/util/thread.c
> @@ -338,9 +338,7 @@ int thread__insert_map(struct thread *thread, struct map *map)
>  		return ret;
>  
>  	maps__fixup_overlappings(thread->maps, map, stderr);
> -	maps__insert(thread->maps, map);
> -
> -	return 0;
> +	return maps__insert(thread->maps, map);
>  }
>  
>  static int __thread__prepare_access(struct thread *thread)
> @@ -348,12 +346,12 @@ static int __thread__prepare_access(struct thread *thread)
>  	bool initialized = false;
>  	int err = 0;
>  	struct maps *maps = thread->maps;
> -	struct map *map;
> +	struct map_rb_node *rb_node;
>  
>  	down_read(&maps->lock);
>  
> -	maps__for_each_entry(maps, map) {
> -		err = unwind__prepare_access(thread->maps, map, &initialized);
> +	maps__for_each_entry(maps, rb_node) {
> +		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
>  		if (err || initialized)
>  			break;
>  	}
> diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> index 43beb169631d..835c39efb80d 100644
> --- a/tools/perf/util/vdso.c
> +++ b/tools/perf/util/vdso.c
> @@ -144,10 +144,11 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
>  					      struct thread *thread)
>  {
>  	enum dso_type dso_type = DSO__TYPE_UNKNOWN;
> -	struct map *map;
> +	struct map_rb_node *rb_node;
> +
> +	maps__for_each_entry(thread->maps, rb_node) {
> +		struct dso *dso = rb_node->map->dso;
>  
> -	maps__for_each_entry(thread->maps, map) {
> -		struct dso *dso = map->dso;
>  		if (!dso || dso->long_name[0] != '/')
>  			continue;
>  		dso_type = dso__type(dso, machine);
> -- 
> 2.35.1.265.g69c8d7142f-goog

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 12/22] perf maps: Remove rb_node from struct map
  2022-02-16 14:08   ` Arnaldo Carvalho de Melo
@ 2022-02-16 17:36     ` Ian Rogers
  2022-02-16 20:12       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 58+ messages in thread
From: Ian Rogers @ 2022-02-16 17:36 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Wed, Feb 16, 2022 at 6:08 AM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Fri, Feb 11, 2022 at 02:34:05AM -0800, Ian Rogers escreveu:
> > struct map is reference counted, having it also be a node in an
> > red-black tree complicates the reference counting.
>
> In what way?
>
> If I have some refcounted data structure and I want to add it to some
> container (an rb_tree, a list, etc) all I have to do is to grab a
> refcount when adding and dropping it after removing it from the list.
>
> IOW, in other words it is refcounted so that we can add it to a
> red-black tree, amongst other uses.

Thanks Arnaldo. I'm not disputing that you can make reference counted
collections. With reference counting, every reference should have a
count associated with it. So when symbol.c is using the list, a node
may be referenced from a prev and a next pointer, so being in the list
requires a reference count of 2. When you find something in the list,
which reference count is that find associated with? Normally it
doesn't matter, as you'd increment the reference count again and
return that. In the perf code, find doesn't increment a reference
count, so I want to return the "get" that belongs to the list. That's
"get" singular, hence wanting to add the pointer indirection that
incurs a cost. Making insertion and deletion work properly on a list
with a reference count would mean reworking list.h.

The rbtree is the same problem, only more so, as you need pointers
for the parent and the left and right children.
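
To make that concrete, here is a small self-contained sketch (plain C
with made-up object/list names, not the perf code; the real map code
uses refcount_t and map__get()/map__put()) of the indirection: the
container node, rather than the object, owns exactly one get, so every
insert pairs with one get and every removal with one put:

#include <stdlib.h>

struct object {
        int refcnt;                     /* refcount_t in the real code */
};

static struct object *object__get(struct object *o)
{
        if (o)
                o->refcnt++;
        return o;
}

static void object__put(struct object *o)
{
        if (o && --o->refcnt == 0)
                free(o);
}

/* Non-intrusive node, analogous to map_rb_node: the node, not the
 * object, carries the linkage, and obj holds the node's single get. */
struct node {
        struct node *next;
        struct object *obj;
};

static struct node *list__insert(struct node *head, struct object *o)
{
        struct node *n = malloc(sizeof(*n));

        if (!n)
                return head;            /* insertion can now fail */
        n->obj = object__get(o);        /* exactly one get for this node */
        n->next = head;
        return n;
}

static struct node *list__remove_first(struct node *head)
{
        struct node *next = head->next;

        object__put(head->obj);         /* drop the node's single get */
        free(head);
        return next;
}

int main(void)
{
        struct object *o = calloc(1, sizeof(*o));
        struct node *head;

        if (!o)
                return 1;
        o->refcnt = 1;                  /* the caller's reference */
        head = list__insert(NULL, o);   /* refcnt == 2 */
        head = list__remove_first(head); /* refcnt back to 1 */
        object__put(o);                 /* frees the object */
        return head != NULL;
}

Because the node is the only thing holding the list's reference, a
find can hand back exactly that get (or take a new one), instead of
having to decide which of the intrusive prev/next/parent pointers it
stands for.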

> > Switch to having a map_rb_node which is a red-block tree node but
> > points at the reference counted struct map. This reference is
> > responsible for a single reference count.
>
> This makes every insertion incur in an allocation that has to be
> checked, etc, when we know that maps will live in rb_trees, so having
> the node structure allocated at the same time as the map is
> advantageous.

So this pattern is common in other languages; the default kernel
style is what at Google gets called invasive - you put the storage for
list nodes, reference counts, etc. into the referenced object itself.
That lowers overhead by keeping everything within the struct, and I
don't disagree that this change adds a cost to insertion, unless maps
are shared, which isn't a use-case we have at the moment. So this
change is backing out an optimization, but frankly fixing this
properly is a much bigger overhaul than this already big overhaul, and
I don't think losing the optimization really costs that much
performance - a memory allocation costs in the region of 40 cycles
with an optimized implementation like tcmalloc. We also don't use the
invasive style for maps_by_name; it is just a sorted array.

As a side note, I see a lot of overhead in symbol allocation, and
part of that is the size of the two invasive rbtree nodes (2 * 3 * 8
bytes = 48 bytes). Were the symbols just referenced by a sorted array,
like maps_by_name, insertion and sorting would still be O(n*log(n))
overall, but we'd reduce the memory usage to a third. rbtree is a cool
data structure, but I think we could be overusing it.
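
As a rough, self-contained illustration of that sorted-array shape
(hypothetical struct and function names, not the existing symbol
code), one 8-byte pointer of index per entry replaces a 24-byte
embedded rb_node, and a single qsort() after all insertions keeps the
overall cost at O(n*log(n)):

#include <stdio.h>
#include <stdlib.h>

struct symbol {
        unsigned long long start;
        const char *name;
};

static int symbol__cmp_start(const void *a, const void *b)
{
        const struct symbol *sa = *(const struct symbol * const *)a;
        const struct symbol *sb = *(const struct symbol * const *)b;

        return sa->start < sb->start ? -1 : sa->start > sb->start;
}

int main(void)
{
        struct symbol a = { 0x2000, "b" }, b = { 0x1000, "a" };
        /* The index is just an array of pointers, like maps_by_name. */
        struct symbol *by_start[] = { &a, &b };

        qsort(by_start, 2, sizeof(by_start[0]), symbol__cmp_start);
        printf("%s %s\n", by_start[0]->name, by_start[1]->name);
        return 0;
}

Lookups on the sorted array would then be bsearch()-style binary
searches, the same O(log(n)) as walking the rbtree.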

> We don't have to check if adding a data structure to a rbtree works, as
> all that is needed is already preallocated.

The issue here is that a find, or similar, wants to pass around
something that is owned by a list or an rbtree. We could express that
ownership by adding a token/cookie and passing it around everywhere,
but then it gets problematic to spot use-after-put, and I think that
approach is overall more invasive to the APIs than what is in these
changes.

A better solution could be to keep the rbtree invasive and, at all
the find and similar routines, make sure a getted version is returned
- so code outside of maps never works with the rbtree's reference
counted version. The problem with this is that it is an overhaul of
all the uses of map. The reference count checker would find misuse,
but again it'd be a far larger patch series than what is here - which
is trying to fix the code base as it is.
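
For illustration, a minimal sketch of such a getted find, again with
hypothetical names rather than the real maps code (in perf the
down_read()/up_read() on maps->lock and map__get() would play these
roles):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct object {
        atomic_int refcnt;              /* refcount_t in the real code */
};

static struct object *object__get(struct object *o)
{
        if (o)
                atomic_fetch_add(&o->refcnt, 1);
        return o;
}

struct container {
        pthread_rwlock_t lock;
        struct object **entries;
        size_t nr;
};

/*
 * The container's own reference never escapes: under the lock the
 * caller is handed a reference of its own, which it must later put.
 */
static struct object *container__find_get(struct container *c, size_t i)
{
        struct object *result = NULL;

        pthread_rwlock_rdlock(&c->lock);
        if (i < c->nr)
                result = object__get(c->entries[i]);
        pthread_rwlock_unlock(&c->lock);
        return result;
}

int main(void)
{
        struct object o = { 1 };
        struct object *entries[] = { &o };
        struct container c = { .entries = entries, .nr = 1 };
        struct object *found;

        pthread_rwlock_init(&c.lock, NULL);
        found = container__find_get(&c, 0);
        printf("refcnt after find: %d\n", atomic_load(&found->refcnt));
        pthread_rwlock_destroy(&c.lock);
        return 0;
}

The checker would then treat every returned reference uniformly, at
the price of touching every caller of find - which is the overhaul
mentioned above.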

I think the have-our-cake-and-eat-it solution (best performance +
checking) is that approach, but we need to get to a point where the
checking is working. So if we focus on (1) checking and fixing those
bugs (the changes here), then (2) changing the APIs so that everything
is getted and fixing the leaks that introduces, and then (3) going
back to being invasive, I think we get to that solution. I like step
(2) from a cleanliness point of view; I'm fine with (3), I'm just not
sure anybody would notice the performance difference.

Thanks,
Ian

> - Arnaldo
>
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/arch/x86/util/event.c    |  13 +-
> >  tools/perf/builtin-report.c         |   6 +-
> >  tools/perf/tests/maps.c             |   8 +-
> >  tools/perf/tests/vmlinux-kallsyms.c |  17 +--
> >  tools/perf/util/machine.c           |  62 ++++++----
> >  tools/perf/util/map.c               |  16 ---
> >  tools/perf/util/map.h               |   1 -
> >  tools/perf/util/maps.c              | 182 ++++++++++++++++++----------
> >  tools/perf/util/maps.h              |  17 ++-
> >  tools/perf/util/probe-event.c       |  18 +--
> >  tools/perf/util/symbol-elf.c        |   9 +-
> >  tools/perf/util/symbol.c            |  77 +++++++-----
> >  tools/perf/util/synthetic-events.c  |  26 ++--
> >  tools/perf/util/thread.c            |  10 +-
> >  tools/perf/util/vdso.c              |   7 +-
> >  15 files changed, 288 insertions(+), 181 deletions(-)
> >
> > diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
> > index e670f3547581..7b6b0c98fb36 100644
> > --- a/tools/perf/arch/x86/util/event.c
> > +++ b/tools/perf/arch/x86/util/event.c
> > @@ -17,7 +17,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
> >                                      struct machine *machine)
> >  {
> >       int rc = 0;
> > -     struct map *pos;
> > +     struct map_rb_node *pos;
> >       struct maps *kmaps = machine__kernel_maps(machine);
> >       union perf_event *event = zalloc(sizeof(event->mmap) +
> >                                        machine->id_hdr_size);
> > @@ -31,11 +31,12 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
> >       maps__for_each_entry(kmaps, pos) {
> >               struct kmap *kmap;
> >               size_t size;
> > +             struct map *map = pos->map;
> >
> > -             if (!__map__is_extra_kernel_map(pos))
> > +             if (!__map__is_extra_kernel_map(map))
> >                       continue;
> >
> > -             kmap = map__kmap(pos);
> > +             kmap = map__kmap(map);
> >
> >               size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
> >                      PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
> > @@ -56,9 +57,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
> >
> >               event->mmap.header.size = size;
> >
> > -             event->mmap.start = pos->start;
> > -             event->mmap.len   = pos->end - pos->start;
> > -             event->mmap.pgoff = pos->pgoff;
> > +             event->mmap.start = map->start;
> > +             event->mmap.len   = map->end - map->start;
> > +             event->mmap.pgoff = map->pgoff;
> >               event->mmap.pid   = machine->pid;
> >
> >               strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
> > diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
> > index 1dd92d8c9279..57611ef725c3 100644
> > --- a/tools/perf/builtin-report.c
> > +++ b/tools/perf/builtin-report.c
> > @@ -799,9 +799,11 @@ static struct task *tasks_list(struct task *task, struct machine *machine)
> >  static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
> >  {
> >       size_t printed = 0;
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> > +
> > +     maps__for_each_entry(maps, rb_node) {
> > +             struct map *map = rb_node->map;
> >
> > -     maps__for_each_entry(maps, map) {
> >               printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
> >                                  indent, "", map->start, map->end,
> >                                  map->prot & PROT_READ ? 'r' : '-',
> > diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
> > index 6f53f17f788e..a58274598587 100644
> > --- a/tools/perf/tests/maps.c
> > +++ b/tools/perf/tests/maps.c
> > @@ -15,10 +15,12 @@ struct map_def {
> >
> >  static int check_maps(struct map_def *merged, unsigned int size, struct maps *maps)
> >  {
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> >       unsigned int i = 0;
> >
> > -     maps__for_each_entry(maps, map) {
> > +     maps__for_each_entry(maps, rb_node) {
> > +             struct map *map = rb_node->map;
> > +
> >               if (i > 0)
> >                       TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
> >
> > @@ -74,7 +76,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
> >
> >               map->start = bpf_progs[i].start;
> >               map->end   = bpf_progs[i].end;
> > -             maps__insert(maps, map);
> > +             TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
> >               map__put(map);
> >       }
> >
> > diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
> > index 84bf5f640065..11a230ee5894 100644
> > --- a/tools/perf/tests/vmlinux-kallsyms.c
> > +++ b/tools/perf/tests/vmlinux-kallsyms.c
> > @@ -117,7 +117,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >       int err = -1;
> >       struct rb_node *nd;
> >       struct symbol *sym;
> > -     struct map *kallsyms_map, *vmlinux_map, *map;
> > +     struct map *kallsyms_map, *vmlinux_map;
> > +     struct map_rb_node *rb_node;
> >       struct machine kallsyms, vmlinux;
> >       struct maps *maps = machine__kernel_maps(&vmlinux);
> >       u64 mem_start, mem_end;
> > @@ -285,15 +286,15 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >
> >       header_printed = false;
> >
> > -     maps__for_each_entry(maps, map) {
> > -             struct map *
> > +     maps__for_each_entry(maps, rb_node) {
> > +             struct map *map = rb_node->map;
> >               /*
> >                * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
> >                * the kernel will have the path for the vmlinux file being used,
> >                * so use the short name, less descriptive but the same ("[kernel]" in
> >                * both cases.
> >                */
> > -             pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> > +             struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
> >                                                               map->dso->short_name :
> >                                                               map->dso->name));
> >               if (pair) {
> > @@ -309,8 +310,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >
> >       header_printed = false;
> >
> > -     maps__for_each_entry(maps, map) {
> > -             struct map *pair;
> > +     maps__for_each_entry(maps, rb_node) {
> > +             struct map *pair, *map = rb_node->map;
> >
> >               mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
> >               mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
> > @@ -339,7 +340,9 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
> >
> >       maps = machine__kernel_maps(&kallsyms);
> >
> > -     maps__for_each_entry(maps, map) {
> > +     maps__for_each_entry(maps, rb_node) {
> > +             struct map *map = rb_node->map;
> > +
> >               if (!map->priv) {
> >                       if (!header_printed) {
> >                               pr_info("WARN: Maps only in kallsyms:\n");
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 57fbdba66425..fa25174cabf7 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -786,6 +786,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
> >
> >       if (!map) {
> >               struct dso *dso = dso__new(event->ksymbol.name);
> > +             int err;
> >
> >               if (dso) {
> >                       dso->kernel = DSO_SPACE__KERNEL;
> > @@ -805,8 +806,11 @@ static int machine__process_ksymbol_register(struct machine *machine,
> >
> >               map->start = event->ksymbol.addr;
> >               map->end = map->start + event->ksymbol.len;
> > -             maps__insert(machine__kernel_maps(machine), map);
> > +             err = maps__insert(machine__kernel_maps(machine), map);
> >               map__put(map);
> > +             if (err)
> > +                     return err;
> > +
> >               dso__set_loaded(dso);
> >
> >               if (is_bpf_image(event->ksymbol.name)) {
> > @@ -906,6 +910,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> >       struct map *map = NULL;
> >       struct kmod_path m;
> >       struct dso *dso;
> > +     int err;
> >
> >       if (kmod_path__parse_name(&m, filename))
> >               return NULL;
> > @@ -918,10 +923,14 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
> >       if (map == NULL)
> >               goto out;
> >
> > -     maps__insert(machine__kernel_maps(machine), map);
> > +     err = maps__insert(machine__kernel_maps(machine), map);
> >
> >       /* Put the map here because maps__insert already got it */
> >       map__put(map);
> > +
> > +     /* If maps__insert failed, return NULL. */
> > +     if (err)
> > +             map = NULL;
> >  out:
> >       /* put the dso here, corresponding to  machine__findnew_module_dso */
> >       dso__put(dso);
> > @@ -1092,10 +1101,11 @@ int machine__create_extra_kernel_map(struct machine *machine,
> >  {
> >       struct kmap *kmap;
> >       struct map *map;
> > +     int err;
> >
> >       map = map__new2(xm->start, kernel);
> >       if (!map)
> > -             return -1;
> > +             return -ENOMEM;
> >
> >       map->end   = xm->end;
> >       map->pgoff = xm->pgoff;
> > @@ -1104,14 +1114,16 @@ int machine__create_extra_kernel_map(struct machine *machine,
> >
> >       strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
> >
> > -     maps__insert(machine__kernel_maps(machine), map);
> > +     err = maps__insert(machine__kernel_maps(machine), map);
> >
> > -     pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> > -               kmap->name, map->start, map->end);
> > +     if (!err) {
> > +             pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
> > +                     kmap->name, map->start, map->end);
> > +     }
> >
> >       map__put(map);
> >
> > -     return 0;
> > +     return err;
> >  }
> >
> >  static u64 find_entry_trampoline(struct dso *dso)
> > @@ -1152,16 +1164,16 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
> >       struct maps *kmaps = machine__kernel_maps(machine);
> >       int nr_cpus_avail, cpu;
> >       bool found = false;
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> >       u64 pgoff;
> >
> >       /*
> >        * In the vmlinux case, pgoff is a virtual address which must now be
> >        * mapped to a vmlinux offset.
> >        */
> > -     maps__for_each_entry(kmaps, map) {
> > +     maps__for_each_entry(kmaps, rb_node) {
> > +             struct map *dest_map, *map = rb_node->map;
> >               struct kmap *kmap = __map__kmap(map);
> > -             struct map *dest_map;
> >
> >               if (!kmap || !is_entry_trampoline(kmap->name))
> >                       continue;
> > @@ -1216,11 +1228,10 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> >
> >       machine->vmlinux_map = map__new2(0, kernel);
> >       if (machine->vmlinux_map == NULL)
> > -             return -1;
> > +             return -ENOMEM;
> >
> >       machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
> > -     maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
> > -     return 0;
> > +     return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
> >  }
> >
> >  void machine__destroy_kernel_maps(struct machine *machine)
> > @@ -1542,25 +1553,26 @@ static void machine__set_kernel_mmap(struct machine *machine,
> >               machine->vmlinux_map->end = ~0ULL;
> >  }
> >
> > -static void machine__update_kernel_mmap(struct machine *machine,
> > +static int machine__update_kernel_mmap(struct machine *machine,
> >                                    u64 start, u64 end)
> >  {
> >       struct map *map = machine__kernel_map(machine);
> > +     int err;
> >
> >       map__get(map);
> >       maps__remove(machine__kernel_maps(machine), map);
> >
> >       machine__set_kernel_mmap(machine, start, end);
> >
> > -     maps__insert(machine__kernel_maps(machine), map);
> > +     err = maps__insert(machine__kernel_maps(machine), map);
> >       map__put(map);
> > +     return err;
> >  }
> >
> >  int machine__create_kernel_maps(struct machine *machine)
> >  {
> >       struct dso *kernel = machine__get_kernel(machine);
> >       const char *name = NULL;
> > -     struct map *map;
> >       u64 start = 0, end = ~0ULL;
> >       int ret;
> >
> > @@ -1592,7 +1604,9 @@ int machine__create_kernel_maps(struct machine *machine)
> >                * we have a real start address now, so re-order the kmaps
> >                * assume it's the last in the kmaps
> >                */
> > -             machine__update_kernel_mmap(machine, start, end);
> > +             ret = machine__update_kernel_mmap(machine, start, end);
> > +             if (ret < 0)
> > +                     goto out_put;
> >       }
> >
> >       if (machine__create_extra_kernel_maps(machine, kernel))
> > @@ -1600,9 +1614,12 @@ int machine__create_kernel_maps(struct machine *machine)
> >
> >       if (end == ~0ULL) {
> >               /* update end address of the kernel map using adjacent module address */
> > -             map = map__next(machine__kernel_map(machine));
> > -             if (map)
> > -                     machine__set_kernel_mmap(machine, start, map->start);
> > +             struct map_rb_node *rb_node = maps__find_node(machine__kernel_maps(machine),
> > +                                                     machine__kernel_map(machine));
> > +             struct map_rb_node *next = map_rb_node__next(rb_node);
> > +
> > +             if (next)
> > +                     machine__set_kernel_mmap(machine, start, next->map->start);
> >       }
> >
> >  out_put:
> > @@ -1726,7 +1743,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
> >               if (strstr(kernel->long_name, "vmlinux"))
> >                       dso__set_short_name(kernel, "[kernel.vmlinux]", false);
> >
> > -             machine__update_kernel_mmap(machine, xm->start, xm->end);
> > +             if (machine__update_kernel_mmap(machine, xm->start, xm->end) < 0) {
> > +                     dso__put(kernel);
> > +                     goto out_problem;
> > +             }
> >
> >               if (build_id__is_defined(bid))
> >                       dso__set_build_id(kernel, bid);
> > diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
> > index 8bbf9246a3cf..dfa5f6b7381f 100644
> > --- a/tools/perf/util/map.c
> > +++ b/tools/perf/util/map.c
> > @@ -111,7 +111,6 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
> >       map->dso      = dso__get(dso);
> >       map->map_ip   = map__map_ip;
> >       map->unmap_ip = map__unmap_ip;
> > -     RB_CLEAR_NODE(&map->rb_node);
> >       map->erange_warned = false;
> >       refcount_set(&map->refcnt, 1);
> >  }
> > @@ -383,7 +382,6 @@ struct map *map__clone(struct map *from)
> >       map = memdup(from, size);
> >       if (map != NULL) {
> >               refcount_set(&map->refcnt, 1);
> > -             RB_CLEAR_NODE(&map->rb_node);
> >               dso__get(map->dso);
> >       }
> >
> > @@ -523,20 +521,6 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
> >       return ip >= map->start && ip < map->end;
> >  }
> >
> > -static struct map *__map__next(struct map *map)
> > -{
> > -     struct rb_node *next = rb_next(&map->rb_node);
> > -
> > -     if (next)
> > -             return rb_entry(next, struct map, rb_node);
> > -     return NULL;
> > -}
> > -
> > -struct map *map__next(struct map *map)
> > -{
> > -     return map ? __map__next(map) : NULL;
> > -}
> > -
> >  struct kmap *__map__kmap(struct map *map)
> >  {
> >       if (!map->dso || !map->dso->kernel)
> > diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
> > index 2879cae05ee0..d1a6f85fd31d 100644
> > --- a/tools/perf/util/map.h
> > +++ b/tools/perf/util/map.h
> > @@ -16,7 +16,6 @@ struct maps;
> >  struct machine;
> >
> >  struct map {
> > -     struct rb_node          rb_node;
> >       u64                     start;
> >       u64                     end;
> >       bool                    erange_warned:1;
> > diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
> > index ededabf0a230..beb09b9a122c 100644
> > --- a/tools/perf/util/maps.c
> > +++ b/tools/perf/util/maps.c
> > @@ -10,9 +10,7 @@
> >  #include "ui/ui.h"
> >  #include "unwind.h"
> >
> > -static void __maps__insert(struct maps *maps, struct map *map);
> > -
> > -void maps__init(struct maps *maps, struct machine *machine)
> > +static void maps__init(struct maps *maps, struct machine *machine)
> >  {
> >       maps->entries = RB_ROOT;
> >       init_rwsem(&maps->lock);
> > @@ -32,10 +30,44 @@ static void __maps__free_maps_by_name(struct maps *maps)
> >       maps->nr_maps_allocated = 0;
> >  }
> >
> > -void maps__insert(struct maps *maps, struct map *map)
> > +static int __maps__insert(struct maps *maps, struct map *map)
> > +{
> > +     struct rb_node **p = &maps->entries.rb_node;
> > +     struct rb_node *parent = NULL;
> > +     const u64 ip = map->start;
> > +     struct map_rb_node *m, *new_rb_node;
> > +
> > +     new_rb_node = malloc(sizeof(*new_rb_node));
> > +     if (!new_rb_node)
> > +             return -ENOMEM;
> > +
> > +     RB_CLEAR_NODE(&new_rb_node->rb_node);
> > +     new_rb_node->map = map;
> > +
> > +     while (*p != NULL) {
> > +             parent = *p;
> > +             m = rb_entry(parent, struct map_rb_node, rb_node);
> > +             if (ip < m->map->start)
> > +                     p = &(*p)->rb_left;
> > +             else
> > +                     p = &(*p)->rb_right;
> > +     }
> > +
> > +     rb_link_node(&new_rb_node->rb_node, parent, p);
> > +     rb_insert_color(&new_rb_node->rb_node, &maps->entries);
> > +     map__get(map);
> > +     return 0;
> > +}
> > +
> > +int maps__insert(struct maps *maps, struct map *map)
> >  {
> > +     int err;
> > +
> >       down_write(&maps->lock);
> > -     __maps__insert(maps, map);
> > +     err = __maps__insert(maps, map);
> > +     if (err)
> > +             goto out;
> > +
> >       ++maps->nr_maps;
> >
> >       if (map->dso && map->dso->kernel) {
> > @@ -59,8 +91,8 @@ void maps__insert(struct maps *maps, struct map *map)
> >
> >                       if (maps_by_name == NULL) {
> >                               __maps__free_maps_by_name(maps);
> > -                             up_write(&maps->lock);
> > -                             return;
> > +                             err = -ENOMEM;
> > +                             goto out;
> >                       }
> >
> >                       maps->maps_by_name = maps_by_name;
> > @@ -69,22 +101,29 @@ void maps__insert(struct maps *maps, struct map *map)
> >               maps->maps_by_name[maps->nr_maps - 1] = map;
> >               __maps__sort_by_name(maps);
> >       }
> > +out:
> >       up_write(&maps->lock);
> > +     return err;
> >  }
> >
> > -static void __maps__remove(struct maps *maps, struct map *map)
> > +static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
> >  {
> > -     rb_erase_init(&map->rb_node, &maps->entries);
> > -     map__put(map);
> > +     rb_erase_init(&rb_node->rb_node, &maps->entries);
> > +     map__put(rb_node->map);
> > +     free(rb_node);
> >  }
> >
> >  void maps__remove(struct maps *maps, struct map *map)
> >  {
> > +     struct map_rb_node *rb_node;
> > +
> >       down_write(&maps->lock);
> >       if (maps->last_search_by_name == map)
> >               maps->last_search_by_name = NULL;
> >
> > -     __maps__remove(maps, map);
> > +     rb_node = maps__find_node(maps, map);
> > +     assert(rb_node->map == map);
> > +     __maps__remove(maps, rb_node);
> >       --maps->nr_maps;
> >       if (maps->maps_by_name)
> >               __maps__free_maps_by_name(maps);
> > @@ -93,15 +132,16 @@ void maps__remove(struct maps *maps, struct map *map)
> >
> >  static void __maps__purge(struct maps *maps)
> >  {
> > -     struct map *pos, *next;
> > +     struct map_rb_node *pos, *next;
> >
> >       maps__for_each_entry_safe(maps, pos, next) {
> >               rb_erase_init(&pos->rb_node,  &maps->entries);
> > -             map__put(pos);
> > +             map__put(pos->map);
> > +             free(pos);
> >       }
> >  }
> >
> > -void maps__exit(struct maps *maps)
> > +static void maps__exit(struct maps *maps)
> >  {
> >       down_write(&maps->lock);
> >       __maps__purge(maps);
> > @@ -153,21 +193,21 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
> >  struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
> >  {
> >       struct symbol *sym;
> > -     struct map *pos;
> > +     struct map_rb_node *pos;
> >
> >       down_read(&maps->lock);
> >
> >       maps__for_each_entry(maps, pos) {
> > -             sym = map__find_symbol_by_name(pos, name);
> > +             sym = map__find_symbol_by_name(pos->map, name);
> >
> >               if (sym == NULL)
> >                       continue;
> > -             if (!map__contains_symbol(pos, sym)) {
> > +             if (!map__contains_symbol(pos->map, sym)) {
> >                       sym = NULL;
> >                       continue;
> >               }
> >               if (mapp != NULL)
> > -                     *mapp = pos;
> > +                     *mapp = pos->map;
> >               goto out;
> >       }
> >
> > @@ -196,15 +236,15 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
> >  size_t maps__fprintf(struct maps *maps, FILE *fp)
> >  {
> >       size_t printed = 0;
> > -     struct map *pos;
> > +     struct map_rb_node *pos;
> >
> >       down_read(&maps->lock);
> >
> >       maps__for_each_entry(maps, pos) {
> >               printed += fprintf(fp, "Map:");
> > -             printed += map__fprintf(pos, fp);
> > +             printed += map__fprintf(pos->map, fp);
> >               if (verbose > 2) {
> > -                     printed += dso__fprintf(pos->dso, fp);
> > +                     printed += dso__fprintf(pos->map->dso, fp);
> >                       printed += fprintf(fp, "--\n");
> >               }
> >       }
> > @@ -231,11 +271,11 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >       next = root->rb_node;
> >       first = NULL;
> >       while (next) {
> > -             struct map *pos = rb_entry(next, struct map, rb_node);
> > +             struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
> >
> > -             if (pos->end > map->start) {
> > +             if (pos->map->end > map->start) {
> >                       first = next;
> > -                     if (pos->start <= map->start)
> > +                     if (pos->map->start <= map->start)
> >                               break;
> >                       next = next->rb_left;
> >               } else
> > @@ -244,14 +284,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >
> >       next = first;
> >       while (next) {
> > -             struct map *pos = rb_entry(next, struct map, rb_node);
> > +             struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
> >               next = rb_next(&pos->rb_node);
> >
> >               /*
> >                * Stop if current map starts after map->end.
> >                * Maps are ordered by start: next will not overlap for sure.
> >                */
> > -             if (pos->start >= map->end)
> > +             if (pos->map->start >= map->end)
> >                       break;
> >
> >               if (verbose >= 2) {
> > @@ -262,7 +302,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                       } else {
> >                               fputs("overlapping maps:\n", fp);
> >                               map__fprintf(map, fp);
> > -                             map__fprintf(pos, fp);
> > +                             map__fprintf(pos->map, fp);
> >                       }
> >               }
> >
> > @@ -271,8 +311,8 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                * Now check if we need to create new maps for areas not
> >                * overlapped by the new map:
> >                */
> > -             if (map->start > pos->start) {
> > -                     struct map *before = map__clone(pos);
> > +             if (map->start > pos->map->start) {
> > +                     struct map *before = map__clone(pos->map);
> >
> >                       if (before == NULL) {
> >                               err = -ENOMEM;
> > @@ -280,14 +320,17 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                       }
> >
> >                       before->end = map->start;
> > -                     __maps__insert(maps, before);
> > +                     err = __maps__insert(maps, before);
> > +                     if (err)
> > +                             goto put_map;
> > +
> >                       if (verbose >= 2 && !use_browser)
> >                               map__fprintf(before, fp);
> >                       map__put(before);
> >               }
> >
> > -             if (map->end < pos->end) {
> > -                     struct map *after = map__clone(pos);
> > +             if (map->end < pos->map->end) {
> > +                     struct map *after = map__clone(pos->map);
> >
> >                       if (after == NULL) {
> >                               err = -ENOMEM;
> > @@ -295,15 +338,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
> >                       }
> >
> >                       after->start = map->end;
> > -                     after->pgoff += map->end - pos->start;
> > -                     assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
> > -                     __maps__insert(maps, after);
> > +                     after->pgoff += map->end - pos->map->start;
> > +                     assert(pos->map->map_ip(pos->map, map->end) ==
> > +                             after->map_ip(after, map->end));
> > +                     err = __maps__insert(maps, after);
> > +                     if (err)
> > +                             goto put_map;
> > +
> >                       if (verbose >= 2 && !use_browser)
> >                               map__fprintf(after, fp);
> >                       map__put(after);
> >               }
> >  put_map:
> > -             map__put(pos);
> > +             map__put(pos->map);
> >
> >               if (err)
> >                       goto out;
> > @@ -322,12 +369,12 @@ int maps__clone(struct thread *thread, struct maps *parent)
> >  {
> >       struct maps *maps = thread->maps;
> >       int err;
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> >
> >       down_read(&parent->lock);
> >
> > -     maps__for_each_entry(parent, map) {
> > -             struct map *new = map__clone(map);
> > +     maps__for_each_entry(parent, rb_node) {
> > +             struct map *new = map__clone(rb_node->map);
> >
> >               if (new == NULL) {
> >                       err = -ENOMEM;
> > @@ -338,7 +385,10 @@ int maps__clone(struct thread *thread, struct maps *parent)
> >               if (err)
> >                       goto out_unlock;
> >
> > -             maps__insert(maps, new);
> > +             err = maps__insert(maps, new);
> > +             if (err)
> > +                     goto out_unlock;
> > +
> >               map__put(new);
> >       }
> >
> > @@ -348,40 +398,30 @@ int maps__clone(struct thread *thread, struct maps *parent)
> >       return err;
> >  }
> >
> > -static void __maps__insert(struct maps *maps, struct map *map)
> > +struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
> >  {
> > -     struct rb_node **p = &maps->entries.rb_node;
> > -     struct rb_node *parent = NULL;
> > -     const u64 ip = map->start;
> > -     struct map *m;
> > +     struct map_rb_node *rb_node;
> >
> > -     while (*p != NULL) {
> > -             parent = *p;
> > -             m = rb_entry(parent, struct map, rb_node);
> > -             if (ip < m->start)
> > -                     p = &(*p)->rb_left;
> > -             else
> > -                     p = &(*p)->rb_right;
> > +     maps__for_each_entry(maps, rb_node) {
> > +             if (rb_node->map == map)
> > +                     return rb_node;
> >       }
> > -
> > -     rb_link_node(&map->rb_node, parent, p);
> > -     rb_insert_color(&map->rb_node, &maps->entries);
> > -     map__get(map);
> > +     return NULL;
> >  }
> >
> >  struct map *maps__find(struct maps *maps, u64 ip)
> >  {
> >       struct rb_node *p;
> > -     struct map *m;
> > +     struct map_rb_node *m;
> >
> >       down_read(&maps->lock);
> >
> >       p = maps->entries.rb_node;
> >       while (p != NULL) {
> > -             m = rb_entry(p, struct map, rb_node);
> > -             if (ip < m->start)
> > +             m = rb_entry(p, struct map_rb_node, rb_node);
> > +             if (ip < m->map->start)
> >                       p = p->rb_left;
> > -             else if (ip >= m->end)
> > +             else if (ip >= m->map->end)
> >                       p = p->rb_right;
> >               else
> >                       goto out;
> > @@ -390,14 +430,30 @@ struct map *maps__find(struct maps *maps, u64 ip)
> >       m = NULL;
> >  out:
> >       up_read(&maps->lock);
> > -     return m;
> > +
> > +     return m ? m->map : NULL;
> >  }
> >
> > -struct map *maps__first(struct maps *maps)
> > +struct map_rb_node *maps__first(struct maps *maps)
> >  {
> >       struct rb_node *first = rb_first(&maps->entries);
> >
> >       if (first)
> > -             return rb_entry(first, struct map, rb_node);
> > +             return rb_entry(first, struct map_rb_node, rb_node);
> >       return NULL;
> >  }
> > +
> > +struct map_rb_node *map_rb_node__next(struct map_rb_node *node)
> > +{
> > +     struct rb_node *next;
> > +
> > +     if (!node)
> > +             return NULL;
> > +
> > +     next = rb_next(&node->rb_node);
> > +
> > +     if (!next)
> > +             return NULL;
> > +
> > +     return rb_entry(next, struct map_rb_node, rb_node);
> > +}
> > diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
> > index 7e729ff42749..512746ec0f9a 100644
> > --- a/tools/perf/util/maps.h
> > +++ b/tools/perf/util/maps.h
> > @@ -15,15 +15,22 @@ struct map;
> >  struct maps;
> >  struct thread;
> >
> > +struct map_rb_node {
> > +     struct rb_node rb_node;
> > +     struct map *map;
> > +};
> > +
> > +struct map_rb_node *maps__first(struct maps *maps);
> > +struct map_rb_node *map_rb_node__next(struct map_rb_node *node);
> > +struct map_rb_node *maps__find_node(struct maps *maps, struct map *map);
> >  struct map *maps__find(struct maps *maps, u64 addr);
> > -struct map *maps__first(struct maps *maps);
> > -struct map *map__next(struct map *map);
> >
> >  #define maps__for_each_entry(maps, map) \
> > -     for (map = maps__first(maps); map; map = map__next(map))
> > +     for (map = maps__first(maps); map; map = map_rb_node__next(map))
> >
> >  #define maps__for_each_entry_safe(maps, map, next) \
> > -     for (map = maps__first(maps), next = map__next(map); map; map = next, next = map__next(map))
> > +     for (map = maps__first(maps), next = map_rb_node__next(map); map; \
> > +          map = next, next = map_rb_node__next(map))
> >
> >  struct maps {
> >       struct rb_root      entries;
> > @@ -63,7 +70,7 @@ void maps__put(struct maps *maps);
> >  int maps__clone(struct thread *thread, struct maps *parent);
> >  size_t maps__fprintf(struct maps *maps, FILE *fp);
> >
> > -void maps__insert(struct maps *maps, struct map *map);
> > +int maps__insert(struct maps *maps, struct map *map);
> >
> >  void maps__remove(struct maps *maps, struct map *map);
> >
> > diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
> > index bc5ab782ace5..f9fbf611f2bf 100644
> > --- a/tools/perf/util/probe-event.c
> > +++ b/tools/perf/util/probe-event.c
> > @@ -150,23 +150,27 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
> >  static struct map *kernel_get_module_map(const char *module)
> >  {
> >       struct maps *maps = machine__kernel_maps(host_machine);
> > -     struct map *pos;
> > +     struct map_rb_node *pos;
> >
> >       /* A file path -- this is an offline module */
> >       if (module && strchr(module, '/'))
> >               return dso__new_map(module);
> >
> >       if (!module) {
> > -             pos = machine__kernel_map(host_machine);
> > -             return map__get(pos);
> > +             struct map *map = machine__kernel_map(host_machine);
> > +
> > +             return map__get(map);
> >       }
> >
> >       maps__for_each_entry(maps, pos) {
> >               /* short_name is "[module]" */
> > -             if (strncmp(pos->dso->short_name + 1, module,
> > -                         pos->dso->short_name_len - 2) == 0 &&
> > -                 module[pos->dso->short_name_len - 2] == '\0') {
> > -                     return map__get(pos);
> > +             const char *short_name = pos->map->dso->short_name;
> > +             u16 short_name_len =  pos->map->dso->short_name_len;
> > +
> > +             if (strncmp(short_name + 1, module,
> > +                         short_name_len - 2) == 0 &&
> > +                 module[short_name_len - 2] == '\0') {
> > +                     return map__get(pos->map);
> >               }
> >       }
> >       return NULL;
> > diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
> > index 31cd59a2b66e..4607c9438866 100644
> > --- a/tools/perf/util/symbol-elf.c
> > +++ b/tools/perf/util/symbol-elf.c
> > @@ -1000,10 +1000,14 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >                       map->unmap_ip = map__unmap_ip;
> >                       /* Ensure maps are correctly ordered */
> >                       if (kmaps) {
> > +                             int err;
> > +
> >                               map__get(map);
> >                               maps__remove(kmaps, map);
> > -                             maps__insert(kmaps, map);
> > +                             err = maps__insert(kmaps, map);
> >                               map__put(map);
> > +                             if (err)
> > +                                     return err;
> >                       }
> >               }
> >
> > @@ -1056,7 +1060,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
> >                       curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> >               }
> >               curr_dso->symtab_type = dso->symtab_type;
> > -             maps__insert(kmaps, curr_map);
> > +             if (maps__insert(kmaps, curr_map))
> > +                     return -1;
> >               /*
> >                * Add it before we drop the reference to curr_map, i.e. while
> >                * we still are sure to have a reference to this DSO via
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index 99accae7d3b8..266c65bb8bbb 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -247,13 +247,13 @@ void symbols__fixup_end(struct rb_root_cached *symbols)
> >
> >  void maps__fixup_end(struct maps *maps)
> >  {
> > -     struct map *prev = NULL, *curr;
> > +     struct map_rb_node *prev = NULL, *curr;
> >
> >       down_write(&maps->lock);
> >
> >       maps__for_each_entry(maps, curr) {
> > -             if (prev != NULL && !prev->end)
> > -                     prev->end = curr->start;
> > +             if (prev != NULL && !prev->map->end)
> > +                     prev->map->end = curr->map->start;
> >
> >               prev = curr;
> >       }
> > @@ -262,8 +262,8 @@ void maps__fixup_end(struct maps *maps)
> >        * We still haven't the actual symbols, so guess the
> >        * last map final address.
> >        */
> > -     if (curr && !curr->end)
> > -             curr->end = ~0ULL;
> > +     if (curr && !curr->map->end)
> > +             curr->map->end = ~0ULL;
> >
> >       up_write(&maps->lock);
> >  }
> > @@ -911,7 +911,10 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
> >                       }
> >
> >                       curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
> > -                     maps__insert(kmaps, curr_map);
> > +                     if (maps__insert(kmaps, curr_map)) {
> > +                             dso__put(ndso);
> > +                             return -1;
> > +                     }
> >                       ++kernel_range;
> >               } else if (delta) {
> >                       /* Kernel was relocated at boot time */
> > @@ -1099,14 +1102,15 @@ int compare_proc_modules(const char *from, const char *to)
> >  static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
> >  {
> >       struct rb_root modules = RB_ROOT;
> > -     struct map *old_map;
> > +     struct map_rb_node *old_node;
> >       int err;
> >
> >       err = read_proc_modules(filename, &modules);
> >       if (err)
> >               return err;
> >
> > -     maps__for_each_entry(kmaps, old_map) {
> > +     maps__for_each_entry(kmaps, old_node) {
> > +             struct map *old_map = old_node->map;
> >               struct module_info *mi;
> >
> >               if (!__map__is_kmodule(old_map)) {
> > @@ -1224,10 +1228,13 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
> >   */
> >  int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >  {
> > -     struct map *old_map;
> > +     struct map_rb_node *rb_node;
> >       LIST_HEAD(merged);
> > +     int err = 0;
> > +
> > +     maps__for_each_entry(kmaps, rb_node) {
> > +             struct map *old_map = rb_node->map;
> >
> > -     maps__for_each_entry(kmaps, old_map) {
> >               /* no overload with this one */
> >               if (new_map->end < old_map->start ||
> >                   new_map->start >= old_map->end)
> > @@ -1252,13 +1259,16 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >                               struct map_list_node *m;
> >
> >                               m = malloc(sizeof(*m));
> > -                             if (!m)
> > -                                     return -ENOMEM;
> > +                             if (!m) {
> > +                                     err = -ENOMEM;
> > +                                     goto out;
> > +                             }
> >
> >                               m->map = map__clone(new_map);
> >                               if (!m->map) {
> >                                       free(m);
> > -                                     return -ENOMEM;
> > +                                     err = -ENOMEM;
> > +                                     goto out;
> >                               }
> >
> >                               m->map->end = old_map->start;
> > @@ -1290,21 +1300,24 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
> >               }
> >       }
> >
> > +out:
> >       while (!list_empty(&merged)) {
> >               struct map_list_node *old_node;
> >
> >               old_node = list_entry(merged.next, struct map_list_node, node);
> >               list_del_init(&old_node->node);
> > -             maps__insert(kmaps, old_node->map);
> > +             if (!err)
> > +                     err = maps__insert(kmaps, old_node->map);
> >               map__put(old_node->map);
> >               free(old_node);
> >       }
> >
> >       if (new_map) {
> > -             maps__insert(kmaps, new_map);
> > +             if (!err)
> > +                     err = maps__insert(kmaps, new_map);
> >               map__put(new_map);
> >       }
> > -     return 0;
> > +     return err;
> >  }
> >
> >  static int dso__load_kcore(struct dso *dso, struct map *map,
> > @@ -1312,7 +1325,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >  {
> >       struct maps *kmaps = map__kmaps(map);
> >       struct kcore_mapfn_data md;
> > -     struct map *old_map, *replacement_map = NULL, *next;
> > +     struct map *replacement_map = NULL;
> > +     struct map_rb_node *old_node, *next;
> >       struct machine *machine;
> >       bool is_64_bit;
> >       int err, fd;
> > @@ -1359,7 +1373,9 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >       }
> >
> >       /* Remove old maps */
> > -     maps__for_each_entry_safe(kmaps, old_map, next) {
> > +     maps__for_each_entry_safe(kmaps, old_node, next) {
> > +             struct map *old_map = old_node->map;
> > +
> >               /*
> >                * We need to preserve eBPF maps even if they are
> >                * covered by kcore, because we need to access
> > @@ -1400,17 +1416,21 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >                       /* Ensure maps are correctly ordered */
> >                       map__get(map);
> >                       maps__remove(kmaps, map);
> > -                     maps__insert(kmaps, map);
> > +                     err = maps__insert(kmaps, map);
> >                       map__put(map);
> >                       map__put(new_node->map);
> > +                     if (err)
> > +                             goto out_err;
> >               } else {
> >                       /*
> >                        * Merge kcore map into existing maps,
> >                        * and ensure that current maps (eBPF)
> >                        * stay intact.
> >                        */
> > -                     if (maps__merge_in(kmaps, new_node->map))
> > +                     if (maps__merge_in(kmaps, new_node->map)) {
> > +                             err = -EINVAL;
> >                               goto out_err;
> > +                     }
> >               }
> >               free(new_node);
> >       }
> > @@ -1457,7 +1477,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
> >               free(list_node);
> >       }
> >       close(fd);
> > -     return -EINVAL;
> > +     return err;
> >  }
> >
> >  /*
> > @@ -1991,8 +2011,9 @@ void __maps__sort_by_name(struct maps *maps)
> >
> >  static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
> >  {
> > -     struct map *map;
> > -     struct map **maps_by_name = realloc(maps->maps_by_name, maps->nr_maps * sizeof(map));
> > +     struct map_rb_node *rb_node;
> > +     struct map **maps_by_name = realloc(maps->maps_by_name,
> > +                                         maps->nr_maps * sizeof(struct map *));
> >       int i = 0;
> >
> >       if (maps_by_name == NULL)
> > @@ -2001,8 +2022,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
> >       maps->maps_by_name = maps_by_name;
> >       maps->nr_maps_allocated = maps->nr_maps;
> >
> > -     maps__for_each_entry(maps, map)
> > -             maps_by_name[i++] = map;
> > +     maps__for_each_entry(maps, rb_node)
> > +             maps_by_name[i++] = rb_node->map;
> >
> >       __maps__sort_by_name(maps);
> >       return 0;
> > @@ -2024,6 +2045,7 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
> >
> >  struct map *maps__find_by_name(struct maps *maps, const char *name)
> >  {
> > +     struct map_rb_node *rb_node;
> >       struct map *map;
> >
> >       down_read(&maps->lock);
> > @@ -2042,12 +2064,13 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
> >               goto out_unlock;
> >
> >       /* Fallback to traversing the rbtree... */
> > -     maps__for_each_entry(maps, map)
> > +     maps__for_each_entry(maps, rb_node) {
> > +             map = rb_node->map;
> >               if (strcmp(map->dso->short_name, name) == 0) {
> >                       maps->last_search_by_name = map;
> >                       goto out_unlock;
> >               }
> > -
> > +     }
> >       map = NULL;
> >
> >  out_unlock:
> > diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
> > index 70f095624a0b..ed2d55d224aa 100644
> > --- a/tools/perf/util/synthetic-events.c
> > +++ b/tools/perf/util/synthetic-events.c
> > @@ -639,7 +639,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
> >                                  struct machine *machine)
> >  {
> >       int rc = 0;
> > -     struct map *pos;
> > +     struct map_rb_node *pos;
> >       struct maps *maps = machine__kernel_maps(machine);
> >       union perf_event *event;
> >       size_t size = symbol_conf.buildid_mmap2 ?
> > @@ -662,37 +662,39 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
> >               event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
> >
> >       maps__for_each_entry(maps, pos) {
> > -             if (!__map__is_kmodule(pos))
> > +             struct map *map = pos->map;
> > +
> > +             if (!__map__is_kmodule(map))
> >                       continue;
> >
> >               if (symbol_conf.buildid_mmap2) {
> > -                     size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
> > +                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> >                       event->mmap2.header.type = PERF_RECORD_MMAP2;
> >                       event->mmap2.header.size = (sizeof(event->mmap2) -
> >                                               (sizeof(event->mmap2.filename) - size));
> >                       memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
> >                       event->mmap2.header.size += machine->id_hdr_size;
> > -                     event->mmap2.start = pos->start;
> > -                     event->mmap2.len   = pos->end - pos->start;
> > +                     event->mmap2.start = map->start;
> > +                     event->mmap2.len   = map->end - map->start;
> >                       event->mmap2.pid   = machine->pid;
> >
> > -                     memcpy(event->mmap2.filename, pos->dso->long_name,
> > -                            pos->dso->long_name_len + 1);
> > +                     memcpy(event->mmap2.filename, map->dso->long_name,
> > +                            map->dso->long_name_len + 1);
> >
> >                       perf_record_mmap2__read_build_id(&event->mmap2, false);
> >               } else {
> > -                     size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
> > +                     size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
> >                       event->mmap.header.type = PERF_RECORD_MMAP;
> >                       event->mmap.header.size = (sizeof(event->mmap) -
> >                                               (sizeof(event->mmap.filename) - size));
> >                       memset(event->mmap.filename + size, 0, machine->id_hdr_size);
> >                       event->mmap.header.size += machine->id_hdr_size;
> > -                     event->mmap.start = pos->start;
> > -                     event->mmap.len   = pos->end - pos->start;
> > +                     event->mmap.start = map->start;
> > +                     event->mmap.len   = map->end - map->start;
> >                       event->mmap.pid   = machine->pid;
> >
> > -                     memcpy(event->mmap.filename, pos->dso->long_name,
> > -                            pos->dso->long_name_len + 1);
> > +                     memcpy(event->mmap.filename, map->dso->long_name,
> > +                            map->dso->long_name_len + 1);
> >               }
> >
> >               if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
> > diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
> > index 665e5c0618ed..4baf4db8af65 100644
> > --- a/tools/perf/util/thread.c
> > +++ b/tools/perf/util/thread.c
> > @@ -338,9 +338,7 @@ int thread__insert_map(struct thread *thread, struct map *map)
> >               return ret;
> >
> >       maps__fixup_overlappings(thread->maps, map, stderr);
> > -     maps__insert(thread->maps, map);
> > -
> > -     return 0;
> > +     return maps__insert(thread->maps, map);
> >  }
> >
> >  static int __thread__prepare_access(struct thread *thread)
> > @@ -348,12 +346,12 @@ static int __thread__prepare_access(struct thread *thread)
> >       bool initialized = false;
> >       int err = 0;
> >       struct maps *maps = thread->maps;
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> >
> >       down_read(&maps->lock);
> >
> > -     maps__for_each_entry(maps, map) {
> > -             err = unwind__prepare_access(thread->maps, map, &initialized);
> > +     maps__for_each_entry(maps, rb_node) {
> > +             err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
> >               if (err || initialized)
> >                       break;
> >       }
> > diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
> > index 43beb169631d..835c39efb80d 100644
> > --- a/tools/perf/util/vdso.c
> > +++ b/tools/perf/util/vdso.c
> > @@ -144,10 +144,11 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
> >                                             struct thread *thread)
> >  {
> >       enum dso_type dso_type = DSO__TYPE_UNKNOWN;
> > -     struct map *map;
> > +     struct map_rb_node *rb_node;
> > +
> > +     maps__for_each_entry(thread->maps, rb_node) {
> > +             struct dso *dso = rb_node->map->dso;
> >
> > -     maps__for_each_entry(thread->maps, map) {
> > -             struct dso *dso = map->dso;
> >               if (!dso || dso->long_name[0] != '/')
> >                       continue;
> >               dso_type = dso__type(dso, machine);
> > --
> > 2.35.1.265.g69c8d7142f-goog
>
> --
>
> - Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 12/22] perf maps: Remove rb_node from struct map
  2022-02-16 17:36     ` Ian Rogers
@ 2022-02-16 20:12       ` Arnaldo Carvalho de Melo
  2022-02-16 22:07         ` Ian Rogers
  0 siblings, 1 reply; 58+ messages in thread
From: Arnaldo Carvalho de Melo @ 2022-02-16 20:12 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

Em Wed, Feb 16, 2022 at 09:36:20AM -0800, Ian Rogers escreveu:
> On Wed, Feb 16, 2022 at 6:08 AM Arnaldo Carvalho de Melo
> <acme@kernel.org> wrote:
> >
> > Em Fri, Feb 11, 2022 at 02:34:05AM -0800, Ian Rogers escreveu:
> > > struct map is reference counted, having it also be a node in a
> > > red-black tree complicates the reference counting.
> >
> > In what way?
> >
> > If I have some refcounted data structure and I want to add it to some
> > container (an rb_tree, a list, etc) all I have to do is to grab a
> > refcount when adding and dropping it after removing it from the list.
> >
> > IOW, it is refcounted so that we can add it to a
> > red-black tree, amongst other uses.
> 
> Thanks Arnaldo. So I'm not disputing that you can make reference
> counted collections. With reference counting every reference should
> have a count associated with it. So when symbol.c is using the list, a
> node may be referenced from a prev and a next pointer, so being in the
> list requires a reference count of 2. When you find something in the

Humm, just one reference is needed for being in a list; removing from
the list means locking access to the list, removing the object,
unlocking the list and then dropping the reference to the now
out-of-the-list refcounted object, no?
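
A minimal sketch of that pattern, reusing the map_list_node from the
series (the maps__remove_node helper name below is made up for
illustration, the rest comes from the quoted diffs):

static void maps__remove_node(struct maps *maps, struct map_list_node *node)
{
        down_write(&maps->lock);
        list_del_init(&node->node);     /* unlink while the list is locked */
        up_write(&maps->lock);

        map__put(node->map);            /* drop the one reference the list held */
        free(node);
}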

> list which reference count is that associated with? It doesn't matter
> normally as you'd increment the reference count again and return that.
> In the perf code find doesn't increment a reference count so I want to

I'd say point that out and fix the bug, if you will return an object
from a list that is refcounted, grab the refcount before dropping the
list lock and then return it, knowing the lookup user will have a
refcount that will keep that object alive.
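
Sketched against the maps API in this series, that could look roughly
like the following (the maps__find_getted name is hypothetical; the
lock, iterator and map__get/map__put calls are the ones in the patches):

struct map *maps__find_getted(struct maps *maps, u64 addr)
{
        struct map_rb_node *node;
        struct map *result = NULL;

        down_read(&maps->lock);
        maps__for_each_entry(maps, node) {
                if (node->map->start <= addr && addr < node->map->end) {
                        /* grab the caller's reference before unlocking */
                        result = map__get(node->map);
                        break;
                }
        }
        up_read(&maps->lock);

        return result;  /* the caller owns a reference and must map__put() it */
}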

> return the "get" that belongs to the list. That's "get" singular,
> hence wanting to add in the pointer indirection that incurs cost. To
> make insertion and deletion work properly on list with a reference
> count means reworking list.h.
> 
> The rbtree is the same problem only more-so, as you need pointers for
> parent, left and right child.
> 
> > > Switch to having a map_rb_node which is a red-black tree node but
> > > points at the reference counted struct map. This reference is
> > > responsible for a single reference count.
> >
> > This makes every insertion incur an allocation that has to be
> > checked, etc, when we know that maps will live in rb_trees, so having
> > the node structure allocated at the same time as the map is
> > advantageous.

perf tries to mimic kernel code, but multithreading didn't come at the
very beginning, so, yeah, there are bugs and inconsistencies, which we
should fix.

This discussion is about how to do it; attempts like Masami's years ago
uncovered problems that got fixed, and your current attempt is also
uncovering bugs that are getting fixed, which is super cool.

> So this pattern is common in other languages, the default kernel style
> is what at Google gets called invasive - you put the storage for list
> nodes, reference counts, etc. into the referenced object itself. This
> lowers the overhead within the struct, and I don't disagree it adds a
> cost to insertion, unless maps are shared which isn't a use-case we
> have at the moment. So this change is backing out an optimization, but
> frankly fixing this properly is a much bigger overhaul than this
> already big overhaul and I don't think losing the optimization is
> really costing that much performance - a memory allocation costs in
> the region of 40 cycles with an optimized implementation like
> tcmalloc. We also don't use the invasive style for maps_by_name, it is
> just a sorted array.
> 
> A side note, I see a lot of overhead in symbol allocation and part of
> that is the size of the two invasive rbtree nodes (2 * 3 * 8 bytes =
> 48 bytes). Were the symbols just referenced by a sorted array, like
> maps_by_name, insertion and sorting would still be O(n*log(n)) but
> we'd reduce the memory usage to a third. rbtree is a cool data
> structure, but I think we could be overusing it.
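
For concreteness, the flat-array alternative described above is roughly
what maps_by_name already does for maps; for symbols it could look like
the sketch below (symbol_ptr__cmp_name, symbols__build_by_name and
syms_by_name are illustrative names, not existing perf code, and the
usual <stdlib.h>/<string.h> are assumed):

static int symbol_ptr__cmp_name(const void *a, const void *b)
{
        const struct symbol * const *sa = a;
        const struct symbol * const *sb = b;

        return strcmp((*sa)->name, (*sb)->name);
}

static void symbols__build_by_name(struct symbol **syms_by_name, size_t nr_syms)
{
        /* sort once after loading; lookups can then use bsearch() */
        qsort(syms_by_name, nr_syms, sizeof(struct symbol *), symbol_ptr__cmp_name);
}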

Right, numbers talk, so it would be really nice to use, humm, perf to
measure these changes, to help assess the impact and sometimes accept
things that look "ugly" at first in exchange for performance improvements.
 
> > We don't have to check if adding a data structure to a rbtree works, as
> > all that is needed is already preallocated.
> 
> The issue here is that a find, or similar, wants to pass around
> something that is owned by a list or an rbtree. We can have the idea
> of ownership by adding a token/cookie and passing that around
> everywhere, it gets problematic then to spot use after put and I think
> that approach is overall more invasive to the APIs than what is in
> these changes.

> A better solution can be to keep the rbtree being invasive and at all
> the find and similar routines, make sure a getted version is returned
> - so the code outside of maps is never working with the rbtree's
> reference counted version. The problem with this is that it is an
> overhaul to all the uses of map. The reference count checker would
> find misuse but again it'd be a far larger patch series than what is
> here - that is trying to fix the code base as it is.

I've been trailing on the discussion with Masami, so what you want is to
somehow match a get with a put by passing a token returned by a get to
the put?

Wrt patch queue size, we can try to reduce it to series of at most 10
patches, that do leg work, rinse, repeat, I recently saw a discussion on
netdev, with Jakub Kicinski asking for patchsets to be limited to under
10 patches for this exact same reason.

I usually try to cherry-pick as much as possible from a series while it
is being discussed, so that the patch submitter doesn't have to suffer
too much with keeping a long series building.

I'm now willing and able to process things faster, that should help too,
I hope.
 
> I think the having-our-cake-and-eating-it solution (best performance +
> checking) is that approach, but we need to get to a point where
> checking is working. So if we focus on (1) checking and fixing those
> bugs (the changes here), then (2) change the APIs so that everything
> is getted and fix the leaks that introduces, then (3) go back to being
> invasive I think we get to that solution. I like step (2) from a
> cleanliness point-of-view, I'm fine with (3) I'm just not sure anybody
> would notice the performance difference.

I'll continue looking at what you guys did to try to get up to speed and
contribute more to this effort, please bear with me a bit more.

- Arnaldo

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [PATCH v3 12/22] perf maps: Remove rb_node from struct map
  2022-02-16 20:12       ` Arnaldo Carvalho de Melo
@ 2022-02-16 22:07         ` Ian Rogers
  0 siblings, 0 replies; 58+ messages in thread
From: Ian Rogers @ 2022-02-16 22:07 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, André Almeida, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Jin Yao, Adrian Hunter, Leo Yan, Andi Kleen, Thomas Richter,
	Kan Liang, Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo, eranian

On Wed, Feb 16, 2022 at 12:12 PM Arnaldo Carvalho de Melo
<acme@kernel.org> wrote:
>
> Em Wed, Feb 16, 2022 at 09:36:20AM -0800, Ian Rogers escreveu:
> > On Wed, Feb 16, 2022 at 6:08 AM Arnaldo Carvalho de Melo
> > <acme@kernel.org> wrote:
> > >
> > > Em Fri, Feb 11, 2022 at 02:34:05AM -0800, Ian Rogers escreveu:
> > > > struct map is reference counted, having it also be a node in a
> > > > red-black tree complicates the reference counting.
> > >
> > > In what way?
> > >
> > > If I have some refcounted data structure and I want to add it to some
> > > container (an rb_tree, a list, etc) all I have to do is to grab a
> > > refcount when adding and dropping it after removing it from the list.
> > >
> > > IOW, it is refcounted so that we can add it to a
> > > red-black tree, amongst other uses.
> >
> > Thanks Arnaldo. So I'm not disputing that you can make reference
> > counted collections. With reference counting every reference should
> > have a count associated with it. So when symbol.c is using the list, a
> > node may be referenced from a prev and a next pointer, so being in the
> > list requires a reference count of 2. When you find something in the
>
> Humm, just one reference is needed for being in a list; removing from
> the list means locking access to the list, removing the object,
> unlocking the list and then dropping the reference to the now
> out-of-the-list refcounted object, no?

Just one reference count is needed but if we were looking to automate
reference counting then we'd associate one reference count with every
pointer to the object. So with the invasive doubly linked list that we
know as list, there are two pointers to a node and so the reference
count should really be two. Using just 1 as the reference count is
really an optimization.
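
Concretely, the shape the series moves to is a separate node that holds
exactly one reference to the map; the layout below is reconstructed from
the quoted diffs rather than copied verbatim from the headers:

struct map_rb_node {
        struct rb_node rb_node;   /* lives in the maps rbtree */
        struct map *map;          /* owns one reference, taken with map__get() */
};

struct map_list_node {
        struct list_head node;
        struct map *map;          /* likewise, one get per list membership */
};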

> > list which reference count is that associated with? It doesn't matter
> > normally as you'd increment the reference count again and return that.
> > In the perf code find doesn't increment a reference count so I want to
>
> I'd say point that out and fix the bug, if you will return an object
> from a list that is refcounted, grab the refcount before dropping the
> list lock and then return it, knowing the lookup user will have a
> refcount that will keep that object alive.

I agree, but in doing that you need to make every user do a put and
the problem snowballs.

> > return the "get" that belongs to the list. That's "get" singular,
> > hence wanting to add in the pointer indirection that incurs cost. To
> > make insertion and deletion work properly on list with a reference
> > count means reworking list.h.
> >
> > The rbtree is the same problem only more-so, as you need pointers for
> > parent, left and right child.
> >
> > > > Switch to having a map_rb_node which is a red-black tree node but
> > > > points at the reference counted struct map. This reference is
> > > > responsible for a single reference count.
> > >
> > > This makes every insertion incur an allocation that has to be
> > > checked, etc, when we know that maps will live in rb_trees, so having
> > > the node structure allocated at the same time as the map is
> > > advantageous.
>
> perf tries to mimic kernel code, but multithreading didn't come at the
> very beginning, so, yeah, there are bugs and inconsistencies, which we
> should fix.
>
> This discussion is about how to do it; attempts like Masami's years ago
> uncovered problems that got fixed, and your current attempt is also
> uncovering bugs that are getting fixed, which is super cool.

Thanks. The approach I'm taking is dumb: it is a poor man's smart
pointer by way of memory allocations and sanitizers. My hope from the
beginning was that this would be something lightweight enough that we
can get it merged, given that sanitizers alone weren't going to save us.
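
Stripped to its essence, the mechanism is something like the following
(only an illustration of the idea, not the macros from the series;
allocation-failure handling is elided):

struct ref_check {
        void *obj;      /* the refcounted object this particular get refers to */
};

static inline struct ref_check *check_get(void *obj)
{
        /*
         * Leak sanitizer remembers this allocation site, so a get with no
         * matching put is reported as a leak with the get's stack trace.
         */
        struct ref_check *check = malloc(sizeof(*check));

        check->obj = obj;
        return check;
}

static inline void *check_put(struct ref_check *check)
{
        void *obj = check->obj;

        /*
         * A second put on the same check is a double free, and any use
         * after put is a use-after-free - both caught by address
         * sanitizer, or a plain segfault without it.
         */
        free(check);
        return obj;
}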

> > So this pattern is common in other languages, the default kernel style
> > is what at Google gets called invasive - you put the storage for list
> > nodes, reference counts, etc. into the referenced object itself. This
> > lowers the overhead within the struct, and I don't disagree it adds a
> > cost to insertion, unless maps are shared which isn't a use-case we
> > have at the moment. So this change is backing out an optimization, but
> > frankly fixing this properly is a much bigger overhaul than this
> > already big overhaul and I don't think losing the optimization is
> > really costing that much performance - a memory allocation costs in
> > the region of 40 cycles with an optimized implementation like
> > tcmalloc. We also don't use the invasive style for maps_by_name, it is
> > just a sorted array.
> >
> > A side note, I see a lot of overhead in symbol allocation and part of
> > that is the size of the two invasive rbtree nodes (2 * 3 * 8 bytes =
> > 48 bytes). Were the symbols just referenced by a sorted array, like
> > maps_by_name, insertion and sorting would still be O(n*log(n)) but
> > we'd reduce the memory usage to a third. rbtree is a cool data
> > structure, but I think we could be overusing it.
>
> Right, numbers talk, so it would be really nice to use, humm, perf to
> measure these changes, to help assess the impact and sometimes accept
> things that look "ugly" at first in exchange for performance improvements.

Sure. Do you have a benchmark in mind? For cpumap, nsinfo and maps
there is no overhead when the checking isn't enabled. For map, the
refactoring of the list and rbtree adds an indirection and a memory
allocation.

> > > We don't have to check if adding a data structure to a rbtree works, as
> > > all that is needed is already preallocated.
> >
> > The issue here is that a find, or similar, wants to pass around
> > something that is owned by a list or an rbtree. We can have the idea
> > of ownership by adding a token/cookie and passing that around
> > everywhere, it gets problematic then to spot use after put and I think
> > that approach is overall more invasive to the APIs than what is in
> > these changes.
>
> > A better solution can be to keep the rbtree being invasive and at all
> > the find and similar routines, make sure a getted version is returned
> > - so the code outside of maps is never working with the rbtree's
> > reference counted version. The problem with this is that it is an
> > overhaul to all the uses of map. The reference count checker would
> > find misuse but again it'd be a far larger patch series than what is
> > here - that is trying to fix the code base as it is.
>
> I've been trailing on the discussion with Masami, so what you want is to
> somehow match a get with a put by passing a token returned by a get to
> the put?

Yes, and that's the approach in ref tracker too:
https://lwn.net/Articles/877603/
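
In API terms the token style looks roughly like this (illustrative names
only, not the actual ref_tracker interface; assumes the usual
<assert.h>/<stdbool.h>):

struct ref_cookie {
        bool live;      /* set by the get, cleared by the matching put */
};

static inline struct map *map__get_tracked(struct map *map, struct ref_cookie *cookie)
{
        cookie->live = true;
        return map__get(map);
}

static inline void map__put_tracked(struct map *map, struct ref_cookie *cookie)
{
        assert(cookie->live);   /* catches a put without a matching get */
        cookie->live = false;
        map__put(map);
}

Every holder of a reference then has to carry the (object, cookie) pair
around, which is the extra API surface mentioned further down.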

> Wrt patch queue size, we can try to reduce it to series of at most 10
> patches, that do leg work, rinse, repeat, I recently saw a discussion on
> netdev, with Jakub Kicinski asking for patchsets to be limited to under
> 10 patches for this exact same reason.
>
> I usually try to cherry-pick as much as possible from a series while it
> is being discussed, so that the patch submitter doesn't have to suffer
> too much with keeping a long series building.
>
> I'm now willing and able to process things faster, that should help too,
> I hope.

It does, thanks! The small patch set size causes me a lot of work as I
have to go and move things into constituent parts. For example:
https://lore.kernel.org/linux-perf-users/YgaeAAKkdVBNbErT@kernel.org/
I guess I should have done it from the outset.

> > I think the having-our-cake-and-eating-it solution (best performance +
> > checking) is that approach, but we need to get to a point where
> > checking is working. So if we focus on (1) checking and fixing those
> > bugs (the changes here), then (2) change the APIs so that everything
> > is getted and fix the leaks that introduces, then (3) go back to being
> > invasive I think we get to that solution. I like step (2) from a
> > cleanliness point-of-view, I'm fine with (3) I'm just not sure anybody
> > would notice the performance difference.
>
> I'll continue looking at what you guys did to try to get up to speed and
> contribute more to this effort, please bear with me a bit more.
>
> - Arnaldo

Np, tbh I didn't have some big agenda with this work. I was thinking
through how I could solve the problem of:
https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
Dmitry Vyukov suggested Eric Dumazet's ref tracker work but in looking
at ref tracker I was concerned about needing a pair of values for
every reference counted thing. It would add a lot to the API. The ref
tracker work allocates a token/cookie for a get and that's where the
idea of allocating an indirection comes from. It has worked remarkably
well in combination with address and leak sanitizer, fixing the nsinfo
issue which actually turned out to be a data race. There weren't any
known issues with cpumap and maps, but it is good to have the
reference count checking confirming this. map is a rats nest and I
purposefully went after it as the worst case of what we could look to
fix with the approach. I expected it to cause controversy, in
particular the rbtree and list refactors - but heck, I'd throw away 1%
performance for something like perf top not consuming gigabytes of RAM
(not that I have any privilege to throw away performance :-) ).

Anyway, I keep pushing along with the tidy-up of the patches as a
background job. I hope I can get v4 out this week.

Another issue nagging at me from the pre-5.16 reverts is:
https://lore.kernel.org/lkml/CAP-5=fX4-kmkm+qn9m22O_4A2_8j=uAm=vcXh9x2RqqDKEdnBg@mail.gmail.com/
This requires a lot of Makefile cleanup. It'd be great if someone
could take a look. The Debian builds are also in a mess; I guess
it is good to be busy.

Thanks,
Ian

^ permalink raw reply	[flat|nested] 58+ messages in thread

end of thread, other threads:[~2022-02-16 22:08 UTC | newest]

Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-11 10:33 [PATCH v3 00/22] Reference count checker and related fixes Ian Rogers
2022-02-11 10:33 ` [PATCH v3 01/22] perf cpumap: Migrate to libperf cpumap api Ian Rogers
2022-02-11 17:02   ` Arnaldo Carvalho de Melo
2022-02-11 10:33 ` [PATCH v3 02/22] perf cpumap: Use for each loop Ian Rogers
2022-02-11 17:04   ` Arnaldo Carvalho de Melo
2022-02-11 10:33 ` [PATCH v3 03/22] perf dso: Make lock error check and add BUG_ONs Ian Rogers
2022-02-11 17:13   ` Arnaldo Carvalho de Melo
2022-02-11 17:43     ` Ian Rogers
2022-02-11 19:21       ` Arnaldo Carvalho de Melo
2022-02-11 19:35         ` Ian Rogers
2022-02-12 15:48           ` Arnaldo Carvalho de Melo
2022-02-12 15:49             ` Arnaldo Carvalho de Melo
2022-02-12 20:59               ` Ian Rogers
2022-02-11 10:33 ` [PATCH v3 04/22] perf dso: Hold lock when accessing nsinfo Ian Rogers
2022-02-11 17:14   ` Arnaldo Carvalho de Melo
2022-02-12 11:30   ` Jiri Olsa
2022-02-11 10:33 ` [PATCH v3 05/22] perf maps: Use a pointer for kmaps Ian Rogers
2022-02-11 17:23   ` Arnaldo Carvalho de Melo
2022-02-14 19:45     ` Arnaldo Carvalho de Melo
2022-02-11 10:33 ` [PATCH v3 06/22] perf test: Use pointer for maps Ian Rogers
2022-02-11 17:24   ` Arnaldo Carvalho de Melo
2022-02-14 19:48   ` Arnaldo Carvalho de Melo
2022-02-14 19:50     ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 07/22] perf maps: Reduce scope of init and exit Ian Rogers
2022-02-11 17:26   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 08/22] perf maps: Move maps code to own C file Ian Rogers
2022-02-11 17:27   ` Arnaldo Carvalho de Melo
2022-02-14 19:58   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 09/22] perf map: Add const to map_ip and unmap_ip Ian Rogers
2022-02-11 17:28   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 10/22] perf map: Make map__contains_symbol args const Ian Rogers
2022-02-11 17:28   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 11/22] perf map: Move map list node into symbol Ian Rogers
2022-02-11 10:34 ` [PATCH v3 12/22] perf maps: Remove rb_node from struct map Ian Rogers
2022-02-16 14:08   ` Arnaldo Carvalho de Melo
2022-02-16 17:36     ` Ian Rogers
2022-02-16 20:12       ` Arnaldo Carvalho de Melo
2022-02-16 22:07         ` Ian Rogers
2022-02-11 10:34 ` [PATCH v3 13/22] perf namespaces: Add functions to access nsinfo Ian Rogers
2022-02-11 17:31   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 14/22] perf maps: Add functions to access maps Ian Rogers
2022-02-11 17:33   ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 15/22] perf map: Use functions to access the variables in map Ian Rogers
2022-02-11 17:35   ` Arnaldo Carvalho de Melo
2022-02-11 17:36   ` Arnaldo Carvalho de Melo
2022-02-11 17:54     ` Ian Rogers
2022-02-11 19:22       ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 16/22] perf test: Add extra diagnostics to maps test Ian Rogers
2022-02-11 10:34 ` [PATCH v3 17/22] perf map: Changes to reference counting Ian Rogers
2022-02-12  8:45   ` Masami Hiramatsu
2022-02-12 20:48     ` Ian Rogers
2022-02-14  2:00       ` Masami Hiramatsu
2022-02-14 18:56       ` Arnaldo Carvalho de Melo
2022-02-11 10:34 ` [PATCH v3 18/22] libperf: Add reference count checking macros Ian Rogers
2022-02-11 10:34 ` [PATCH v3 19/22] perf cpumap: Add reference count checking Ian Rogers
2022-02-11 10:34 ` [PATCH v3 20/22] perf namespaces: " Ian Rogers
2022-02-11 10:34 ` [PATCH v3 21/22] perf maps: " Ian Rogers
2022-02-11 10:34 ` [PATCH v3 22/22] perf map: " Ian Rogers

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).