* [PATCH v5 00/17] Reference count checker and related fixes
@ 2023-03-20 21:22 Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 01/17] perf map: Move map list node into symbol Ian Rogers
                   ` (17 more replies)
  0 siblings, 18 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

The perf tool has a class of memory problems where reference counts
are used incorrectly. Memory/address sanitizers and valgrind don't
provide useful ways to debug these problems: you see a memory leak
where the only pertinent information is the original allocation
site. What would be more useful is knowing where a get fails to have a
corresponding put, where there are double puts, and so on.
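
To make the failure mode concrete, here is a hypothetical example of
the bug class (thing__get, thing__put and do_work are made-up names,
not perf APIs):

        static int do_thing_work(struct thing *global_thing)
        {
                struct thing *t = thing__get(global_thing);

                if (do_work(t) < 0)
                        return -1;      /* bug: missing thing__put(t) */

                thing__put(t);
                return 0;
        }

A leak report for this only points at the allocation of the underlying
object, not at the error path that skipped the put.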

This work was motivated by the roll-back of:
https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
where fixing a missed put resulted in a use-after-free in a different
context. Fixing that issue left the sense that a game of whack-a-mole
had been embarked upon in adding missed gets and puts.

The basic approach of the change is to add a level of indirection at
the get and put calls. Get allocates a level of indirection that, if
no corresponding put is called, becomes a memory leak (and associated
stack trace) that leak sanitizer can report. Similarly, if two puts
are called for the same get, then a double free can be detected by
address sanitizer. This also detects use-after-put, which should yield
a SEGV even without a sanitizer.
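
As a rough sketch of the principle (an illustration only, not the
actual rc_check.h macros added later in the series; obj, obj_ref,
obj__get and obj__put are assumed names):

        #include <stdlib.h>

        struct obj;                             /* some reference counted object */
        void obj__get(struct obj *obj);         /* existing refcount++ */
        void obj__put(struct obj *obj);         /* existing refcount-- */

        struct obj_ref {                        /* one allocation per live reference */
                struct obj *obj;
        };

        static struct obj_ref *obj_ref__get(struct obj *obj)
        {
                struct obj_ref *ref = malloc(sizeof(*ref));

                if (ref) {
                        obj__get(obj);
                        ref->obj = obj;
                }
                return ref;
        }

        static void obj_ref__put(struct obj_ref *ref)
        {
                if (!ref)
                        return;
                obj__put(ref->obj);
                free(ref);              /* a second put becomes a double free */
        }

A get without a matching put leaves the obj_ref allocation behind, so
leak sanitizer reports it along with the stack of the offending get; a
second put on the same reference is a double free and a use after put
touches freed memory, both of which address sanitizer reports.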

Adding reference count checking to cpu map was done as a proof of
concept; it yielded little other than a location where the use of get
could be cleaner by using its result. Reference count checking on
nsinfo identified a double free of the indirection layer and the
related threads, thereby identifying a data race as discussed here:
 https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
Accordingly, the dso->lock was extended and used to cover the race.

The v3 version addresses problems in v2, in particular using macros to
avoid #ifdefs, and applies the reference count checking approach to
two more data structures, maps and map. While maps was
straightforward, struct map showed a problem where a reference counted
object can be on lists and rb-trees that are oblivious to the
reference count. To sanitize this, struct map is changed so that it is
referenced by either a list or rb-tree node rather than being part of
one. This simplifies the reference counting, and the patches have
caught and fixed a number of missed or mismatched reference counts
relating to struct map.
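
Concretely, the node types introduced by patches 1 and 2 (see the
diffs below) point at the map instead of embedding the list/rb-tree
linkage inside struct map:

        struct map_list_node {
                struct list_head node;
                struct map *map;
        };

        struct map_rb_node {
                struct rb_node rb_node;
                struct map *map;
        };

Each node owns a single reference to the map it points at.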

The patches are arranged so that API refactors and bug fixes appear
first, then the reference count checker itself appears. This allows
for the refactor and fixes to be applied upstream first, as has
already happened with cpumap.

A wider discussion of the approach is on the mailing list:
 https://lore.kernel.org/linux-perf-users/YffqnynWcc5oFkI5@kernel.org/T/#mf25ccd7a2e03de92cec29d36e2999a8ab5ec7f88
Comparing it to a past approach:
 https://lore.kernel.org/all/20151209021047.10245.8918.stgit@localhost.localdomain/
and to ref_tracker:
 https://lwn.net/Articles/877603/

v5. Rebase, removing 5 merged changes. Add map_list_node__new to the
    1st patch (perf map: Move map list node into symbol) as suggested
    by Arnaldo. Remove unnecessary map__puts from patch 12 (perf map:
    Changes to reference counting) as suggested by Adrian. A summary
    of the sizes of the remaining patches is:
74fd7ffafdd0 perf map: Add reference count checking
 12 files changed, 136 insertions(+), 114 deletions(-)
4719196db8d3 perf maps: Add reference count checking.
 8 files changed, 64 insertions(+), 56 deletions(-)
03943e7594cf perf namespaces: Add reference count checking
 7 files changed, 83 insertions(+), 62 deletions(-)
0bb382cc52d7 perf cpumap: Add reference count checking
 6 files changed, 81 insertions(+), 71 deletions(-)
ef39f550c40d libperf: Add reference count checking macros.
 1 file changed, 94 insertions(+)
d9ac37c750e0 perf map: Changes to reference counting
 11 files changed, 112 insertions(+), 44 deletions(-)
476014bc9b55 perf maps: Modify maps_by_name to hold a reference to a map
 2 files changed, 33 insertions(+), 18 deletions(-)
91384676fddd perf test: Add extra diagnostics to maps test
 1 file changed, 36 insertions(+), 15 deletions(-)
fdc30434f826 perf map: Add accessors for pgoff and reloc
 9 files changed, 33 insertions(+), 23 deletions(-)
368fe015adb2 perf map: Add accessors for prot, priv and flags
 6 files changed, 28 insertions(+), 12 deletions(-)
2c6a8169826a perf map: Add helper for map_ip and unmap_ip
 23 files changed, 80 insertions(+), 65 deletions(-)
929e59d49f4b perf map: Rename map_ip and unmap_ip
 6 files changed, 13 insertions(+), 13 deletions(-)
4a38194aaaf5 perf map: Add accessor for start and end
 24 files changed, 114 insertions(+), 103 deletions(-)
02b63e5c415e perf map: Add accessor for dso
 48 files changed, 404 insertions(+), 293 deletions(-)
9324af6ccf42 perf maps: Add functions to access maps
 20 files changed, 175 insertions(+), 111 deletions(-)
5c590d36a308 perf maps: Remove rb_node from struct map
 16 files changed, 291 insertions(+), 184 deletions(-)
af1d142eb777 perf map: Move map list node into symbol
 2 files changed, 63 insertions(+), 35 deletions(-)
 
v4. Rebases onto acme's perf-tools-next, fixes more issues with
    map/maps and breaks apart the accessor functions to reduce
    individual patch sizes. The accessor functions are mechanical
    changes where the single biggest one is refactoring use of
    map->dso to be map__dso(map).

The v3 change is available here:
https://lore.kernel.org/lkml/20220211103415.2737789-1-irogers@google.com/

Ian Rogers (17):
  perf map: Move map list node into symbol
  perf maps: Remove rb_node from struct map
  perf maps: Add functions to access maps
  perf map: Add accessor for dso
  perf map: Add accessor for start and end
  perf map: Rename map_ip and unmap_ip
  perf map: Add helper for map_ip and unmap_ip
  perf map: Add accessors for prot, priv and flags
  perf map: Add accessors for pgoff and reloc
  perf test: Add extra diagnostics to maps test
  perf maps: Modify maps_by_name to hold a reference to a map
  perf map: Changes to reference counting
  libperf: Add reference count checking macros.
  perf cpumap: Add reference count checking
  perf namespaces: Add reference count checking
  perf maps: Add reference count checking.
  perf map: Add reference count checking

 tools/lib/perf/Makefile                       |   2 +-
 tools/lib/perf/cpumap.c                       |  94 ++---
 tools/lib/perf/include/internal/cpumap.h      |   4 +-
 tools/lib/perf/include/internal/rc_check.h    |  94 +++++
 tools/perf/arch/s390/annotate/instructions.c  |   4 +-
 tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
 tools/perf/arch/x86/util/event.c              |  13 +-
 tools/perf/builtin-annotate.c                 |  11 +-
 tools/perf/builtin-buildid-list.c             |   4 +-
 tools/perf/builtin-inject.c                   |  12 +-
 tools/perf/builtin-kallsyms.c                 |   6 +-
 tools/perf/builtin-kmem.c                     |   4 +-
 tools/perf/builtin-lock.c                     |   4 +-
 tools/perf/builtin-mem.c                      |  10 +-
 tools/perf/builtin-report.c                   |  26 +-
 tools/perf/builtin-script.c                   |  27 +-
 tools/perf/builtin-top.c                      |  17 +-
 tools/perf/builtin-trace.c                    |   2 +-
 .../scripts/python/Perf-Trace-Util/Context.c  |  13 +-
 tools/perf/tests/code-reading.c               |  37 +-
 tools/perf/tests/cpumap.c                     |   4 +-
 tools/perf/tests/hists_common.c               |   8 +-
 tools/perf/tests/hists_cumulate.c             |  14 +-
 tools/perf/tests/hists_filter.c               |  14 +-
 tools/perf/tests/hists_link.c                 |  18 +-
 tools/perf/tests/hists_output.c               |  12 +-
 tools/perf/tests/maps.c                       |  69 ++--
 tools/perf/tests/mmap-thread-lookup.c         |   3 +-
 tools/perf/tests/symbols.c                    |   6 +-
 tools/perf/tests/thread-maps-share.c          |  29 +-
 tools/perf/tests/vmlinux-kallsyms.c           |  54 +--
 tools/perf/ui/browsers/annotate.c             |   9 +-
 tools/perf/ui/browsers/hists.c                |  19 +-
 tools/perf/ui/browsers/map.c                  |   4 +-
 tools/perf/util/annotate.c                    |  40 ++-
 tools/perf/util/auxtrace.c                    |   2 +-
 tools/perf/util/block-info.c                  |   4 +-
 tools/perf/util/bpf-event.c                   |  10 +-
 tools/perf/util/bpf_lock_contention.c         |   6 +-
 tools/perf/util/build-id.c                    |   2 +-
 tools/perf/util/callchain.c                   |  24 +-
 tools/perf/util/cpumap.c                      |  40 ++-
 tools/perf/util/data-convert-json.c           |  10 +-
 tools/perf/util/db-export.c                   |  16 +-
 tools/perf/util/dlfilter.c                    |  28 +-
 tools/perf/util/dso.c                         |   8 +-
 tools/perf/util/dsos.c                        |   2 +-
 tools/perf/util/event.c                       |  27 +-
 tools/perf/util/evsel_fprintf.c               |   4 +-
 tools/perf/util/hist.c                        |  22 +-
 tools/perf/util/intel-pt.c                    |  63 ++--
 tools/perf/util/machine.c                     | 252 ++++++++------
 tools/perf/util/map.c                         | 217 ++++++------
 tools/perf/util/map.h                         |  74 +++-
 tools/perf/util/maps.c                        | 318 ++++++++++-------
 tools/perf/util/maps.h                        |  67 +++-
 tools/perf/util/namespaces.c                  | 132 +++++---
 tools/perf/util/namespaces.h                  |   3 +-
 tools/perf/util/pmu.c                         |   8 +-
 tools/perf/util/probe-event.c                 |  62 ++--
 .../util/scripting-engines/trace-event-perl.c |  10 +-
 .../scripting-engines/trace-event-python.c    |  26 +-
 tools/perf/util/sort.c                        |  67 ++--
 tools/perf/util/symbol-elf.c                  |  41 ++-
 tools/perf/util/symbol.c                      | 320 +++++++++++-------
 tools/perf/util/symbol_fprintf.c              |   2 +-
 tools/perf/util/synthetic-events.c            |  34 +-
 tools/perf/util/thread-stack.c                |   4 +-
 tools/perf/util/thread.c                      |  39 +--
 tools/perf/util/unwind-libdw.c                |  20 +-
 tools/perf/util/unwind-libunwind-local.c      |  16 +-
 tools/perf/util/unwind-libunwind.c            |  33 +-
 tools/perf/util/vdso.c                        |   7 +-
 73 files changed, 1665 insertions(+), 1044 deletions(-)
 create mode 100644 tools/lib/perf/include/internal/rc_check.h

-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 01/17] perf map: Move map list node into symbol
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 02/17] perf maps: Remove rb_node from struct map Ian Rogers
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Using a perf map as a list node is only done in symbol.c. Move the
list node into symbol.c as a struct holding a single pointer to the
map. This makes the reference count behavior more obvious and easier
to check.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/map.h    |  5 +--
 tools/perf/util/symbol.c | 93 ++++++++++++++++++++++++++--------------
 2 files changed, 63 insertions(+), 35 deletions(-)

diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 3dcfe06db6b3..2879cae05ee0 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -16,10 +16,7 @@ struct maps;
 struct machine;
 
 struct map {
-	union {
-		struct rb_node	rb_node;
-		struct list_head node;
-	};
+	struct rb_node		rb_node;
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index a458aa8b87bb..65e0c3d126f1 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -48,6 +48,11 @@ static bool symbol__is_idle(const char *name);
 int vmlinux_path__nr_entries;
 char **vmlinux_path;
 
+struct map_list_node {
+	struct list_head node;
+	struct map *map;
+};
+
 struct symbol_conf symbol_conf = {
 	.nanosecs		= false,
 	.use_modules		= true,
@@ -85,6 +90,11 @@ static enum dso_binary_type binary_type_symtab[] = {
 
 #define DSO_BINARY_TYPE__SYMTAB_CNT ARRAY_SIZE(binary_type_symtab)
 
+static struct map_list_node *map_list_node__new(void)
+{
+	return malloc(sizeof(struct map_list_node));
+}
+
 static bool symbol_type__filter(char symbol_type)
 {
 	symbol_type = toupper(symbol_type);
@@ -1219,16 +1229,21 @@ struct kcore_mapfn_data {
 static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 {
 	struct kcore_mapfn_data *md = data;
-	struct map *map;
+	struct map_list_node *list_node = map_list_node__new();
 
-	map = map__new2(start, md->dso);
-	if (map == NULL)
+	if (!list_node)
 		return -ENOMEM;
 
-	map->end = map->start + len;
-	map->pgoff = pgoff;
+	list_node->map = map__new2(start, md->dso);
+	if (!list_node->map) {
+		free(list_node);
+		return -ENOMEM;
+	}
+
+	list_node->map->end = list_node->map->start + len;
+	list_node->map->pgoff = pgoff;
 
-	list_add(&map->node, &md->maps);
+	list_add(&list_node->node, &md->maps);
 
 	return 0;
 }
@@ -1264,12 +1279,18 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 * |new.............| -> |new..|       |new..|
 				 *       |old....|    ->       |old....|
 				 */
-				struct map *m = map__clone(new_map);
+				struct map_list_node *m = map_list_node__new();
 
 				if (!m)
 					return -ENOMEM;
 
-				m->end = old_map->start;
+				m->map = map__clone(new_map);
+				if (!m->map) {
+					free(m);
+					return -ENOMEM;
+				}
+
+				m->map->end = old_map->start;
 				list_add_tail(&m->node, &merged);
 				new_map->pgoff += old_map->end - new_map->start;
 				new_map->start = old_map->end;
@@ -1299,10 +1320,13 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 	}
 
 	while (!list_empty(&merged)) {
-		old_map = list_entry(merged.next, struct map, node);
-		list_del_init(&old_map->node);
-		maps__insert(kmaps, old_map);
-		map__put(old_map);
+		struct map_list_node *old_node;
+
+		old_node = list_entry(merged.next, struct map_list_node, node);
+		list_del_init(&old_node->node);
+		maps__insert(kmaps, old_node->map);
+		map__put(old_node->map);
+		free(old_node);
 	}
 
 	if (new_map) {
@@ -1317,7 +1341,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 {
 	struct maps *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
-	struct map *old_map, *new_map, *replacement_map = NULL, *next;
+	struct map *old_map, *replacement_map = NULL, *next;
 	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
@@ -1378,11 +1402,12 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	/* Find the kernel map using the '_stext' symbol */
 	if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
 		u64 replacement_size = 0;
+		struct map_list_node *new_node;
 
-		list_for_each_entry(new_map, &md.maps, node) {
-			u64 new_size = new_map->end - new_map->start;
+		list_for_each_entry(new_node, &md.maps, node) {
+			u64 new_size = new_node->map->end - new_node->map->start;
 
-			if (!(stext >= new_map->start && stext < new_map->end))
+			if (!(stext >= new_node->map->start && stext < new_node->map->end))
 				continue;
 
 			/*
@@ -1392,40 +1417,43 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 			 * falls within more than one in the list.
 			 */
 			if (!replacement_map || new_size < replacement_size) {
-				replacement_map = new_map;
+				replacement_map = new_node->map;
 				replacement_size = new_size;
 			}
 		}
 	}
 
 	if (!replacement_map)
-		replacement_map = list_entry(md.maps.next, struct map, node);
+		replacement_map = list_entry(md.maps.next, struct map_list_node, node)->map;
 
 	/* Add new maps */
 	while (!list_empty(&md.maps)) {
-		new_map = list_entry(md.maps.next, struct map, node);
-		list_del_init(&new_map->node);
-		if (new_map == replacement_map) {
-			map->start	= new_map->start;
-			map->end	= new_map->end;
-			map->pgoff	= new_map->pgoff;
-			map->map_ip	= new_map->map_ip;
-			map->unmap_ip	= new_map->unmap_ip;
+		struct map_list_node *new_node;
+
+		new_node = list_entry(md.maps.next, struct map_list_node, node);
+		list_del_init(&new_node->node);
+		if (new_node->map == replacement_map) {
+			map->start	= new_node->map->start;
+			map->end	= new_node->map->end;
+			map->pgoff	= new_node->map->pgoff;
+			map->map_ip	= new_node->map->map_ip;
+			map->unmap_ip	= new_node->map->unmap_ip;
 			/* Ensure maps are correctly ordered */
 			map__get(map);
 			maps__remove(kmaps, map);
 			maps__insert(kmaps, map);
 			map__put(map);
-			map__put(new_map);
+			map__put(new_node->map);
 		} else {
 			/*
 			 * Merge kcore map into existing maps,
 			 * and ensure that current maps (eBPF)
 			 * stay intact.
 			 */
-			if (maps__merge_in(kmaps, new_map))
+			if (maps__merge_in(kmaps, new_node->map))
 				goto out_err;
 		}
+		free(new_node);
 	}
 
 	if (machine__is(machine, "x86_64")) {
@@ -1462,9 +1490,12 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 out_err:
 	while (!list_empty(&md.maps)) {
-		map = list_entry(md.maps.next, struct map, node);
-		list_del_init(&map->node);
-		map__put(map);
+		struct map_list_node *list_node;
+
+		list_node = list_entry(md.maps.next, struct map_list_node, node);
+		list_del_init(&list_node->node);
+		map__put(list_node->map);
+		free(list_node);
 	}
 	close(fd);
 	return -EINVAL;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 02/17] perf maps: Remove rb_node from struct map
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 01/17] perf map: Move map list node into symbol Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 03/17] perf maps: Add functions to access maps Ian Rogers
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

struct map is reference counted, and having it also be a node in a
red-black tree complicates the reference counting. Switch to having a
map_rb_node, which is a red-black tree node that points at the
reference counted struct map. Each such node is responsible for a
single reference count.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/x86/util/event.c      |  13 +-
 tools/perf/builtin-report.c           |   6 +-
 tools/perf/tests/maps.c               |   8 +-
 tools/perf/tests/vmlinux-kallsyms.c   |  17 ++-
 tools/perf/util/bpf_lock_contention.c |   2 +-
 tools/perf/util/machine.c             |  68 ++++++----
 tools/perf/util/map.c                 |  16 ---
 tools/perf/util/map.h                 |   1 -
 tools/perf/util/maps.c                | 180 +++++++++++++++++---------
 tools/perf/util/maps.h                |  17 ++-
 tools/perf/util/probe-event.c         |  18 ++-
 tools/perf/util/symbol-elf.c          |   9 +-
 tools/perf/util/symbol.c              |  77 +++++++----
 tools/perf/util/synthetic-events.c    |  26 ++--
 tools/perf/util/thread.c              |  10 +-
 tools/perf/util/vdso.c                |   7 +-
 16 files changed, 291 insertions(+), 184 deletions(-)

diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index e4288d09f3a0..17bf60babfbd 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -19,7 +19,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 				       struct machine *machine)
 {
 	int rc = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 	struct maps *kmaps = machine__kernel_maps(machine);
 	union perf_event *event = zalloc(sizeof(event->mmap) +
 					 machine->id_hdr_size);
@@ -33,11 +33,12 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 	maps__for_each_entry(kmaps, pos) {
 		struct kmap *kmap;
 		size_t size;
+		struct map *map = pos->map;
 
-		if (!__map__is_extra_kernel_map(pos))
+		if (!__map__is_extra_kernel_map(map))
 			continue;
 
-		kmap = map__kmap(pos);
+		kmap = map__kmap(map);
 
 		size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
 		       PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
@@ -58,9 +59,9 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 
 		event->mmap.header.size = size;
 
-		event->mmap.start = pos->start;
-		event->mmap.len   = pos->end - pos->start;
-		event->mmap.pgoff = pos->pgoff;
+		event->mmap.start = map->start;
+		event->mmap.len   = map->end - map->start;
+		event->mmap.pgoff = map->pgoff;
 		event->mmap.pid   = machine->pid;
 
 		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 6400615b5e98..c453b7fa7418 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -840,9 +840,11 @@ static struct task *tasks_list(struct task *task, struct machine *machine)
 static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 {
 	size_t printed = 0;
-	struct map *map;
+	struct map_rb_node *rb_node;
+
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
 
-	maps__for_each_entry(maps, map) {
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
 				   indent, "", map->start, map->end,
 				   map->prot & PROT_READ ? 'r' : '-',
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index a69988a89d26..8246d37e4b7a 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -15,10 +15,12 @@ struct map_def {
 
 static int check_maps(struct map_def *merged, unsigned int size, struct maps *maps)
 {
-	struct map *map;
+	struct map_rb_node *rb_node;
 	unsigned int i = 0;
 
-	maps__for_each_entry(maps, map) {
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
+
 		if (i > 0)
 			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
 
@@ -74,7 +76,7 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 
 		map->start = bpf_progs[i].start;
 		map->end   = bpf_progs[i].end;
-		maps__insert(maps, map);
+		TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
 		map__put(map);
 	}
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 8ab035b55875..c8abb3ca8347 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -118,7 +118,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	int err = TEST_FAIL;
 	struct rb_node *nd;
 	struct symbol *sym;
-	struct map *kallsyms_map, *vmlinux_map, *map;
+	struct map *kallsyms_map, *vmlinux_map;
+	struct map_rb_node *rb_node;
 	struct machine kallsyms, vmlinux;
 	struct maps *maps;
 	u64 mem_start, mem_end;
@@ -290,15 +291,15 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	header_printed = false;
 
-	maps__for_each_entry(maps, map) {
-		struct map *
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
 		/*
 		 * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
 		 * the kernel will have the path for the vmlinux file being used,
 		 * so use the short name, less descriptive but the same ("[kernel]" in
 		 * both cases.
 		 */
-		pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
+		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
 								map->dso->short_name :
 								map->dso->name));
 		if (pair) {
@@ -314,8 +315,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	header_printed = false;
 
-	maps__for_each_entry(maps, map) {
-		struct map *pair;
+	maps__for_each_entry(maps, rb_node) {
+		struct map *pair, *map = rb_node->map;
 
 		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
 		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
@@ -344,7 +345,9 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	maps = machine__kernel_maps(&kallsyms);
 
-	maps__for_each_entry(maps, map) {
+	maps__for_each_entry(maps, rb_node) {
+		struct map *map = rb_node->map;
+
 		if (!map->priv) {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in kallsyms:\n");
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 235fc7150545..0b47863d2460 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -282,7 +282,7 @@ int lock_contention_read(struct lock_contention *con)
 	}
 
 	/* make sure it loads the kernel map */
-	map__load(maps__first(machine->kmaps));
+	map__load(maps__first(machine->kmaps)->map);
 
 	prev_key = NULL;
 	while (!bpf_map_get_next_key(fd, prev_key, &key)) {
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 803c9d1803dd..93a07079d174 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -882,6 +882,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 
 	if (!map) {
 		struct dso *dso = dso__new(event->ksymbol.name);
+		int err;
 
 		if (dso) {
 			dso->kernel = DSO_SPACE__KERNEL;
@@ -901,8 +902,11 @@ static int machine__process_ksymbol_register(struct machine *machine,
 
 		map->start = event->ksymbol.addr;
 		map->end = map->start + event->ksymbol.len;
-		maps__insert(machine__kernel_maps(machine), map);
+		err = maps__insert(machine__kernel_maps(machine), map);
 		map__put(map);
+		if (err)
+			return err;
+
 		dso__set_loaded(dso);
 
 		if (is_bpf_image(event->ksymbol.name)) {
@@ -1002,6 +1006,7 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	struct map *map = NULL;
 	struct kmod_path m;
 	struct dso *dso;
+	int err;
 
 	if (kmod_path__parse_name(&m, filename))
 		return NULL;
@@ -1014,10 +1019,14 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 	if (map == NULL)
 		goto out;
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 
 	/* Put the map here because maps__insert already got it */
 	map__put(map);
+
+	/* If maps__insert failed, return NULL. */
+	if (err)
+		map = NULL;
 out:
 	/* put the dso here, corresponding to  machine__findnew_module_dso */
 	dso__put(dso);
@@ -1184,10 +1193,11 @@ int machine__create_extra_kernel_map(struct machine *machine,
 {
 	struct kmap *kmap;
 	struct map *map;
+	int err;
 
 	map = map__new2(xm->start, kernel);
 	if (!map)
-		return -1;
+		return -ENOMEM;
 
 	map->end   = xm->end;
 	map->pgoff = xm->pgoff;
@@ -1196,14 +1206,16 @@ int machine__create_extra_kernel_map(struct machine *machine,
 
 	strlcpy(kmap->name, xm->name, KMAP_NAME_LEN);
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 
-	pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
-		  kmap->name, map->start, map->end);
+	if (!err) {
+		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
+			kmap->name, map->start, map->end);
+	}
 
 	map__put(map);
 
-	return 0;
+	return err;
 }
 
 static u64 find_entry_trampoline(struct dso *dso)
@@ -1244,16 +1256,16 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 	struct maps *kmaps = machine__kernel_maps(machine);
 	int nr_cpus_avail, cpu;
 	bool found = false;
-	struct map *map;
+	struct map_rb_node *rb_node;
 	u64 pgoff;
 
 	/*
 	 * In the vmlinux case, pgoff is a virtual address which must now be
 	 * mapped to a vmlinux offset.
 	 */
-	maps__for_each_entry(kmaps, map) {
+	maps__for_each_entry(kmaps, rb_node) {
+		struct map *dest_map, *map = rb_node->map;
 		struct kmap *kmap = __map__kmap(map);
-		struct map *dest_map;
 
 		if (!kmap || !is_entry_trampoline(kmap->name))
 			continue;
@@ -1308,11 +1320,10 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 
 	machine->vmlinux_map = map__new2(0, kernel);
 	if (machine->vmlinux_map == NULL)
-		return -1;
+		return -ENOMEM;
 
 	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
-	maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
-	return 0;
+	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 }
 
 void machine__destroy_kernel_maps(struct machine *machine)
@@ -1634,25 +1645,26 @@ static void machine__set_kernel_mmap(struct machine *machine,
 		machine->vmlinux_map->end = ~0ULL;
 }
 
-static void machine__update_kernel_mmap(struct machine *machine,
+static int machine__update_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
 	struct map *map = machine__kernel_map(machine);
+	int err;
 
 	map__get(map);
 	maps__remove(machine__kernel_maps(machine), map);
 
 	machine__set_kernel_mmap(machine, start, end);
 
-	maps__insert(machine__kernel_maps(machine), map);
+	err = maps__insert(machine__kernel_maps(machine), map);
 	map__put(map);
+	return err;
 }
 
 int machine__create_kernel_maps(struct machine *machine)
 {
 	struct dso *kernel = machine__get_kernel(machine);
 	const char *name = NULL;
-	struct map *map;
 	u64 start = 0, end = ~0ULL;
 	int ret;
 
@@ -1684,7 +1696,9 @@ int machine__create_kernel_maps(struct machine *machine)
 		 * we have a real start address now, so re-order the kmaps
 		 * assume it's the last in the kmaps
 		 */
-		machine__update_kernel_mmap(machine, start, end);
+		ret = machine__update_kernel_mmap(machine, start, end);
+		if (ret < 0)
+			goto out_put;
 	}
 
 	if (machine__create_extra_kernel_maps(machine, kernel))
@@ -1692,9 +1706,12 @@ int machine__create_kernel_maps(struct machine *machine)
 
 	if (end == ~0ULL) {
 		/* update end address of the kernel map using adjacent module address */
-		map = map__next(machine__kernel_map(machine));
-		if (map)
-			machine__set_kernel_mmap(machine, start, map->start);
+		struct map_rb_node *rb_node = maps__find_node(machine__kernel_maps(machine),
+							machine__kernel_map(machine));
+		struct map_rb_node *next = map_rb_node__next(rb_node);
+
+		if (next)
+			machine__set_kernel_mmap(machine, start, next->map->start);
 	}
 
 out_put:
@@ -1827,7 +1844,10 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (strstr(kernel->long_name, "vmlinux"))
 			dso__set_short_name(kernel, "[kernel.vmlinux]", false);
 
-		machine__update_kernel_mmap(machine, xm->start, xm->end);
+		if (machine__update_kernel_mmap(machine, xm->start, xm->end) < 0) {
+			dso__put(kernel);
+			goto out_problem;
+		}
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(kernel, bid);
@@ -3325,11 +3345,11 @@ int machine__for_each_dso(struct machine *machine, machine__dso_t fn, void *priv
 int machine__for_each_kernel_map(struct machine *machine, machine__map_t fn, void *priv)
 {
 	struct maps *maps = machine__kernel_maps(machine);
-	struct map *map;
+	struct map_rb_node *pos;
 	int err = 0;
 
-	for (map = maps__first(maps); map != NULL; map = map__next(map)) {
-		err = fn(map, priv);
+	maps__for_each_entry(maps, pos) {
+		err = fn(pos->map, priv);
 		if (err != 0) {
 			break;
 		}
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index f3a3d9b3a40d..7620cfa114d4 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -111,7 +111,6 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 	map->dso      = dso__get(dso);
 	map->map_ip   = map__map_ip;
 	map->unmap_ip = map__unmap_ip;
-	RB_CLEAR_NODE(&map->rb_node);
 	map->erange_warned = false;
 	refcount_set(&map->refcnt, 1);
 }
@@ -397,7 +396,6 @@ struct map *map__clone(struct map *from)
 	map = memdup(from, size);
 	if (map != NULL) {
 		refcount_set(&map->refcnt, 1);
-		RB_CLEAR_NODE(&map->rb_node);
 		dso__get(map->dso);
 	}
 
@@ -537,20 +535,6 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 	return ip >= map->start && ip < map->end;
 }
 
-static struct map *__map__next(struct map *map)
-{
-	struct rb_node *next = rb_next(&map->rb_node);
-
-	if (next)
-		return rb_entry(next, struct map, rb_node);
-	return NULL;
-}
-
-struct map *map__next(struct map *map)
-{
-	return map ? __map__next(map) : NULL;
-}
-
 struct kmap *__map__kmap(struct map *map)
 {
 	if (!map->dso || !map->dso->kernel)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 2879cae05ee0..d1a6f85fd31d 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -16,7 +16,6 @@ struct maps;
 struct machine;
 
 struct map {
-	struct rb_node		rb_node;
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 37bd5b40000d..83ec126bcbe5 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -10,8 +10,6 @@
 #include "ui/ui.h"
 #include "unwind.h"
 
-static void __maps__insert(struct maps *maps, struct map *map);
-
 static void maps__init(struct maps *maps, struct machine *machine)
 {
 	maps->entries = RB_ROOT;
@@ -32,10 +30,44 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	maps->nr_maps_allocated = 0;
 }
 
-void maps__insert(struct maps *maps, struct map *map)
+static int __maps__insert(struct maps *maps, struct map *map)
 {
+	struct rb_node **p = &maps->entries.rb_node;
+	struct rb_node *parent = NULL;
+	const u64 ip = map->start;
+	struct map_rb_node *m, *new_rb_node;
+
+	new_rb_node = malloc(sizeof(*new_rb_node));
+	if (!new_rb_node)
+		return -ENOMEM;
+
+	RB_CLEAR_NODE(&new_rb_node->rb_node);
+	new_rb_node->map = map;
+
+	while (*p != NULL) {
+		parent = *p;
+		m = rb_entry(parent, struct map_rb_node, rb_node);
+		if (ip < m->map->start)
+			p = &(*p)->rb_left;
+		else
+			p = &(*p)->rb_right;
+	}
+
+	rb_link_node(&new_rb_node->rb_node, parent, p);
+	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
+	map__get(map);
+	return 0;
+}
+
+int maps__insert(struct maps *maps, struct map *map)
+{
+	int err;
+
 	down_write(&maps->lock);
-	__maps__insert(maps, map);
+	err = __maps__insert(maps, map);
+	if (err)
+		goto out;
+
 	++maps->nr_maps;
 
 	if (map->dso && map->dso->kernel) {
@@ -59,32 +91,39 @@ void maps__insert(struct maps *maps, struct map *map)
 
 			if (maps_by_name == NULL) {
 				__maps__free_maps_by_name(maps);
-				up_write(&maps->lock);
-				return;
+				err = -ENOMEM;
+				goto out;
 			}
 
 			maps->maps_by_name = maps_by_name;
 			maps->nr_maps_allocated = nr_allocate;
-		}
+}
 		maps->maps_by_name[maps->nr_maps - 1] = map;
 		__maps__sort_by_name(maps);
 	}
+ out:
 	up_write(&maps->lock);
+	return err;
 }
 
-static void __maps__remove(struct maps *maps, struct map *map)
+static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
 {
-	rb_erase_init(&map->rb_node, &maps->entries);
-	map__put(map);
+	rb_erase_init(&rb_node->rb_node, &maps->entries);
+	map__put(rb_node->map);
+	free(rb_node);
 }
 
 void maps__remove(struct maps *maps, struct map *map)
 {
+	struct map_rb_node *rb_node;
+
 	down_write(&maps->lock);
 	if (maps->last_search_by_name == map)
 		maps->last_search_by_name = NULL;
 
-	__maps__remove(maps, map);
+	rb_node = maps__find_node(maps, map);
+	assert(rb_node->map == map);
+	__maps__remove(maps, rb_node);
 	--maps->nr_maps;
 	if (maps->maps_by_name)
 		__maps__free_maps_by_name(maps);
@@ -93,11 +132,12 @@ void maps__remove(struct maps *maps, struct map *map)
 
 static void __maps__purge(struct maps *maps)
 {
-	struct map *pos, *next;
+	struct map_rb_node *pos, *next;
 
 	maps__for_each_entry_safe(maps, pos, next) {
 		rb_erase_init(&pos->rb_node,  &maps->entries);
-		map__put(pos);
+		map__put(pos->map);
+		free(pos);
 	}
 }
 
@@ -153,21 +193,21 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
 struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, struct map **mapp)
 {
 	struct symbol *sym;
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	down_read(&maps->lock);
 
 	maps__for_each_entry(maps, pos) {
-		sym = map__find_symbol_by_name(pos, name);
+		sym = map__find_symbol_by_name(pos->map, name);
 
 		if (sym == NULL)
 			continue;
-		if (!map__contains_symbol(pos, sym)) {
+		if (!map__contains_symbol(pos->map, sym)) {
 			sym = NULL;
 			continue;
 		}
 		if (mapp != NULL)
-			*mapp = pos;
+			*mapp = pos->map;
 		goto out;
 	}
 
@@ -196,15 +236,15 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 size_t maps__fprintf(struct maps *maps, FILE *fp)
 {
 	size_t printed = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	down_read(&maps->lock);
 
 	maps__for_each_entry(maps, pos) {
 		printed += fprintf(fp, "Map:");
-		printed += map__fprintf(pos, fp);
+		printed += map__fprintf(pos->map, fp);
 		if (verbose > 2) {
-			printed += dso__fprintf(pos->dso, fp);
+			printed += dso__fprintf(pos->map->dso, fp);
 			printed += fprintf(fp, "--\n");
 		}
 	}
@@ -231,11 +271,11 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	next = root->rb_node;
 	first = NULL;
 	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
+		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 
-		if (pos->end > map->start) {
+		if (pos->map->end > map->start) {
 			first = next;
-			if (pos->start <= map->start)
+			if (pos->map->start <= map->start)
 				break;
 			next = next->rb_left;
 		} else
@@ -244,14 +284,14 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 	next = first;
 	while (next) {
-		struct map *pos = rb_entry(next, struct map, rb_node);
+		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 		next = rb_next(&pos->rb_node);
 
 		/*
 		 * Stop if current map starts after map->end.
 		 * Maps are ordered by start: next will not overlap for sure.
 		 */
-		if (pos->start >= map->end)
+		if (pos->map->start >= map->end)
 			break;
 
 		if (verbose >= 2) {
@@ -262,7 +302,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			} else {
 				fputs("overlapping maps:\n", fp);
 				map__fprintf(map, fp);
-				map__fprintf(pos, fp);
+				map__fprintf(pos->map, fp);
 			}
 		}
 
@@ -271,8 +311,8 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Now check if we need to create new maps for areas not
 		 * overlapped by the new map:
 		 */
-		if (map->start > pos->start) {
-			struct map *before = map__clone(pos);
+		if (map->start > pos->map->start) {
+			struct map *before = map__clone(pos->map);
 
 			if (before == NULL) {
 				err = -ENOMEM;
@@ -280,14 +320,17 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			}
 
 			before->end = map->start;
-			__maps__insert(maps, before);
+			err = __maps__insert(maps, before);
+			if (err)
+				goto put_map;
+
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(before, fp);
 			map__put(before);
 		}
 
-		if (map->end < pos->end) {
-			struct map *after = map__clone(pos);
+		if (map->end < pos->map->end) {
+			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
 				err = -ENOMEM;
@@ -295,15 +338,19 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			}
 
 			after->start = map->end;
-			after->pgoff += map->end - pos->start;
-			assert(pos->map_ip(pos, map->end) == after->map_ip(after, map->end));
-			__maps__insert(maps, after);
+			after->pgoff += map->end - pos->map->start;
+			assert(pos->map->map_ip(pos->map, map->end) ==
+				after->map_ip(after, map->end));
+			err = __maps__insert(maps, after);
+			if (err)
+				goto put_map;
+
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(after, fp);
 			map__put(after);
 		}
 put_map:
-		map__put(pos);
+		map__put(pos->map);
 
 		if (err)
 			goto out;
@@ -322,12 +369,12 @@ int maps__clone(struct thread *thread, struct maps *parent)
 {
 	struct maps *maps = thread->maps;
 	int err;
-	struct map *map;
+	struct map_rb_node *rb_node;
 
 	down_read(&parent->lock);
 
-	maps__for_each_entry(parent, map) {
-		struct map *new = map__clone(map);
+	maps__for_each_entry(parent, rb_node) {
+		struct map *new = map__clone(rb_node->map);
 
 		if (new == NULL) {
 			err = -ENOMEM;
@@ -338,7 +385,10 @@ int maps__clone(struct thread *thread, struct maps *parent)
 		if (err)
 			goto out_unlock;
 
-		maps__insert(maps, new);
+		err = maps__insert(maps, new);
+		if (err)
+			goto out_unlock;
+
 		map__put(new);
 	}
 
@@ -348,40 +398,31 @@ int maps__clone(struct thread *thread, struct maps *parent)
 	return err;
 }
 
-static void __maps__insert(struct maps *maps, struct map *map)
+struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
 {
-	struct rb_node **p = &maps->entries.rb_node;
-	struct rb_node *parent = NULL;
-	const u64 ip = map->start;
-	struct map *m;
+	struct map_rb_node *rb_node;
 
-	while (*p != NULL) {
-		parent = *p;
-		m = rb_entry(parent, struct map, rb_node);
-		if (ip < m->start)
-			p = &(*p)->rb_left;
-		else
-			p = &(*p)->rb_right;
+	maps__for_each_entry(maps, rb_node) {
+		if (rb_node->map == map)
+			return rb_node;
 	}
-
-	rb_link_node(&map->rb_node, parent, p);
-	rb_insert_color(&map->rb_node, &maps->entries);
-	map__get(map);
+	return NULL;
 }
 
 struct map *maps__find(struct maps *maps, u64 ip)
 {
 	struct rb_node *p;
-	struct map *m;
+	struct map_rb_node *m;
+
 
 	down_read(&maps->lock);
 
 	p = maps->entries.rb_node;
 	while (p != NULL) {
-		m = rb_entry(p, struct map, rb_node);
-		if (ip < m->start)
+		m = rb_entry(p, struct map_rb_node, rb_node);
+		if (ip < m->map->start)
 			p = p->rb_left;
-		else if (ip >= m->end)
+		else if (ip >= m->map->end)
 			p = p->rb_right;
 		else
 			goto out;
@@ -390,14 +431,29 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	m = NULL;
 out:
 	up_read(&maps->lock);
-	return m;
+	return m ? m->map : NULL;
 }
 
-struct map *maps__first(struct maps *maps)
+struct map_rb_node *maps__first(struct maps *maps)
 {
 	struct rb_node *first = rb_first(&maps->entries);
 
 	if (first)
-		return rb_entry(first, struct map, rb_node);
+		return rb_entry(first, struct map_rb_node, rb_node);
 	return NULL;
 }
+
+struct map_rb_node *map_rb_node__next(struct map_rb_node *node)
+{
+	struct rb_node *next;
+
+	if (!node)
+		return NULL;
+
+	next = rb_next(&node->rb_node);
+
+	if (!next)
+		return NULL;
+
+	return rb_entry(next, struct map_rb_node, rb_node);
+}
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index 7e729ff42749..512746ec0f9a 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -15,15 +15,22 @@ struct map;
 struct maps;
 struct thread;
 
+struct map_rb_node {
+	struct rb_node rb_node;
+	struct map *map;
+};
+
+struct map_rb_node *maps__first(struct maps *maps);
+struct map_rb_node *map_rb_node__next(struct map_rb_node *node);
+struct map_rb_node *maps__find_node(struct maps *maps, struct map *map);
 struct map *maps__find(struct maps *maps, u64 addr);
-struct map *maps__first(struct maps *maps);
-struct map *map__next(struct map *map);
 
 #define maps__for_each_entry(maps, map) \
-	for (map = maps__first(maps); map; map = map__next(map))
+	for (map = maps__first(maps); map; map = map_rb_node__next(map))
 
 #define maps__for_each_entry_safe(maps, map, next) \
-	for (map = maps__first(maps), next = map__next(map); map; map = next, next = map__next(map))
+	for (map = maps__first(maps), next = map_rb_node__next(map); map; \
+	     map = next, next = map_rb_node__next(map))
 
 struct maps {
 	struct rb_root      entries;
@@ -63,7 +70,7 @@ void maps__put(struct maps *maps);
 int maps__clone(struct thread *thread, struct maps *parent);
 size_t maps__fprintf(struct maps *maps, FILE *fp);
 
-void maps__insert(struct maps *maps, struct map *map);
+int maps__insert(struct maps *maps, struct map *map);
 
 void maps__remove(struct maps *maps, struct map *map);
 
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 881d94f65a6b..cdf5d655d84c 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -151,23 +151,27 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 static struct map *kernel_get_module_map(const char *module)
 {
 	struct maps *maps = machine__kernel_maps(host_machine);
-	struct map *pos;
+	struct map_rb_node *pos;
 
 	/* A file path -- this is an offline module */
 	if (module && strchr(module, '/'))
 		return dso__new_map(module);
 
 	if (!module) {
-		pos = machine__kernel_map(host_machine);
-		return map__get(pos);
+		struct map *map = machine__kernel_map(host_machine);
+
+		return map__get(map);
 	}
 
 	maps__for_each_entry(maps, pos) {
 		/* short_name is "[module]" */
-		if (strncmp(pos->dso->short_name + 1, module,
-			    pos->dso->short_name_len - 2) == 0 &&
-		    module[pos->dso->short_name_len - 2] == '\0') {
-			return map__get(pos);
+		const char *short_name = pos->map->dso->short_name;
+		u16 short_name_len =  pos->map->dso->short_name_len;
+
+		if (strncmp(short_name + 1, module,
+			    short_name_len - 2) == 0 &&
+		    module[short_name_len - 2] == '\0') {
+			return map__get(pos->map);
 		}
 	}
 	return NULL;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index c0a2de42c51b..325fbeea8dff 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1355,10 +1355,14 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map->unmap_ip = map__unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
+				int err;
+
 				map__get(map);
 				maps__remove(kmaps, map);
-				maps__insert(kmaps, map);
+				err = maps__insert(kmaps, map);
 				map__put(map);
+				if (err)
+					return err;
 			}
 		}
 
@@ -1411,7 +1415,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
 		}
 		curr_dso->symtab_type = dso->symtab_type;
-		maps__insert(kmaps, curr_map);
+		if (maps__insert(kmaps, curr_map))
+			return -1;
 		/*
 		 * Add it before we drop the reference to curr_map, i.e. while
 		 * we still are sure to have a reference to this DSO via
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 65e0c3d126f1..d9aa41e20d5f 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -273,13 +273,13 @@ void symbols__fixup_end(struct rb_root_cached *symbols, bool is_kallsyms)
 
 void maps__fixup_end(struct maps *maps)
 {
-	struct map *prev = NULL, *curr;
+	struct map_rb_node *prev = NULL, *curr;
 
 	down_write(&maps->lock);
 
 	maps__for_each_entry(maps, curr) {
-		if (prev != NULL && !prev->end)
-			prev->end = curr->start;
+		if (prev != NULL && !prev->map->end)
+			prev->map->end = curr->map->start;
 
 		prev = curr;
 	}
@@ -288,8 +288,8 @@ void maps__fixup_end(struct maps *maps)
 	 * We still haven't the actual symbols, so guess the
 	 * last map final address.
 	 */
-	if (curr && !curr->end)
-		curr->end = ~0ULL;
+	if (curr && !curr->map->end)
+		curr->map->end = ~0ULL;
 
 	up_write(&maps->lock);
 }
@@ -942,7 +942,10 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			}
 
 			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
-			maps__insert(kmaps, curr_map);
+			if (maps__insert(kmaps, curr_map)) {
+				dso__put(ndso);
+				return -1;
+			}
 			++kernel_range;
 		} else if (delta) {
 			/* Kernel was relocated at boot time */
@@ -1130,14 +1133,15 @@ int compare_proc_modules(const char *from, const char *to)
 static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
 {
 	struct rb_root modules = RB_ROOT;
-	struct map *old_map;
+	struct map_rb_node *old_node;
 	int err;
 
 	err = read_proc_modules(filename, &modules);
 	if (err)
 		return err;
 
-	maps__for_each_entry(kmaps, old_map) {
+	maps__for_each_entry(kmaps, old_node) {
+		struct map *old_map = old_node->map;
 		struct module_info *mi;
 
 		if (!__map__is_kmodule(old_map)) {
@@ -1254,10 +1258,13 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
  */
 int maps__merge_in(struct maps *kmaps, struct map *new_map)
 {
-	struct map *old_map;
+	struct map_rb_node *rb_node;
 	LIST_HEAD(merged);
+	int err = 0;
+
+	maps__for_each_entry(kmaps, rb_node) {
+		struct map *old_map = rb_node->map;
 
-	maps__for_each_entry(kmaps, old_map) {
 		/* no overload with this one */
 		if (new_map->end < old_map->start ||
 		    new_map->start >= old_map->end)
@@ -1281,13 +1288,16 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 */
 				struct map_list_node *m = map_list_node__new();
 
-				if (!m)
-					return -ENOMEM;
+				if (!m) {
+					err = -ENOMEM;
+					goto out;
+				}
 
 				m->map = map__clone(new_map);
 				if (!m->map) {
 					free(m);
-					return -ENOMEM;
+					err = -ENOMEM;
+					goto out;
 				}
 
 				m->map->end = old_map->start;
@@ -1319,21 +1329,24 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 		}
 	}
 
+out:
 	while (!list_empty(&merged)) {
 		struct map_list_node *old_node;
 
 		old_node = list_entry(merged.next, struct map_list_node, node);
 		list_del_init(&old_node->node);
-		maps__insert(kmaps, old_node->map);
+		if (!err)
+			err = maps__insert(kmaps, old_node->map);
 		map__put(old_node->map);
 		free(old_node);
 	}
 
 	if (new_map) {
-		maps__insert(kmaps, new_map);
+		if (!err)
+			err = maps__insert(kmaps, new_map);
 		map__put(new_map);
 	}
-	return 0;
+	return err;
 }
 
 static int dso__load_kcore(struct dso *dso, struct map *map,
@@ -1341,7 +1354,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 {
 	struct maps *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
-	struct map *old_map, *replacement_map = NULL, *next;
+	struct map *replacement_map = NULL;
+	struct map_rb_node *old_node, *next;
 	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
@@ -1388,7 +1402,9 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	}
 
 	/* Remove old maps */
-	maps__for_each_entry_safe(kmaps, old_map, next) {
+	maps__for_each_entry_safe(kmaps, old_node, next) {
+		struct map *old_map = old_node->map;
+
 		/*
 		 * We need to preserve eBPF maps even if they are
 		 * covered by kcore, because we need to access
@@ -1441,17 +1457,21 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 			/* Ensure maps are correctly ordered */
 			map__get(map);
 			maps__remove(kmaps, map);
-			maps__insert(kmaps, map);
+			err = maps__insert(kmaps, map);
 			map__put(map);
 			map__put(new_node->map);
+			if (err)
+				goto out_err;
 		} else {
 			/*
 			 * Merge kcore map into existing maps,
 			 * and ensure that current maps (eBPF)
 			 * stay intact.
 			 */
-			if (maps__merge_in(kmaps, new_node->map))
+			if (maps__merge_in(kmaps, new_node->map)) {
+				err = -EINVAL;
 				goto out_err;
+			}
 		}
 		free(new_node);
 	}
@@ -1498,7 +1518,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		free(list_node);
 	}
 	close(fd);
-	return -EINVAL;
+	return err;
 }
 
 /*
@@ -2042,8 +2062,9 @@ void __maps__sort_by_name(struct maps *maps)
 
 static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 {
-	struct map *map;
-	struct map **maps_by_name = realloc(maps->maps_by_name, maps->nr_maps * sizeof(map));
+	struct map_rb_node *rb_node;
+	struct map **maps_by_name = realloc(maps->maps_by_name,
+					    maps->nr_maps * sizeof(struct map *));
 	int i = 0;
 
 	if (maps_by_name == NULL)
@@ -2055,8 +2076,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 	maps->maps_by_name = maps_by_name;
 	maps->nr_maps_allocated = maps->nr_maps;
 
-	maps__for_each_entry(maps, map)
-		maps_by_name[i++] = map;
+	maps__for_each_entry(maps, rb_node)
+		maps_by_name[i++] = rb_node->map;
 
 	__maps__sort_by_name(maps);
 
@@ -2082,6 +2103,7 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
 
 struct map *maps__find_by_name(struct maps *maps, const char *name)
 {
+	struct map_rb_node *rb_node;
 	struct map *map;
 
 	down_read(&maps->lock);
@@ -2100,12 +2122,13 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 		goto out_unlock;
 
 	/* Fallback to traversing the rbtree... */
-	maps__for_each_entry(maps, map)
+	maps__for_each_entry(maps, rb_node) {
+		map = rb_node->map;
 		if (strcmp(map->dso->short_name, name) == 0) {
 			maps->last_search_by_name = map;
 			goto out_unlock;
 		}
-
+	}
 	map = NULL;
 
 out_unlock:
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index 6def01036eb5..57b95c1d7e39 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -669,7 +669,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 				   struct machine *machine)
 {
 	int rc = 0;
-	struct map *pos;
+	struct map_rb_node *pos;
 	struct maps *maps = machine__kernel_maps(machine);
 	union perf_event *event;
 	size_t size = symbol_conf.buildid_mmap2 ?
@@ -692,37 +692,39 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 		event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
 
 	maps__for_each_entry(maps, pos) {
-		if (!__map__is_kmodule(pos))
+		struct map *map = pos->map;
+
+		if (!__map__is_kmodule(map))
 			continue;
 
 		if (symbol_conf.buildid_mmap2) {
-			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
 			event->mmap2.header.type = PERF_RECORD_MMAP2;
 			event->mmap2.header.size = (sizeof(event->mmap2) -
 						(sizeof(event->mmap2.filename) - size));
 			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
 			event->mmap2.header.size += machine->id_hdr_size;
-			event->mmap2.start = pos->start;
-			event->mmap2.len   = pos->end - pos->start;
+			event->mmap2.start = map->start;
+			event->mmap2.len   = map->end - map->start;
 			event->mmap2.pid   = machine->pid;
 
-			memcpy(event->mmap2.filename, pos->dso->long_name,
-			       pos->dso->long_name_len + 1);
+			memcpy(event->mmap2.filename, map->dso->long_name,
+			       map->dso->long_name_len + 1);
 
 			perf_record_mmap2__read_build_id(&event->mmap2, machine, false);
 		} else {
-			size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
 			event->mmap.header.type = PERF_RECORD_MMAP;
 			event->mmap.header.size = (sizeof(event->mmap) -
 						(sizeof(event->mmap.filename) - size));
 			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
 			event->mmap.header.size += machine->id_hdr_size;
-			event->mmap.start = pos->start;
-			event->mmap.len   = pos->end - pos->start;
+			event->mmap.start = map->start;
+			event->mmap.len   = map->end - map->start;
 			event->mmap.pid   = machine->pid;
 
-			memcpy(event->mmap.filename, pos->dso->long_name,
-			       pos->dso->long_name_len + 1);
+			memcpy(event->mmap.filename, map->dso->long_name,
+			       map->dso->long_name_len + 1);
 		}
 
 		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index a2490a20eb56..24e53bd55f7d 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -352,9 +352,7 @@ int thread__insert_map(struct thread *thread, struct map *map)
 		return ret;
 
 	maps__fixup_overlappings(thread->maps, map, stderr);
-	maps__insert(thread->maps, map);
-
-	return 0;
+	return maps__insert(thread->maps, map);
 }
 
 static int __thread__prepare_access(struct thread *thread)
@@ -362,12 +360,12 @@ static int __thread__prepare_access(struct thread *thread)
 	bool initialized = false;
 	int err = 0;
 	struct maps *maps = thread->maps;
-	struct map *map;
+	struct map_rb_node *rb_node;
 
 	down_read(&maps->lock);
 
-	maps__for_each_entry(maps, map) {
-		err = unwind__prepare_access(thread->maps, map, &initialized);
+	maps__for_each_entry(maps, rb_node) {
+		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
 		if (err || initialized)
 			break;
 	}
diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
index 43beb169631d..835c39efb80d 100644
--- a/tools/perf/util/vdso.c
+++ b/tools/perf/util/vdso.c
@@ -144,10 +144,11 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
 					      struct thread *thread)
 {
 	enum dso_type dso_type = DSO__TYPE_UNKNOWN;
-	struct map *map;
+	struct map_rb_node *rb_node;
+
+	maps__for_each_entry(thread->maps, rb_node) {
+		struct dso *dso = rb_node->map->dso;
 
-	maps__for_each_entry(thread->maps, map) {
-		struct dso *dso = map->dso;
 		if (!dso || dso->long_name[0] != '/')
 			continue;
 		dso_type = dso__type(dso, machine);
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 03/17] perf maps: Add functions to access maps
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 01/17] perf map: Move map list node into symbol Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 02/17] perf maps: Remove rb_node from struct map Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 04/17] perf map: Add accessor for dso Ian Rogers
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Introduce functions to access struct maps. These functions reduce the
number of places where reference counting is necessary. While tidying
the APIs, do some small const-ification, in particular to
unwind_libunwind_ops.
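
To illustrate the shape of the change (a sketch only, mirroring the
tools/perf/util/maps.h hunk further down in this patch), the accessors are
thin inline helpers and call sites stop reaching into struct maps directly:

	static inline struct machine *maps__machine(struct maps *maps)
	{
		return maps->machine;
	}

	static inline struct rw_semaphore *maps__lock(struct maps *maps)
	{
		return &maps->lock;
	}

	/* e.g. thread__find_map() in util/event.c then reads: */
	struct machine *machine = maps__machine(thread->maps);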

Signed-off-by: Ian Rogers <irogers@google.com>
---
 .../scripts/python/Perf-Trace-Util/Context.c  |  7 +-
 tools/perf/tests/code-reading.c               |  2 +-
 tools/perf/ui/browsers/hists.c                |  3 +-
 tools/perf/util/callchain.c                   |  9 +--
 tools/perf/util/db-export.c                   | 12 ++--
 tools/perf/util/dlfilter.c                    |  8 ++-
 tools/perf/util/event.c                       |  4 +-
 tools/perf/util/hist.c                        |  2 +-
 tools/perf/util/machine.c                     |  2 +-
 tools/perf/util/map.c                         | 14 ++--
 tools/perf/util/maps.c                        | 71 +++++++++++--------
 tools/perf/util/maps.h                        | 47 +++++++++---
 .../scripting-engines/trace-event-python.c    |  2 +-
 tools/perf/util/sort.c                        |  2 +-
 tools/perf/util/symbol-elf.c                  |  2 +-
 tools/perf/util/symbol.c                      | 44 ++++++------
 tools/perf/util/thread-stack.c                |  4 +-
 tools/perf/util/thread.c                      |  4 +-
 tools/perf/util/unwind-libunwind-local.c      | 16 +++--
 tools/perf/util/unwind-libunwind.c            | 31 ++++----
 20 files changed, 175 insertions(+), 111 deletions(-)

diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
index b0d449f41650..feedd02b3b3d 100644
--- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
@@ -100,10 +100,11 @@ static PyObject *perf_sample_insn(PyObject *obj, PyObject *args)
 	if (!c)
 		return NULL;
 
-	if (c->sample->ip && !c->sample->insn_len &&
-	    c->al->thread->maps && c->al->thread->maps->machine)
-		script_fetch_insn(c->sample, c->al->thread, c->al->thread->maps->machine);
+	if (c->sample->ip && !c->sample->insn_len && c->al->thread->maps) {
+		struct machine *machine = maps__machine(c->al->thread->maps);
 
+		script_fetch_insn(c->sample, c->al->thread, machine);
+	}
 	if (!c->sample->insn_len)
 		Py_RETURN_NONE; /* N.B. This is a return statement */
 
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index fb67fd5ebd9f..8d2036f2f944 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -269,7 +269,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		len = al.map->end - addr;
 
 	/* Read the object code using perf */
-	ret_len = dso__data_read_offset(al.map->dso, thread->maps->machine,
+	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
 					al.addr, buf1, len);
 	if (ret_len != len) {
 		pr_debug("dso__data_read_offset failed\n");
diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index b72ee6822222..572ff38ceb0f 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -3139,7 +3139,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
 			continue;
 		case 'k':
 			if (browser->selection != NULL)
-				hists_browser__zoom_map(browser, browser->selection->maps->machine->vmlinux_map);
+				hists_browser__zoom_map(browser,
+					      maps__machine(browser->selection->maps)->vmlinux_map);
 			continue;
 		case 'V':
 			verbose = (verbose + 1) % 4;
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index a093a15f048f..0aa979f64565 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -1112,6 +1112,8 @@ int hist_entry__append_callchain(struct hist_entry *he, struct perf_sample *samp
 int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *node,
 			bool hide_unresolved)
 {
+	struct machine *machine = maps__machine(node->ms.maps);
+
 	al->maps = node->ms.maps;
 	al->map = node->ms.map;
 	al->sym = node->ms.sym;
@@ -1124,9 +1126,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 		if (al->map == NULL)
 			goto out;
 	}
-
-	if (al->maps == machine__kernel_maps(al->maps->machine)) {
-		if (machine__is_host(al->maps->machine)) {
+	if (al->maps == machine__kernel_maps(machine)) {
+		if (machine__is_host(machine)) {
 			al->cpumode = PERF_RECORD_MISC_KERNEL;
 			al->level = 'k';
 		} else {
@@ -1134,7 +1135,7 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 			al->level = 'g';
 		}
 	} else {
-		if (machine__is_host(al->maps->machine)) {
+		if (machine__is_host(machine)) {
 			al->cpumode = PERF_RECORD_MISC_USER;
 			al->level = '.';
 		} else if (perf_guest) {
diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
index e0d4f08839fb..1cfcfdd3cf52 100644
--- a/tools/perf/util/db-export.c
+++ b/tools/perf/util/db-export.c
@@ -181,7 +181,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
 	if (al->map) {
 		struct dso *dso = al->map->dso;
 
-		err = db_export__dso(dbe, dso, al->maps->machine);
+		err = db_export__dso(dbe, dso, maps__machine(al->maps));
 		if (err)
 			return err;
 		*dso_db_id = dso->db_id;
@@ -354,19 +354,21 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
 	};
 	struct thread *main_thread;
 	struct comm *comm = NULL;
+	struct machine *machine;
 	int err;
 
 	err = db_export__evsel(dbe, evsel);
 	if (err)
 		return err;
 
-	err = db_export__machine(dbe, al->maps->machine);
+	machine = maps__machine(al->maps);
+	err = db_export__machine(dbe, machine);
 	if (err)
 		return err;
 
-	main_thread = thread__main_thread(al->maps->machine, thread);
+	main_thread = thread__main_thread(machine, thread);
 
-	err = db_export__threads(dbe, thread, main_thread, al->maps->machine, &comm);
+	err = db_export__threads(dbe, thread, main_thread, machine, &comm);
 	if (err)
 		goto out_put;
 
@@ -380,7 +382,7 @@ int db_export__sample(struct db_export *dbe, union perf_event *event,
 		goto out_put;
 
 	if (dbe->cpr) {
-		struct call_path *cp = call_path_from_sample(dbe, al->maps->machine,
+		struct call_path *cp = call_path_from_sample(dbe, machine,
 							     thread, sample,
 							     evsel);
 		if (cp) {
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index 37beb7530288..fe2a0752a0f6 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -197,8 +197,12 @@ static const __u8 *dlfilter__insn(void *ctx, __u32 *len)
 		if (!al->thread && machine__resolve(d->machine, al, d->sample) < 0)
 			return NULL;
 
-		if (al->thread->maps && al->thread->maps->machine)
-			script_fetch_insn(d->sample, al->thread, al->thread->maps->machine);
+		if (al->thread->maps) {
+			struct machine *machine = maps__machine(al->thread->maps);
+
+			if (machine)
+				script_fetch_insn(d->sample, al->thread, machine);
+		}
 	}
 
 	if (!d->sample->insn_len)
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 1fa14598b916..f40cdd6ac126 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -572,7 +572,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 			     struct addr_location *al)
 {
 	struct maps *maps = thread->maps;
-	struct machine *machine = maps->machine;
+	struct machine *machine = maps__machine(maps);
 	bool load_map = false;
 
 	al->maps = maps;
@@ -637,7 +637,7 @@ struct map *thread__find_map_fb(struct thread *thread, u8 cpumode, u64 addr,
 				struct addr_location *al)
 {
 	struct map *map = thread__find_map(thread, cpumode, addr, al);
-	struct machine *machine = thread->maps->machine;
+	struct machine *machine = maps__machine(thread->maps);
 	u8 addr_cpumode = machine__addr_cpumode(machine, cpumode, addr);
 
 	if (map || addr_cpumode == cpumode)
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 3670136a0074..1b0e89cd5d99 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -241,7 +241,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 
 	if (h->cgroup) {
 		const char *cgrp_name = "unknown";
-		struct cgroup *cgrp = cgroup__find(h->ms.maps->machine->env,
+		struct cgroup *cgrp = cgroup__find(maps__machine(h->ms.maps)->env,
 						   h->cgroup);
 		if (cgrp != NULL)
 			cgrp_name = cgrp->name;
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 93a07079d174..446c0273259d 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2842,7 +2842,7 @@ static int find_prev_cpumode(struct ip_callchain *chain, struct thread *thread,
 static u64 get_leaf_frame_caller(struct perf_sample *sample,
 		struct thread *thread, int usr_idx)
 {
-	if (machine__normalized_is(thread->maps->machine, "arm64"))
+	if (machine__normalized_is(maps__machine(thread->maps), "arm64"))
 		return get_leaf_frame_caller_aarch64(sample, thread, usr_idx);
 	else
 		return 0;
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 7620cfa114d4..a99dbde656a2 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -234,7 +234,7 @@ bool __map__is_kernel(const struct map *map)
 {
 	if (!map->dso->kernel)
 		return false;
-	return machine__kernel_map(map__kmaps((struct map *)map)->machine) == map;
+	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
 }
 
 bool __map__is_extra_kernel_map(const struct map *map)
@@ -475,11 +475,15 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 	 * kcore may not either. However the trampoline object code is on the
 	 * main kernel map, so just use that instead.
 	 */
-	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps && kmap->kmaps->machine) {
-		struct map *kernel_map = machine__kernel_map(kmap->kmaps->machine);
+	if (kmap && is_entry_trampoline(kmap->name) && kmap->kmaps) {
+		struct machine *machine = maps__machine(kmap->kmaps);
 
-		if (kernel_map)
-			map = kernel_map;
+		if (machine) {
+			struct map *kernel_map = machine__kernel_map(machine);
+
+			if (kernel_map)
+				map = kernel_map;
+		}
 	}
 
 	if (!map->dso->adjust_symbols)
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 83ec126bcbe5..91bb015caede 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -13,7 +13,7 @@
 static void maps__init(struct maps *maps, struct machine *machine)
 {
 	maps->entries = RB_ROOT;
-	init_rwsem(&maps->lock);
+	init_rwsem(maps__lock(maps));
 	maps->machine = machine;
 	maps->last_search_by_name = NULL;
 	maps->nr_maps = 0;
@@ -32,7 +32,7 @@ static void __maps__free_maps_by_name(struct maps *maps)
 
 static int __maps__insert(struct maps *maps, struct map *map)
 {
-	struct rb_node **p = &maps->entries.rb_node;
+	struct rb_node **p = &maps__entries(maps)->rb_node;
 	struct rb_node *parent = NULL;
 	const u64 ip = map->start;
 	struct map_rb_node *m, *new_rb_node;
@@ -54,7 +54,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 	}
 
 	rb_link_node(&new_rb_node->rb_node, parent, p);
-	rb_insert_color(&new_rb_node->rb_node, &maps->entries);
+	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
 	map__get(map);
 	return 0;
 }
@@ -63,7 +63,7 @@ int maps__insert(struct maps *maps, struct map *map)
 {
 	int err;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	err = __maps__insert(maps, map);
 	if (err)
 		goto out;
@@ -84,10 +84,11 @@ int maps__insert(struct maps *maps, struct map *map)
 	 * If we already performed some search by name, then we need to add the just
 	 * inserted map and resort.
 	 */
-	if (maps->maps_by_name) {
-		if (maps->nr_maps > maps->nr_maps_allocated) {
-			int nr_allocate = maps->nr_maps * 2;
-			struct map **maps_by_name = realloc(maps->maps_by_name, nr_allocate * sizeof(map));
+	if (maps__maps_by_name(maps)) {
+		if (maps__nr_maps(maps) > maps->nr_maps_allocated) {
+			int nr_allocate = maps__nr_maps(maps) * 2;
+			struct map **maps_by_name = realloc(maps__maps_by_name(maps),
+							    nr_allocate * sizeof(map));
 
 			if (maps_by_name == NULL) {
 				__maps__free_maps_by_name(maps);
@@ -97,18 +98,18 @@ int maps__insert(struct maps *maps, struct map *map)
 
 			maps->maps_by_name = maps_by_name;
 			maps->nr_maps_allocated = nr_allocate;
-}
-		maps->maps_by_name[maps->nr_maps - 1] = map;
+		}
+		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map;
 		__maps__sort_by_name(maps);
 	}
  out:
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 	return err;
 }
 
 static void __maps__remove(struct maps *maps, struct map_rb_node *rb_node)
 {
-	rb_erase_init(&rb_node->rb_node, &maps->entries);
+	rb_erase_init(&rb_node->rb_node, maps__entries(maps));
 	map__put(rb_node->map);
 	free(rb_node);
 }
@@ -117,7 +118,7 @@ void maps__remove(struct maps *maps, struct map *map)
 {
 	struct map_rb_node *rb_node;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	if (maps->last_search_by_name == map)
 		maps->last_search_by_name = NULL;
 
@@ -125,9 +126,9 @@ void maps__remove(struct maps *maps, struct map *map)
 	assert(rb_node->map == map);
 	__maps__remove(maps, rb_node);
 	--maps->nr_maps;
-	if (maps->maps_by_name)
+	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 static void __maps__purge(struct maps *maps)
@@ -135,7 +136,7 @@ static void __maps__purge(struct maps *maps)
 	struct map_rb_node *pos, *next;
 
 	maps__for_each_entry_safe(maps, pos, next) {
-		rb_erase_init(&pos->rb_node,  &maps->entries);
+		rb_erase_init(&pos->rb_node,  maps__entries(maps));
 		map__put(pos->map);
 		free(pos);
 	}
@@ -143,9 +144,9 @@ static void __maps__purge(struct maps *maps)
 
 static void maps__exit(struct maps *maps)
 {
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 	__maps__purge(maps);
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 bool maps__empty(struct maps *maps)
@@ -170,6 +171,14 @@ void maps__delete(struct maps *maps)
 	free(maps);
 }
 
+struct maps *maps__get(struct maps *maps)
+{
+	if (maps)
+		refcount_inc(&maps->refcnt);
+
+	return maps;
+}
+
 void maps__put(struct maps *maps)
 {
 	if (maps && refcount_dec_and_test(&maps->refcnt))
@@ -195,7 +204,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 	struct symbol *sym;
 	struct map_rb_node *pos;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, pos) {
 		sym = map__find_symbol_by_name(pos->map, name);
@@ -213,7 +222,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 
 	sym = NULL;
 out:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 	return sym;
 }
 
@@ -238,7 +247,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 	size_t printed = 0;
 	struct map_rb_node *pos;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, pos) {
 		printed += fprintf(fp, "Map:");
@@ -249,7 +258,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 		}
 	}
 
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 
 	return printed;
 }
@@ -260,9 +269,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	struct rb_node *next, *first;
 	int err = 0;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 
-	root = &maps->entries;
+	root = maps__entries(maps);
 
 	/*
 	 * Find first map where end > map->start.
@@ -358,7 +367,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 	err = 0;
 out:
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 	return err;
 }
 
@@ -371,7 +380,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
 	int err;
 	struct map_rb_node *rb_node;
 
-	down_read(&parent->lock);
+	down_read(maps__lock(parent));
 
 	maps__for_each_entry(parent, rb_node) {
 		struct map *new = map__clone(rb_node->map);
@@ -394,7 +403,7 @@ int maps__clone(struct thread *thread, struct maps *parent)
 
 	err = 0;
 out_unlock:
-	up_read(&parent->lock);
+	up_read(maps__lock(parent));
 	return err;
 }
 
@@ -415,9 +424,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	struct map_rb_node *m;
 
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
-	p = maps->entries.rb_node;
+	p = maps__entries(maps)->rb_node;
 	while (p != NULL) {
 		m = rb_entry(p, struct map_rb_node, rb_node);
 		if (ip < m->map->start)
@@ -430,13 +439,13 @@ struct map *maps__find(struct maps *maps, u64 ip)
 
 	m = NULL;
 out:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 	return m ? m->map : NULL;
 }
 
 struct map_rb_node *maps__first(struct maps *maps)
 {
-	struct rb_node *first = rb_first(&maps->entries);
+	struct rb_node *first = rb_first(maps__entries(maps));
 
 	if (first)
 		return rb_entry(first, struct map_rb_node, rb_node);
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index 512746ec0f9a..bde3390c7096 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -43,7 +43,7 @@ struct maps {
 	unsigned int	 nr_maps_allocated;
 #ifdef HAVE_LIBUNWIND_SUPPORT
 	void				*addr_space;
-	struct unwind_libunwind_ops	*unwind_libunwind_ops;
+	const struct unwind_libunwind_ops *unwind_libunwind_ops;
 #endif
 };
 
@@ -58,20 +58,51 @@ struct kmap {
 struct maps *maps__new(struct machine *machine);
 void maps__delete(struct maps *maps);
 bool maps__empty(struct maps *maps);
+int maps__clone(struct thread *thread, struct maps *parent);
+
+struct maps *maps__get(struct maps *maps);
+void maps__put(struct maps *maps);
 
-static inline struct maps *maps__get(struct maps *maps)
+static inline struct rb_root *maps__entries(struct maps *maps)
 {
-	if (maps)
-		refcount_inc(&maps->refcnt);
-	return maps;
+	return &maps->entries;
 }
 
-void maps__put(struct maps *maps);
-int maps__clone(struct thread *thread, struct maps *parent);
+static inline struct machine *maps__machine(struct maps *maps)
+{
+	return maps->machine;
+}
+
+static inline struct rw_semaphore *maps__lock(struct maps *maps)
+{
+	return &maps->lock;
+}
+
+static inline struct map **maps__maps_by_name(struct maps *maps)
+{
+	return maps->maps_by_name;
+}
+
+static inline unsigned int maps__nr_maps(const struct maps *maps)
+{
+	return maps->nr_maps;
+}
+
+#ifdef HAVE_LIBUNWIND_SUPPORT
+static inline void *maps__addr_space(struct maps *maps)
+{
+	return maps->addr_space;
+}
+
+static inline const struct unwind_libunwind_ops *maps__unwind_libunwind_ops(const struct maps *maps)
+{
+	return maps->unwind_libunwind_ops;
+}
+#endif
+
 size_t maps__fprintf(struct maps *maps, FILE *fp);
 
 int maps__insert(struct maps *maps, struct map *map);
-
 void maps__remove(struct maps *maps, struct map *map);
 
 struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp);
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index 0f4ef61f2ffa..e5cc18f6fcda 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -1288,7 +1288,7 @@ static void python_export_sample_table(struct db_export *dbe,
 
 	tuple_set_d64(t, 0, es->db_id);
 	tuple_set_d64(t, 1, es->evsel->db_id);
-	tuple_set_d64(t, 2, es->al->maps->machine->db_id);
+	tuple_set_d64(t, 2, maps__machine(es->al->maps)->db_id);
 	tuple_set_d64(t, 3, es->al->thread->db_id);
 	tuple_set_d64(t, 4, es->comm_db_id);
 	tuple_set_d64(t, 5, es->dso_db_id);
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 093a0c8b2e3d..e04d9bddba11 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -762,7 +762,7 @@ static int hist_entry__cgroup_snprintf(struct hist_entry *he,
 	const char *cgrp_name = "N/A";
 
 	if (he->cgroup) {
-		struct cgroup *cgrp = cgroup__find(he->ms.maps->machine->env,
+		struct cgroup *cgrp = cgroup__find(maps__machine(he->ms.maps)->env,
 						   he->cgroup);
 		if (cgrp != NULL)
 			cgrp_name = cgrp->name;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 325fbeea8dff..ccdafc3971ac 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1422,7 +1422,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 * we still are sure to have a reference to this DSO via
 		 * *curr_map->dso.
 		 */
-		dsos__add(&kmaps->machine->dsos, curr_dso);
+		dsos__add(&maps__machine(kmaps)->dsos, curr_dso);
 		/* kmaps already got it */
 		map__put(curr_map);
 		dso__set_loaded(curr_dso);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index d9aa41e20d5f..efd047bab373 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -275,7 +275,7 @@ void maps__fixup_end(struct maps *maps)
 {
 	struct map_rb_node *prev = NULL, *curr;
 
-	down_write(&maps->lock);
+	down_write(maps__lock(maps));
 
 	maps__for_each_entry(maps, curr) {
 		if (prev != NULL && !prev->map->end)
@@ -291,7 +291,7 @@ void maps__fixup_end(struct maps *maps)
 	if (curr && !curr->map->end)
 		curr->map->end = ~0ULL;
 
-	up_write(&maps->lock);
+	up_write(maps__lock(maps));
 }
 
 struct symbol *symbol__new(u64 start, u64 len, u8 binding, u8 type, const char *name)
@@ -844,7 +844,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 	if (!kmaps)
 		return -1;
 
-	machine = kmaps->machine;
+	machine = maps__machine(kmaps);
 
 	x86_64 = machine__is(machine, "x86_64");
 
@@ -968,7 +968,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 
 	if (curr_map != initial_map &&
 	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
-	    machine__is_default_guest(kmaps->machine)) {
+	    machine__is_default_guest(maps__machine(kmaps))) {
 		dso__set_loaded(curr_map->dso);
 	}
 
@@ -1365,7 +1365,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	if (!kmaps)
 		return -EINVAL;
 
-	machine = kmaps->machine;
+	machine = maps__machine(kmaps);
 
 	/* This function requires that the map is the kernel map */
 	if (!__map__is_kernel(map))
@@ -1892,7 +1892,7 @@ int dso__load(struct dso *dso, struct map *map)
 		else if (dso->kernel == DSO_SPACE__KERNEL_GUEST)
 			ret = dso__load_guest_kernel_sym(dso, map);
 
-		machine = map__kmaps(map)->machine;
+		machine = maps__machine(map__kmaps(map));
 		if (machine__is(machine, "x86_64"))
 			machine__map_x86_64_entry_trampolines(machine, dso);
 		goto out;
@@ -2057,32 +2057,32 @@ static int map__strcmp_name(const void *name, const void *b)
 
 void __maps__sort_by_name(struct maps *maps)
 {
-	qsort(maps->maps_by_name, maps->nr_maps, sizeof(struct map *), map__strcmp);
+	qsort(maps__maps_by_name(maps), maps__nr_maps(maps), sizeof(struct map *), map__strcmp);
 }
 
 static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 {
 	struct map_rb_node *rb_node;
-	struct map **maps_by_name = realloc(maps->maps_by_name,
-					    maps->nr_maps * sizeof(struct map *));
+	struct map **maps_by_name = realloc(maps__maps_by_name(maps),
+					    maps__nr_maps(maps) * sizeof(struct map *));
 	int i = 0;
 
 	if (maps_by_name == NULL)
 		return -1;
 
-	up_read(&maps->lock);
-	down_write(&maps->lock);
+	up_read(maps__lock(maps));
+	down_write(maps__lock(maps));
 
 	maps->maps_by_name = maps_by_name;
-	maps->nr_maps_allocated = maps->nr_maps;
+	maps->nr_maps_allocated = maps__nr_maps(maps);
 
 	maps__for_each_entry(maps, rb_node)
 		maps_by_name[i++] = rb_node->map;
 
 	__maps__sort_by_name(maps);
 
-	up_write(&maps->lock);
-	down_read(&maps->lock);
+	up_write(maps__lock(maps));
+	down_read(maps__lock(maps));
 
 	return 0;
 }
@@ -2091,11 +2091,12 @@ static struct map *__maps__find_by_name(struct maps *maps, const char *name)
 {
 	struct map **mapp;
 
-	if (maps->maps_by_name == NULL &&
+	if (maps__maps_by_name(maps) == NULL &&
 	    map__groups__sort_by_name_from_rbtree(maps))
 		return NULL;
 
-	mapp = bsearch(name, maps->maps_by_name, maps->nr_maps, sizeof(*mapp), map__strcmp_name);
+	mapp = bsearch(name, maps__maps_by_name(maps), maps__nr_maps(maps),
+		       sizeof(*mapp), map__strcmp_name);
 	if (mapp)
 		return *mapp;
 	return NULL;
@@ -2106,9 +2107,10 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	struct map_rb_node *rb_node;
 	struct map *map;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
-	if (maps->last_search_by_name && strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
+	if (maps->last_search_by_name &&
+	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
 		map = maps->last_search_by_name;
 		goto out_unlock;
 	}
@@ -2118,7 +2120,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	 * made.
 	 */
 	map = __maps__find_by_name(maps, name);
-	if (map || maps->maps_by_name != NULL)
+	if (map || maps__maps_by_name(maps) != NULL)
 		goto out_unlock;
 
 	/* Fallback to traversing the rbtree... */
@@ -2132,7 +2134,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 	map = NULL;
 
 out_unlock:
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 	return map;
 }
 
@@ -2384,7 +2386,7 @@ static int dso__load_guest_kernel_sym(struct dso *dso, struct map *map)
 {
 	int err;
 	const char *kallsyms_filename;
-	struct machine *machine = map__kmaps(map)->machine;
+	struct machine *machine = maps__machine(map__kmaps(map));
 	char path[PATH_MAX];
 
 	if (machine->kallsyms_filename) {
diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c
index 1b992bbba4e8..4b85c1728012 100644
--- a/tools/perf/util/thread-stack.c
+++ b/tools/perf/util/thread-stack.c
@@ -155,8 +155,8 @@ static int thread_stack__init(struct thread_stack *ts, struct thread *thread,
 		ts->br_stack_sz = br_stack_sz;
 	}
 
-	if (thread->maps && thread->maps->machine) {
-		struct machine *machine = thread->maps->machine;
+	if (thread->maps && maps__machine(thread->maps)) {
+		struct machine *machine = maps__machine(thread->maps);
 		const char *arch = perf_env__arch(machine->env);
 
 		ts->kernel_start = machine__kernel_start(machine);
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 24e53bd55f7d..292585a52281 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -362,7 +362,7 @@ static int __thread__prepare_access(struct thread *thread)
 	struct maps *maps = thread->maps;
 	struct map_rb_node *rb_node;
 
-	down_read(&maps->lock);
+	down_read(maps__lock(maps));
 
 	maps__for_each_entry(maps, rb_node) {
 		err = unwind__prepare_access(thread->maps, rb_node->map, &initialized);
@@ -370,7 +370,7 @@ static int __thread__prepare_access(struct thread *thread)
 			break;
 	}
 
-	up_read(&maps->lock);
+	up_read(maps__lock(maps));
 
 	return err;
 }
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 81b6bd6e1536..952c5ee66fe7 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -665,24 +665,26 @@ static unw_accessors_t accessors = {
 
 static int _unwind__prepare_access(struct maps *maps)
 {
-	maps->addr_space = unw_create_addr_space(&accessors, 0);
-	if (!maps->addr_space) {
+	void *addr_space = unw_create_addr_space(&accessors, 0);
+
+	maps->addr_space = addr_space;
+	if (!addr_space) {
 		pr_err("unwind: Can't create unwind address space.\n");
 		return -ENOMEM;
 	}
 
-	unw_set_caching_policy(maps->addr_space, UNW_CACHE_GLOBAL);
+	unw_set_caching_policy(addr_space, UNW_CACHE_GLOBAL);
 	return 0;
 }
 
 static void _unwind__flush_access(struct maps *maps)
 {
-	unw_flush_cache(maps->addr_space, 0, 0);
+	unw_flush_cache(maps__addr_space(maps), 0, 0);
 }
 
 static void _unwind__finish_access(struct maps *maps)
 {
-	unw_destroy_addr_space(maps->addr_space);
+	unw_destroy_addr_space(maps__addr_space(maps));
 }
 
 static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
@@ -707,7 +709,7 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
 	 */
 	if (max_stack - 1 > 0) {
 		WARN_ONCE(!ui->thread, "WARNING: ui->thread is NULL");
-		addr_space = ui->thread->maps->addr_space;
+		addr_space = maps__addr_space(ui->thread->maps);
 
 		if (addr_space == NULL)
 			return -1;
@@ -757,7 +759,7 @@ static int _unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 	struct unwind_info ui = {
 		.sample       = data,
 		.thread       = thread,
-		.machine      = thread->maps->machine,
+		.machine      = maps__machine(thread->maps),
 		.best_effort  = best_effort
 	};
 
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index 509c287ee762..c14f04082377 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -22,12 +22,13 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 	const char *arch;
 	enum dso_type dso_type;
 	struct unwind_libunwind_ops *ops = local_unwind_libunwind_ops;
+	struct machine *machine;
 	int err;
 
 	if (!dwarf_callchain_users)
 		return 0;
 
-	if (maps->addr_space) {
+	if (maps__addr_space(maps)) {
 		pr_debug("unwind: thread map already set, dso=%s\n",
 			 map->dso->name);
 		if (initialized)
@@ -35,15 +36,16 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 		return 0;
 	}
 
+	machine = maps__machine(maps);
 	/* env->arch is NULL for live-mode (i.e. perf top) */
-	if (!maps->machine->env || !maps->machine->env->arch)
+	if (!machine->env || !machine->env->arch)
 		goto out_register;
 
-	dso_type = dso__type(map->dso, maps->machine);
+	dso_type = dso__type(map->dso, machine);
 	if (dso_type == DSO__TYPE_UNKNOWN)
 		return 0;
 
-	arch = perf_env__arch(maps->machine->env);
+	arch = perf_env__arch(machine->env);
 
 	if (!strcmp(arch, "x86")) {
 		if (dso_type != DSO__TYPE_64BIT)
@@ -60,7 +62,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 out_register:
 	unwind__register_ops(maps, ops);
 
-	err = maps->unwind_libunwind_ops->prepare_access(maps);
+	err = maps__unwind_libunwind_ops(maps)->prepare_access(maps);
 	if (initialized)
 		*initialized = err ? false : true;
 	return err;
@@ -68,14 +70,18 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 
 void unwind__flush_access(struct maps *maps)
 {
-	if (maps->unwind_libunwind_ops)
-		maps->unwind_libunwind_ops->flush_access(maps);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
+
+	if (ops)
+		ops->flush_access(maps);
 }
 
 void unwind__finish_access(struct maps *maps)
 {
-	if (maps->unwind_libunwind_ops)
-		maps->unwind_libunwind_ops->finish_access(maps);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(maps);
+
+	if (ops)
+		ops->finish_access(maps);
 }
 
 int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
@@ -83,8 +89,9 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 			 struct perf_sample *data, int max_stack,
 			 bool best_effort)
 {
-	if (thread->maps->unwind_libunwind_ops)
-		return thread->maps->unwind_libunwind_ops->get_entries(cb, arg, thread, data,
-								       max_stack, best_effort);
+	const struct unwind_libunwind_ops *ops = maps__unwind_libunwind_ops(thread->maps);
+
+	if (ops)
+		return ops->get_entries(cb, arg, thread, data, max_stack, best_effort);
 	return 0;
 }
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 04/17] perf map: Add accessor for dso
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (2 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 03/17] perf maps: Add functions to access maps Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 05/17] perf map: Add accessor for start and end Ian Rogers
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Later changes will add reference count checking for struct map, with
dso being its most frequently accessed member. Add an accessor so that
the reference count check is only needed in one place.

Additional changes (a short sketch of the accessor follows this list):
 - add a dso variable to avoid repeated map__dso() calls.
 - in builtin-mem.c's dump_raw_samples(), the code only partially tested
   for dso == NULL; make the handling of a possible NULL dso consistent.
 - in thread.c's thread__memcpy(), fix the indentation to use tabs rather
   than spaces.
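
A sketch of the accessor and a typical converted call site; the
authoritative definition is the tools/perf/util/map.h change in this patch,
and the one-line body below is assumed from it:

	static inline struct dso *map__dso(struct map *map)
	{
		return map->dso;
	}

	/* e.g. builtin-inject.c caches the result rather than re-reading map->dso: */
	struct dso *dso = map__dso(al.map);

	if (!dso->hit) {
		dso->hit = 1;
		dso__inject_build_id(dso, tool, machine,
				     sample->cpumode, al.map->flags);
	}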

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-annotate.c                 | 11 ++-
 tools/perf/builtin-buildid-list.c             |  2 +-
 tools/perf/builtin-inject.c                   |  8 +-
 tools/perf/builtin-kallsyms.c                 |  4 +-
 tools/perf/builtin-mem.c                      | 10 +-
 tools/perf/builtin-report.c                   |  7 +-
 tools/perf/builtin-script.c                   | 19 ++--
 tools/perf/builtin-top.c                      | 11 ++-
 tools/perf/builtin-trace.c                    |  2 +-
 .../scripts/python/Perf-Trace-Util/Context.c  |  6 +-
 tools/perf/tests/code-reading.c               | 28 +++---
 tools/perf/tests/hists_common.c               |  8 +-
 tools/perf/tests/hists_cumulate.c             |  4 +-
 tools/perf/tests/hists_filter.c               |  4 +-
 tools/perf/tests/hists_output.c               |  2 +-
 tools/perf/tests/maps.c                       |  2 +-
 tools/perf/tests/symbols.c                    |  6 +-
 tools/perf/tests/vmlinux-kallsyms.c           | 13 ++-
 tools/perf/ui/browsers/annotate.c             |  9 +-
 tools/perf/ui/browsers/hists.c                | 16 ++--
 tools/perf/ui/browsers/map.c                  |  4 +-
 tools/perf/util/annotate.c                    | 16 ++--
 tools/perf/util/auxtrace.c                    |  2 +-
 tools/perf/util/block-info.c                  |  4 +-
 tools/perf/util/bpf-event.c                   | 10 +-
 tools/perf/util/build-id.c                    |  2 +-
 tools/perf/util/callchain.c                   |  6 +-
 tools/perf/util/data-convert-json.c           | 10 +-
 tools/perf/util/db-export.c                   |  4 +-
 tools/perf/util/dlfilter.c                    | 10 +-
 tools/perf/util/event.c                       |  9 +-
 tools/perf/util/evsel_fprintf.c               |  2 +-
 tools/perf/util/hist.c                        | 10 +-
 tools/perf/util/intel-pt.c                    | 45 +++++----
 tools/perf/util/machine.c                     | 70 ++++++++------
 tools/perf/util/map.c                         | 96 ++++++++++++-------
 tools/perf/util/map.h                         |  7 +-
 tools/perf/util/maps.c                        |  7 +-
 tools/perf/util/probe-event.c                 | 30 +++---
 .../util/scripting-engines/trace-event-perl.c | 10 +-
 .../scripting-engines/trace-event-python.c    | 16 ++--
 tools/perf/util/sort.c                        | 49 +++++-----
 tools/perf/util/symbol-elf.c                  |  2 +-
 tools/perf/util/symbol.c                      | 55 +++++++----
 tools/perf/util/synthetic-events.c            | 12 +--
 tools/perf/util/thread.c                      | 25 +++--
 tools/perf/util/unwind-libdw.c                | 10 +-
 tools/perf/util/vdso.c                        |  2 +-
 48 files changed, 404 insertions(+), 293 deletions(-)

diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 4750fac7bf93..9e220159e320 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -205,7 +205,7 @@ static int process_branch_callback(struct evsel *evsel,
 		return 0;
 
 	if (a.map != NULL)
-		a.map->dso->hit = 1;
+		map__dso(a.map)->hit = 1;
 
 	hist__account_cycles(sample->branch_stack, al, sample, false, NULL);
 
@@ -235,10 +235,11 @@ static int evsel__add_sample(struct evsel *evsel, struct perf_sample *sample,
 		 * the DSO?
 		 */
 		if (al->sym != NULL) {
-			rb_erase_cached(&al->sym->rb_node,
-				 &al->map->dso->symbols);
+			struct dso *dso = map__dso(al->map);
+
+			rb_erase_cached(&al->sym->rb_node, &dso->symbols);
 			symbol__delete(al->sym);
-			dso__reset_find_symbol_cache(al->map->dso);
+			dso__reset_find_symbol_cache(dso);
 		}
 		return 0;
 	}
@@ -320,7 +321,7 @@ static void hists__find_annotations(struct hists *hists,
 		struct hist_entry *he = rb_entry(nd, struct hist_entry, rb_node);
 		struct annotation *notes;
 
-		if (he->ms.sym == NULL || he->ms.map->dso->annotate_warned)
+		if (he->ms.sym == NULL || map__dso(he->ms.map)->annotate_warned)
 			goto find_next;
 
 		if (ann->sym_hist_filter &&
diff --git a/tools/perf/builtin-buildid-list.c b/tools/perf/builtin-buildid-list.c
index 00bfe89f0b5d..cad9ed44ce7c 100644
--- a/tools/perf/builtin-buildid-list.c
+++ b/tools/perf/builtin-buildid-list.c
@@ -24,7 +24,7 @@
 
 static int buildid__map_cb(struct map *map, void *arg __maybe_unused)
 {
-	const struct dso *dso = map->dso;
+	const struct dso *dso = map__dso(map);
 	char bid_buf[SBUILD_ID_SIZE];
 
 	memset(bid_buf, 0, sizeof(bid_buf));
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 10bb1d494258..8f6909dd8a54 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -753,9 +753,11 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
 	}
 
 	if (thread__find_map(thread, sample->cpumode, sample->ip, &al)) {
-		if (!al.map->dso->hit) {
-			al.map->dso->hit = 1;
-			dso__inject_build_id(al.map->dso, tool, machine,
+		struct dso *dso = map__dso(al.map);
+
+		if (!dso->hit) {
+			dso->hit = 1;
+			dso__inject_build_id(dso, tool, machine,
 					     sample->cpumode, al.map->flags);
 		}
 	}
diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
index c08ee81529e8..5638ca4dbd8e 100644
--- a/tools/perf/builtin-kallsyms.c
+++ b/tools/perf/builtin-kallsyms.c
@@ -28,6 +28,7 @@ static int __cmd_kallsyms(int argc, const char **argv)
 
 	for (i = 0; i < argc; ++i) {
 		struct map *map;
+		const struct dso *dso;
 		struct symbol *symbol = machine__find_kernel_symbol_by_name(machine, argv[i], &map);
 
 		if (symbol == NULL) {
@@ -35,8 +36,9 @@ static int __cmd_kallsyms(int argc, const char **argv)
 			continue;
 		}
 
+		dso = map__dso(map);
 		printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
-			symbol->name, map->dso->short_name, map->dso->long_name,
+			symbol->name, dso->short_name, dso->long_name,
 			map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
 			symbol->start, symbol->end);
 	}
diff --git a/tools/perf/builtin-mem.c b/tools/perf/builtin-mem.c
index dedd612eae5e..1e27188b0de1 100644
--- a/tools/perf/builtin-mem.c
+++ b/tools/perf/builtin-mem.c
@@ -200,6 +200,7 @@ dump_raw_samples(struct perf_tool *tool,
 	struct addr_location al;
 	const char *fmt, *field_sep;
 	char str[PAGE_SIZE_NAME_LEN];
+	struct dso *dso = NULL;
 
 	if (machine__resolve(machine, &al, sample) < 0) {
 		fprintf(stderr, "problem processing %d event, skipping it.\n",
@@ -210,8 +211,11 @@ dump_raw_samples(struct perf_tool *tool,
 	if (al.filtered || (mem->hide_unresolved && al.sym == NULL))
 		goto out_put;
 
-	if (al.map != NULL)
-		al.map->dso->hit = 1;
+	if (al.map != NULL) {
+		dso = map__dso(al.map);
+		if (dso)
+			dso->hit = 1;
+	}
 
 	field_sep = symbol_conf.field_sep;
 	if (field_sep) {
@@ -252,7 +256,7 @@ dump_raw_samples(struct perf_tool *tool,
 		symbol_conf.field_sep,
 		sample->data_src,
 		symbol_conf.field_sep,
-		al.map ? (al.map->dso ? al.map->dso->long_name : "???") : "???",
+		dso ? dso->long_name : "???",
 		al.sym ? al.sym->name : "???");
 out_put:
 	addr_location__put(&al);
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index c453b7fa7418..02ca87c13e91 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -314,7 +314,7 @@ static int process_sample_event(struct perf_tool *tool,
 	}
 
 	if (al.map != NULL)
-		al.map->dso->hit = 1;
+		map__dso(al.map)->hit = 1;
 
 	if (ui__has_annotation() || rep->symbol_ipc || rep->total_cycles_mode) {
 		hist__account_cycles(sample->branch_stack, &al, sample,
@@ -603,7 +603,7 @@ static void report__warn_kptr_restrict(const struct report *rep)
 		return;
 
 	if (kernel_map == NULL ||
-	    (kernel_map->dso->hit &&
+	     (map__dso(kernel_map)->hit &&
 	     (kernel_kmap->ref_reloc_sym == NULL ||
 	      kernel_kmap->ref_reloc_sym->addr == 0))) {
 		const char *desc =
@@ -844,6 +844,7 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 
 	maps__for_each_entry(maps, rb_node) {
 		struct map *map = rb_node->map;
+		const struct dso *dso = map__dso(map);
 
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
 				   indent, "", map->start, map->end,
@@ -852,7 +853,7 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 				   map->prot & PROT_EXEC ? 'x' : '-',
 				   map->flags & MAP_SHARED ? 's' : 'p',
 				   map->pgoff,
-				   map->dso->id.ino, map->dso->name);
+				   dso->id.ino, dso->name);
 	}
 
 	return printed;
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index 976f8bfe099c..9c7eb900ff7c 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -1011,11 +1011,11 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
 		to   = entries[i].to;
 
 		if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
-		    !alf.map->dso->adjust_symbols)
+		    !map__dso(alf.map)->adjust_symbols)
 			from = map__map_ip(alf.map, from);
 
 		if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
-		    !alt.map->dso->adjust_symbols)
+		    !map__dso(alt.map)->adjust_symbols)
 			to = map__map_ip(alt.map, to);
 
 		printed += fprintf(fp, " 0x%"PRIx64, from);
@@ -1044,6 +1044,7 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 	long offset, len;
 	struct addr_location al;
 	bool kernel;
+	struct dso *dso;
 
 	if (!start || !end)
 		return 0;
@@ -1074,11 +1075,12 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 		return 0;
 	}
 
-	if (!thread__find_map(thread, *cpumode, start, &al) || !al.map->dso) {
+	if (!thread__find_map(thread, *cpumode, start, &al) || !map__dso(al.map)) {
 		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
 		return 0;
 	}
+	dso = map__dso(al.map);
-	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR) {
+	if (dso->data.status == DSO_DATA_STATUS_ERROR) {
 		pr_debug("\tcannot resolve %" PRIx64 "-%" PRIx64 "\n", start, end);
 		return 0;
 	}
@@ -1087,10 +1089,10 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 	map__load(al.map);
 
 	offset = al.map->map_ip(al.map, start);
-	len = dso__data_read_offset(al.map->dso, machine, offset, (u8 *)buffer,
+	len = dso__data_read_offset(dso, machine, offset, (u8 *)buffer,
 				    end - start + MAXINSN);
 
-	*is64bit = al.map->dso->is_64_bit;
+	*is64bit = dso->is_64_bit;
 	if (len <= 0)
 		pr_debug("\tcannot fetch code for block at %" PRIx64 "-%" PRIx64 "\n",
 			start, end);
@@ -1104,10 +1106,11 @@ static int map__fprintf_srccode(struct map *map, u64 addr, FILE *fp, struct srcc
 	unsigned line;
 	int len;
 	char *srccode;
+	struct dso *dso = map__dso(map);
 
-	if (!map || !map->dso)
+	if (!map || !dso)
 		return 0;
-	srcfile = get_srcline_split(map->dso,
+	srcfile = get_srcline_split(dso,
 				    map__rip_2objdump(map, addr),
 				    &line);
 	if (!srcfile)
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index d4b5b02bab73..5010eee8fbae 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -114,6 +114,7 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
 	struct symbol *sym;
 	struct annotation *notes;
 	struct map *map;
+	struct dso *dso;
 	int err = -1;
 
 	if (!he || !he->ms.sym)
@@ -123,12 +124,12 @@ static int perf_top__parse_source(struct perf_top *top, struct hist_entry *he)
 
 	sym = he->ms.sym;
 	map = he->ms.map;
+	dso = map__dso(map);
 
 	/*
 	 * We can't annotate with just /proc/kallsyms
 	 */
-	if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
-	    !dso__is_kcore(map->dso)) {
+	if (dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS && !dso__is_kcore(dso)) {
 		pr_err("Can't annotate %s: No vmlinux file was found in the "
 		       "path\n", sym->name);
 		sleep(1);
@@ -169,6 +170,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 {
 	struct utsname uts;
 	int err = uname(&uts);
+	struct dso *dso = map__dso(map);
 
 	ui__warning("Out of bounds address found:\n\n"
 		    "Addr:   %" PRIx64 "\n"
@@ -180,7 +182,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 		    "Tools:  %s\n\n"
 		    "Not all samples will be on the annotation output.\n\n"
 		    "Please report to linux-kernel@vger.kernel.org\n",
-		    ip, map->dso->long_name, dso__symtab_origin(map->dso),
+		    ip, dso->long_name, dso__symtab_origin(dso),
 		    map->start, map->end, sym->start, sym->end,
 		    sym->binding == STB_GLOBAL ? 'g' :
 		    sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
@@ -810,7 +812,8 @@ static void perf_event__process_sample(struct perf_tool *tool,
 		    __map__is_kernel(al.map) && map__has_symbols(al.map)) {
 			if (symbol_conf.vmlinux_name) {
 				char serr[256];
-				dso__strerror_load(al.map->dso, serr, sizeof(serr));
+
+				dso__strerror_load(map__dso(al.map), serr, sizeof(serr));
 				ui__warning("The %s file can't be used: %s\n%s",
 					    symbol_conf.vmlinux_name, serr, msg);
 			} else {
diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index b363c609818b..72ef0bebb06b 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2863,7 +2863,7 @@ static void print_location(FILE *f, struct perf_sample *sample,
 {
 
 	if ((verbose > 0 || print_dso) && al->map)
-		fprintf(f, "%s@", al->map->dso->long_name);
+		fprintf(f, "%s@", map__dso(al->map)->long_name);
 
 	if ((verbose > 0 || print_sym) && al->sym)
 		fprintf(f, "%s+0x%" PRIx64, al->sym->name,
diff --git a/tools/perf/scripts/python/Perf-Trace-Util/Context.c b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
index feedd02b3b3d..53b1587db403 100644
--- a/tools/perf/scripts/python/Perf-Trace-Util/Context.c
+++ b/tools/perf/scripts/python/Perf-Trace-Util/Context.c
@@ -145,6 +145,7 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
 	char *srccode = NULL;
 	PyObject *result;
 	struct map *map;
+	struct dso *dso;
 	int len = 0;
 	u64 addr;
 
@@ -153,9 +154,10 @@ static PyObject *perf_sample_src(PyObject *obj, PyObject *args, bool get_srccode
 
 	map = c->al->map;
 	addr = c->al->addr;
+	dso = map ? map__dso(map) : NULL;
 
-	if (map && map->dso)
-		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
+	if (dso)
+		srcfile = get_srcline_split(dso, map__rip_2objdump(map, addr), &line);
 
 	if (get_srccode) {
 		if (srcfile)
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index 8d2036f2f944..936c61546e64 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -237,10 +237,11 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 	char decomp_name[KMOD_DECOMP_LEN];
 	bool decomp = false;
 	int ret, err = 0;
+	struct dso *dso;
 
 	pr_debug("Reading object code for memory address: %#"PRIx64"\n", addr);
 
-	if (!thread__find_map(thread, cpumode, addr, &al) || !al.map->dso) {
+	if (!thread__find_map(thread, cpumode, addr, &al) || !map__dso(al.map)) {
 		if (cpumode == PERF_RECORD_MISC_HYPERVISOR) {
 			pr_debug("Hypervisor address can not be resolved - skipping\n");
 			goto out;
@@ -250,11 +251,10 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		err = -1;
 		goto out;
 	}
+	dso = map__dso(al.map);
+	pr_debug("File is: %s\n", dso->long_name);
 
-	pr_debug("File is: %s\n", al.map->dso->long_name);
-
-	if (al.map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS &&
-	    !dso__is_kcore(al.map->dso)) {
+	if (dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS && !dso__is_kcore(dso)) {
 		pr_debug("Unexpected kernel address - skipping\n");
 		goto out;
 	}
@@ -269,7 +269,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		len = al.map->end - addr;
 
 	/* Read the object code using perf */
-	ret_len = dso__data_read_offset(al.map->dso, maps__machine(thread->maps),
+	ret_len = dso__data_read_offset(dso, maps__machine(thread->maps),
 					al.addr, buf1, len);
 	if (ret_len != len) {
 		pr_debug("dso__data_read_offset failed\n");
@@ -287,7 +287,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 	}
 
 	/* objdump struggles with kcore - try each map only once */
-	if (dso__is_kcore(al.map->dso)) {
+	if (dso__is_kcore(dso)) {
 		size_t d;
 
 		for (d = 0; d < state->done_cnt; d++) {
@@ -304,9 +304,9 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		state->done[state->done_cnt++] = al.map->start;
 	}
 
-	objdump_name = al.map->dso->long_name;
-	if (dso__needs_decompress(al.map->dso)) {
-		if (dso__decompress_kmodule_path(al.map->dso, objdump_name,
+	objdump_name = dso->long_name;
+	if (dso__needs_decompress(dso)) {
+		if (dso__decompress_kmodule_path(dso, objdump_name,
 						 decomp_name,
 						 sizeof(decomp_name)) < 0) {
 			pr_debug("decompression failed\n");
@@ -335,7 +335,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 			len -= ret;
 			if (len) {
 				pr_debug("Reducing len to %zu\n", len);
-			} else if (dso__is_kcore(al.map->dso)) {
+			} else if (dso__is_kcore(dso)) {
 				/*
 				 * objdump cannot handle very large segments
 				 * that may be found in kcore.
@@ -572,6 +572,7 @@ static int do_test_code_reading(bool try_kcore)
 	pid_t pid;
 	struct map *map;
 	bool have_vmlinux, have_kcore, excl_kernel = false;
+	struct dso *dso;
 
 	pid = getpid();
 
@@ -595,8 +596,9 @@ static int do_test_code_reading(bool try_kcore)
 		pr_debug("map__load failed\n");
 		goto out_err;
 	}
-	have_vmlinux = dso__is_vmlinux(map->dso);
-	have_kcore = dso__is_kcore(map->dso);
+	dso = map__dso(map);
+	have_vmlinux = dso__is_vmlinux(dso);
+	have_kcore = dso__is_kcore(dso);
 
 	/* 2nd time through we just try kcore */
 	if (try_kcore && !have_kcore)
diff --git a/tools/perf/tests/hists_common.c b/tools/perf/tests/hists_common.c
index 6f34d08b84e5..745ab18d17db 100644
--- a/tools/perf/tests/hists_common.c
+++ b/tools/perf/tests/hists_common.c
@@ -179,9 +179,11 @@ void print_hists_in(struct hists *hists)
 		he = rb_entry(node, struct hist_entry, rb_node_in);
 
 		if (!he->filtered) {
+			struct dso *dso = map__dso(he->ms.map);
+
 			pr_info("%2d: entry: %-8s [%-8s] %20s: period = %"PRIu64"\n",
 				i, thread__comm_str(he->thread),
-				he->ms.map->dso->short_name,
+				dso->short_name,
 				he->ms.sym->name, he->stat.period);
 		}
 
@@ -206,9 +208,11 @@ void print_hists_out(struct hists *hists)
 		he = rb_entry(node, struct hist_entry, rb_node);
 
 		if (!he->filtered) {
+			struct dso *dso = map__dso(he->ms.map);
+
 			pr_info("%2d: entry: %8s:%5d [%-8s] %20s: period = %"PRIu64"/%"PRIu64"\n",
 				i, thread__comm_str(he->thread), he->thread->tid,
-				he->ms.map->dso->short_name,
+				dso->short_name,
 				he->ms.sym->name, he->stat.period,
 				he->stat_acc ? he->stat_acc->period : 0);
 		}
diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
index b42d37ff2399..f00ec9abdbcd 100644
--- a/tools/perf/tests/hists_cumulate.c
+++ b/tools/perf/tests/hists_cumulate.c
@@ -150,12 +150,12 @@ static void del_hist_entries(struct hists *hists)
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
-#define DSO(he)   (he->ms.map->dso->short_name)
+#define DSO(he)   (map__dso(he->ms.map)->short_name)
 #define SYM(he)   (he->ms.sym->name)
 #define CPU(he)   (he->cpu)
 #define PID(he)   (he->thread->tid)
 #define DEPTH(he) (he->callchain->max_depth)
-#define CDSO(cl)  (cl->ms.map->dso->short_name)
+#define CDSO(cl)  (map__dso(cl->ms.map)->short_name)
 #define CSYM(cl)  (cl->ms.sym->name)
 
 struct result {
diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
index 8e1ceeb9b7b6..7c552549f4a4 100644
--- a/tools/perf/tests/hists_filter.c
+++ b/tools/perf/tests/hists_filter.c
@@ -194,7 +194,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 		hists__filter_by_thread(hists);
 
 		/* now applying dso filter for 'kernel' */
-		hists->dso_filter = fake_samples[0].map->dso;
+		hists->dso_filter = map__dso(fake_samples[0].map);
 		hists__filter_by_dso(hists);
 
 		if (verbose > 2) {
@@ -288,7 +288,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 
 		/* now applying all filters at once. */
 		hists->thread_filter = fake_samples[1].thread;
-		hists->dso_filter = fake_samples[1].map->dso;
+		hists->dso_filter = map__dso(fake_samples[1].map);
 		hists__filter_by_thread(hists);
 		hists__filter_by_dso(hists);
 
diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
index 62b0093253e3..428d11a938f2 100644
--- a/tools/perf/tests/hists_output.c
+++ b/tools/perf/tests/hists_output.c
@@ -116,7 +116,7 @@ static void del_hist_entries(struct hists *hists)
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
-#define DSO(he)   (he->ms.map->dso->short_name)
+#define DSO(he)   (map__dso(he->ms.map)->short_name)
 #define SYM(he)   (he->ms.sym->name)
 #define CPU(he)   (he->cpu)
 #define PID(he)   (he->thread->tid)
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index 8246d37e4b7a..ae7028fbf79e 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -26,7 +26,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 
 		TEST_ASSERT_VAL("wrong map start",  map->start == merged[i].start);
 		TEST_ASSERT_VAL("wrong map end",    map->end == merged[i].end);
-		TEST_ASSERT_VAL("wrong map name",  !strcmp(map->dso->name, merged[i].name));
+		TEST_ASSERT_VAL("wrong map name",  !strcmp(map__dso(map)->name, merged[i].name));
 		TEST_ASSERT_VAL("wrong map refcnt", refcount_read(&map->refcnt) == 1);
 
 		i++;
diff --git a/tools/perf/tests/symbols.c b/tools/perf/tests/symbols.c
index 0793f8f419e2..2d1aa42d36a9 100644
--- a/tools/perf/tests/symbols.c
+++ b/tools/perf/tests/symbols.c
@@ -102,6 +102,7 @@ static int test_file(struct test_info *ti, char *filename)
 {
 	struct map *map = NULL;
 	int ret, nr;
+	struct dso *dso;
 
 	pr_debug("Testing %s\n", filename);
 
@@ -109,7 +110,8 @@ static int test_file(struct test_info *ti, char *filename)
 	if (ret != TEST_OK)
 		return ret;
 
-	nr = dso__load(map->dso, map);
+	dso = map__dso(map);
+	nr = dso__load(dso, map);
 	if (nr < 0) {
 		pr_debug("dso__load() failed!\n");
 		ret = TEST_FAIL;
@@ -122,7 +124,7 @@ static int test_file(struct test_info *ti, char *filename)
 		goto out_put;
 	}
 
-	ret = test_dso(map->dso);
+	ret = test_dso(dso);
 out_put:
 	map__put(map);
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index c8abb3ca8347..c614c2db7e89 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -293,15 +293,16 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 	maps__for_each_entry(maps, rb_node) {
 		struct map *map = rb_node->map;
+		struct dso *dso = map__dso(map);
 		/*
 		 * If it is the kernel, kallsyms is always "[kernel.kallsyms]", while
 		 * the kernel will have the path for the vmlinux file being used,
 		 * so use the short name, less descriptive but the same ("[kernel]" in
 		 * both cases.
 		 */
-		struct map *pair = maps__find_by_name(kallsyms.kmaps, (map->dso->kernel ?
-								map->dso->short_name :
-								map->dso->name));
+		struct map *pair = maps__find_by_name(kallsyms.kmaps, (dso->kernel ?
+								dso->short_name :
+								dso->name));
 		if (pair) {
 			pair->priv = 1;
 		} else {
@@ -326,17 +327,19 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 			continue;
 
 		if (pair->start == mem_start) {
+			struct dso *dso = map__dso(map);
+
 			if (!header_printed) {
 				pr_info("WARN: Maps in vmlinux with a different name in kallsyms:\n");
 				header_printed = true;
 			}
 
 			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
-				map->start, map->end, map->pgoff, map->dso->name);
+				map->start, map->end, map->pgoff, dso->name);
 			if (mem_end != pair->end)
 				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
 					pair->start, pair->end, pair->pgoff);
-			pr_info(" %s\n", pair->dso->name);
+			pr_info(" %s\n", dso->name);
 			pair->priv = 1;
 		}
 	}
diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index c03fa76c02ff..12c3ce530e42 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -441,7 +441,8 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
 static int sym_title(struct symbol *sym, struct map *map, char *title,
 		     size_t sz, int percent_type)
 {
-	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name, map->dso->long_name,
+	return snprintf(title, sz, "%s  %s [Percent: %s]", sym->name,
+			map__dso(map)->long_name,
 			percent_type_str(percent_type));
 }
 
@@ -964,20 +965,22 @@ int symbol__tui_annotate(struct map_symbol *ms, struct evsel *evsel,
 		},
 		.opts = opts,
 	};
+	struct dso *dso;
 	int ret = -1, err;
 	int not_annotated = list_empty(&notes->src->source);
 
 	if (sym == NULL)
 		return -1;
 
-	if (ms->map->dso->annotate_warned)
+	dso = map__dso(ms->map);
+	if (dso->annotate_warned)
 		return -1;
 
 	if (not_annotated) {
 		err = symbol__annotate2(ms, evsel, opts, &browser.arch);
 		if (err) {
 			char msg[BUFSIZ];
-			ms->map->dso->annotate_warned = true;
+			dso->annotate_warned = true;
 			symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
 			ui__error("Couldn't annotate %s:\n%s", sym->name, msg);
 			goto out_free_offsets;
diff --git a/tools/perf/ui/browsers/hists.c b/tools/perf/ui/browsers/hists.c
index 572ff38ceb0f..66d8c0802ecd 100644
--- a/tools/perf/ui/browsers/hists.c
+++ b/tools/perf/ui/browsers/hists.c
@@ -2487,7 +2487,7 @@ static struct symbol *symbol__new_unresolved(u64 addr, struct map *map)
 			return NULL;
 		}
 
-		dso__insert_symbol(map->dso, sym);
+		dso__insert_symbol(map__dso(map), sym);
 	}
 
 	return sym;
@@ -2499,7 +2499,7 @@ add_annotate_opt(struct hist_browser *browser __maybe_unused,
 		 struct map_symbol *ms,
 		 u64 addr)
 {
-	if (!ms->map || !ms->map->dso || ms->map->dso->annotate_warned)
+	if (!ms->map || !map__dso(ms->map) || map__dso(ms->map)->annotate_warned)
 		return 0;
 
 	if (!ms->sym)
@@ -2590,8 +2590,10 @@ static int hists_browser__zoom_map(struct hist_browser *browser, struct map *map
 		ui_helpline__pop();
 	} else {
 		ui_helpline__fpush("To zoom out press ESC or ENTER + \"Zoom out of %s DSO\"",
-				   __map__is_kernel(map) ? "the Kernel" : map->dso->short_name);
-		browser->hists->dso_filter = map->dso;
+				   __map__is_kernel(map)
+				   ? "the Kernel"
+				   : map__dso(map)->short_name);
+		browser->hists->dso_filter = map__dso(map);
 		perf_hpp__set_elide(HISTC_DSO, true);
 		pstack__push(browser->pstack, &browser->hists->dso_filter);
 	}
@@ -2616,7 +2618,7 @@ add_dso_opt(struct hist_browser *browser, struct popup_action *act,
 
 	if (asprintf(optstr, "Zoom %s %s DSO (use the 'k' hotkey to zoom directly into the kernel)",
 		     browser->hists->dso_filter ? "out of" : "into",
-		     __map__is_kernel(map) ? "the Kernel" : map->dso->short_name) < 0)
+		     __map__is_kernel(map) ? "the Kernel" : map__dso(map)->short_name) < 0)
 		return 0;
 
 	act->ms.map = map;
@@ -3091,8 +3093,8 @@ static int evsel__hists_browse(struct evsel *evsel, int nr_events, const char *h
 
 			if (!browser->selection ||
 			    !browser->selection->map ||
-			    !browser->selection->map->dso ||
-			    browser->selection->map->dso->annotate_warned) {
+			    !map__dso(browser->selection->map) ||
+			    map__dso(browser->selection->map)->annotate_warned) {
 				continue;
 			}
 
diff --git a/tools/perf/ui/browsers/map.c b/tools/perf/ui/browsers/map.c
index 3d49b916c9e4..3d1b958d8832 100644
--- a/tools/perf/ui/browsers/map.c
+++ b/tools/perf/ui/browsers/map.c
@@ -76,7 +76,7 @@ static int map_browser__run(struct map_browser *browser)
 {
 	int key;
 
-	if (ui_browser__show(&browser->b, browser->map->dso->long_name,
+	if (ui_browser__show(&browser->b, map__dso(browser->map)->long_name,
 			     "Press ESC to exit, %s / to search",
 			     verbose > 0 ? "" : "restart with -v to use") < 0)
 		return -1;
@@ -106,7 +106,7 @@ int map__browse(struct map *map)
 {
 	struct map_browser mb = {
 		.b = {
-			.entries = &map->dso->symbols,
+			.entries = &map__dso(map)->symbols,
 			.refresh = ui_browser__rb_tree_refresh,
 			.seek	 = ui_browser__rb_tree_seek,
 			.write	 = map_browser__write,
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index db475e44f42f..9494b34e84fc 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1593,7 +1593,7 @@ static void delete_last_nop(struct symbol *sym)
 
 int symbol__strerror_disassemble(struct map_symbol *ms, int errnum, char *buf, size_t buflen)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 
 	BUG_ON(buflen == 0);
 
@@ -1735,7 +1735,7 @@ static int symbol__disassemble_bpf(struct symbol *sym,
 	struct map *map = args->ms.map;
 	struct perf_bpil *info_linear;
 	struct disassemble_info info;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	int pc = 0, count, sub_id;
 	struct btf *btf = NULL;
 	char tpath[PATH_MAX];
@@ -1958,7 +1958,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
 {
 	struct annotation_options *opts = args->options;
 	struct map *map = args->ms.map;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	char *command;
 	FILE *file;
 	char symfs_filename[PATH_MAX];
@@ -2403,7 +2403,7 @@ int symbol__annotate_printf(struct map_symbol *ms, struct evsel *evsel,
 {
 	struct map *map = ms->map;
 	struct symbol *sym = ms->sym;
-	struct dso *dso = map->dso;
+	struct dso *dso = map__dso(map);
 	char *filename;
 	const char *d_filename;
 	const char *evsel_name = evsel__name(evsel);
@@ -2586,7 +2586,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct evsel *evsel,
 	}
 
 	fprintf(fp, "%s() %s\nEvent: %s\n\n",
-		ms->sym->name, ms->map->dso->long_name, ev_name);
+		ms->sym->name, map__dso(ms->map)->long_name, ev_name);
 	symbol__annotate_fprintf2(ms->sym, fp, opts);
 
 	fclose(fp);
@@ -2812,7 +2812,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
 		if (percent_max <= 0.5)
 			continue;
 
-		al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
+		al->path = get_srcline(map__dso(map), notes->start + al->offset, NULL,
 				       false, true, notes->start + al->offset);
 		insert_source_line(&tmp_root, al, opts);
 	}
@@ -2831,7 +2831,7 @@ static void symbol__calc_lines(struct map_symbol *ms, struct rb_root *root,
 int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
 			  struct annotation_options *opts)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 	struct symbol *sym = ms->sym;
 	struct rb_root source_line = RB_ROOT;
 	struct hists *hists = evsel__hists(evsel);
@@ -2867,7 +2867,7 @@ int symbol__tty_annotate2(struct map_symbol *ms, struct evsel *evsel,
 int symbol__tty_annotate(struct map_symbol *ms, struct evsel *evsel,
 			 struct annotation_options *opts)
 {
-	struct dso *dso = ms->map->dso;
+	struct dso *dso = map__dso(ms->map);
 	struct symbol *sym = ms->sym;
 	struct rb_root source_line = RB_ROOT;
 	int err;
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 498ff7f24463..2341de8573c0 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -2557,7 +2557,7 @@ static struct dso *load_dso(const char *name)
 	if (map__load(map) < 0)
 		pr_err("File '%s' not found or has no symbols.\n", name);
 
-	dso = dso__get(map->dso);
+	dso = dso__get(map__dso(map));
 
 	map__put(map);
 
diff --git a/tools/perf/util/block-info.c b/tools/perf/util/block-info.c
index 5ecd4f401f32..16a7b4adcf18 100644
--- a/tools/perf/util/block-info.c
+++ b/tools/perf/util/block-info.c
@@ -317,9 +317,9 @@ static int block_dso_entry(struct perf_hpp_fmt *fmt, struct perf_hpp *hpp,
 	struct block_fmt *block_fmt = container_of(fmt, struct block_fmt, fmt);
 	struct map *map = he->ms.map;
 
-	if (map && map->dso) {
+	if (map && map__dso(map)) {
 		return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
-				 map->dso->short_name);
+				 map__dso(map)->short_name);
 	}
 
 	return scnprintf(hpp->buf, hpp->size, "%*s", block_fmt->width,
diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
index 025f331b3867..38fcf3ba5749 100644
--- a/tools/perf/util/bpf-event.c
+++ b/tools/perf/util/bpf-event.c
@@ -57,10 +57,12 @@ static int machine__process_bpf_event_load(struct machine *machine,
 		struct map *map = maps__find(machine__kernel_maps(machine), addr);
 
 		if (map) {
-			map->dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
-			map->dso->bpf_prog.id = id;
-			map->dso->bpf_prog.sub_id = i;
-			map->dso->bpf_prog.env = env;
+			struct dso *dso = map__dso(map);
+
+			dso->binary_type = DSO_BINARY_TYPE__BPF_PROG_INFO;
+			dso->bpf_prog.id = id;
+			dso->bpf_prog.sub_id = i;
+			dso->bpf_prog.env = env;
 		}
 	}
 	return 0;
diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
index ea9c083ab1e3..06a8cd88cbef 100644
--- a/tools/perf/util/build-id.c
+++ b/tools/perf/util/build-id.c
@@ -59,7 +59,7 @@ int build_id__mark_dso_hit(struct perf_tool *tool __maybe_unused,
 	}
 
 	if (thread__find_map(thread, sample->cpumode, sample->ip, &al))
-		al.map->dso->hit = 1;
+		map__dso(al.map)->hit = 1;
 
 	thread__put(thread);
 	return 0;
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 0aa979f64565..9e9c39dd9d2b 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -701,8 +701,8 @@ static enum match_result match_chain_strings(const char *left,
 static enum match_result match_chain_dso_addresses(struct map *left_map, u64 left_ip,
 						   struct map *right_map, u64 right_ip)
 {
-	struct dso *left_dso = left_map ? left_map->dso : NULL;
-	struct dso *right_dso = right_map ? right_map->dso : NULL;
+	struct dso *left_dso = left_map ? map__dso(left_map) : NULL;
+	struct dso *right_dso = right_map ? map__dso(right_map) : NULL;
 
 	if (left_dso != right_dso)
 		return left_dso < right_dso ? MATCH_LT : MATCH_GT;
@@ -1174,7 +1174,7 @@ char *callchain_list__sym_name(struct callchain_list *cl,
 	if (show_dso)
 		scnprintf(bf + printed, bfsize - printed, " %s",
 			  cl->ms.map ?
-			  cl->ms.map->dso->short_name :
+			  map__dso(cl->ms.map)->short_name :
 			  "unknown");
 
 	return bf;
diff --git a/tools/perf/util/data-convert-json.c b/tools/perf/util/data-convert-json.c
index ba9d93ce9463..653709ab867a 100644
--- a/tools/perf/util/data-convert-json.c
+++ b/tools/perf/util/data-convert-json.c
@@ -128,15 +128,17 @@ static void output_sample_callchain_entry(struct perf_tool *tool,
 	output_json_key_format(out, false, 5, "ip", "\"0x%" PRIx64 "\"", ip);
 
 	if (al && al->sym && al->sym->namelen) {
+		struct dso *dso = al->map ? map__dso(al->map) : NULL;
+
 		fputc(',', out);
 		output_json_key_string(out, false, 5, "symbol", al->sym->name);
 
-		if (al->map && al->map->dso) {
-			const char *dso = al->map->dso->short_name;
+		if (dso) {
+			const char *dso_name = dso->short_name;
 
-			if (dso && strlen(dso) > 0) {
+			if (dso_name && strlen(dso_name) > 0) {
 				fputc(',', out);
-				output_json_key_string(out, false, 5, "dso", dso);
+				output_json_key_string(out, false, 5, "dso", dso_name);
 			}
 		}
 	}
diff --git a/tools/perf/util/db-export.c b/tools/perf/util/db-export.c
index 1cfcfdd3cf52..84c970c11794 100644
--- a/tools/perf/util/db-export.c
+++ b/tools/perf/util/db-export.c
@@ -179,7 +179,7 @@ static int db_ids_from_al(struct db_export *dbe, struct addr_location *al,
 	int err;
 
 	if (al->map) {
-		struct dso *dso = al->map->dso;
+		struct dso *dso = map__dso(al->map);
 
 		err = db_export__dso(dbe, dso, maps__machine(al->maps));
 		if (err)
@@ -255,7 +255,7 @@ static struct call_path *call_path_from_sample(struct db_export *dbe,
 		al.addr = node->ip;
 
 		if (al.map && !al.sym)
-			al.sym = dso__find_symbol(al.map->dso, al.addr);
+			al.sym = dso__find_symbol(map__dso(al.map), al.addr);
 
 		db_ids_from_al(dbe, &al, &dso_db_id, &sym_db_id, &offset);
 
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index fe2a0752a0f6..8a7ffe0d805a 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -29,7 +29,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
 
 	d_al->size = sizeof(*d_al);
 	if (al->map) {
-		struct dso *dso = al->map->dso;
+		struct dso *dso = map__dso(al->map);
 
 		if (symbol_conf.show_kernel_path && dso->long_name)
 			d_al->dso = dso->long_name;
@@ -220,6 +220,7 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
 	unsigned int line = 0;
 	char *srcfile = NULL;
 	struct map *map;
+	struct dso *dso;
 	u64 addr;
 
 	if (!d->ctx_valid || !line_no)
@@ -231,9 +232,10 @@ static const char *dlfilter__srcline(void *ctx, __u32 *line_no)
 
 	map = al->map;
 	addr = al->addr;
+	dso = map ? map__dso(map) : NULL;
 
-	if (map && map->dso)
-		srcfile = get_srcline_split(map->dso, map__rip_2objdump(map, addr), &line);
+	if (dso)
+		srcfile = get_srcline_split(dso, map__rip_2objdump(map, addr), &line);
 
 	*line_no = line;
 	return srcfile;
@@ -279,7 +281,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 	offset = map->map_ip(map, ip);
 	if (ip + len >= map->end)
 		len = map->end - ip;
-	return dso__data_read_offset(map->dso, d->machine, offset, buf, len);
+	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
 }
 
 static const struct perf_dlfilter_fns perf_dlfilter_fns = {
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index f40cdd6ac126..2ddc75dee019 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -685,6 +685,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 		     struct perf_sample *sample)
 {
 	struct thread *thread;
+	struct dso *dso;
 
 	if (symbol_conf.guest_code && !machine__is_host(machine))
 		thread = machine__findnew_guest_code(machine, sample->pid);
@@ -695,9 +696,11 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 
 	dump_printf(" ... thread: %s:%d\n", thread__comm_str(thread), thread->tid);
 	thread__find_map(thread, sample->cpumode, sample->ip, al);
+	dso = al->map ? map__dso(al->map) : NULL;
 	dump_printf(" ...... dso: %s\n",
-		    al->map ? al->map->dso->long_name :
-			al->level == 'H' ? "[hypervisor]" : "<not found>");
+		dso
+		? dso->long_name
+		: (al->level == 'H' ? "[hypervisor]" : "<not found>"));
 
 	if (thread__is_filtered(thread))
 		al->filtered |= (1 << HIST_FILTER__THREAD);
@@ -715,8 +718,6 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 	}
 
 	if (al->map) {
-		struct dso *dso = al->map->dso;
-
 		if (symbol_conf.dso_list &&
 		    (!dso || !(strlist__has_entry(symbol_conf.dso_list,
 						  dso->short_name) ||
diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
index bd22c4932d10..dff5d8c4b06d 100644
--- a/tools/perf/util/evsel_fprintf.c
+++ b/tools/perf/util/evsel_fprintf.c
@@ -155,7 +155,7 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
 
 			if (print_ip) {
 				/* Show binary offset for userspace addr */
-				if (map && !map->dso->kernel)
+				if (map && !map__dso(map)->kernel)
 					printed += fprintf(fp, "%c%16" PRIx64, s, addr);
 				else
 					printed += fprintf(fp, "%c%16" PRIx64, s, node->ip);
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index 1b0e89cd5d99..fdf0562d2fd3 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -106,7 +106,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 		hists__set_col_len(hists, HISTC_THREAD, len + 8);
 
 	if (h->ms.map) {
-		len = dso__name_len(h->ms.map->dso);
+		len = dso__name_len(map__dso(h->ms.map));
 		hists__new_col_len(hists, HISTC_DSO, len);
 	}
 
@@ -120,7 +120,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 				symlen += BITS_PER_LONG / 4 + 2 + 3;
 			hists__new_col_len(hists, HISTC_SYMBOL_FROM, symlen);
 
-			symlen = dso__name_len(h->branch_info->from.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->branch_info->from.ms.map));
 			hists__new_col_len(hists, HISTC_DSO_FROM, symlen);
 		} else {
 			symlen = unresolved_col_width + 4 + 2;
@@ -135,7 +135,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 				symlen += BITS_PER_LONG / 4 + 2 + 3;
 			hists__new_col_len(hists, HISTC_SYMBOL_TO, symlen);
 
-			symlen = dso__name_len(h->branch_info->to.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->branch_info->to.ms.map));
 			hists__new_col_len(hists, HISTC_DSO_TO, symlen);
 		} else {
 			symlen = unresolved_col_width + 4 + 2;
@@ -180,7 +180,7 @@ void hists__calc_col_len(struct hists *hists, struct hist_entry *h)
 		}
 
 		if (h->mem_info->daddr.ms.map) {
-			symlen = dso__name_len(h->mem_info->daddr.ms.map->dso);
+			symlen = dso__name_len(map__dso(h->mem_info->daddr.ms.map));
 			hists__new_col_len(hists, HISTC_MEM_DADDR_DSO,
 					   symlen);
 		} else {
@@ -2104,7 +2104,7 @@ static bool hists__filter_entry_by_dso(struct hists *hists,
 				       struct hist_entry *he)
 {
 	if (hists->dso_filter != NULL &&
-	    (he->ms.map == NULL || he->ms.map->dso != hists->dso_filter)) {
+	    (he->ms.map == NULL || map__dso(he->ms.map) != hists->dso_filter)) {
 		he->filtered |= (1 << HIST_FILTER__DSO);
 		return true;
 	}
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 955c1b9dc6a4..8cec88e09792 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -801,17 +801,19 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 	}
 
 	while (1) {
-		if (!thread__find_map(thread, cpumode, *ip, &al) || !al.map->dso) {
+		struct dso *dso;
+
+		if (!thread__find_map(thread, cpumode, *ip, &al) || !map__dso(al.map)) {
 			if (al.map)
 				intel_pt_log("ERROR: thread has no dso for %#" PRIx64 "\n", *ip);
 			else
 				intel_pt_log("ERROR: thread has no map for %#" PRIx64 "\n", *ip);
 			return -EINVAL;
 		}
+		dso = map__dso(al.map);
 
-		if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
-		    dso__data_status_seen(al.map->dso,
-					  DSO_DATA_STATUS_SEEN_ITRACE))
+		if (dso->data.status == DSO_DATA_STATUS_ERROR &&
+		    dso__data_status_seen(dso, DSO_DATA_STATUS_SEEN_ITRACE))
 			return -ENOENT;
 
 		offset = al.map->map_ip(al.map, *ip);
@@ -819,7 +821,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 		if (!to_ip && one_map) {
 			struct intel_pt_cache_entry *e;
 
-			e = intel_pt_cache_lookup(al.map->dso, machine, offset);
+			e = intel_pt_cache_lookup(dso, machine, offset);
 			if (e &&
 			    (!max_insn_cnt || e->insn_cnt <= max_insn_cnt)) {
 				*insn_cnt_ptr = e->insn_cnt;
@@ -829,8 +831,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 				intel_pt_insn->emulated_ptwrite = e->emulated_ptwrite;
 				intel_pt_insn->length = e->length;
 				intel_pt_insn->rel = e->rel;
-				memcpy(intel_pt_insn->buf, e->insn,
-				       INTEL_PT_INSN_BUF_SZ);
+				memcpy(intel_pt_insn->buf, e->insn, INTEL_PT_INSN_BUF_SZ);
 				intel_pt_log_insn_no_data(intel_pt_insn, *ip);
 				return 0;
 			}
@@ -842,17 +843,17 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 		/* Load maps to ensure dso->is_64_bit has been updated */
 		map__load(al.map);
 
-		x86_64 = al.map->dso->is_64_bit;
+		x86_64 = dso->is_64_bit;
 
 		while (1) {
-			len = dso__data_read_offset(al.map->dso, machine,
+			len = dso__data_read_offset(dso, machine,
 						    offset, buf,
 						    INTEL_PT_INSN_BUF_SZ);
 			if (len <= 0) {
 				intel_pt_log("ERROR: failed to read at offset %#" PRIx64 " ",
 					     offset);
 				if (intel_pt_enable_logging)
-					dso__fprintf(al.map->dso, intel_pt_log_fp());
+					dso__fprintf(dso, intel_pt_log_fp());
 				return -EINVAL;
 			}
 
@@ -871,7 +872,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 					goto out;
 				/* Check for emulated ptwrite */
 				offs = offset + intel_pt_insn->length;
-				eptw = intel_pt_emulated_ptwrite(al.map->dso, machine, offs);
+				eptw = intel_pt_emulated_ptwrite(dso, machine, offs);
 				intel_pt_insn->emulated_ptwrite = eptw;
 				goto out;
 			}
@@ -906,13 +907,13 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 	if (to_ip) {
 		struct intel_pt_cache_entry *e;
 
-		e = intel_pt_cache_lookup(al.map->dso, machine, start_offset);
+		e = intel_pt_cache_lookup(map__dso(al.map), machine, start_offset);
 		if (e)
 			return 0;
 	}
 
 	/* Ignore cache errors */
-	intel_pt_cache_add(al.map->dso, machine, start_offset, insn_cnt,
+	intel_pt_cache_add(map__dso(al.map), machine, start_offset, insn_cnt,
 			   *ip - start_ip, intel_pt_insn);
 
 	return 0;
@@ -983,13 +984,12 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
 	if (!thread)
 		return -EINVAL;
 
-	if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso)
+	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
 		return -EINVAL;
 
 	offset = al.map->map_ip(al.map, ip);
 
-	return intel_pt_match_pgd_ip(ptq->pt, ip, offset,
-				     al.map->dso->long_name);
+	return intel_pt_match_pgd_ip(ptq->pt, ip, offset, map__dso(al.map)->long_name);
 }
 
 static bool intel_pt_pgd_ip(uint64_t ip, void *data)
@@ -2744,7 +2744,7 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 	if (map__load(map))
 		return 0;
 
-	start = dso__first_symbol(map->dso);
+	start = dso__first_symbol(map__dso(map));
 
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (sym->binding == STB_GLOBAL &&
@@ -3381,18 +3381,21 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
 		return 0;
 
 	for (; cnt; cnt--, addr--) {
+		struct dso *dso;
+
 		if (intel_pt_find_map(thread, cpumode, addr, &al)) {
 			if (addr < event->text_poke.addr)
 				return 0;
 			continue;
 		}
 
-		if (!al.map->dso || !al.map->dso->auxtrace_cache)
+		dso = map__dso(al.map);
+		if (!dso || !dso->auxtrace_cache)
 			continue;
 
 		offset = al.map->map_ip(al.map, addr);
 
-		e = intel_pt_cache_lookup(al.map->dso, machine, offset);
+		e = intel_pt_cache_lookup(dso, machine, offset);
 		if (!e)
 			continue;
 
@@ -3405,9 +3408,9 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
 			if (e->branch != INTEL_PT_BR_NO_BRANCH)
 				return 0;
 		} else {
-			intel_pt_cache_invalidate(al.map->dso, machine, offset);
+			intel_pt_cache_invalidate(dso, machine, offset);
 			intel_pt_log("Invalidated instruction cache for %s at %#"PRIx64"\n",
-				     al.map->dso->long_name, addr);
+				     dso->long_name, addr);
 		}
 	}
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 446c0273259d..6e32344e66dc 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -47,7 +47,7 @@ static void __machine__remove_thread(struct machine *machine, struct thread *th,
 
 static struct dso *machine__kernel_dso(struct machine *machine)
 {
-	return machine->vmlinux_map->dso;
+	return map__dso(machine->vmlinux_map);
 }
 
 static void dsos__init(struct dsos *dsos)
@@ -878,12 +878,13 @@ static int machine__process_ksymbol_register(struct machine *machine,
 					     struct perf_sample *sample __maybe_unused)
 {
 	struct symbol *sym;
+	struct dso *dso;
 	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
 
 	if (!map) {
-		struct dso *dso = dso__new(event->ksymbol.name);
 		int err;
 
+		dso = dso__new(event->ksymbol.name);
 		if (dso) {
 			dso->kernel = DSO_SPACE__KERNEL;
 			map = map__new2(0, dso);
@@ -895,9 +896,9 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		}
 
 		if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
-			map->dso->binary_type = DSO_BINARY_TYPE__OOL;
-			map->dso->data.file_size = event->ksymbol.len;
-			dso__set_loaded(map->dso);
+			dso->binary_type = DSO_BINARY_TYPE__OOL;
+			dso->data.file_size = event->ksymbol.len;
+			dso__set_loaded(dso);
 		}
 
 		map->start = event->ksymbol.addr;
@@ -913,6 +914,8 @@ static int machine__process_ksymbol_register(struct machine *machine,
 			dso->binary_type = DSO_BINARY_TYPE__BPF_IMAGE;
 			dso__set_long_name(dso, "", false);
 		}
+	} else {
+		dso = map__dso(map);
 	}
 
 	sym = symbol__new(map->map_ip(map, map->start),
@@ -920,7 +923,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 			  0, 0, event->ksymbol.name);
 	if (!sym)
 		return -ENOMEM;
-	dso__insert_symbol(map->dso, sym);
+	dso__insert_symbol(dso, sym);
 	return 0;
 }
 
@@ -938,9 +941,11 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	if (map != machine->vmlinux_map)
 		maps__remove(machine__kernel_maps(machine), map);
 	else {
-		sym = dso__find_symbol(map->dso, map->map_ip(map, map->start));
+		struct dso *dso = map__dso(map);
+
+		sym = dso__find_symbol(dso, map->map_ip(map, map->start));
 		if (sym)
-			dso__delete_symbol(map->dso, sym);
+			dso__delete_symbol(dso, sym);
 	}
 
 	return 0;
@@ -964,6 +969,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
 {
 	struct map *map = maps__find(machine__kernel_maps(machine), event->text_poke.addr);
 	u8 cpumode = event->header.misc & PERF_RECORD_MISC_CPUMODE_MASK;
+	struct dso *dso = map ? map__dso(map) : NULL;
 
 	if (dump_trace)
 		perf_event__fprintf_text_poke(event, machine, stdout);
@@ -976,7 +982,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
 		return 0;
 	}
 
-	if (map && map->dso) {
+	if (dso) {
 		u8 *new_bytes = event->text_poke.bytes + event->text_poke.old_len;
 		int ret;
 
@@ -985,7 +991,7 @@ int machine__process_text_poke(struct machine *machine, union perf_event *event,
 		 * must be done prior to using kernel maps.
 		 */
 		map__load(map);
-		ret = dso__data_write_cache_addr(map->dso, map, machine,
+		ret = dso__data_write_cache_addr(dso, map, machine,
 						 event->text_poke.addr,
 						 new_bytes,
 						 event->text_poke.new_len);
@@ -1421,10 +1427,11 @@ int machines__create_kernel_maps(struct machines *machines, pid_t pid)
 int machine__load_kallsyms(struct machine *machine, const char *filename)
 {
 	struct map *map = machine__kernel_map(machine);
-	int ret = __dso__load_kallsyms(map->dso, filename, map, true);
+	struct dso *dso = map__dso(map);
+	int ret = __dso__load_kallsyms(dso, filename, map, true);
 
 	if (ret > 0) {
-		dso__set_loaded(map->dso);
+		dso__set_loaded(dso);
 		/*
 		 * Since /proc/kallsyms will have multiple sessions for the
 		 * kernel, with modules between them, fixup the end of all
@@ -1439,10 +1446,11 @@ int machine__load_kallsyms(struct machine *machine, const char *filename)
 int machine__load_vmlinux_path(struct machine *machine)
 {
 	struct map *map = machine__kernel_map(machine);
-	int ret = dso__load_vmlinux_path(map->dso, map);
+	struct dso *dso = map__dso(map);
+	int ret = dso__load_vmlinux_path(dso, map);
 
 	if (ret > 0)
-		dso__set_loaded(map->dso);
+		dso__set_loaded(dso);
 
 	return ret;
 }
@@ -1484,6 +1492,7 @@ static bool is_kmod_dso(struct dso *dso)
 static int maps__set_module_path(struct maps *maps, const char *path, struct kmod_path *m)
 {
 	char *long_name;
+	struct dso *dso;
 	struct map *map = maps__find_by_name(maps, m->name);
 
 	if (map == NULL)
@@ -1493,16 +1502,17 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
 	if (long_name == NULL)
 		return -ENOMEM;
 
-	dso__set_long_name(map->dso, long_name, true);
-	dso__kernel_module_get_build_id(map->dso, "");
+	dso = map__dso(map);
+	dso__set_long_name(dso, long_name, true);
+	dso__kernel_module_get_build_id(dso, "");
 
 	/*
 	 * Full name could reveal us kmod compression, so
 	 * we need to update the symtab_type if needed.
 	 */
-	if (m->comp && is_kmod_dso(map->dso)) {
-		map->dso->symtab_type++;
-		map->dso->comp = m->comp;
+	if (m->comp && is_kmod_dso(dso)) {
+		dso->symtab_type++;
+		dso->comp = m->comp;
 	}
 
 	return 0;
@@ -1601,7 +1611,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 		return -1;
 	map->end = start + size;
 
-	dso__kernel_module_get_build_id(map->dso, machine->root_dir);
+	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
 
 	return 0;
 }
@@ -1787,7 +1797,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		map->end = map->start + xm->end - xm->start;
 
 		if (build_id__is_defined(bid))
-			dso__set_build_id(map->dso, bid);
+			dso__set_build_id(map__dso(map), bid);
 
 	} else if (is_kernel_mmap) {
 		const char *symbol_name = xm->name + strlen(mmap_name);
@@ -2247,18 +2257,20 @@ static char *callchain_srcline(struct map_symbol *ms, u64 ip)
 {
 	struct map *map = ms->map;
 	char *srcline = NULL;
+	struct dso *dso;
 
 	if (!map || callchain_param.key == CCKEY_FUNCTION)
 		return srcline;
 
-	srcline = srcline__tree_find(&map->dso->srclines, ip);
+	dso = map__dso(map);
+	srcline = srcline__tree_find(&dso->srclines, ip);
 	if (!srcline) {
 		bool show_sym = false;
 		bool show_addr = callchain_param.key == CCKEY_ADDRESS;
 
-		srcline = get_srcline(map->dso, map__rip_2objdump(map, ip),
+		srcline = get_srcline(dso, map__rip_2objdump(map, ip),
 				      ms->sym, show_sym, show_addr, ip);
-		srcline__tree_insert(&map->dso->srclines, ip, srcline);
+		srcline__tree_insert(&dso->srclines, ip, srcline);
 	}
 
 	return srcline;
@@ -3034,6 +3046,7 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
 	struct map *map = ms->map;
 	struct inline_node *inline_node;
 	struct inline_list *ilist;
+	struct dso *dso;
 	u64 addr;
 	int ret = 1;
 
@@ -3042,13 +3055,14 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
 
 	addr = map__map_ip(map, ip);
 	addr = map__rip_2objdump(map, addr);
+	dso = map__dso(map);
 
-	inline_node = inlines__tree_find(&map->dso->inlined_nodes, addr);
+	inline_node = inlines__tree_find(&dso->inlined_nodes, addr);
 	if (!inline_node) {
-		inline_node = dso__parse_addr_inlines(map->dso, addr, sym);
+		inline_node = dso__parse_addr_inlines(dso, addr, sym);
 		if (!inline_node)
 			return ret;
-		inlines__tree_insert(&map->dso->inlined_nodes, inline_node);
+		inlines__tree_insert(&dso->inlined_nodes, inline_node);
 	}
 
 	list_for_each_entry(ilist, &inline_node->val, list) {
@@ -3325,7 +3339,7 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
 	if (sym == NULL)
 		return NULL;
 
-	*modp = __map__is_kmodule(map) ? (char *)map->dso->short_name : NULL;
+	*modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
 	*addrp = map->unmap_ip(map, sym->start);
 	return sym->name;
 }
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index a99dbde656a2..90062af6675a 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -232,7 +232,7 @@ struct map *map__new2(u64 start, struct dso *dso)
 
 bool __map__is_kernel(const struct map *map)
 {
-	if (!map->dso->kernel)
+	if (!map__dso(map)->kernel)
 		return false;
 	return machine__kernel_map(maps__machine(map__kmaps((struct map *)map))) == map;
 }
@@ -247,8 +247,9 @@ bool __map__is_extra_kernel_map(const struct map *map)
 bool __map__is_bpf_prog(const struct map *map)
 {
 	const char *name;
+	struct dso *dso = map__dso(map);
 
-	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
+	if (dso->binary_type == DSO_BINARY_TYPE__BPF_PROG_INFO)
 		return true;
 
 	/*
@@ -256,15 +257,16 @@ bool __map__is_bpf_prog(const struct map *map)
 	 * type of DSO_BINARY_TYPE__BPF_PROG_INFO. In such cases, we can
 	 * guess the type based on name.
 	 */
-	name = map->dso->short_name;
+	name = dso->short_name;
 	return name && (strstr(name, "bpf_prog_") == name);
 }
 
 bool __map__is_bpf_image(const struct map *map)
 {
 	const char *name;
+	struct dso *dso = map__dso(map);
 
-	if (map->dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
+	if (dso->binary_type == DSO_BINARY_TYPE__BPF_IMAGE)
 		return true;
 
 	/*
@@ -272,18 +274,20 @@ bool __map__is_bpf_image(const struct map *map)
 	 * type of DSO_BINARY_TYPE__BPF_IMAGE. In such cases, we can
 	 * guess the type based on name.
 	 */
-	name = map->dso->short_name;
+	name = dso->short_name;
 	return name && is_bpf_image(name);
 }
 
 bool __map__is_ool(const struct map *map)
 {
-	return map->dso && map->dso->binary_type == DSO_BINARY_TYPE__OOL;
+	const struct dso *dso = map__dso(map);
+
+	return dso && dso->binary_type == DSO_BINARY_TYPE__OOL;
 }
 
 bool map__has_symbols(const struct map *map)
 {
-	return dso__has_symbols(map->dso);
+	return dso__has_symbols(map__dso(map));
 }
 
 static void map__exit(struct map *map)
@@ -306,18 +310,23 @@ void map__put(struct map *map)
 
 void map__fixup_start(struct map *map)
 {
-	struct rb_root_cached *symbols = &map->dso->symbols;
+	struct dso *dso = map__dso(map);
+	struct rb_root_cached *symbols = &dso->symbols;
 	struct rb_node *nd = rb_first_cached(symbols);
+
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
+
 		map->start = sym->start;
 	}
 }
 
 void map__fixup_end(struct map *map)
 {
-	struct rb_root_cached *symbols = &map->dso->symbols;
+	struct dso *dso = map__dso(map);
+	struct rb_root_cached *symbols = &dso->symbols;
 	struct rb_node *nd = rb_last(&symbols->rb_root);
+
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
 		map->end = sym->end;
@@ -328,18 +337,19 @@ void map__fixup_end(struct map *map)
 
 int map__load(struct map *map)
 {
-	const char *name = map->dso->long_name;
+	struct dso *dso = map__dso(map);
+	const char *name = dso->long_name;
 	int nr;
 
-	if (dso__loaded(map->dso))
+	if (dso__loaded(dso))
 		return 0;
 
-	nr = dso__load(map->dso, map);
+	nr = dso__load(dso, map);
 	if (nr < 0) {
-		if (map->dso->has_build_id) {
+		if (dso->has_build_id) {
 			char sbuild_id[SBUILD_ID_SIZE];
 
-			build_id__sprintf(&map->dso->bid, sbuild_id);
+			build_id__sprintf(&dso->bid, sbuild_id);
 			pr_debug("%s with build id %s not found", name, sbuild_id);
 		} else
 			pr_debug("Failed to open %s", name);
@@ -371,32 +381,36 @@ struct symbol *map__find_symbol(struct map *map, u64 addr)
 	if (map__load(map) < 0)
 		return NULL;
 
-	return dso__find_symbol(map->dso, addr);
+	return dso__find_symbol(map__dso(map), addr);
 }
 
 struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
 {
+	struct dso *dso;
+
 	if (map__load(map) < 0)
 		return NULL;
 
-	if (!dso__sorted_by_name(map->dso))
-		dso__sort_by_name(map->dso);
+	dso = map__dso(map);
+	if (!dso__sorted_by_name(dso))
+		dso__sort_by_name(dso);
 
-	return dso__find_symbol_by_name(map->dso, name);
+	return dso__find_symbol_by_name(dso, name);
 }
 
 struct map *map__clone(struct map *from)
 {
 	size_t size = sizeof(struct map);
 	struct map *map;
+	struct dso *dso = map__dso(from);
 
-	if (from->dso && from->dso->kernel)
+	if (dso && dso->kernel)
 		size += sizeof(struct kmap);
 
 	map = memdup(from, size);
 	if (map != NULL) {
 		refcount_set(&map->refcnt, 1);
-		dso__get(map->dso);
+		dso__get(dso);
 	}
 
 	return map;
@@ -404,20 +418,23 @@ struct map *map__clone(struct map *from)
 
 size_t map__fprintf(struct map *map, FILE *fp)
 {
+	const struct dso *dso = map__dso(map);
+
 	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
-		       map->start, map->end, map->pgoff, map->dso->name);
+		       map->start, map->end, map->pgoff, dso->name);
 }
 
 size_t map__fprintf_dsoname(struct map *map, FILE *fp)
 {
 	char buf[symbol_conf.pad_output_len_dso + 1];
 	const char *dsoname = "[unknown]";
+	const struct dso *dso = map ? map__dso(map) : NULL;
 
-	if (map && map->dso) {
-		if (symbol_conf.show_kernel_path && map->dso->long_name)
-			dsoname = map->dso->long_name;
+	if (dso) {
+		if (symbol_conf.show_kernel_path && dso->long_name)
+			dsoname = dso->long_name;
 		else
-			dsoname = map->dso->name;
+			dsoname = dso->name;
 	}
 
 	if (symbol_conf.pad_output_len_dso) {
@@ -432,15 +449,17 @@ char *map__srcline(struct map *map, u64 addr, struct symbol *sym)
 {
 	if (map == NULL)
 		return SRCLINE_UNKNOWN;
-	return get_srcline(map->dso, map__rip_2objdump(map, addr), sym, true, true, addr);
+
+	return get_srcline(map__dso(map), map__rip_2objdump(map, addr), sym, true, true, addr);
 }
 
 int map__fprintf_srcline(struct map *map, u64 addr, const char *prefix,
 			 FILE *fp)
 {
+	const struct dso *dso = map ? map__dso(map) : NULL;
 	int ret = 0;
 
-	if (map && map->dso) {
+	if (dso) {
 		char *srcline = map__srcline(map, addr, NULL);
 		if (strncmp(srcline, SRCLINE_UNKNOWN, strlen(SRCLINE_UNKNOWN)) != 0)
 			ret = fprintf(fp, "%s%s", prefix, srcline);
@@ -469,6 +488,7 @@ void srccode_state_free(struct srccode_state *state)
 u64 map__rip_2objdump(struct map *map, u64 rip)
 {
 	struct kmap *kmap = __map__kmap(map);
+	const struct dso *dso = map__dso(map);
 
 	/*
 	 * vmlinux does not have program headers for PTI entry trampolines and
@@ -486,18 +506,18 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 		}
 	}
 
-	if (!map->dso->adjust_symbols)
+	if (!dso->adjust_symbols)
 		return rip;
 
-	if (map->dso->rel)
+	if (dso->rel)
 		return rip - map->pgoff;
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
 	 * but all kernel modules are ET_REL, so won't get here.
 	 */
-	if (map->dso->kernel == DSO_SPACE__USER)
-		return rip + map->dso->text_offset;
+	if (dso->kernel == DSO_SPACE__USER)
+		return rip + dso->text_offset;
 
 	return map->unmap_ip(map, rip) - map->reloc;
 }
@@ -516,18 +536,20 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
  */
 u64 map__objdump_2mem(struct map *map, u64 ip)
 {
-	if (!map->dso->adjust_symbols)
+	const struct dso *dso = map__dso(map);
+
+	if (!dso->adjust_symbols)
 		return map->unmap_ip(map, ip);
 
-	if (map->dso->rel)
+	if (dso->rel)
 		return map->unmap_ip(map, ip + map->pgoff);
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
 	 * but all kernel modules are ET_REL, so won't get here.
 	 */
-	if (map->dso->kernel == DSO_SPACE__USER)
-		return map->unmap_ip(map, ip - map->dso->text_offset);
+	if (dso->kernel == DSO_SPACE__USER)
+		return map->unmap_ip(map, ip - dso->text_offset);
 
 	return ip + map->reloc;
 }
@@ -541,7 +563,9 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 
 struct kmap *__map__kmap(struct map *map)
 {
-	if (!map->dso || !map->dso->kernel)
+	const struct dso *dso = map__dso(map);
+
+	if (!dso || !dso->kernel)
 		return NULL;
 	return (struct kmap *)(map + 1);
 }
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index d1a6f85fd31d..36c5add0144d 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -47,6 +47,11 @@ u64 map__unmap_ip(const struct map *map, u64 ip);
 /* Returns ip */
 u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
 
+static inline struct dso *map__dso(const struct map *map)
+{
+	return map->dso;
+}
+
 static inline size_t map__size(const struct map *map)
 {
 	return map->end - map->start;
@@ -69,7 +74,7 @@ struct thread;
  * Note: caller must ensure map->dso is not NULL (map is loaded).
  */
 #define map__for_each_symbol(map, pos, n)	\
-	dso__for_each_symbol(map->dso, pos, n)
+	dso__for_each_symbol(map__dso(map), pos, n)
 
 /* map__for_each_symbol_with_name - iterate over the symbols in the given map
  *                                  that have the given name
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 91bb015caede..09ec6bbafcbc 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -62,6 +62,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 int maps__insert(struct maps *maps, struct map *map)
 {
 	int err;
+	const struct dso *dso = map__dso(map);
 
 	down_write(maps__lock(maps));
 	err = __maps__insert(maps, map);
@@ -70,7 +71,7 @@ int maps__insert(struct maps *maps, struct map *map)
 
 	++maps->nr_maps;
 
-	if (map->dso && map->dso->kernel) {
+	if (dso && dso->kernel) {
 		struct kmap *kmap = map__kmap(map);
 
 		if (kmap)
@@ -253,7 +254,7 @@ size_t maps__fprintf(struct maps *maps, FILE *fp)
 		printed += fprintf(fp, "Map:");
 		printed += map__fprintf(pos->map, fp);
 		if (verbose > 2) {
-			printed += dso__fprintf(pos->map->dso, fp);
+			printed += dso__fprintf(map__dso(pos->map), fp);
 			printed += fprintf(fp, "--\n");
 		}
 	}
@@ -307,7 +308,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 			if (use_browser) {
 				pr_debug("overlapping maps in %s (disable tui for more info)\n",
-					   map->dso->name);
+					 map__dso(map)->name);
 			} else {
 				fputs("overlapping maps:\n", fp);
 				map__fprintf(map, fp);
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index cdf5d655d84c..b26670a26005 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -165,8 +165,9 @@ static struct map *kernel_get_module_map(const char *module)
 
 	maps__for_each_entry(maps, pos) {
 		/* short_name is "[module]" */
-		const char *short_name = pos->map->dso->short_name;
-		u16 short_name_len =  pos->map->dso->short_name_len;
+		struct dso *dso = map__dso(pos->map);
+		const char *short_name = dso->short_name;
+		u16 short_name_len =  dso->short_name_len;
 
 		if (strncmp(short_name + 1, module,
 			    short_name_len - 2) == 0 &&
@@ -182,13 +183,15 @@ struct map *get_target_map(const char *target, struct nsinfo *nsi, bool user)
 	/* Init maps of given executable or kernel */
 	if (user) {
 		struct map *map;
+		struct dso *dso;
 
 		map = dso__new_map(target);
-		if (map && map->dso) {
-			mutex_lock(&map->dso->lock);
-			nsinfo__put(map->dso->nsinfo);
-			map->dso->nsinfo = nsinfo__get(nsi);
-			mutex_unlock(&map->dso->lock);
+		dso = map ? map__dso(map) : NULL;
+		if (dso) {
+			mutex_lock(&dso->lock);
+			nsinfo__put(dso->nsinfo);
+			dso->nsinfo = nsinfo__get(nsi);
+			mutex_unlock(&dso->lock);
 		}
 		return map;
 	} else {
@@ -341,7 +344,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 		snprintf(module_name, sizeof(module_name), "[%s]", module);
 		map = maps__find_by_name(machine__kernel_maps(host_machine), module_name);
 		if (map) {
-			dso = map->dso;
+			dso = map__dso(map);
 			goto found;
 		}
 		pr_debug("Failed to find module %s.\n", module);
@@ -349,7 +352,7 @@ static int kernel_get_module_dso(const char *module, struct dso **pdso)
 	}
 
 	map = machine__kernel_map(host_machine);
-	dso = map->dso;
+	dso = map__dso(map);
 	if (!dso->has_build_id)
 		dso__read_running_kernel_build_id(dso, host_machine);
 
@@ -3737,6 +3740,7 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
 {
         struct rb_node *nd;
 	struct map *map;
+	struct dso *dso;
 	int ret;
 
 	ret = init_probe_symbol_maps(user);
@@ -3762,14 +3766,14 @@ int show_available_funcs(const char *target, struct nsinfo *nsi,
 			       (target) ? : "kernel");
 		goto end;
 	}
-	if (!dso__sorted_by_name(map->dso))
-		dso__sort_by_name(map->dso);
+	dso = map__dso(map);
+	if (!dso__sorted_by_name(dso))
+		dso__sort_by_name(dso);
 
 	/* Show all (filtered) symbols */
 	setup_pager();
 
-	for (nd = rb_first_cached(&map->dso->symbol_names); nd;
-	     nd = rb_next(nd)) {
+	for (nd = rb_first_cached(&dso->symbol_names); nd; nd = rb_next(nd)) {
 		struct symbol_name_rb_node *pos = rb_entry(nd, struct symbol_name_rb_node, rb_node);
 
 		if (strfilter__compare(_filter, pos->sym.name))
diff --git a/tools/perf/util/scripting-engines/trace-event-perl.c b/tools/perf/util/scripting-engines/trace-event-perl.c
index 83fd2fd0ba16..039d0365ad41 100644
--- a/tools/perf/util/scripting-engines/trace-event-perl.c
+++ b/tools/perf/util/scripting-engines/trace-event-perl.c
@@ -315,12 +315,14 @@ static SV *perl_process_callchain(struct perf_sample *sample,
 
 		if (node->ms.map) {
 			struct map *map = node->ms.map;
+			struct dso *dso = map ? map__dso(map) : NULL;
 			const char *dsoname = "[unknown]";
-			if (map && map->dso) {
-				if (symbol_conf.show_kernel_path && map->dso->long_name)
-					dsoname = map->dso->long_name;
+
+			if (dso) {
+				if (symbol_conf.show_kernel_path && dso->long_name)
+					dsoname = dso->long_name;
 				else
-					dsoname = map->dso->name;
+					dsoname = dso->name;
 			}
 			if (!hv_stores(elem, "dso", newSVpv(dsoname,0))) {
 				hv_undef(elem);
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index e5cc18f6fcda..b8e5c6f61d80 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -390,12 +390,13 @@ static PyObject *get_field_numeric_entry(struct tep_event *event,
 static const char *get_dsoname(struct map *map)
 {
 	const char *dsoname = "[unknown]";
+	struct dso *dso = map ? map__dso(map) : NULL;
 
-	if (map && map->dso) {
-		if (symbol_conf.show_kernel_path && map->dso->long_name)
-			dsoname = map->dso->long_name;
+	if (dso) {
+		if (symbol_conf.show_kernel_path && dso->long_name)
+			dsoname = dso->long_name;
 		else
-			dsoname = map->dso->name;
+			dsoname = dso->name;
 	}
 
 	return dsoname;
@@ -780,9 +781,10 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
 	char sbuild_id[SBUILD_ID_SIZE];
 
 	if (al->map) {
-		pydict_set_item_string_decref(dict, dso_field,
-			_PyUnicode_FromString(al->map->dso->name));
-		build_id__sprintf(&al->map->dso->bid, sbuild_id);
+		struct dso *dso = map__dso(al->map);
+
+		pydict_set_item_string_decref(dict, dso_field, _PyUnicode_FromString(dso->name));
+		build_id__sprintf(&dso->bid, sbuild_id);
 		pydict_set_item_string_decref(dict, dso_bid_field,
 			_PyUnicode_FromString(sbuild_id));
 		pydict_set_item_string_decref(dict, dso_map_start,
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index e04d9bddba11..d7b6b734bf90 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -184,8 +184,8 @@ struct sort_entry sort_comm = {
 
 static int64_t _sort__dso_cmp(struct map *map_l, struct map *map_r)
 {
-	struct dso *dso_l = map_l ? map_l->dso : NULL;
-	struct dso *dso_r = map_r ? map_r->dso : NULL;
+	struct dso *dso_l = map_l ? map__dso(map_l) : NULL;
+	struct dso *dso_r = map_r ? map__dso(map_r) : NULL;
 	const char *dso_name_l, *dso_name_r;
 
 	if (!dso_l || !dso_r)
@@ -211,13 +211,13 @@ sort__dso_cmp(struct hist_entry *left, struct hist_entry *right)
 static int _hist_entry__dso_snprintf(struct map *map, char *bf,
 				     size_t size, unsigned int width)
 {
-	if (map && map->dso) {
-		const char *dso_name = verbose > 0 ? map->dso->long_name :
-			map->dso->short_name;
-		return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
-	}
+	const struct dso *dso = map ? map__dso(map) : NULL;
+	const char *dso_name = "[unknown]";
+
+	if (dso)
+		dso_name = verbose > 0 ? dso->long_name : dso->short_name;
 
-	return repsep_snprintf(bf, size, "%-*.*s", width, width, "[unknown]");
+	return repsep_snprintf(bf, size, "%-*.*s", width, width, dso_name);
 }
 
 static int hist_entry__dso_snprintf(struct hist_entry *he, char *bf,
@@ -233,7 +233,7 @@ static int hist_entry__dso_filter(struct hist_entry *he, int type, const void *a
 	if (type != HIST_FILTER__DSO)
 		return -1;
 
-	return dso && (!he->ms.map || he->ms.map->dso != dso);
+	return dso && (!he->ms.map || map__dso(he->ms.map) != dso);
 }
 
 struct sort_entry sort_dso = {
@@ -313,11 +313,11 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
 	size_t ret = 0;
 
 	if (verbose > 0) {
-		char o = map ? dso__symtab_origin(map->dso) : '!';
+		struct dso *dso = map ? map__dso(map) : NULL;
+		char o = dso ? dso__symtab_origin(dso) : '!';
 		u64 rip = ip;
 
-		if (map && map->dso && map->dso->kernel
-		    && map->dso->adjust_symbols)
+		if (dso && dso->kernel && dso->adjust_symbols)
 			rip = map->unmap_ip(map, ip);
 
 		ret += repsep_snprintf(bf, size, "%-#*llx %c ",
@@ -595,7 +595,7 @@ static char *hist_entry__get_srcfile(struct hist_entry *e)
 	if (!map)
 		return no_srcfile;
 
-	sf = __get_srcline(map->dso, map__rip_2objdump(map, e->ip),
+	sf = __get_srcline(map__dso(map), map__rip_2objdump(map, e->ip),
 			 e->ms.sym, false, true, true, e->ip);
 	if (!strcmp(sf, SRCLINE_UNKNOWN))
 		return no_srcfile;
@@ -941,7 +941,7 @@ static int hist_entry__dso_from_filter(struct hist_entry *he, int type,
 		return -1;
 
 	return dso && (!he->branch_info || !he->branch_info->from.ms.map ||
-		       he->branch_info->from.ms.map->dso != dso);
+		map__dso(he->branch_info->from.ms.map) != dso);
 }
 
 static int64_t
@@ -973,7 +973,7 @@ static int hist_entry__dso_to_filter(struct hist_entry *he, int type,
 		return -1;
 
 	return dso && (!he->branch_info || !he->branch_info->to.ms.map ||
-		       he->branch_info->to.ms.map->dso != dso);
+		map__dso(he->branch_info->to.ms.map) != dso);
 }
 
 static int64_t
@@ -1465,6 +1465,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 {
 	u64 l, r;
 	struct map *l_map, *r_map;
+	struct dso *l_dso, *r_dso;
 	int rc;
 
 	if (!left->mem_info)  return -1;
@@ -1484,7 +1485,9 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 	if (!l_map) return -1;
 	if (!r_map) return 1;
 
-	rc = dso__cmp_id(l_map->dso, r_map->dso);
+	l_dso = map__dso(l_map);
+	r_dso = map__dso(r_map);
+	rc = dso__cmp_id(l_dso, r_dso);
 	if (rc)
 		return rc;
 	/*
@@ -1496,9 +1499,8 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 	 */
 
 	if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
-	    (!(l_map->flags & MAP_SHARED)) &&
-	    !l_map->dso->id.maj && !l_map->dso->id.min &&
-	    !l_map->dso->id.ino && !l_map->dso->id.ino_generation) {
+	    (!(l_map->flags & MAP_SHARED)) && !l_dso->id.maj && !l_dso->id.min &&
+	    !l_dso->id.ino && !l_dso->id.ino_generation) {
 		/* userspace anonymous */
 
 		if (left->thread->pid_ > right->thread->pid_) return -1;
@@ -1526,6 +1528,7 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
 
 	if (he->mem_info) {
 		struct map *map = he->mem_info->daddr.ms.map;
+		struct dso *dso = map__dso(map);
 
 		addr = cl_address(he->mem_info->daddr.al_addr, chk_double_cl);
 		ms = &he->mem_info->daddr.ms;
@@ -1534,8 +1537,7 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
 		if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
 		     map && !(map->prot & PROT_EXEC) &&
 		    (map->flags & MAP_SHARED) &&
-		    (map->dso->id.maj || map->dso->id.min ||
-		     map->dso->id.ino || map->dso->id.ino_generation))
+		    (dso->id.maj || dso->id.min || dso->id.ino || dso->id.ino_generation))
 			level = 's';
 		else if (!map)
 			level = 'X';
@@ -2031,9 +2033,8 @@ sort__dso_size_cmp(struct hist_entry *left, struct hist_entry *right)
 static int _hist_entry__dso_size_snprintf(struct map *map, char *bf,
 					  size_t bf_size, unsigned int width)
 {
-	if (map && map->dso)
-		return repsep_snprintf(bf, bf_size, "%*d", width,
-				       map__size(map));
+	if (map && map__dso(map))
+		return repsep_snprintf(bf, bf_size, "%*d", width, map__size(map));
 
 	return repsep_snprintf(bf, bf_size, "%*s", width, "unknown");
 }
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index ccdafc3971ac..97085ad7fe9b 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1429,7 +1429,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		*curr_mapp = curr_map;
 		*curr_dsop = curr_dso;
 	} else
-		*curr_dsop = curr_map->dso;
+		*curr_dsop = map__dso(curr_map);
 
 	return 0;
 }
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index efd047bab373..13176ed5bd27 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -791,6 +791,7 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
 	*root = RB_ROOT_CACHED;
 
 	while (next) {
+		struct dso *curr_map_dso;
 		char *module;
 
 		pos = rb_entry(next, struct symbol, rb_node);
@@ -808,13 +809,13 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
 			symbol__delete(pos);
 			continue;
 		}
-
+		curr_map_dso = map__dso(curr_map);
 		pos->start -= curr_map->start - curr_map->pgoff;
 		if (pos->end > curr_map->end)
 			pos->end = curr_map->end;
 		if (pos->end)
 			pos->end -= curr_map->start - curr_map->pgoff;
-		symbols__insert(&curr_map->dso->symbols, pos);
+		symbols__insert(&curr_map_dso->symbols, pos);
 		++count;
 	}
 
@@ -856,12 +857,14 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 
 		module = strchr(pos->name, '\t');
 		if (module) {
+			struct dso *curr_map_dso;
+
 			if (!symbol_conf.use_modules)
 				goto discard_symbol;
 
 			*module++ = '\0';
-
-			if (strcmp(curr_map->dso->short_name, module)) {
+			curr_map_dso = map__dso(curr_map);
+			if (strcmp(curr_map_dso->short_name, module)) {
 				if (curr_map != initial_map &&
 				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 				    machine__is_default_guest(machine)) {
@@ -872,7 +875,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 					 * symbols are in its kmap. Mark it as
 					 * loaded.
 					 */
-					dso__set_loaded(curr_map->dso);
+					dso__set_loaded(curr_map_dso);
 				}
 
 				curr_map = maps__find_by_name(kmaps, module);
@@ -884,8 +887,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 					curr_map = initial_map;
 					goto discard_symbol;
 				}
-
-				if (curr_map->dso->loaded &&
+				curr_map_dso = map__dso(curr_map);
+				if (curr_map_dso->loaded &&
 				    !machine__is_default_guest(machine))
 					goto discard_symbol;
 			}
@@ -954,8 +957,10 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 		}
 add_symbol:
 		if (curr_map != initial_map) {
+			struct dso *curr_map_dso = map__dso(curr_map);
+
 			rb_erase_cached(&pos->rb_node, root);
-			symbols__insert(&curr_map->dso->symbols, pos);
+			symbols__insert(&curr_map_dso->symbols, pos);
 			++moved;
 		} else
 			++count;
@@ -969,7 +974,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 	if (curr_map != initial_map &&
 	    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 	    machine__is_default_guest(maps__machine(kmaps))) {
-		dso__set_loaded(curr_map->dso);
+		dso__set_loaded(map__dso(curr_map));
 	}
 
 	return count + moved;
@@ -1143,13 +1148,14 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
 	maps__for_each_entry(kmaps, old_node) {
 		struct map *old_map = old_node->map;
 		struct module_info *mi;
+		struct dso *dso;
 
 		if (!__map__is_kmodule(old_map)) {
 			continue;
 		}
-
+		dso = map__dso(old_map);
 		/* Module must be in memory at the same address */
-		mi = find_module(old_map->dso->short_name, &modules);
+		mi = find_module(dso->short_name, &modules);
 		if (!mi || mi->start != old_map->start) {
 			err = -EINVAL;
 			goto out;
@@ -2045,14 +2051,17 @@ int dso__load(struct dso *dso, struct map *map)
 
 static int map__strcmp(const void *a, const void *b)
 {
-	const struct map *ma = *(const struct map **)a, *mb = *(const struct map **)b;
-	return strcmp(ma->dso->short_name, mb->dso->short_name);
+	const struct dso *dso_a = map__dso(*(const struct map **)a);
+	const struct dso *dso_b = map__dso(*(const struct map **)b);
+
+	return strcmp(dso_a->short_name, dso_b->short_name);
 }
 
 static int map__strcmp_name(const void *name, const void *b)
 {
-	const struct map *map = *(const struct map **)b;
-	return strcmp(name, map->dso->short_name);
+	const struct dso *dso = map__dso(*(const struct map **)b);
+
+	return strcmp(name, dso->short_name);
 }
 
 void __maps__sort_by_name(struct maps *maps)
@@ -2109,10 +2118,13 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 
 	down_read(maps__lock(maps));
 
-	if (maps->last_search_by_name &&
-	    strcmp(maps->last_search_by_name->dso->short_name, name) == 0) {
-		map = maps->last_search_by_name;
-		goto out_unlock;
+	if (maps->last_search_by_name) {
+		const struct dso *dso = map__dso(maps->last_search_by_name);
+
+		if (strcmp(dso->short_name, name) == 0) {
+			map = maps->last_search_by_name;
+			goto out_unlock;
+		}
 	}
 	/*
 	 * If we have maps->maps_by_name, then the name isn't in the rbtree,
@@ -2125,8 +2137,11 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 
 	/* Fallback to traversing the rbtree... */
 	maps__for_each_entry(maps, rb_node) {
+		struct dso *dso;
+
 		map = rb_node->map;
-		if (strcmp(map->dso->short_name, name) == 0) {
+		dso = map__dso(map);
+		if (strcmp(dso->short_name, name) == 0) {
 			maps->last_search_by_name = map;
 			goto out_unlock;
 		}
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index 57b95c1d7e39..fbd1a882b013 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -693,12 +693,14 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 
 	maps__for_each_entry(maps, pos) {
 		struct map *map = pos->map;
+		struct dso *dso;
 
 		if (!__map__is_kmodule(map))
 			continue;
 
+		dso = map__dso(map);
 		if (symbol_conf.buildid_mmap2) {
-			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(dso->long_name_len + 1, sizeof(u64));
 			event->mmap2.header.type = PERF_RECORD_MMAP2;
 			event->mmap2.header.size = (sizeof(event->mmap2) -
 						(sizeof(event->mmap2.filename) - size));
@@ -708,12 +710,11 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 			event->mmap2.len   = map->end - map->start;
 			event->mmap2.pid   = machine->pid;
 
-			memcpy(event->mmap2.filename, map->dso->long_name,
-			       map->dso->long_name_len + 1);
+			memcpy(event->mmap2.filename, dso->long_name, dso->long_name_len + 1);
 
 			perf_record_mmap2__read_build_id(&event->mmap2, machine, false);
 		} else {
-			size = PERF_ALIGN(map->dso->long_name_len + 1, sizeof(u64));
+			size = PERF_ALIGN(dso->long_name_len + 1, sizeof(u64));
 			event->mmap.header.type = PERF_RECORD_MMAP;
 			event->mmap.header.size = (sizeof(event->mmap) -
 						(sizeof(event->mmap.filename) - size));
@@ -723,8 +724,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 			event->mmap.len   = map->end - map->start;
 			event->mmap.pid   = machine->pid;
 
-			memcpy(event->mmap.filename, map->dso->long_name,
-			       map->dso->long_name_len + 1);
+			memcpy(event->mmap.filename, dso->long_name, dso->long_name_len + 1);
 		}
 
 		if (perf_tool__process_synth_event(tool, event, machine, process) != 0) {
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 292585a52281..42fdc80a6f2e 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -448,23 +448,22 @@ struct thread *thread__main_thread(struct machine *machine, struct thread *threa
 int thread__memcpy(struct thread *thread, struct machine *machine,
 		   void *buf, u64 ip, int len, bool *is64bit)
 {
-       u8 cpumode = PERF_RECORD_MISC_USER;
-       struct addr_location al;
-       long offset;
+	u8 cpumode = PERF_RECORD_MISC_USER;
+	struct addr_location al;
+	long offset;
 
-       if (machine__kernel_ip(machine, ip))
-               cpumode = PERF_RECORD_MISC_KERNEL;
+	if (machine__kernel_ip(machine, ip))
+		cpumode = PERF_RECORD_MISC_KERNEL;
 
-       if (!thread__find_map(thread, cpumode, ip, &al) || !al.map->dso ||
-	   al.map->dso->data.status == DSO_DATA_STATUS_ERROR ||
-	   map__load(al.map) < 0)
-               return -1;
+	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map) ||
+		map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR || map__load(al.map) < 0)
+		return -1;
 
-       offset = al.map->map_ip(al.map, ip);
-       if (is64bit)
-               *is64bit = al.map->dso->is_64_bit;
+	offset = al.map->map_ip(al.map, ip);
+	if (is64bit)
+		*is64bit = map__dso(al.map)->is_64_bit;
 
-       return dso__data_read_offset(al.map->dso, machine, offset, buf, len);
+	return dso__data_read_offset(map__dso(al.map), machine, offset, buf, len);
 }
 
 void thread__free_stitch_list(struct thread *thread)
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index 94aa40f6e348..c8cba9d4bfd9 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -52,7 +52,7 @@ static int __report_module(struct addr_location *al, u64 ip,
 	thread__find_symbol(ui->thread, PERF_RECORD_MISC_USER, ip, al);
 
 	if (al->map)
-		dso = al->map->dso;
+		dso = map__dso(al->map);
 
 	if (!dso)
 		return 0;
@@ -134,17 +134,17 @@ static int access_dso_mem(struct unwind_info *ui, Dwarf_Addr addr,
 {
 	struct addr_location al;
 	ssize_t size;
+	struct dso *dso;
 
 	if (!thread__find_map(ui->thread, PERF_RECORD_MISC_USER, addr, &al)) {
 		pr_debug("unwind: no map for %lx\n", (unsigned long)addr);
 		return -1;
 	}
-
-	if (!al.map->dso)
+	dso = map__dso(al.map);
+	if (!dso)
 		return -1;
 
-	size = dso__data_read_addr(al.map->dso, al.map, ui->machine,
-				   addr, (u8 *) data, sizeof(*data));
+	size = dso__data_read_addr(dso, al.map, ui->machine, addr, (u8 *) data, sizeof(*data));
 
 	return !(size == sizeof(*data));
 }
diff --git a/tools/perf/util/vdso.c b/tools/perf/util/vdso.c
index 835c39efb80d..ec777ee11493 100644
--- a/tools/perf/util/vdso.c
+++ b/tools/perf/util/vdso.c
@@ -147,7 +147,7 @@ static enum dso_type machine__thread_dso_type(struct machine *machine,
 	struct map_rb_node *rb_node;
 
 	maps__for_each_entry(thread->maps, rb_node) {
-		struct dso *dso = rb_node->map->dso;
+		struct dso *dso = map__dso(rb_node->map);
 
 		if (!dso || dso->long_name[0] != '/')
 			continue;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 05/17] perf map: Add accessor for start and end
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (3 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 04/17] perf map: Add accessor for dso Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 06/17] perf map: Rename map_ip and unmap_ip Ian Rogers
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Later changes will add reference count checking for struct map; start
and end are frequently accessed variables. Add accessors so that the
reference count check is only necessary in one place.
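
As the tools/perf/util/map.h hunk below shows, the accessors are
simple inline getters and map__size() is rewritten in terms of them;
a condensed sketch:

	static inline u64 map__start(const struct map *map)
	{
		return map->start;
	}

	static inline u64 map__end(const struct map *map)
	{
		return map->end;
	}

	/* size is derived from the two accessors */
	static inline size_t map__size(const struct map *map)
	{
		return map__end(map) - map__start(map);
	}

Callers then use map__start(map), map__end(map) or map__size(map)
instead of dereferencing map->start and map->end directly.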

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/x86/tests/dwarf-unwind.c      |  2 +-
 tools/perf/arch/x86/util/event.c              |  4 +-
 tools/perf/builtin-buildid-list.c             |  2 +-
 tools/perf/builtin-report.c                   |  2 +-
 tools/perf/builtin-script.c                   |  2 +-
 tools/perf/builtin-top.c                      |  2 +-
 tools/perf/tests/code-reading.c               |  8 +--
 tools/perf/tests/maps.c                       |  4 +-
 tools/perf/tests/mmap-thread-lookup.c         |  2 +-
 tools/perf/tests/vmlinux-kallsyms.c           | 14 +++---
 tools/perf/util/annotate.c                    |  4 +-
 tools/perf/util/dlfilter.c                    |  8 +--
 tools/perf/util/intel-pt.c                    |  8 +--
 tools/perf/util/machine.c                     | 14 +++---
 tools/perf/util/map.c                         |  8 +--
 tools/perf/util/map.h                         | 12 ++++-
 tools/perf/util/maps.c                        | 30 ++++++------
 tools/perf/util/probe-event.c                 |  4 +-
 .../scripting-engines/trace-event-python.c    |  6 +--
 tools/perf/util/symbol-elf.c                  |  8 +--
 tools/perf/util/symbol.c                      | 49 ++++++++++---------
 tools/perf/util/symbol_fprintf.c              |  2 +-
 tools/perf/util/synthetic-events.c            | 16 +++---
 tools/perf/util/unwind-libdw.c                |  6 +--
 24 files changed, 114 insertions(+), 103 deletions(-)

diff --git a/tools/perf/arch/x86/tests/dwarf-unwind.c b/tools/perf/arch/x86/tests/dwarf-unwind.c
index a54dea7c112f..497593be80f2 100644
--- a/tools/perf/arch/x86/tests/dwarf-unwind.c
+++ b/tools/perf/arch/x86/tests/dwarf-unwind.c
@@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
 		return -1;
 	}
 
-	stack_size = map->end - sp;
+	stack_size = map__end(map) - sp;
 	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
 
 	memcpy(buf, (void *) sp, stack_size);
diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index 17bf60babfbd..3b2475707756 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -59,8 +59,8 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 
 		event->mmap.header.size = size;
 
-		event->mmap.start = map->start;
-		event->mmap.len   = map->end - map->start;
+		event->mmap.start = map__start(map);
+		event->mmap.len   = map__size(map);
 		event->mmap.pgoff = map->pgoff;
 		event->mmap.pid   = machine->pid;
 
diff --git a/tools/perf/builtin-buildid-list.c b/tools/perf/builtin-buildid-list.c
index cad9ed44ce7c..eea28cbcc0b7 100644
--- a/tools/perf/builtin-buildid-list.c
+++ b/tools/perf/builtin-buildid-list.c
@@ -30,7 +30,7 @@ static int buildid__map_cb(struct map *map, void *arg __maybe_unused)
 	memset(bid_buf, 0, sizeof(bid_buf));
 	if (dso->has_build_id)
 		build_id__sprintf(&dso->bid, bid_buf);
-	printf("%s %16" PRIx64 " %16" PRIx64, bid_buf, map->start, map->end);
+	printf("%s %16" PRIx64 " %16" PRIx64, bid_buf, map__start(map), map__end(map));
 	if (dso->long_name != NULL) {
 		printf(" %s", dso->long_name);
 	} else if (dso->short_name != NULL) {
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 02ca87c13e91..4ce1aef3e253 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -847,7 +847,7 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 		const struct dso *dso = map__dso(map);
 
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
-				   indent, "", map->start, map->end,
+				   indent, "", map__start(map), map__end(map),
 				   map->prot & PROT_READ ? 'r' : '-',
 				   map->prot & PROT_WRITE ? 'w' : '-',
 				   map->prot & PROT_EXEC ? 'x' : '-',
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index 9c7eb900ff7c..eb49689d0f00 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -1209,7 +1209,7 @@ static int ip__fprintf_sym(uint64_t addr, struct thread *thread,
 	if (al.addr < al.sym->end)
 		off = al.addr - al.sym->start;
 	else
-		off = al.addr - al.map->start - al.sym->start;
+		off = al.addr - map__start(al.map) - al.sym->start;
 	printed += fprintf(fp, "\t%s", al.sym->name);
 	if (off)
 		printed += fprintf(fp, "%+d", off);
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index 5010eee8fbae..b45565f718f4 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -183,7 +183,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 		    "Not all samples will be on the annotation output.\n\n"
 		    "Please report to linux-kernel@vger.kernel.org\n",
 		    ip, dso->long_name, dso__symtab_origin(dso),
-		    map->start, map->end, sym->start, sym->end,
+		    map__start(map), map__end(map), sym->start, sym->end,
 		    sym->binding == STB_GLOBAL ? 'g' :
 		    sym->binding == STB_LOCAL  ? 'l' : 'w', sym->name,
 		    err ? "[unknown]" : uts.machine,
diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index 936c61546e64..1545fcaa95c6 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -265,8 +265,8 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		len = BUFSZ;
 
 	/* Do not go off the map */
-	if (addr + len > al.map->end)
-		len = al.map->end - addr;
+	if (addr + len > map__end(al.map))
+		len = map__end(al.map) - addr;
 
 	/* Read the object code using perf */
 	ret_len = dso__data_read_offset(dso, maps__machine(thread->maps),
@@ -291,7 +291,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 		size_t d;
 
 		for (d = 0; d < state->done_cnt; d++) {
-			if (state->done[d] == al.map->start) {
+			if (state->done[d] == map__start(al.map)) {
 				pr_debug("kcore map tested already");
 				pr_debug(" - skipping\n");
 				goto out;
@@ -301,7 +301,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 			pr_debug("Too many kcore maps - skipping\n");
 			goto out;
 		}
-		state->done[state->done_cnt++] = al.map->start;
+		state->done[state->done_cnt++] = map__start(al.map);
 	}
 
 	objdump_name = dso->long_name;
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index ae7028fbf79e..fd0c464fcf95 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -24,8 +24,8 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 		if (i > 0)
 			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
 
-		TEST_ASSERT_VAL("wrong map start",  map->start == merged[i].start);
-		TEST_ASSERT_VAL("wrong map end",    map->end == merged[i].end);
+		TEST_ASSERT_VAL("wrong map start",  map__start(map) == merged[i].start);
+		TEST_ASSERT_VAL("wrong map end",    map__end(map) == merged[i].end);
 		TEST_ASSERT_VAL("wrong map name",  !strcmp(map__dso(map)->name, merged[i].name));
 		TEST_ASSERT_VAL("wrong map refcnt", refcount_read(&map->refcnt) == 1);
 
diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
index a4301fc7b770..5cc4644e353d 100644
--- a/tools/perf/tests/mmap-thread-lookup.c
+++ b/tools/perf/tests/mmap-thread-lookup.c
@@ -202,7 +202,7 @@ static int mmap_events(synth_cb synth)
 			break;
 		}
 
-		pr_debug("map %p, addr %" PRIx64 "\n", al.map, al.map->start);
+		pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
 	}
 
 	machine__delete_threads(machine);
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index c614c2db7e89..0a75623172c2 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -267,7 +267,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 
 				continue;
 			}
-		} else if (mem_start == kallsyms.vmlinux_map->end) {
+		} else if (mem_start == map__end(kallsyms.vmlinux_map)) {
 			/*
 			 * Ignore aliases to _etext, i.e. to the end of the kernel text area,
 			 * such as __indirect_thunk_end.
@@ -319,14 +319,14 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	maps__for_each_entry(maps, rb_node) {
 		struct map *pair, *map = rb_node->map;
 
-		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map->start);
-		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map->end);
+		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map__start(map));
+		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map__end(map));
 
 		pair = maps__find(kallsyms.kmaps, mem_start);
 		if (pair == NULL || pair->priv)
 			continue;
 
-		if (pair->start == mem_start) {
+		if (map__start(pair) == mem_start) {
 			struct dso *dso = map__dso(map);
 
 			if (!header_printed) {
@@ -335,10 +335,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 			}
 
 			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
-				map->start, map->end, map->pgoff, dso->name);
-			if (mem_end != pair->end)
+				map__start(map), map__end(map), map->pgoff, dso->name);
+			if (mem_end != map__end(pair))
 				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
-					pair->start, pair->end, pair->pgoff);
+					map__start(pair), map__end(pair), pair->pgoff);
 			pr_info(" %s\n", dso->name);
 			pair->priv = 1;
 		}
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 9494b34e84fc..f60f5efb2ad9 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1016,13 +1016,13 @@ int addr_map_symbol__account_cycles(struct addr_map_symbol *ams,
 	if (start &&
 		(start->ms.sym == ams->ms.sym ||
 		 (ams->ms.sym &&
-		   start->addr == ams->ms.sym->start + ams->ms.map->start)))
+		  start->addr == ams->ms.sym->start + map__start(ams->ms.map))))
 		saddr = start->al_addr;
 	if (saddr == 0)
 		pr_debug2("BB with bad start: addr %"PRIx64" start %"PRIx64" sym %"PRIx64" saddr %"PRIx64"\n",
 			ams->addr,
 			start ? start->addr : 0,
-			ams->ms.sym ? ams->ms.sym->start + ams->ms.map->start : 0,
+			ams->ms.sym ? ams->ms.sym->start + map__start(ams->ms.map) : 0,
 			saddr);
 	err = symbol__account_cycles(ams->al_addr, saddr, ams->ms.sym, cycles);
 	if (err)
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index 8a7ffe0d805a..fe401fa4be02 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -51,7 +51,7 @@ static void al_to_d_al(struct addr_location *al, struct perf_dlfilter_al *d_al)
 		if (al->addr < sym->end)
 			d_al->symoff = al->addr - sym->start;
 		else
-			d_al->symoff = al->addr - al->map->start - sym->start;
+			d_al->symoff = al->addr - map__start(al->map) - sym->start;
 		d_al->sym_binding = sym->binding;
 	} else {
 		d_al->sym = NULL;
@@ -268,7 +268,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 
 	map = al->map;
 
-	if (map && ip >= map->start && ip < map->end &&
+	if (map && ip >= map__start(map) && ip < map__end(map) &&
 	    machine__kernel_ip(d->machine, ip) == machine__kernel_ip(d->machine, d->sample->ip))
 		goto have_map;
 
@@ -279,8 +279,8 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 	map = a.map;
 have_map:
 	offset = map->map_ip(map, ip);
-	if (ip + len >= map->end)
-		len = map->end - ip;
+	if (ip + len >= map__end(map))
+		len = map__end(map) - ip;
 	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
 }
 
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 8cec88e09792..a2e62daa708e 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -887,7 +887,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 				goto out_no_cache;
 			}
 
-			if (*ip >= al.map->end)
+			if (*ip >= map__end(al.map))
 				break;
 
 			offset += intel_pt_insn->length;
@@ -2750,7 +2750,7 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 		if (sym->binding == STB_GLOBAL &&
 		    !strcmp(sym->name, "__switch_to")) {
 			ip = map->unmap_ip(map, sym->start);
-			if (ip >= map->start && ip < map->end) {
+			if (ip >= map__start(map) && ip < map__end(map)) {
 				switch_ip = ip;
 				break;
 			}
@@ -2768,7 +2768,7 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (!strcmp(sym->name, ptss)) {
 			ip = map->unmap_ip(map, sym->start);
-			if (ip >= map->start && ip < map->end) {
+			if (ip >= map__start(map) && ip < map__end(map)) {
 				*ptss_ip = ip;
 				break;
 			}
@@ -3356,7 +3356,7 @@ static int intel_pt_process_aux_output_hw_id(struct intel_pt *pt,
 static int intel_pt_find_map(struct thread *thread, u8 cpumode, u64 addr,
 			     struct addr_location *al)
 {
-	if (!al->map || addr < al->map->start || addr >= al->map->end) {
+	if (!al->map || addr < map__start(al->map) || addr >= map__end(al->map)) {
 		if (!thread__find_map(thread, cpumode, addr, al))
 			return -1;
 	}
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 6e32344e66dc..08fb3ab0c205 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -902,7 +902,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		}
 
 		map->start = event->ksymbol.addr;
-		map->end = map->start + event->ksymbol.len;
+		map->end = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
 		map__put(map);
 		if (err)
@@ -918,7 +918,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		dso = map__dso(map);
 	}
 
-	sym = symbol__new(map->map_ip(map, map->start),
+	sym = symbol__new(map->map_ip(map, map__start(map)),
 			  event->ksymbol.len,
 			  0, 0, event->ksymbol.name);
 	if (!sym)
@@ -943,7 +943,7 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	else {
 		struct dso *dso = map__dso(map);
 
-		sym = dso__find_symbol(dso, map->map_ip(map, map->start));
+		sym = dso__find_symbol(dso, map->map_ip(map, map__start(map)));
 		if (sym)
 			dso__delete_symbol(dso, sym);
 	}
@@ -1216,7 +1216,7 @@ int machine__create_extra_kernel_map(struct machine *machine,
 
 	if (!err) {
 		pr_debug2("Added extra kernel map %s %" PRIx64 "-%" PRIx64 "\n",
-			kmap->name, map->start, map->end);
+			kmap->name, map__start(map), map__end(map));
 	}
 
 	map__put(map);
@@ -1721,7 +1721,7 @@ int machine__create_kernel_maps(struct machine *machine)
 		struct map_rb_node *next = map_rb_node__next(rb_node);
 
 		if (next)
-			machine__set_kernel_mmap(machine, start, next->map->start);
+			machine__set_kernel_mmap(machine, start, map__start(next->map));
 	}
 
 out_put:
@@ -1794,7 +1794,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (map == NULL)
 			goto out_problem;
 
-		map->end = map->start + xm->end - xm->start;
+		map->end = map__start(map) + xm->end - xm->start;
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(map__dso(map), bid);
@@ -3288,7 +3288,7 @@ int machine__get_kernel_start(struct machine *machine)
 		 * kernel_start = 1ULL << 63 for x86_64.
 		 */
 		if (!err && !machine__is(machine, "x86_64"))
-			machine->kernel_start = map->start;
+			machine->kernel_start = map__start(map);
 	}
 	return err;
 }
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 90062af6675a..416fc449bde8 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -421,7 +421,7 @@ size_t map__fprintf(struct map *map, FILE *fp)
 	const struct dso *dso = map__dso(map);
 
 	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
-		       map->start, map->end, map->pgoff, dso->name);
+		       map__start(map), map__end(map), map->pgoff, dso->name);
 }
 
 size_t map__fprintf_dsoname(struct map *map, FILE *fp)
@@ -558,7 +558,7 @@ bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 {
 	u64 ip = map->unmap_ip(map, sym->start);
 
-	return ip >= map->start && ip < map->end;
+	return ip >= map__start(map) && ip < map__end(map);
 }
 
 struct kmap *__map__kmap(struct map *map)
@@ -592,12 +592,12 @@ struct maps *map__kmaps(struct map *map)
 
 u64 map__map_ip(const struct map *map, u64 ip)
 {
-	return ip - map->start + map->pgoff;
+	return ip - map__start(map) + map->pgoff;
 }
 
 u64 map__unmap_ip(const struct map *map, u64 ip)
 {
-	return ip + map->start - map->pgoff;
+	return ip + map__start(map) - map->pgoff;
 }
 
 u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 36c5add0144d..16646b94fa3a 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -52,9 +52,19 @@ static inline struct dso *map__dso(const struct map *map)
 	return map->dso;
 }
 
+static inline u64 map__start(const struct map *map)
+{
+	return map->start;
+}
+
+static inline u64 map__end(const struct map *map)
+{
+	return map->end;
+}
+
 static inline size_t map__size(const struct map *map)
 {
-	return map->end - map->start;
+	return map__end(map) - map__start(map);
 }
 
 /* rip/ip <-> addr suitable for passing to `objdump --start-address=` */
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 09ec6bbafcbc..1fd57db72226 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -34,7 +34,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 {
 	struct rb_node **p = &maps__entries(maps)->rb_node;
 	struct rb_node *parent = NULL;
-	const u64 ip = map->start;
+	const u64 ip = map__start(map);
 	struct map_rb_node *m, *new_rb_node;
 
 	new_rb_node = malloc(sizeof(*new_rb_node));
@@ -47,7 +47,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 	while (*p != NULL) {
 		parent = *p;
 		m = rb_entry(parent, struct map_rb_node, rb_node);
-		if (ip < m->map->start)
+		if (ip < map__start(m->map))
 			p = &(*p)->rb_left;
 		else
 			p = &(*p)->rb_right;
@@ -229,7 +229,7 @@ struct symbol *maps__find_symbol_by_name(struct maps *maps, const char *name, st
 
 int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 {
-	if (ams->addr < ams->ms.map->start || ams->addr >= ams->ms.map->end) {
+	if (ams->addr < map__start(ams->ms.map) || ams->addr >= map__end(ams->ms.map)) {
 		if (maps == NULL)
 			return -1;
 		ams->ms.map = maps__find(maps, ams->addr);
@@ -283,9 +283,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	while (next) {
 		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 
-		if (pos->map->end > map->start) {
+		if (map__end(pos->map) > map__start(map)) {
 			first = next;
-			if (pos->map->start <= map->start)
+			if (map__start(pos->map) <= map__start(map))
 				break;
 			next = next->rb_left;
 		} else
@@ -301,7 +301,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Stop if current map starts after map->end.
 		 * Maps are ordered by start: next will not overlap for sure.
 		 */
-		if (pos->map->start >= map->end)
+		if (map__start(pos->map) >= map__end(map))
 			break;
 
 		if (verbose >= 2) {
@@ -321,7 +321,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 		 * Now check if we need to create new maps for areas not
 		 * overlapped by the new map:
 		 */
-		if (map->start > pos->map->start) {
+		if (map__start(map) > map__start(pos->map)) {
 			struct map *before = map__clone(pos->map);
 
 			if (before == NULL) {
@@ -329,7 +329,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			before->end = map->start;
+			before->end = map__start(map);
 			err = __maps__insert(maps, before);
 			if (err)
 				goto put_map;
@@ -339,7 +339,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			map__put(before);
 		}
 
-		if (map->end < pos->map->end) {
+		if (map->end < map__end(pos->map)) {
 			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
@@ -347,10 +347,10 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			after->start = map->end;
-			after->pgoff += map->end - pos->map->start;
-			assert(pos->map->map_ip(pos->map, map->end) ==
-				after->map_ip(after, map->end));
+			after->start = map__end(map);
+			after->pgoff += map__end(map) - map__start(pos->map);
+			assert(pos->map->map_ip(pos->map, map__end(map)) ==
+				after->map_ip(after, map__end(map)));
 			err = __maps__insert(maps, after);
 			if (err)
 				goto put_map;
@@ -430,9 +430,9 @@ struct map *maps__find(struct maps *maps, u64 ip)
 	p = maps__entries(maps)->rb_node;
 	while (p != NULL) {
 		m = rb_entry(p, struct map_rb_node, rb_node);
-		if (ip < m->map->start)
+		if (ip < map__start(m->map))
 			p = p->rb_left;
-		else if (ip >= m->map->end)
+		else if (ip >= map__end(m->map))
 			p = p->rb_right;
 		else
 			goto out;
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index b26670a26005..4d9dbeeb6014 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -143,7 +143,7 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 			return -ENOENT;
 		*addr = map->unmap_ip(map, sym->start) -
 			((reloc) ? 0 : map->reloc) -
-			((reladdr) ? map->start : 0);
+			((reladdr) ? map__start(map) : 0);
 	}
 	return 0;
 }
@@ -257,7 +257,7 @@ static bool kprobe_warn_out_range(const char *symbol, u64 address)
 
 	map = kernel_get_module_map(NULL);
 	if (map) {
-		ret = address <= map->start || map->end < address;
+		ret = address <= map__start(map) || map__end(map) < address;
 		if (ret)
 			pr_warning("%s is out of .text, skip it.\n", symbol);
 		map__put(map);
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index b8e5c6f61d80..cbf09eaf3734 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -409,7 +409,7 @@ static unsigned long get_offset(struct symbol *sym, struct addr_location *al)
 	if (al->addr < sym->end)
 		offset = al->addr - sym->start;
 	else
-		offset = al->addr - al->map->start - sym->start;
+		offset = al->addr - map__start(al->map) - sym->start;
 
 	return offset;
 }
@@ -788,9 +788,9 @@ static void set_sym_in_dict(PyObject *dict, struct addr_location *al,
 		pydict_set_item_string_decref(dict, dso_bid_field,
 			_PyUnicode_FromString(sbuild_id));
 		pydict_set_item_string_decref(dict, dso_map_start,
-			PyLong_FromUnsignedLong(al->map->start));
+			PyLong_FromUnsignedLong(map__start(al->map)));
 		pydict_set_item_string_decref(dict, dso_map_end,
-			PyLong_FromUnsignedLong(al->map->end));
+			PyLong_FromUnsignedLong(map__end(al->map)));
 	}
 	if (al->sym) {
 		pydict_set_item_string_decref(dict, sym_field,
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 97085ad7fe9b..0542985ecaf6 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1349,7 +1349,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		if (*remap_kernel && dso->kernel && !kmodule) {
 			*remap_kernel = false;
 			map->start = shdr->sh_addr + ref_reloc(kmap);
-			map->end = map->start + shdr->sh_size;
+			map->end = map__start(map) + shdr->sh_size;
 			map->pgoff = shdr->sh_offset;
 			map->map_ip = map__map_ip;
 			map->unmap_ip = map__unmap_ip;
@@ -1391,7 +1391,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		u64 start = sym->st_value;
 
 		if (kmodule)
-			start += map->start + shdr->sh_offset;
+			start += map__start(map) + shdr->sh_offset;
 
 		curr_dso = dso__new(dso_name);
 		if (curr_dso == NULL)
@@ -1409,7 +1409,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 
 		if (adjust_kernel_syms) {
 			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
-			curr_map->end	 = curr_map->start + shdr->sh_size;
+			curr_map->end	 = map__start(curr_map) + shdr->sh_size;
 			curr_map->pgoff	 = shdr->sh_offset;
 		} else {
 			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
@@ -1530,7 +1530,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 	 * attempted to prelink vdso to its virtual address.
 	 */
 	if (dso__is_vdso(dso))
-		map->reloc = map->start - dso->text_offset;
+		map->reloc = map__start(map) - dso->text_offset;
 
 	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
 	/*
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 13176ed5bd27..c76582dbe7ff 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -278,8 +278,8 @@ void maps__fixup_end(struct maps *maps)
 	down_write(maps__lock(maps));
 
 	maps__for_each_entry(maps, curr) {
-		if (prev != NULL && !prev->map->end)
-			prev->map->end = curr->map->start;
+		if (prev != NULL && !map__end(prev->map))
+			prev->map->end = map__start(curr->map);
 
 		prev = curr;
 	}
@@ -288,7 +288,7 @@ void maps__fixup_end(struct maps *maps)
 	 * We still haven't the actual symbols, so guess the
 	 * last map final address.
 	 */
-	if (curr && !curr->map->end)
+	if (curr && !map__end(curr->map))
 		curr->map->end = ~0ULL;
 
 	up_write(maps__lock(maps));
@@ -810,11 +810,11 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
 			continue;
 		}
 		curr_map_dso = map__dso(curr_map);
-		pos->start -= curr_map->start - curr_map->pgoff;
-		if (pos->end > curr_map->end)
-			pos->end = curr_map->end;
+		pos->start -= map__start(curr_map) - curr_map->pgoff;
+		if (pos->end > map__end(curr_map))
+			pos->end = map__end(curr_map);
 		if (pos->end)
-			pos->end -= curr_map->start - curr_map->pgoff;
+			pos->end -= map__start(curr_map) - curr_map->pgoff;
 		symbols__insert(&curr_map_dso->symbols, pos);
 		++count;
 	}
@@ -1156,7 +1156,7 @@ static int do_validate_kcore_modules(const char *filename, struct maps *kmaps)
 		dso = map__dso(old_map);
 		/* Module must be in memory at the same address */
 		mi = find_module(dso->short_name, &modules);
-		if (!mi || mi->start != old_map->start) {
+		if (!mi || mi->start != map__start(old_map)) {
 			err = -EINVAL;
 			goto out;
 		}
@@ -1250,7 +1250,7 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 		return -ENOMEM;
 	}
 
-	list_node->map->end = list_node->map->start + len;
+	list_node->map->end = map__start(list_node->map) + len;
 	list_node->map->pgoff = pgoff;
 
 	list_add(&list_node->node, &md->maps);
@@ -1272,21 +1272,21 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 		struct map *old_map = rb_node->map;
 
 		/* no overload with this one */
-		if (new_map->end < old_map->start ||
-		    new_map->start >= old_map->end)
+		if (map__end(new_map) < map__start(old_map) ||
+		    map__start(new_map) >= map__end(old_map))
 			continue;
 
-		if (new_map->start < old_map->start) {
+		if (map__start(new_map) < map__start(old_map)) {
 			/*
 			 * |new......
 			 *       |old....
 			 */
-			if (new_map->end < old_map->end) {
+			if (map__end(new_map) < map__end(old_map)) {
 				/*
 				 * |new......|     -> |new..|
 				 *       |old....| ->       |old....|
 				 */
-				new_map->end = old_map->start;
+				new_map->end = map__start(old_map);
 			} else {
 				/*
 				 * |new.............| -> |new..|       |new..|
@@ -1306,17 +1306,17 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 					goto out;
 				}
 
-				m->map->end = old_map->start;
+				m->map->end = map__start(old_map);
 				list_add_tail(&m->node, &merged);
-				new_map->pgoff += old_map->end - new_map->start;
-				new_map->start = old_map->end;
+				new_map->pgoff += map__end(old_map) - map__start(new_map);
+				new_map->start = map__end(old_map);
 			}
 		} else {
 			/*
 			 *      |new......
 			 * |old....
 			 */
-			if (new_map->end < old_map->end) {
+			if (map__end(new_map) < map__end(old_map)) {
 				/*
 				 *      |new..|   -> x
 				 * |old.........| -> |old.........|
@@ -1329,8 +1329,8 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 *      |new......| ->         |new...|
 				 * |old....|        -> |old....|
 				 */
-				new_map->pgoff += old_map->end - new_map->start;
-				new_map->start = old_map->end;
+				new_map->pgoff += map__end(old_map) - map__start(new_map);
+				new_map->start = map__end(old_map);
 			}
 		}
 	}
@@ -1427,9 +1427,10 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		struct map_list_node *new_node;
 
 		list_for_each_entry(new_node, &md.maps, node) {
-			u64 new_size = new_node->map->end - new_node->map->start;
+			u64 new_size = map__size(new_node->map);
 
-			if (!(stext >= new_node->map->start && stext < new_node->map->end))
+			if (!(stext >= map__start(new_node->map) &&
+			      stext < map__end(new_node->map)))
 				continue;
 
 			/*
@@ -1455,8 +1456,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		new_node = list_entry(md.maps.next, struct map_list_node, node);
 		list_del_init(&new_node->node);
 		if (new_node->map == replacement_map) {
-			map->start	= new_node->map->start;
-			map->end	= new_node->map->end;
+			map->start	= map__start(new_node->map);
+			map->end	= map__end(new_node->map);
 			map->pgoff	= new_node->map->pgoff;
 			map->map_ip	= new_node->map->map_ip;
 			map->unmap_ip	= new_node->map->unmap_ip;
diff --git a/tools/perf/util/symbol_fprintf.c b/tools/perf/util/symbol_fprintf.c
index 2664fb65e47a..d9e5ad040b6a 100644
--- a/tools/perf/util/symbol_fprintf.c
+++ b/tools/perf/util/symbol_fprintf.c
@@ -30,7 +30,7 @@ size_t __symbol__fprintf_symname_offs(const struct symbol *sym,
 			if (al->addr < sym->end)
 				offset = al->addr - sym->start;
 			else
-				offset = al->addr - al->map->start - sym->start;
+				offset = al->addr - map__start(al->map) - sym->start;
 			length += fprintf(fp, "+0x%lx", offset);
 		}
 		return length;
diff --git a/tools/perf/util/synthetic-events.c b/tools/perf/util/synthetic-events.c
index fbd1a882b013..b2e4afa5efa1 100644
--- a/tools/perf/util/synthetic-events.c
+++ b/tools/perf/util/synthetic-events.c
@@ -706,8 +706,8 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 						(sizeof(event->mmap2.filename) - size));
 			memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
 			event->mmap2.header.size += machine->id_hdr_size;
-			event->mmap2.start = map->start;
-			event->mmap2.len   = map->end - map->start;
+			event->mmap2.start = map__start(map);
+			event->mmap2.len   = map__size(map);
 			event->mmap2.pid   = machine->pid;
 
 			memcpy(event->mmap2.filename, dso->long_name, dso->long_name_len + 1);
@@ -720,8 +720,8 @@ int perf_event__synthesize_modules(struct perf_tool *tool, perf_event__handler_t
 						(sizeof(event->mmap.filename) - size));
 			memset(event->mmap.filename + size, 0, machine->id_hdr_size);
 			event->mmap.header.size += machine->id_hdr_size;
-			event->mmap.start = map->start;
-			event->mmap.len   = map->end - map->start;
+			event->mmap.start = map__start(map);
+			event->mmap.len   = map__size(map);
 			event->mmap.pid   = machine->pid;
 
 			memcpy(event->mmap.filename, dso->long_name, dso->long_name_len + 1);
@@ -1143,8 +1143,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 		event->mmap2.header.size = (sizeof(event->mmap2) -
 				(sizeof(event->mmap2.filename) - size) + machine->id_hdr_size);
 		event->mmap2.pgoff = kmap->ref_reloc_sym->addr;
-		event->mmap2.start = map->start;
-		event->mmap2.len   = map->end - event->mmap.start;
+		event->mmap2.start = map__start(map);
+		event->mmap2.len   = map__end(map) - event->mmap.start;
 		event->mmap2.pid   = machine->pid;
 
 		perf_record_mmap2__read_build_id(&event->mmap2, machine, true);
@@ -1156,8 +1156,8 @@ static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 		event->mmap.header.size = (sizeof(event->mmap) -
 				(sizeof(event->mmap.filename) - size) + machine->id_hdr_size);
 		event->mmap.pgoff = kmap->ref_reloc_sym->addr;
-		event->mmap.start = map->start;
-		event->mmap.len   = map->end - event->mmap.start;
+		event->mmap.start = map__start(map);
+		event->mmap.len   = map__end(map) - event->mmap.start;
 		event->mmap.pid   = machine->pid;
 	}
 
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index c8cba9d4bfd9..b79f57e5648f 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -62,19 +62,19 @@ static int __report_module(struct addr_location *al, u64 ip,
 		Dwarf_Addr s;
 
 		dwfl_module_info(mod, NULL, &s, NULL, NULL, NULL, NULL, NULL);
-		if (s != al->map->start - al->map->pgoff)
+		if (s != map__start(al->map) - al->map->pgoff)
 			mod = 0;
 	}
 
 	if (!mod)
 		mod = dwfl_report_elf(ui->dwfl, dso->short_name, dso->long_name, -1,
-				      al->map->start - al->map->pgoff, false);
+				      map__start(al->map) - al->map->pgoff, false);
 	if (!mod) {
 		char filename[PATH_MAX];
 
 		if (dso__build_id_filename(dso, filename, sizeof(filename), false))
 			mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
-					      al->map->start - al->map->pgoff, false);
+					      map__start(al->map) - al->map->pgoff, false);
 	}
 
 	if (mod) {
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 06/17] perf map: Rename map_ip and unmap_ip
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (4 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 05/17] perf map: Add accessor for start and end Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 07/17] perf map: Add helper for " Ian Rogers
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Rename map__map_ip() and map__unmap_ip() to map__dso_map_ip() and
map__dso_unmap_ip(), adding "dso" to match their comments
(ip <-> dso rip). This avoids a naming conflict with the accessor
functions for struct map variables added in later changes.
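
After the rename, the declarations in tools/perf/util/map.h (see the
hunk below) read:

	/* ip -> dso rip */
	u64 map__dso_map_ip(const struct map *map, u64 ip);
	/* dso rip -> ip */
	u64 map__dso_unmap_ip(const struct map *map, u64 ip);

which frees the map__map_ip()/map__unmap_ip() names for the helpers
introduced in the following patch.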

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-kmem.c    | 2 +-
 tools/perf/builtin-script.c  | 4 ++--
 tools/perf/util/machine.c    | 4 ++--
 tools/perf/util/map.c        | 8 ++++----
 tools/perf/util/map.h        | 4 ++--
 tools/perf/util/symbol-elf.c | 4 ++--
 6 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index f3029742b800..4d4b770a401c 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -423,7 +423,7 @@ static u64 find_callsite(struct evsel *evsel, struct perf_sample *sample)
 		if (!caller) {
 			/* found */
 			if (node->ms.map)
-				addr = map__unmap_ip(node->ms.map, node->ip);
+				addr = map__dso_unmap_ip(node->ms.map, node->ip);
 			else
 				addr = node->ip;
 
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index eb49689d0f00..21944adf4c17 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -1012,11 +1012,11 @@ static int perf_sample__fprintf_brstackoff(struct perf_sample *sample,
 
 		if (thread__find_map_fb(thread, sample->cpumode, from, &alf) &&
 		    !map__dso(alf.map)->adjust_symbols)
-			from = map__map_ip(alf.map, from);
+			from = map__dso_map_ip(alf.map, from);
 
 		if (thread__find_map_fb(thread, sample->cpumode, to, &alt) &&
 		    !map__dso(alt.map)->adjust_symbols)
-			to = map__map_ip(alt.map, to);
+			to = map__dso_map_ip(alt.map, to);
 
 		printed += fprintf(fp, " 0x%"PRIx64, from);
 		if (PRINT_FIELD(DSO)) {
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 08fb3ab0c205..5bf035b23a79 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -3053,7 +3053,7 @@ static int append_inlines(struct callchain_cursor *cursor, struct map_symbol *ms
 	if (!symbol_conf.inline_name || !map || !sym)
 		return ret;
 
-	addr = map__map_ip(map, ip);
+	addr = map__dso_map_ip(map, ip);
 	addr = map__rip_2objdump(map, addr);
 	dso = map__dso(map);
 
@@ -3098,7 +3098,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg)
 	 * its corresponding binary.
 	 */
 	if (entry->ms.map)
-		addr = map__map_ip(entry->ms.map, entry->ip);
+		addr = map__dso_map_ip(entry->ms.map, entry->ip);
 
 	srcline = callchain_srcline(&entry->ms, addr);
 	return callchain_cursor_append(cursor, entry->ip, &entry->ms,
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 416fc449bde8..d97a6d20626f 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -109,8 +109,8 @@ void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 	map->pgoff    = pgoff;
 	map->reloc    = 0;
 	map->dso      = dso__get(dso);
-	map->map_ip   = map__map_ip;
-	map->unmap_ip = map__unmap_ip;
+	map->map_ip   = map__dso_map_ip;
+	map->unmap_ip = map__dso_unmap_ip;
 	map->erange_warned = false;
 	refcount_set(&map->refcnt, 1);
 }
@@ -590,12 +590,12 @@ struct maps *map__kmaps(struct map *map)
 	return kmap->kmaps;
 }
 
-u64 map__map_ip(const struct map *map, u64 ip)
+u64 map__dso_map_ip(const struct map *map, u64 ip)
 {
 	return ip - map__start(map) + map->pgoff;
 }
 
-u64 map__unmap_ip(const struct map *map, u64 ip)
+u64 map__dso_unmap_ip(const struct map *map, u64 ip)
 {
 	return ip + map__start(map) - map->pgoff;
 }
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 16646b94fa3a..9b0a84e46e48 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -41,9 +41,9 @@ struct kmap *map__kmap(struct map *map);
 struct maps *map__kmaps(struct map *map);
 
 /* ip -> dso rip */
-u64 map__map_ip(const struct map *map, u64 ip);
+u64 map__dso_map_ip(const struct map *map, u64 ip);
 /* dso rip -> ip */
-u64 map__unmap_ip(const struct map *map, u64 ip);
+u64 map__dso_unmap_ip(const struct map *map, u64 ip);
 /* Returns ip */
 u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
 
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 0542985ecaf6..93ae3f22fd03 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1351,8 +1351,8 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map->start = shdr->sh_addr + ref_reloc(kmap);
 			map->end = map__start(map) + shdr->sh_size;
 			map->pgoff = shdr->sh_offset;
-			map->map_ip = map__map_ip;
-			map->unmap_ip = map__unmap_ip;
+			map->map_ip = map__dso_map_ip;
+			map->unmap_ip = map__dso_unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
 				int err;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 07/17] perf map: Add helper for map_ip and unmap_ip
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (5 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 06/17] perf map: Rename map_ip and unmap_ip Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 08/17] perf map: Add accessors for prot, priv and flags Ian Rogers
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Later changes will add reference count checking for struct map; add
helper functions to invoke the map_ip and unmap_ip function
pointers. The helpers allow the reference count check to be in fewer
places.
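
A minimal sketch of such helpers, assuming they simply forward to the
existing map_ip/unmap_ip function pointers (the exact implementation
in tools/perf/util/map.h may differ):

	/* ip -> dso rip, via the per-map function pointer */
	static inline u64 map__map_ip(const struct map *map, u64 ip)
	{
		return map->map_ip(map, ip);
	}

	/* dso rip -> ip, via the per-map function pointer */
	static inline u64 map__unmap_ip(const struct map *map, u64 rip)
	{
		return map->unmap_ip(map, rip);
	}

Callers then write map__map_ip(map, ip) rather than
map->map_ip(map, ip), as the hunks below show.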

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/s390/annotate/instructions.c     |  3 ++-
 tools/perf/builtin-kallsyms.c                    |  2 +-
 tools/perf/builtin-kmem.c                        |  2 +-
 tools/perf/builtin-lock.c                        |  4 ++--
 tools/perf/builtin-script.c                      |  2 +-
 tools/perf/tests/vmlinux-kallsyms.c              | 10 +++++-----
 tools/perf/util/annotate.c                       | 16 +++++++++-------
 tools/perf/util/bpf_lock_contention.c            |  4 ++--
 tools/perf/util/dlfilter.c                       |  2 +-
 tools/perf/util/dso.c                            |  6 ++++--
 tools/perf/util/event.c                          |  8 ++++----
 tools/perf/util/evsel_fprintf.c                  |  2 +-
 tools/perf/util/intel-pt.c                       | 10 +++++-----
 tools/perf/util/machine.c                        | 16 ++++++++--------
 tools/perf/util/map.c                            | 10 +++++-----
 tools/perf/util/map.h                            | 10 ++++++++++
 tools/perf/util/maps.c                           |  8 ++++----
 tools/perf/util/probe-event.c                    |  8 ++++----
 .../util/scripting-engines/trace-event-python.c  |  2 +-
 tools/perf/util/sort.c                           | 12 ++++++------
 tools/perf/util/symbol.c                         |  4 ++--
 tools/perf/util/thread.c                         |  2 +-
 tools/perf/util/unwind-libdw.c                   |  2 +-
 23 files changed, 80 insertions(+), 65 deletions(-)

diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
index 0e136630659e..6548933e8dc0 100644
--- a/tools/perf/arch/s390/annotate/instructions.c
+++ b/tools/perf/arch/s390/annotate/instructions.c
@@ -39,7 +39,8 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
+	    ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
diff --git a/tools/perf/builtin-kallsyms.c b/tools/perf/builtin-kallsyms.c
index 5638ca4dbd8e..3751df744577 100644
--- a/tools/perf/builtin-kallsyms.c
+++ b/tools/perf/builtin-kallsyms.c
@@ -39,7 +39,7 @@ static int __cmd_kallsyms(int argc, const char **argv)
 		dso = map__dso(map);
 		printf("%s: %s %s %#" PRIx64 "-%#" PRIx64 " (%#" PRIx64 "-%#" PRIx64")\n",
 			symbol->name, dso->short_name, dso->long_name,
-			map->unmap_ip(map, symbol->start), map->unmap_ip(map, symbol->end),
+			map__unmap_ip(map, symbol->start), map__unmap_ip(map, symbol->end),
 			symbol->start, symbol->end);
 	}
 
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 4d4b770a401c..fcd2ef3bd3f5 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -1024,7 +1024,7 @@ static void __print_slab_result(struct rb_root *root,
 
 		if (sym != NULL)
 			snprintf(buf, sizeof(buf), "%s+%" PRIx64 "", sym->name,
-				 addr - map->unmap_ip(map, sym->start));
+				 addr - map__unmap_ip(map, sym->start));
 		else
 			snprintf(buf, sizeof(buf), "%#" PRIx64 "", addr);
 		printf(" %-34s |", buf);
diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
index 3c8a19ebc496..40b1e53e2d23 100644
--- a/tools/perf/builtin-lock.c
+++ b/tools/perf/builtin-lock.c
@@ -900,7 +900,7 @@ static int get_symbol_name_offset(struct map *map, struct symbol *sym, u64 ip,
 		return 0;
 	}
 
-	offset = map->map_ip(map, ip) - sym->start;
+	offset = map__map_ip(map, ip) - sym->start;
 
 	if (offset)
 		return scnprintf(buf, size, "%s+%#lx", sym->name, offset);
@@ -1070,7 +1070,7 @@ static int report_lock_contention_begin_event(struct evsel *evsel,
 				return -ENOMEM;
 			}
 
-			addrs[filters.nr_addrs++] = kmap->unmap_ip(kmap, sym->start);
+			addrs[filters.nr_addrs++] = map__unmap_ip(kmap, sym->start);
 			filters.addrs = addrs;
 		}
 	}
diff --git a/tools/perf/builtin-script.c b/tools/perf/builtin-script.c
index 21944adf4c17..9dc3193f7c1a 100644
--- a/tools/perf/builtin-script.c
+++ b/tools/perf/builtin-script.c
@@ -1088,7 +1088,7 @@ static int grab_bb(u8 *buffer, u64 start, u64 end,
 	/* Load maps to ensure dso->is_64_bit has been updated */
 	map__load(al.map);
 
-	offset = al.map->map_ip(al.map, start);
+	offset = map__map_ip(al.map, start);
 	len = dso__data_read_offset(dso, machine, offset, (u8 *)buffer,
 				    end - start + MAXINSN);
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 0a75623172c2..05a322ea3f9f 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -13,7 +13,7 @@
 #include "debug.h"
 #include "machine.h"
 
-#define UM(x) kallsyms_map->unmap_ip(kallsyms_map, (x))
+#define UM(x) map__unmap_ip(kallsyms_map, (x))
 
 static bool is_ignored_symbol(const char *name, char type)
 {
@@ -221,8 +221,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		if (sym->start == sym->end)
 			continue;
 
-		mem_start = vmlinux_map->unmap_ip(vmlinux_map, sym->start);
-		mem_end = vmlinux_map->unmap_ip(vmlinux_map, sym->end);
+		mem_start = map__unmap_ip(vmlinux_map, sym->start);
+		mem_end = map__unmap_ip(vmlinux_map, sym->end);
 
 		first_pair = machine__find_kernel_symbol(&kallsyms, mem_start, NULL);
 		pair = first_pair;
@@ -319,8 +319,8 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	maps__for_each_entry(maps, rb_node) {
 		struct map *pair, *map = rb_node->map;
 
-		mem_start = vmlinux_map->unmap_ip(vmlinux_map, map__start(map));
-		mem_end = vmlinux_map->unmap_ip(vmlinux_map, map__end(map));
+		mem_start = map__unmap_ip(vmlinux_map, map__start(map));
+		mem_end = map__unmap_ip(vmlinux_map, map__end(map));
 
 		pair = maps__find(kallsyms.kmaps, mem_start);
 		if (pair == NULL || pair->priv)
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index f60f5efb2ad9..e8570b7cc36f 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -280,7 +280,8 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
+	    ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
@@ -384,8 +385,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	}
 
 	target.addr = map__objdump_2mem(map, ops->target.addr);
-	start = map->unmap_ip(map, sym->start),
-	end = map->unmap_ip(map, sym->end);
+	start = map__unmap_ip(map, sym->start);
+	end = map__unmap_ip(map, sym->end);
 
 	ops->target.outside = target.addr < start || target.addr > end;
 
@@ -408,7 +409,8 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	 * the symbol searching and disassembly should be done.
 	 */
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) == ops->target.addr)
+	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
+	    ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	if (!ops->target.outside) {
@@ -889,7 +891,7 @@ static int __symbol__inc_addr_samples(struct map_symbol *ms,
 	unsigned offset;
 	struct sym_hist *h;
 
-	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, ms->map->unmap_ip(ms->map, addr));
+	pr_debug3("%s: addr=%#" PRIx64 "\n", __func__, map__unmap_ip(ms->map, addr));
 
 	if ((addr < sym->start || addr >= sym->end) &&
 	    (addr != sym->end || sym->start != sym->end)) {
@@ -1985,8 +1987,8 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
 		return err;
 
 	pr_debug("%s: filename=%s, sym=%s, start=%#" PRIx64 ", end=%#" PRIx64 "\n", __func__,
-		 symfs_filename, sym->name, map->unmap_ip(map, sym->start),
-		 map->unmap_ip(map, sym->end));
+		 symfs_filename, sym->name, map__unmap_ip(map, sym->start),
+		 map__unmap_ip(map, sym->end));
 
 	pr_debug("annotating [%p] %30s : [%p] %30s\n",
 		 dso, dso->long_name, sym, sym->name);
diff --git a/tools/perf/util/bpf_lock_contention.c b/tools/perf/util/bpf_lock_contention.c
index 0b47863d2460..9a76fc6484b4 100644
--- a/tools/perf/util/bpf_lock_contention.c
+++ b/tools/perf/util/bpf_lock_contention.c
@@ -74,7 +74,7 @@ int lock_contention_prepare(struct lock_contention *con)
 				continue;
 			}
 
-			addrs[con->filters->nr_addrs++] = kmap->unmap_ip(kmap, sym->start);
+			addrs[con->filters->nr_addrs++] = map__unmap_ip(kmap, sym->start);
 			con->filters->addrs = addrs;
 		}
 		naddrs = con->filters->nr_addrs;
@@ -233,7 +233,7 @@ static const char *lock_contention_get_name(struct lock_contention *con,
 	if (sym) {
 		unsigned long offset;
 
-		offset = kmap->map_ip(kmap, addr) - sym->start;
+		offset = map__map_ip(kmap, addr) - sym->start;
 
 		if (offset == 0)
 			return sym->name;
diff --git a/tools/perf/util/dlfilter.c b/tools/perf/util/dlfilter.c
index fe401fa4be02..16238f823a5e 100644
--- a/tools/perf/util/dlfilter.c
+++ b/tools/perf/util/dlfilter.c
@@ -278,7 +278,7 @@ static __s32 dlfilter__object_code(void *ctx, __u64 ip, void *buf, __u32 len)
 
 	map = a.map;
 have_map:
-	offset = map->map_ip(map, ip);
+	offset = map__map_ip(map, ip);
 	if (ip + len >= map__end(map))
 		len = map__end(map) - ip;
 	return dso__data_read_offset(map__dso(map), d->machine, offset, buf, len);
diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
index f1a14c0ad26d..e36b418df2c6 100644
--- a/tools/perf/util/dso.c
+++ b/tools/perf/util/dso.c
@@ -1122,7 +1122,8 @@ ssize_t dso__data_read_addr(struct dso *dso, struct map *map,
 			    struct machine *machine, u64 addr,
 			    u8 *data, ssize_t size)
 {
-	u64 offset = map->map_ip(map, addr);
+	u64 offset = map__map_ip(map, addr);
+
 	return dso__data_read_offset(dso, machine, offset, data, size);
 }
 
@@ -1162,7 +1163,8 @@ ssize_t dso__data_write_cache_addr(struct dso *dso, struct map *map,
 				   struct machine *machine, u64 addr,
 				   const u8 *data, ssize_t size)
 {
-	u64 offset = map->map_ip(map, addr);
+	u64 offset = map__map_ip(map, addr);
+
 	return dso__data_write_cache_offs(dso, machine, offset, data, size);
 }
 
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 2ddc75dee019..2712d1a8264e 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -487,7 +487,7 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
 
 		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
 		if (al.map && map__load(al.map) >= 0) {
-			al.addr = al.map->map_ip(al.map, tp->addr);
+			al.addr = map__map_ip(al.map, tp->addr);
 			al.sym = map__find_symbol(al.map, al.addr);
 			if (al.sym)
 				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
@@ -622,7 +622,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 		 */
 		if (load_map)
 			map__load(al->map);
-		al->addr = al->map->map_ip(al->map, al->addr);
+		al->addr = map__map_ip(al->map, al->addr);
 	}
 
 	return al->map;
@@ -743,12 +743,12 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
 		}
 		if (!ret && al->sym) {
 			snprintf(al_addr_str, sz, "0x%"PRIx64,
-				al->map->unmap_ip(al->map, al->sym->start));
+				 map__unmap_ip(al->map, al->sym->start));
 			ret = strlist__has_entry(symbol_conf.sym_list,
 						al_addr_str);
 		}
 		if (!ret && symbol_conf.addr_list && al->map) {
-			unsigned long addr = al->map->unmap_ip(al->map, al->addr);
+			unsigned long addr = map__unmap_ip(al->map, al->addr);
 
 			ret = intlist__has_entry(symbol_conf.addr_list, addr);
 			if (!ret && symbol_conf.addr_range) {
diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
index dff5d8c4b06d..a09ac00810b7 100644
--- a/tools/perf/util/evsel_fprintf.c
+++ b/tools/perf/util/evsel_fprintf.c
@@ -151,7 +151,7 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
 				printed += fprintf(fp, " <-");
 
 			if (map)
-				addr = map->map_ip(map, node->ip);
+				addr = map__map_ip(map, node->ip);
 
 			if (print_ip) {
 				/* Show binary offset for userspace addr */
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index a2e62daa708e..fe893c9bab3f 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -816,7 +816,7 @@ static int intel_pt_walk_next_insn(struct intel_pt_insn *intel_pt_insn,
 		    dso__data_status_seen(dso, DSO_DATA_STATUS_SEEN_ITRACE))
 			return -ENOENT;
 
-		offset = al.map->map_ip(al.map, *ip);
+		offset = map__map_ip(al.map, *ip);
 
 		if (!to_ip && one_map) {
 			struct intel_pt_cache_entry *e;
@@ -987,7 +987,7 @@ static int __intel_pt_pgd_ip(uint64_t ip, void *data)
 	if (!thread__find_map(thread, cpumode, ip, &al) || !map__dso(al.map))
 		return -EINVAL;
 
-	offset = al.map->map_ip(al.map, ip);
+	offset = map__map_ip(al.map, ip);
 
 	return intel_pt_match_pgd_ip(ptq->pt, ip, offset, map__dso(al.map)->long_name);
 }
@@ -2749,7 +2749,7 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (sym->binding == STB_GLOBAL &&
 		    !strcmp(sym->name, "__switch_to")) {
-			ip = map->unmap_ip(map, sym->start);
+			ip = map__unmap_ip(map, sym->start);
 			if (ip >= map__start(map) && ip < map__end(map)) {
 				switch_ip = ip;
 				break;
@@ -2767,7 +2767,7 @@ static u64 intel_pt_switch_ip(struct intel_pt *pt, u64 *ptss_ip)
 
 	for (sym = start; sym; sym = dso__next_symbol(sym)) {
 		if (!strcmp(sym->name, ptss)) {
-			ip = map->unmap_ip(map, sym->start);
+			ip = map__unmap_ip(map, sym->start);
 			if (ip >= map__start(map) && ip < map__end(map)) {
 				*ptss_ip = ip;
 				break;
@@ -3393,7 +3393,7 @@ static int intel_pt_text_poke(struct intel_pt *pt, union perf_event *event)
 		if (!dso || !dso->auxtrace_cache)
 			continue;
 
-		offset = al.map->map_ip(al.map, addr);
+		offset = map__map_ip(al.map, addr);
 
 		e = intel_pt_cache_lookup(dso, machine, offset);
 		if (!e)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 5bf035b23a79..afb77bd161e2 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -918,7 +918,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		dso = map__dso(map);
 	}
 
-	sym = symbol__new(map->map_ip(map, map__start(map)),
+	sym = symbol__new(map__map_ip(map, map__start(map)),
 			  event->ksymbol.len,
 			  0, 0, event->ksymbol.name);
 	if (!sym)
@@ -943,7 +943,7 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	else {
 		struct dso *dso = map__dso(map);
 
-		sym = dso__find_symbol(dso, map->map_ip(map, map__start(map)));
+		sym = dso__find_symbol(dso, map__map_ip(map, map__start(map)));
 		if (sym)
 			dso__delete_symbol(dso, sym);
 	}
@@ -1278,7 +1278,7 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 
 		dest_map = maps__find(kmaps, map->pgoff);
 		if (dest_map != map)
-			map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
+			map->pgoff = map__map_ip(dest_map, map->pgoff);
 		found = true;
 	}
 	if (found || machine->trampolines_mapped)
@@ -3340,7 +3340,7 @@ char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, ch
 		return NULL;
 
 	*modp = __map__is_kmodule(map) ? (char *)map__dso(map)->short_name : NULL;
-	*addrp = map->unmap_ip(map, sym->start);
+	*addrp = map__unmap_ip(map, sym->start);
 	return sym->name;
 }
 
@@ -3383,17 +3383,17 @@ bool machine__is_lock_function(struct machine *machine, u64 addr)
 			return false;
 		}
 
-		machine->sched.text_start = kmap->unmap_ip(kmap, sym->start);
+		machine->sched.text_start = map__unmap_ip(kmap, sym->start);
 
 		/* should not fail from here */
 		sym = machine__find_kernel_symbol_by_name(machine, "__sched_text_end", &kmap);
-		machine->sched.text_end = kmap->unmap_ip(kmap, sym->start);
+		machine->sched.text_end = map__unmap_ip(kmap, sym->start);
 
 		sym = machine__find_kernel_symbol_by_name(machine, "__lock_text_start", &kmap);
-		machine->lock.text_start = kmap->unmap_ip(kmap, sym->start);
+		machine->lock.text_start = map__unmap_ip(kmap, sym->start);
 
 		sym = machine__find_kernel_symbol_by_name(machine, "__lock_text_end", &kmap);
-		machine->lock.text_end = kmap->unmap_ip(kmap, sym->start);
+		machine->lock.text_end = map__unmap_ip(kmap, sym->start);
 	}
 
 	/* failed to get kernel symbols */
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index d97a6d20626f..816bffbbf344 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -519,7 +519,7 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 	if (dso->kernel == DSO_SPACE__USER)
 		return rip + dso->text_offset;
 
-	return map->unmap_ip(map, rip) - map->reloc;
+	return map__unmap_ip(map, rip) - map->reloc;
 }
 
 /**
@@ -539,24 +539,24 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 	const struct dso *dso = map__dso(map);
 
 	if (!dso->adjust_symbols)
-		return map->unmap_ip(map, ip);
+		return map__unmap_ip(map, ip);
 
 	if (dso->rel)
-		return map->unmap_ip(map, ip + map->pgoff);
+		return map__unmap_ip(map, ip + map->pgoff);
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
 	 * but all kernel modules are ET_REL, so won't get here.
 	 */
 	if (dso->kernel == DSO_SPACE__USER)
-		return map->unmap_ip(map, ip - dso->text_offset);
+		return map__unmap_ip(map, ip - dso->text_offset);
 
 	return ip + map->reloc;
 }
 
 bool map__contains_symbol(const struct map *map, const struct symbol *sym)
 {
-	u64 ip = map->unmap_ip(map, sym->start);
+	u64 ip = map__unmap_ip(map, sym->start);
 
 	return ip >= map__start(map) && ip < map__end(map);
 }
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 9b0a84e46e48..9118eba71032 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -52,6 +52,16 @@ static inline struct dso *map__dso(const struct map *map)
 	return map->dso;
 }
 
+static inline u64 map__map_ip(const struct map *map, u64 ip)
+{
+	return map->map_ip(map, ip);
+}
+
+static inline u64 map__unmap_ip(const struct map *map, u64 ip)
+{
+	return map->unmap_ip(map, ip);
+}
+
 static inline u64 map__start(const struct map *map)
 {
 	return map->start;
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 1fd57db72226..ffd4a4a64026 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -194,7 +194,7 @@ struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
 	if (map != NULL && map__load(map) >= 0) {
 		if (mapp != NULL)
 			*mapp = map;
-		return map__find_symbol(map, map->map_ip(map, addr));
+		return map__find_symbol(map, map__map_ip(map, addr));
 	}
 
 	return NULL;
@@ -237,7 +237,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
 			return -1;
 	}
 
-	ams->al_addr = ams->ms.map->map_ip(ams->ms.map, ams->addr);
+	ams->al_addr = map__map_ip(ams->ms.map, ams->addr);
 	ams->ms.sym = map__find_symbol(ams->ms.map, ams->al_addr);
 
 	return ams->ms.sym ? 0 : -1;
@@ -349,8 +349,8 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 			after->start = map__end(map);
 			after->pgoff += map__end(map) - map__start(pos->map);
-			assert(pos->map->map_ip(pos->map, map__end(map)) ==
-				after->map_ip(after, map__end(map)));
+			assert(map__map_ip(pos->map, map__end(map)) ==
+				map__map_ip(after, map__end(map)));
 			err = __maps__insert(maps, after);
 			if (err)
 				goto put_map;
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 4d9dbeeb6014..bb44a3798df8 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -141,7 +141,7 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 		sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
 		if (!sym)
 			return -ENOENT;
-		*addr = map->unmap_ip(map, sym->start) -
+		*addr = map__unmap_ip(map, sym->start) -
 			((reloc) ? 0 : map->reloc) -
 			((reladdr) ? map__start(map) : 0);
 	}
@@ -400,7 +400,7 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
 					   "Consider identifying the final function used at run time and set the probe directly on that.\n",
 					   pp->function);
 		} else
-			address = map->unmap_ip(map, sym->start) - map->reloc;
+			address = map__unmap_ip(map, sym->start) - map->reloc;
 		break;
 	}
 	if (!address) {
@@ -2249,7 +2249,7 @@ static int find_perf_probe_point_from_map(struct probe_trace_point *tp,
 		goto out;
 
 	pp->retprobe = tp->retprobe;
-	pp->offset = addr - map->unmap_ip(map, sym->start);
+	pp->offset = addr - map__unmap_ip(map, sym->start);
 	pp->function = strdup(sym->name);
 	ret = pp->function ? 0 : -ENOMEM;
 
@@ -3123,7 +3123,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
 			goto err_out;
 		}
 		/* Add one probe point */
-		tp->address = map->unmap_ip(map, sym->start) + pp->offset;
+		tp->address = map__unmap_ip(map, sym->start) + pp->offset;
 
 		/* Check the kprobe (not in module) is within .text  */
 		if (!pev->uprobes && !pev->target &&
diff --git a/tools/perf/util/scripting-engines/trace-event-python.c b/tools/perf/util/scripting-engines/trace-event-python.c
index cbf09eaf3734..41d4f9e6a8b7 100644
--- a/tools/perf/util/scripting-engines/trace-event-python.c
+++ b/tools/perf/util/scripting-engines/trace-event-python.c
@@ -471,7 +471,7 @@ static PyObject *python_process_callchain(struct perf_sample *sample,
 				struct addr_location node_al;
 				unsigned long offset;
 
-				node_al.addr = map->map_ip(map, node->ip);
+				node_al.addr = map__map_ip(map, node->ip);
 				node_al.map  = map;
 				offset = get_offset(node->ms.sym, &node_al);
 
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index d7b6b734bf90..321d4859ae16 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -318,7 +318,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
 		u64 rip = ip;
 
 		if (dso && dso->kernel && dso->adjust_symbols)
-			rip = map->unmap_ip(map, ip);
+			rip = map__unmap_ip(map, ip);
 
 		ret += repsep_snprintf(bf, size, "%-#*llx %c ",
 				       BITS_PER_LONG / 4 + 2, rip, o);
@@ -329,7 +329,7 @@ static int _hist_entry__sym_snprintf(struct map_symbol *ms,
 		if (sym->type == STT_OBJECT) {
 			ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
 			ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
-					ip - map->unmap_ip(map, sym->start));
+					ip - map__unmap_ip(map, sym->start));
 		} else {
 			ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
 					       width - ret,
@@ -1106,7 +1106,7 @@ static int _hist_entry__addr_snprintf(struct map_symbol *ms,
 		if (sym->type == STT_OBJECT) {
 			ret += repsep_snprintf(bf + ret, size - ret, "%s", sym->name);
 			ret += repsep_snprintf(bf + ret, size - ret, "+0x%llx",
-					ip - map->unmap_ip(map, sym->start));
+					ip - map__unmap_ip(map, sym->start));
 		} else {
 			ret += repsep_snprintf(bf + ret, size - ret, "%.*s",
 					       width - ret,
@@ -2063,9 +2063,9 @@ sort__addr_cmp(struct hist_entry *left, struct hist_entry *right)
 	struct map *right_map = right->ms.map;
 
 	if (left_map)
-		left_ip = left_map->unmap_ip(left_map, left_ip);
+		left_ip = map__unmap_ip(left_map, left_ip);
 	if (right_map)
-		right_ip = right_map->unmap_ip(right_map, right_ip);
+		right_ip = map__unmap_ip(right_map, right_ip);
 
 	return _sort__addr_cmp(left_ip, right_ip);
 }
@@ -2077,7 +2077,7 @@ static int hist_entry__addr_snprintf(struct hist_entry *he, char *bf,
 	struct map *map = he->ms.map;
 
 	if (map)
-		ip = map->unmap_ip(map, ip);
+		ip = map__unmap_ip(map, ip);
 
 	return repsep_snprintf(bf, size, "%-#*llx", width, ip);
 }
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index c76582dbe7ff..128d4a66cc0e 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -896,8 +896,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			 * So that we look just like we get from .ko files,
 			 * i.e. not prelinked, relative to initial_map->start.
 			 */
-			pos->start = curr_map->map_ip(curr_map, pos->start);
-			pos->end   = curr_map->map_ip(curr_map, pos->end);
+			pos->start = map__map_ip(curr_map, pos->start);
+			pos->end   = map__map_ip(curr_map, pos->end);
 		} else if (x86_64 && is_entry_trampoline(pos->name)) {
 			/*
 			 * These symbols are not needed anymore since the
diff --git a/tools/perf/util/thread.c b/tools/perf/util/thread.c
index 42fdc80a6f2e..6fe503da712b 100644
--- a/tools/perf/util/thread.c
+++ b/tools/perf/util/thread.c
@@ -459,7 +459,7 @@ int thread__memcpy(struct thread *thread, struct machine *machine,
 		map__dso(al.map)->data.status == DSO_DATA_STATUS_ERROR || map__load(al.map) < 0)
 		return -1;
 
-	offset = al.map->map_ip(al.map, ip);
+	offset = map__map_ip(al.map, ip);
 	if (is64bit)
 		*is64bit = map__dso(al.map)->is_64_bit;
 
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index b79f57e5648f..538320e4260c 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -115,7 +115,7 @@ static int entry(u64 ip, struct unwind_info *ui)
 	pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
 		 al.sym ? al.sym->name : "''",
 		 ip,
-		 al.map ? al.map->map_ip(al.map, ip) : (u64) 0);
+		 al.map ? map__map_ip(al.map, ip) : (u64) 0);
 	return 0;
 }
 
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 08/17] perf map: Add accessors for prot, priv and flags
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (6 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 07/17] perf map: Add helper for " Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 09/17] perf map: Add accessors for pgoff and reloc Ian Rogers
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Later changes will add reference count checking for struct map. Add
accessors so that the reference count check is only necessary in one
place.
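
A minimal sketch of why centralizing the access helps (illustrative
only, not part of the patch below): once every read goes through an
accessor, the later reference count checking changes only need to touch
the one definition, for instance by swapping the plain dereference for a
checked one, while call sites such as "map__prot(map) & PROT_EXEC" stay
unchanged:

  /* Hypothetical later form; RC_CHK_ACCESS() stands in for whatever
   * checked-pointer wrapper the reference count checking patches add. */
  static inline u32 map__prot(const struct map *map)
  {
  	return RC_CHK_ACCESS(map)->prot;
  }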

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-inject.c         |  2 +-
 tools/perf/builtin-report.c         |  9 +++++----
 tools/perf/tests/vmlinux-kallsyms.c |  4 ++--
 tools/perf/util/map.h               | 15 +++++++++++++++
 tools/perf/util/sort.c              |  6 +++---
 tools/perf/util/symbol.c            |  4 ++--
 6 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 8f6909dd8a54..fd2b38458a5d 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -758,7 +758,7 @@ int perf_event__inject_buildid(struct perf_tool *tool, union perf_event *event,
 		if (!dso->hit) {
 			dso->hit = 1;
 			dso__inject_build_id(dso, tool, machine,
-					     sample->cpumode, al.map->flags);
+					     sample->cpumode, map__flags(al.map));
 		}
 	}
 
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 4ce1aef3e253..8650d9503b77 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -845,13 +845,14 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 	maps__for_each_entry(maps, rb_node) {
 		struct map *map = rb_node->map;
 		const struct dso *dso = map__dso(map);
+		u32 prot = map__prot(map);
 
 		printed += fprintf(fp, "%*s  %" PRIx64 "-%" PRIx64 " %c%c%c%c %08" PRIx64 " %" PRIu64 " %s\n",
 				   indent, "", map__start(map), map__end(map),
-				   map->prot & PROT_READ ? 'r' : '-',
-				   map->prot & PROT_WRITE ? 'w' : '-',
-				   map->prot & PROT_EXEC ? 'x' : '-',
-				   map->flags & MAP_SHARED ? 's' : 'p',
+				   prot & PROT_READ ? 'r' : '-',
+				   prot & PROT_WRITE ? 'w' : '-',
+				   prot & PROT_EXEC ? 'x' : '-',
+				   map__flags(map) ? 's' : 'p',
 				   map->pgoff,
 				   dso->id.ino, dso->name);
 	}
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 05a322ea3f9f..7db102868bc2 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -323,7 +323,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 		mem_end = map__unmap_ip(vmlinux_map, map__end(map));
 
 		pair = maps__find(kallsyms.kmaps, mem_start);
-		if (pair == NULL || pair->priv)
+		if (pair == NULL || map__priv(pair))
 			continue;
 
 		if (map__start(pair) == mem_start) {
@@ -351,7 +351,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 	maps__for_each_entry(maps, rb_node) {
 		struct map *map = rb_node->map;
 
-		if (!map->priv) {
+		if (!map__priv(map)) {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in kallsyms:\n");
 				header_printed = true;
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 9118eba71032..fd440c9c279e 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -72,6 +72,21 @@ static inline u64 map__end(const struct map *map)
 	return map->end;
 }
 
+static inline u32 map__flags(const struct map *map)
+{
+	return map->flags;
+}
+
+static inline u32 map__prot(const struct map *map)
+{
+	return map->prot;
+}
+
+static inline bool map__priv(const struct map *map)
+{
+	return map->priv;
+}
+
 static inline size_t map__size(const struct map *map)
 {
 	return map__end(map) - map__start(map);
diff --git a/tools/perf/util/sort.c b/tools/perf/util/sort.c
index 321d4859ae16..31a8df42cb2f 100644
--- a/tools/perf/util/sort.c
+++ b/tools/perf/util/sort.c
@@ -1499,7 +1499,7 @@ sort__dcacheline_cmp(struct hist_entry *left, struct hist_entry *right)
 	 */
 
 	if ((left->cpumode != PERF_RECORD_MISC_KERNEL) &&
-	    (!(l_map->flags & MAP_SHARED)) && !l_dso->id.maj && !l_dso->id.min &&
+	    (!(map__flags(l_map) & MAP_SHARED)) && !l_dso->id.maj && !l_dso->id.min &&
 	    !l_dso->id.ino && !l_dso->id.ino_generation) {
 		/* userspace anonymous */
 
@@ -1535,8 +1535,8 @@ static int hist_entry__dcacheline_snprintf(struct hist_entry *he, char *bf,
 
 		/* print [s] for shared data mmaps */
 		if ((he->cpumode != PERF_RECORD_MISC_KERNEL) &&
-		     map && !(map->prot & PROT_EXEC) &&
-		    (map->flags & MAP_SHARED) &&
+		     map && !(map__prot(map) & PROT_EXEC) &&
+		     (map__flags(map) & MAP_SHARED) &&
 		    (dso->id.maj || dso->id.min || dso->id.ino || dso->id.ino_generation))
 			level = 's';
 		else if (!map)
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 128d4a66cc0e..e3758519e4d1 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1396,7 +1396,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	}
 
 	/* Read new maps into temporary lists */
-	err = file__read_maps(fd, map->prot & PROT_EXEC, kcore_mapfn, &md,
+	err = file__read_maps(fd, map__prot(map) & PROT_EXEC, kcore_mapfn, &md,
 			      &is_64_bit);
 	if (err)
 		goto out_err;
@@ -1508,7 +1508,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 	close(fd);
 
-	if (map->prot & PROT_EXEC)
+	if (map__prot(map) & PROT_EXEC)
 		pr_debug("Using %s for kernel object code\n", kcore_filename);
 	else
 		pr_debug("Using %s for kernel data\n", kcore_filename);
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 09/17] perf map: Add accessors for pgoff and reloc
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (7 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 08/17] perf map: Add accessors for prot, priv and flags Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 10/17] perf test: Add extra diagnostics to maps test Ian Rogers
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Later changes will add reference count checking for struct map. Add
accessors so that the reference count check is only necessary in one
place.
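
As a worked illustration (the numbers are invented, the definitions are
the ones converted in the map.c hunk below), pgoff is the term that
turns a memory address into a DSO file offset, which is why it needs the
same accessor treatment as start and end:

  /* With map__start(map) == 0x1000 and map__pgoff(map) == 0x40:
   *   map__dso_map_ip(map, 0x1234)  == 0x1234 - 0x1000 + 0x40 == 0x274
   *   map__dso_unmap_ip(map, 0x274) == 0x274 + 0x1000 - 0x40  == 0x1234
   */
  u64 map__dso_map_ip(const struct map *map, u64 ip)
  {
  	return ip - map__start(map) + map__pgoff(map);
  }

  u64 map__dso_unmap_ip(const struct map *map, u64 ip)
  {
  	return ip + map__start(map) - map__pgoff(map);
  }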

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/x86/util/event.c    |  2 +-
 tools/perf/builtin-report.c         |  2 +-
 tools/perf/tests/vmlinux-kallsyms.c |  4 ++--
 tools/perf/util/machine.c           |  4 ++--
 tools/perf/util/map.c               | 14 +++++++-------
 tools/perf/util/map.h               | 10 ++++++++++
 tools/perf/util/probe-event.c       |  8 ++++----
 tools/perf/util/symbol.c            |  6 +++---
 tools/perf/util/unwind-libdw.c      |  6 +++---
 9 files changed, 33 insertions(+), 23 deletions(-)

diff --git a/tools/perf/arch/x86/util/event.c b/tools/perf/arch/x86/util/event.c
index 3b2475707756..5741ffe47312 100644
--- a/tools/perf/arch/x86/util/event.c
+++ b/tools/perf/arch/x86/util/event.c
@@ -61,7 +61,7 @@ int perf_event__synthesize_extra_kmaps(struct perf_tool *tool,
 
 		event->mmap.start = map__start(map);
 		event->mmap.len   = map__size(map);
-		event->mmap.pgoff = map->pgoff;
+		event->mmap.pgoff = map__pgoff(map);
 		event->mmap.pid   = machine->pid;
 
 		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 8650d9503b77..c7e4160f64ad 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -853,7 +853,7 @@ static size_t maps__fprintf_task(struct maps *maps, int indent, FILE *fp)
 				   prot & PROT_WRITE ? 'w' : '-',
 				   prot & PROT_EXEC ? 'x' : '-',
 				   map__flags(map) ? 's' : 'p',
-				   map->pgoff,
+				   map__pgoff(map),
 				   dso->id.ino, dso->name);
 	}
 
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index 7db102868bc2..af511233c764 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -335,10 +335,10 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 			}
 
 			pr_info("WARN: %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s in kallsyms as",
-				map__start(map), map__end(map), map->pgoff, dso->name);
+				map__start(map), map__end(map), map__pgoff(map), dso->name);
 			if (mem_end != map__end(pair))
 				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
-					map__start(pair), map__end(pair), pair->pgoff);
+					map__start(pair), map__end(pair), map__pgoff(pair));
 			pr_info(" %s\n", dso->name);
 			pair->priv = 1;
 		}
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index afb77bd161e2..916d98885128 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -1276,9 +1276,9 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 		if (!kmap || !is_entry_trampoline(kmap->name))
 			continue;
 
-		dest_map = maps__find(kmaps, map->pgoff);
+		dest_map = maps__find(kmaps, map__pgoff(map));
 		if (dest_map != map)
-			map->pgoff = map__map_ip(dest_map, map->pgoff);
+			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
 		found = true;
 	}
 	if (found || machine->trampolines_mapped)
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 816bffbbf344..1fe367e2cf19 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -421,7 +421,7 @@ size_t map__fprintf(struct map *map, FILE *fp)
 	const struct dso *dso = map__dso(map);
 
 	return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
-		       map__start(map), map__end(map), map->pgoff, dso->name);
+		       map__start(map), map__end(map), map__pgoff(map), dso->name);
 }
 
 size_t map__fprintf_dsoname(struct map *map, FILE *fp)
@@ -510,7 +510,7 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 		return rip;
 
 	if (dso->rel)
-		return rip - map->pgoff;
+		return rip - map__pgoff(map);
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
@@ -519,7 +519,7 @@ u64 map__rip_2objdump(struct map *map, u64 rip)
 	if (dso->kernel == DSO_SPACE__USER)
 		return rip + dso->text_offset;
 
-	return map__unmap_ip(map, rip) - map->reloc;
+	return map__unmap_ip(map, rip) - map__reloc(map);
 }
 
 /**
@@ -542,7 +542,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 		return map__unmap_ip(map, ip);
 
 	if (dso->rel)
-		return map__unmap_ip(map, ip + map->pgoff);
+		return map__unmap_ip(map, ip + map__pgoff(map));
 
 	/*
 	 * kernel modules also have DSO_TYPE_USER in dso->kernel,
@@ -551,7 +551,7 @@ u64 map__objdump_2mem(struct map *map, u64 ip)
 	if (dso->kernel == DSO_SPACE__USER)
 		return map__unmap_ip(map, ip - dso->text_offset);
 
-	return ip + map->reloc;
+	return ip + map__reloc(map);
 }
 
 bool map__contains_symbol(const struct map *map, const struct symbol *sym)
@@ -592,12 +592,12 @@ struct maps *map__kmaps(struct map *map)
 
 u64 map__dso_map_ip(const struct map *map, u64 ip)
 {
-	return ip - map__start(map) + map->pgoff;
+	return ip - map__start(map) + map__pgoff(map);
 }
 
 u64 map__dso_unmap_ip(const struct map *map, u64 ip)
 {
-	return ip + map__start(map) - map->pgoff;
+	return ip + map__start(map) - map__pgoff(map);
 }
 
 u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index fd440c9c279e..102485699aa8 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -72,6 +72,16 @@ static inline u64 map__end(const struct map *map)
 	return map->end;
 }
 
+static inline u64 map__pgoff(const struct map *map)
+{
+	return map->pgoff;
+}
+
+static inline u64 map__reloc(const struct map *map)
+{
+	return map->reloc;
+}
+
 static inline u32 map__flags(const struct map *map)
 {
 	return map->flags;
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index bb44a3798df8..6e2110d605fb 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -135,14 +135,14 @@ static int kernel_get_symbol_address_by_name(const char *name, u64 *addr,
 	/* ref_reloc_sym is just a label. Need a special fix*/
 	reloc_sym = kernel_get_ref_reloc_sym(&map);
 	if (reloc_sym && strcmp(name, reloc_sym->name) == 0)
-		*addr = (!map->reloc || reloc) ? reloc_sym->addr :
+		*addr = (!map__reloc(map) || reloc) ? reloc_sym->addr :
 			reloc_sym->unrelocated_addr;
 	else {
 		sym = machine__find_kernel_symbol_by_name(host_machine, name, &map);
 		if (!sym)
 			return -ENOENT;
 		*addr = map__unmap_ip(map, sym->start) -
-			((reloc) ? 0 : map->reloc) -
+			((reloc) ? 0 : map__reloc(map)) -
 			((reladdr) ? map__start(map) : 0);
 	}
 	return 0;
@@ -400,7 +400,7 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
 					   "Consider identifying the final function used at run time and set the probe directly on that.\n",
 					   pp->function);
 		} else
-			address = map__unmap_ip(map, sym->start) - map->reloc;
+			address = map__unmap_ip(map, sym->start) - map__reloc(map);
 		break;
 	}
 	if (!address) {
@@ -866,7 +866,7 @@ post_process_kernel_probe_trace_events(struct probe_trace_event *tevs,
 			free(tevs[i].point.symbol);
 		tevs[i].point.symbol = tmp;
 		tevs[i].point.offset = tevs[i].point.address -
-			(map->reloc ? reloc_sym->unrelocated_addr :
+			(map__reloc(map) ? reloc_sym->unrelocated_addr :
 				      reloc_sym->addr);
 	}
 	return skipped;
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index e3758519e4d1..ec7a312e7cc1 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -810,11 +810,11 @@ static int maps__split_kallsyms_for_kcore(struct maps *kmaps, struct dso *dso)
 			continue;
 		}
 		curr_map_dso = map__dso(curr_map);
-		pos->start -= map__start(curr_map) - curr_map->pgoff;
+		pos->start -= map__start(curr_map) - map__pgoff(curr_map);
 		if (pos->end > map__end(curr_map))
 			pos->end = map__end(curr_map);
 		if (pos->end)
-			pos->end -= map__start(curr_map) - curr_map->pgoff;
+			pos->end -= map__start(curr_map) - map__pgoff(curr_map);
 		symbols__insert(&curr_map_dso->symbols, pos);
 		++count;
 	}
@@ -1458,7 +1458,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		if (new_node->map == replacement_map) {
 			map->start	= map__start(new_node->map);
 			map->end	= map__end(new_node->map);
-			map->pgoff	= new_node->map->pgoff;
+			map->pgoff	= map__pgoff(new_node->map);
 			map->map_ip	= new_node->map->map_ip;
 			map->unmap_ip	= new_node->map->unmap_ip;
 			/* Ensure maps are correctly ordered */
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index 538320e4260c..9565f9906e5d 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -62,19 +62,19 @@ static int __report_module(struct addr_location *al, u64 ip,
 		Dwarf_Addr s;
 
 		dwfl_module_info(mod, NULL, &s, NULL, NULL, NULL, NULL, NULL);
-		if (s != map__start(al->map) - al->map->pgoff)
+		if (s != map__start(al->map) - map__pgoff(al->map))
 			mod = 0;
 	}
 
 	if (!mod)
 		mod = dwfl_report_elf(ui->dwfl, dso->short_name, dso->long_name, -1,
-				      map__start(al->map) - al->map->pgoff, false);
+				      map__start(al->map) - map__pgoff(al->map), false);
 	if (!mod) {
 		char filename[PATH_MAX];
 
 		if (dso__build_id_filename(dso, filename, sizeof(filename), false))
 			mod = dwfl_report_elf(ui->dwfl, dso->short_name, filename, -1,
-					      map__start(al->map) - al->map->pgoff, false);
+					      map__start(al->map) - map__pgoff(al->map), false);
 	}
 
 	if (mod) {
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 10/17] perf test: Add extra diagnostics to maps test
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (8 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 09/17] perf map: Add accessors for pgoff and reloc Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 11/17] perf maps: Modify maps_by_name to hold a reference to a map Ian Rogers
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Dump the resultant and comparison maps on failure.

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/maps.c | 51 +++++++++++++++++++++++++++++------------
 1 file changed, 36 insertions(+), 15 deletions(-)

diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index fd0c464fcf95..1c7293476aca 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -1,4 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
+#include <inttypes.h>
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include "tests.h"
@@ -17,22 +18,42 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 {
 	struct map_rb_node *rb_node;
 	unsigned int i = 0;
-
-	maps__for_each_entry(maps, rb_node) {
-		struct map *map = rb_node->map;
-
-		if (i > 0)
-			TEST_ASSERT_VAL("less maps expected", (map && i < size) || (!map && i == size));
-
-		TEST_ASSERT_VAL("wrong map start",  map__start(map) == merged[i].start);
-		TEST_ASSERT_VAL("wrong map end",    map__end(map) == merged[i].end);
-		TEST_ASSERT_VAL("wrong map name",  !strcmp(map__dso(map)->name, merged[i].name));
-		TEST_ASSERT_VAL("wrong map refcnt", refcount_read(&map->refcnt) == 1);
-
-		i++;
+	bool failed = false;
+
+	if (maps__nr_maps(maps) != size) {
+		pr_debug("Expected %d maps, got %d", size, maps__nr_maps(maps));
+		failed = true;
+	} else {
+		maps__for_each_entry(maps, rb_node) {
+			struct map *map = rb_node->map;
+
+			if (map__start(map) != merged[i].start ||
+			    map__end(map) != merged[i].end ||
+			    strcmp(map__dso(map)->name, merged[i].name) ||
+			    refcount_read(&map->refcnt) != 1) {
+				failed = true;
+			}
+			i++;
+		}
 	}
-
-	return TEST_OK;
+	if (failed) {
+		pr_debug("Expected:\n");
+		for (i = 0; i < size; i++) {
+			pr_debug("\tstart: %" PRIu64 " end: %" PRIu64 " name: '%s' refcnt: 1\n",
+				merged[i].start, merged[i].end, merged[i].name);
+		}
+		pr_debug("Got:\n");
+		maps__for_each_entry(maps, rb_node) {
+			struct map *map = rb_node->map;
+
+			pr_debug("\tstart: %" PRIu64 " end: %" PRIu64 " name: '%s' refcnt: %d\n",
+				map__start(map),
+				map__end(map),
+				map__dso(map)->name,
+				refcount_read(&map->refcnt));
+		}
+	}
+	return failed ? TEST_FAIL : TEST_OK;
 }
 
 static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest __maybe_unused)
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 11/17] perf maps: Modify maps_by_name to hold a reference to a map
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (9 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 10/17] perf test: Add extra diagnostics to maps test Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 12/17] perf map: Changes to reference counting Ian Rogers
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

To make the ownership of a reference count clearer, split the by-name
case from the regular start-address sorted tree. Put the references held
by maps_by_name when it is freed, which requires moving the decrement of
nr_maps later in maps__remove. Add two missing map puts in
maps__fixup_overlappings for the case where maps__insert fails.
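
The ordering constraint exists because the teardown loop puts one
reference per maps_by_name entry and relies on nr_maps still describing
the whole array, so maps__remove may only decrement nr_maps afterwards.
A rough sketch of the resulting invariant, mirroring the maps.c hunk
below:

  /* maps_by_name owns one reference per entry; drop them all before the
   * array is freed, while maps__nr_maps() still covers every slot. */
  static void __maps__free_maps_by_name(struct maps *maps)
  {
  	for (unsigned int i = 0; i < maps__nr_maps(maps); i++)
  		map__put(maps__maps_by_name(maps)[i]);

  	zfree(&maps->maps_by_name);
  	maps->nr_maps_allocated = 0;
  }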

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/util/maps.c   | 30 ++++++++++++++++--------------
 tools/perf/util/symbol.c | 21 +++++++++++++++++----
 2 files changed, 33 insertions(+), 18 deletions(-)

diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index ffd4a4a64026..74e3133f5007 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -26,6 +26,9 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	/*
 	 * Free everything to try to do it from the rbtree in the next search
 	 */
+	for (unsigned int i = 0; i < maps__nr_maps(maps); i++)
+		map__put(maps__maps_by_name(maps)[i]);
+
 	zfree(&maps->maps_by_name);
 	maps->nr_maps_allocated = 0;
 }
@@ -42,7 +45,7 @@ static int __maps__insert(struct maps *maps, struct map *map)
 		return -ENOMEM;
 
 	RB_CLEAR_NODE(&new_rb_node->rb_node);
-	new_rb_node->map = map;
+	new_rb_node->map = map__get(map);
 
 	while (*p != NULL) {
 		parent = *p;
@@ -55,7 +58,6 @@ static int __maps__insert(struct maps *maps, struct map *map)
 
 	rb_link_node(&new_rb_node->rb_node, parent, p);
 	rb_insert_color(&new_rb_node->rb_node, maps__entries(maps));
-	map__get(map);
 	return 0;
 }
 
@@ -100,7 +102,7 @@ int maps__insert(struct maps *maps, struct map *map)
 			maps->maps_by_name = maps_by_name;
 			maps->nr_maps_allocated = nr_allocate;
 		}
-		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map;
+		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map__get(map);
 		__maps__sort_by_name(maps);
 	}
  out:
@@ -126,9 +128,9 @@ void maps__remove(struct maps *maps, struct map *map)
 	rb_node = maps__find_node(maps, map);
 	assert(rb_node->map == map);
 	__maps__remove(maps, rb_node);
-	--maps->nr_maps;
 	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
+	--maps->nr_maps;
 	up_write(maps__lock(maps));
 }
 
@@ -136,6 +138,9 @@ static void __maps__purge(struct maps *maps)
 {
 	struct map_rb_node *pos, *next;
 
+	if (maps__maps_by_name(maps))
+		__maps__free_maps_by_name(maps);
+
 	maps__for_each_entry_safe(maps, pos, next) {
 		rb_erase_init(&pos->rb_node,  maps__entries(maps));
 		map__put(pos->map);
@@ -293,7 +298,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 	}
 
 	next = first;
-	while (next) {
+	while (next && !err) {
 		struct map_rb_node *pos = rb_entry(next, struct map_rb_node, rb_node);
 		next = rb_next(&pos->rb_node);
 
@@ -331,8 +336,10 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 
 			before->end = map__start(map);
 			err = __maps__insert(maps, before);
-			if (err)
+			if (err) {
+				map__put(before);
 				goto put_map;
+			}
 
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(before, fp);
@@ -352,22 +359,17 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			assert(map__map_ip(pos->map, map__end(map)) ==
 				map__map_ip(after, map__end(map)));
 			err = __maps__insert(maps, after);
-			if (err)
+			if (err) {
+				map__put(after);
 				goto put_map;
-
+			}
 			if (verbose >= 2 && !use_browser)
 				map__fprintf(after, fp);
 			map__put(after);
 		}
 put_map:
 		map__put(pos->map);
-
-		if (err)
-			goto out;
 	}
-
-	err = 0;
-out:
 	up_write(maps__lock(maps));
 	return err;
 }
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index ec7a312e7cc1..7904bfff7d0e 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -2052,10 +2052,23 @@ int dso__load(struct dso *dso, struct map *map)
 
 static int map__strcmp(const void *a, const void *b)
 {
-	const struct dso *dso_a = map__dso(*(const struct map **)a);
-	const struct dso *dso_b = map__dso(*(const struct map **)b);
+	const struct map *map_a = *(const struct map **)a;
+	const struct map *map_b = *(const struct map **)b;
+	const struct dso *dso_a = map__dso(map_a);
+	const struct dso *dso_b = map__dso(map_b);
+	int ret = strcmp(dso_a->short_name, dso_b->short_name);
 
-	return strcmp(dso_a->short_name, dso_b->short_name);
+	if (ret == 0 && map_a != map_b) {
+		/*
+		 * Ensure distinct but name equal maps have an order in part to
+		 * aid reference counting.
+		 */
+		ret = (int)map__start(map_a) - (int)map__start(map_b);
+		if (ret == 0)
+			ret = (int)((intptr_t)map_a - (intptr_t)map_b);
+	}
+
+	return ret;
 }
 
 static int map__strcmp_name(const void *name, const void *b)
@@ -2087,7 +2100,7 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 	maps->nr_maps_allocated = maps__nr_maps(maps);
 
 	maps__for_each_entry(maps, rb_node)
-		maps_by_name[i++] = rb_node->map;
+		maps_by_name[i++] = map__get(rb_node->map);
 
 	__maps__sort_by_name(maps);
 
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH v5 12/17] perf map: Changes to reference counting
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (10 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 11/17] perf maps: Modify maps_by_name to hold a reference to a map Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 13/17] libperf: Add reference count checking macros Ian Rogers
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

When a pointer to a map exists, do a get; when that pointer is
overwritten or freed, put the map. This avoids the problems that come
from gets and puts being used inconsistently, such as use after put. For
example, the map in struct addr_location is changed to hold a reference
count. Reference count checking and address sanitizer were used to
identify issues.
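
A minimal sketch of the idiom the hunks below apply throughout (the
helper name is invented; map__get(), map__put() and map__zput() are the
existing perf helpers, and map__get() returning its argument is what
makes the assignment form work):

  /* Hypothetical helper: replace a stored map pointer while keeping the
   * reference count balanced. Both calls are NULL-safe. */
  static void map_symbol__set_map(struct map_symbol *ms, struct map *new_map)
  {
  	map__put(ms->map);		/* old pointer dropped => put */
  	ms->map = map__get(new_map);	/* new pointer stored  => get */
  }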

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/code-reading.c       |  1 +
 tools/perf/tests/hists_cumulate.c     | 10 ++++
 tools/perf/tests/hists_filter.c       | 10 ++++
 tools/perf/tests/hists_link.c         | 18 +++++-
 tools/perf/tests/hists_output.c       | 10 ++++
 tools/perf/tests/mmap-thread-lookup.c |  1 +
 tools/perf/util/callchain.c           |  9 +--
 tools/perf/util/event.c               |  6 +-
 tools/perf/util/hist.c                | 10 ++--
 tools/perf/util/machine.c             | 79 ++++++++++++++++-----------
 tools/perf/util/map.c                 |  2 +-
 11 files changed, 112 insertions(+), 44 deletions(-)

diff --git a/tools/perf/tests/code-reading.c b/tools/perf/tests/code-reading.c
index 1545fcaa95c6..efe026a35010 100644
--- a/tools/perf/tests/code-reading.c
+++ b/tools/perf/tests/code-reading.c
@@ -366,6 +366,7 @@ static int read_object_code(u64 addr, size_t len, u8 cpumode,
 	}
 	pr_debug("Bytes read match those read by objdump\n");
 out:
+	map__put(al.map);
 	return err;
 }
 
diff --git a/tools/perf/tests/hists_cumulate.c b/tools/perf/tests/hists_cumulate.c
index f00ec9abdbcd..8c0e3f334747 100644
--- a/tools/perf/tests/hists_cumulate.c
+++ b/tools/perf/tests/hists_cumulate.c
@@ -112,6 +112,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
 		}
 
 		fake_samples[i].thread = al.thread;
+		map__put(fake_samples[i].map);
 		fake_samples[i].map = al.map;
 		fake_samples[i].sym = al.sym;
 	}
@@ -147,6 +148,14 @@ static void del_hist_entries(struct hists *hists)
 	}
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
@@ -733,6 +742,7 @@ static int test__hists_cumulate(struct test_suite *test __maybe_unused, int subt
 	/* tear down everything */
 	evlist__delete(evlist);
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_filter.c b/tools/perf/tests/hists_filter.c
index 7c552549f4a4..98eff5935a1c 100644
--- a/tools/perf/tests/hists_filter.c
+++ b/tools/perf/tests/hists_filter.c
@@ -89,6 +89,7 @@ static int add_hist_entries(struct evlist *evlist,
 			}
 
 			fake_samples[i].thread = al.thread;
+			map__put(fake_samples[i].map);
 			fake_samples[i].map = al.map;
 			fake_samples[i].sym = al.sym;
 		}
@@ -101,6 +102,14 @@ static int add_hist_entries(struct evlist *evlist,
 	return TEST_FAIL;
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 static int test__hists_filter(struct test_suite *test __maybe_unused, int subtest __maybe_unused)
 {
 	int err = TEST_FAIL;
@@ -322,6 +331,7 @@ static int test__hists_filter(struct test_suite *test __maybe_unused, int subtes
 	evlist__delete(evlist);
 	reset_output_field();
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
index e7e4ee57ce04..64ce8097889c 100644
--- a/tools/perf/tests/hists_link.c
+++ b/tools/perf/tests/hists_link.c
@@ -6,6 +6,7 @@
 #include "evsel.h"
 #include "evlist.h"
 #include "machine.h"
+#include "map.h"
 #include "parse-events.h"
 #include "hists_common.h"
 #include "util/mmap.h"
@@ -94,6 +95,7 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
 			}
 
 			fake_common_samples[k].thread = al.thread;
+			map__put(fake_common_samples[k].map);
 			fake_common_samples[k].map = al.map;
 			fake_common_samples[k].sym = al.sym;
 		}
@@ -126,11 +128,24 @@ static int add_hist_entries(struct evlist *evlist, struct machine *machine)
 	return -1;
 }
 
+static void put_fake_samples(void)
+{
+	size_t i, j;
+
+	for (i = 0; i < ARRAY_SIZE(fake_common_samples); i++)
+		map__put(fake_common_samples[i].map);
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++) {
+		for (j = 0; j < ARRAY_SIZE(fake_samples[0]); j++)
+			map__put(fake_samples[i][j].map);
+	}
+}
+
 static int find_sample(struct sample *samples, size_t nr_samples,
 		       struct thread *t, struct map *m, struct symbol *s)
 {
 	while (nr_samples--) {
-		if (samples->thread == t && samples->map == m &&
+		if (samples->thread == t &&
+		    samples->map == m &&
 		    samples->sym == s)
 			return 1;
 		samples++;
@@ -336,6 +351,7 @@ static int test__hists_link(struct test_suite *test __maybe_unused, int subtest
 	evlist__delete(evlist);
 	reset_output_field();
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/hists_output.c b/tools/perf/tests/hists_output.c
index 428d11a938f2..cebd5226bb12 100644
--- a/tools/perf/tests/hists_output.c
+++ b/tools/perf/tests/hists_output.c
@@ -78,6 +78,7 @@ static int add_hist_entries(struct hists *hists, struct machine *machine)
 		}
 
 		fake_samples[i].thread = al.thread;
+		map__put(fake_samples[i].map);
 		fake_samples[i].map = al.map;
 		fake_samples[i].sym = al.sym;
 	}
@@ -113,6 +114,14 @@ static void del_hist_entries(struct hists *hists)
 	}
 }
 
+static void put_fake_samples(void)
+{
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(fake_samples); i++)
+		map__put(fake_samples[i].map);
+}
+
 typedef int (*test_fn_t)(struct evsel *, struct machine *);
 
 #define COMM(he)  (thread__comm_str(he->thread))
@@ -620,6 +629,7 @@ static int test__hists_output(struct test_suite *test __maybe_unused, int subtes
 	/* tear down everything */
 	evlist__delete(evlist);
 	machines__exit(&machines);
+	put_fake_samples();
 
 	return err;
 }
diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c
index 5cc4644e353d..898eda55b7a8 100644
--- a/tools/perf/tests/mmap-thread-lookup.c
+++ b/tools/perf/tests/mmap-thread-lookup.c
@@ -203,6 +203,7 @@ static int mmap_events(synth_cb synth)
 		}
 
 		pr_debug("map %p, addr %" PRIx64 "\n", al.map, map__start(al.map));
+		map__put(al.map);
 	}
 
 	machine__delete_threads(machine);
diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
index 9e9c39dd9d2b..78dc7b6f7ff7 100644
--- a/tools/perf/util/callchain.c
+++ b/tools/perf/util/callchain.c
@@ -589,7 +589,7 @@ fill_node(struct callchain_node *node, struct callchain_cursor *cursor)
 		}
 		call->ip = cursor_node->ip;
 		call->ms = cursor_node->ms;
-		map__get(call->ms.map);
+		call->ms.map = map__get(call->ms.map);
 		call->srcline = cursor_node->srcline;
 
 		if (cursor_node->branch) {
@@ -1067,7 +1067,7 @@ int callchain_cursor_append(struct callchain_cursor *cursor,
 	node->ip = ip;
 	map__zput(node->ms.map);
 	node->ms = *ms;
-	map__get(node->ms.map);
+	node->ms.map = map__get(node->ms.map);
 	node->branch = branch;
 	node->nr_loop_iter = nr_loop_iter;
 	node->iter_cycles = iter_cycles;
@@ -1115,7 +1115,8 @@ int fill_callchain_info(struct addr_location *al, struct callchain_cursor_node *
 	struct machine *machine = maps__machine(node->ms.maps);
 
 	al->maps = node->ms.maps;
-	al->map = node->ms.map;
+	map__put(al->map);
+	al->map = map__get(node->ms.map);
 	al->sym = node->ms.sym;
 	al->srcline = node->srcline;
 	al->addr = node->ip;
@@ -1528,7 +1529,7 @@ int callchain_node__make_parent_list(struct callchain_node *node)
 				goto out;
 			*new = *chain;
 			new->has_children = false;
-			map__get(new->ms.map);
+			new->ms.map = map__get(new->ms.map);
 			list_add_tail(&new->list, &head);
 		}
 		parent = parent->parent;
diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 2712d1a8264e..13f7f85e92e1 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -485,13 +485,14 @@ size_t perf_event__fprintf_text_poke(union perf_event *event, struct machine *ma
 	if (machine) {
 		struct addr_location al;
 
-		al.map = maps__find(machine__kernel_maps(machine), tp->addr);
+		al.map = map__get(maps__find(machine__kernel_maps(machine), tp->addr));
 		if (al.map && map__load(al.map) >= 0) {
 			al.addr = map__map_ip(al.map, tp->addr);
 			al.sym = map__find_symbol(al.map, al.addr);
 			if (al.sym)
 				ret += symbol__fprintf_symname_offs(al.sym, &al, fp);
 		}
+		map__put(al.map);
 	}
 	ret += fprintf(fp, " old len %u new len %u\n", tp->old_len, tp->new_len);
 	old = true;
@@ -614,7 +615,7 @@ struct map *thread__find_map(struct thread *thread, u8 cpumode, u64 addr,
 		return NULL;
 	}
 
-	al->map = maps__find(maps, al->addr);
+	al->map = map__get(maps__find(maps, al->addr));
 	if (al->map != NULL) {
 		/*
 		 * Kernel maps might be changed when loading symbols so loading
@@ -773,6 +774,7 @@ int machine__resolve(struct machine *machine, struct addr_location *al,
  */
 void addr_location__put(struct addr_location *al)
 {
+	map__zput(al->map);
 	thread__zput(al->thread);
 }
 
diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c
index fdf0562d2fd3..02b4bf31b1a7 100644
--- a/tools/perf/util/hist.c
+++ b/tools/perf/util/hist.c
@@ -450,7 +450,7 @@ static int hist_entry__init(struct hist_entry *he,
 			memset(&he->stat, 0, sizeof(he->stat));
 	}
 
-	map__get(he->ms.map);
+	he->ms.map = map__get(he->ms.map);
 
 	if (he->branch_info) {
 		/*
@@ -465,13 +465,13 @@ static int hist_entry__init(struct hist_entry *he,
 		memcpy(he->branch_info, template->branch_info,
 		       sizeof(*he->branch_info));
 
-		map__get(he->branch_info->from.ms.map);
-		map__get(he->branch_info->to.ms.map);
+		he->branch_info->from.ms.map = map__get(he->branch_info->from.ms.map);
+		he->branch_info->to.ms.map = map__get(he->branch_info->to.ms.map);
 	}
 
 	if (he->mem_info) {
-		map__get(he->mem_info->iaddr.ms.map);
-		map__get(he->mem_info->daddr.ms.map);
+		he->mem_info->iaddr.ms.map = map__get(he->mem_info->iaddr.ms.map);
+		he->mem_info->daddr.ms.map = map__get(he->mem_info->daddr.ms.map);
 	}
 
 	if (hist_entry__has_callchains(he) && symbol_conf.use_callchain)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 916d98885128..502e97010a3c 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -880,21 +880,29 @@ static int machine__process_ksymbol_register(struct machine *machine,
 	struct symbol *sym;
 	struct dso *dso;
 	struct map *map = maps__find(machine__kernel_maps(machine), event->ksymbol.addr);
+	bool put_map = false;
+	int err = 0;
 
 	if (!map) {
-		int err;
-
 		dso = dso__new(event->ksymbol.name);
-		if (dso) {
-			dso->kernel = DSO_SPACE__KERNEL;
-			map = map__new2(0, dso);
-			dso__put(dso);
-		}
 
-		if (!dso || !map) {
-			return -ENOMEM;
+		if (!dso) {
+			err = -ENOMEM;
+			goto out;
 		}
-
+		dso->kernel = DSO_SPACE__KERNEL;
+		map = map__new2(0, dso);
+		dso__put(dso);
+		if (!map) {
+			err = -ENOMEM;
+			goto out;
+		}
+		/*
+		 * The inserted map has a get on it, we need to put to release
+		 * the reference count here, but do it after all accesses are
+		 * done.
+		 */
+		put_map = true;
 		if (event->ksymbol.ksym_type == PERF_RECORD_KSYMBOL_TYPE_OOL) {
 			dso->binary_type = DSO_BINARY_TYPE__OOL;
 			dso->data.file_size = event->ksymbol.len;
@@ -904,9 +912,10 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		map->start = event->ksymbol.addr;
 		map->end = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
-		map__put(map);
-		if (err)
-			return err;
+		if (err) {
+			err = -ENOMEM;
+			goto out;
+		}
 
 		dso__set_loaded(dso);
 
@@ -921,10 +930,15 @@ static int machine__process_ksymbol_register(struct machine *machine,
 	sym = symbol__new(map__map_ip(map, map__start(map)),
 			  event->ksymbol.len,
 			  0, 0, event->ksymbol.name);
-	if (!sym)
-		return -ENOMEM;
+	if (!sym) {
+		err = -ENOMEM;
+		goto out;
+	}
 	dso__insert_symbol(dso, sym);
-	return 0;
+out:
+	if (put_map)
+		map__put(map);
+	return err;
 }
 
 static int machine__process_ksymbol_unregister(struct machine *machine,
@@ -1026,13 +1040,11 @@ static struct map *machine__addnew_module_map(struct machine *machine, u64 start
 		goto out;
 
 	err = maps__insert(machine__kernel_maps(machine), map);
-
-	/* Put the map here because maps__insert already got it */
-	map__put(map);
-
 	/* If maps__insert failed, return NULL. */
-	if (err)
+	if (err) {
+		map__put(map);
 		map = NULL;
+	}
 out:
 	/* put the dso here, corresponding to  machine__findnew_module_dso */
 	dso__put(dso);
@@ -1324,6 +1336,7 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 	/* In case of renewal the kernel map, destroy previous one */
 	machine__destroy_kernel_maps(machine);
 
+	map__put(machine->vmlinux_map);
 	machine->vmlinux_map = map__new2(0, kernel);
 	if (machine->vmlinux_map == NULL)
 		return -ENOMEM;
@@ -1612,7 +1625,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 	map->end = start + size;
 
 	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
-
+	map__put(map);
 	return 0;
 }
 
@@ -1658,16 +1671,18 @@ static void machine__set_kernel_mmap(struct machine *machine,
 static int machine__update_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
-	struct map *map = machine__kernel_map(machine);
+	struct map *orig, *updated;
 	int err;
 
-	map__get(map);
-	maps__remove(machine__kernel_maps(machine), map);
+	orig = machine->vmlinux_map;
+	updated = map__get(orig);
 
+	machine->vmlinux_map = updated;
 	machine__set_kernel_mmap(machine, start, end);
+	maps__remove(machine__kernel_maps(machine), orig);
+	err = maps__insert(machine__kernel_maps(machine), updated);
+	map__put(orig);
 
-	err = maps__insert(machine__kernel_maps(machine), map);
-	map__put(map);
 	return err;
 }
 
@@ -2294,7 +2309,7 @@ static int add_callchain_ip(struct thread *thread,
 {
 	struct map_symbol ms;
 	struct addr_location al;
-	int nr_loop_iter = 0;
+	int nr_loop_iter = 0, err;
 	u64 iter_cycles = 0;
 	const char *srcline = NULL;
 
@@ -2355,9 +2370,11 @@ static int add_callchain_ip(struct thread *thread,
 	ms.map = al.map;
 	ms.sym = al.sym;
 	srcline = callchain_srcline(&ms, al.addr);
-	return callchain_cursor_append(cursor, ip, &ms,
-				       branch, flags, nr_loop_iter,
-				       iter_cycles, branch_from, srcline);
+	err = callchain_cursor_append(cursor, ip, &ms,
+				      branch, flags, nr_loop_iter,
+				      iter_cycles, branch_from, srcline);
+	map__put(al.map);
+	return err;
 }
 
 struct branch_info *sample__resolve_bstack(struct perf_sample *sample,
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 1fe367e2cf19..acbc37359e06 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -410,7 +410,7 @@ struct map *map__clone(struct map *from)
 	map = memdup(from, size);
 	if (map != NULL) {
 		refcount_set(&map->refcnt, 1);
-		dso__get(dso);
+		map->dso = dso__get(dso);
 	}
 
 	return map;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 13/17] libperf: Add reference count checking macros.
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (11 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 12/17] perf map: Changes to reference counting Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 14/17] perf cpumap: Add reference count checking Ian Rogers
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

The macros serve as a way to debug use of a reference counted struct.
On a get, the macros allocate a pointer that is interposed between the
caller and the original reference counted struct; the interposed
pointer is freed by the corresponding put. The pointer replaces the
original struct, so use of the struct name via its APIs remains
unchanged.
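
As a rough sketch of the intended use (hypothetical struct foo and
foo__*() helpers, not part of this patch; it assumes this new header
plus the tools' <linux/refcount.h>), mirroring the pattern the later
patches apply to cpumap, nsinfo, maps and map:

  #include <stdlib.h>
  #include <linux/refcount.h>
  #include <internal/rc_check.h>

  /* 'struct foo' now names the (possibly interposed) handle. */
  DECLARE_RC_STRUCT(foo) {
          refcount_t refcnt;
          int val;
  };

  struct foo *foo__new(void)
  {
          struct foo *result;
          RC_STRUCT(foo) *foo = calloc(1, sizeof(*foo));

          if (ADD_RC_CHK(result, foo))
                  refcount_set(&foo->refcnt, 1);

          return result;
  }

  struct foo *foo__get(struct foo *foo)
  {
          struct foo *result;

          /* With REFCNT_CHECKING, RC_CHK_GET allocates a new indirection. */
          if (RC_CHK_GET(result, foo))
                  refcount_inc(&RC_CHK_ACCESS(foo)->refcnt);

          return result;
  }

  void foo__put(struct foo *foo)
  {
          if (foo && refcount_dec_and_test(&RC_CHK_ACCESS(foo)->refcnt))
                  RC_CHK_FREE(foo);       /* frees the struct and its indirection */
          else
                  RC_CHK_PUT(foo);        /* frees only this indirection */
  }

  /* Field accesses that bypass an API go through RC_CHK_ACCESS. */
  int foo__val(const struct foo *foo)
  {
          return RC_CHK_ACCESS(foo)->val;
  }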

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/lib/perf/include/internal/rc_check.h | 94 ++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100644 tools/lib/perf/include/internal/rc_check.h

diff --git a/tools/lib/perf/include/internal/rc_check.h b/tools/lib/perf/include/internal/rc_check.h
new file mode 100644
index 000000000000..c0626d8beb59
--- /dev/null
+++ b/tools/lib/perf/include/internal/rc_check.h
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+#ifndef __LIBPERF_INTERNAL_RC_CHECK_H
+#define __LIBPERF_INTERNAL_RC_CHECK_H
+
+#include <stdlib.h>
+#include <linux/zalloc.h>
+
+/*
+ * Shared reference count checking macros.
+ *
+ * Reference count checking is an approach to sanitizing the use of reference
+ * counted structs. It leverages address and leak sanitizers to make sure gets
+ * are paired with a put. Reference count checking adds a malloc-ed layer of
+ * indirection on a get, and frees it on a put. A missed put will be reported as
+ * a memory leak. A double put will be reported as a double free. Accessing
+ * after a put will cause a use-after-free and/or a segfault.
+ */
+
+#ifndef REFCNT_CHECKING
+/* Replaces "struct foo" so that the pointer may be interposed. */
+#define DECLARE_RC_STRUCT(struct_name)		\
+	struct struct_name
+
+/* Declare a reference counted struct variable. */
+#define RC_STRUCT(struct_name) struct struct_name
+
+/*
+ * Interpose the indirection. Result will hold the indirection and object is the
+ * reference counted struct.
+ */
+#define ADD_RC_CHK(result, object) (result = object, object)
+
+/* Strip the indirection layer. */
+#define RC_CHK_ACCESS(object) object
+
+/* Frees the object and the indirection layer. */
+#define RC_CHK_FREE(object) free(object)
+
+/* A get operation adding the indirection layer. */
+#define RC_CHK_GET(result, object) ADD_RC_CHK(result, object)
+
+/* A put operation removing the indirection layer. */
+#define RC_CHK_PUT(object) {}
+
+#else
+
+/* Replaces "struct foo" so that the pointer may be interposed. */
+#define DECLARE_RC_STRUCT(struct_name)			\
+	struct original_##struct_name;			\
+	struct struct_name {				\
+		struct original_##struct_name *orig;	\
+	};						\
+	struct original_##struct_name
+
+/* Declare a reference counted struct variable. */
+#define RC_STRUCT(struct_name) struct original_##struct_name
+
+/*
+ * Interpose the indirection. Result will hold the indirection and object is the
+ * reference counted struct.
+ */
+#define ADD_RC_CHK(result, object)					\
+	(								\
+		object ? (result = malloc(sizeof(*result)),		\
+			result ? (result->orig = object, result)	\
+			: (result = NULL, NULL))			\
+		: (result = NULL, NULL)					\
+		)
+
+/* Strip the indirection layer. */
+#define RC_CHK_ACCESS(object) object->orig
+
+/* Frees the object and the indirection layer. */
+#define RC_CHK_FREE(object)			\
+	do {					\
+		zfree(&object->orig);		\
+		free(object);			\
+	} while(0)
+
+/* A get operation adding the indirection layer. */
+#define RC_CHK_GET(result, object) ADD_RC_CHK(result, (object ? object->orig : NULL))
+
+/* A put operation removing the indirection layer. */
+#define RC_CHK_PUT(object)			\
+	do {					\
+		if (object) {			\
+			object->orig = NULL;	\
+			free(object);		\
+		}				\
+	} while(0)
+
+#endif
+
+#endif /* __LIBPERF_INTERNAL_RC_CHECK_H */
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 14/17] perf cpumap: Add reference count checking
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (12 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 13/17] libperf: Add reference count checking macros Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 15/17] perf namespaces: " Ian Rogers
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Enabled when REFCNT_CHECKING is defined. The change adds a memory
allocated pointer that is interposed between the caller and the
reference counted cpu map at a get, and freed by a put. The pointer
replaces the original perf_cpu_map struct, so use of the perf_cpu_map
via its APIs remains unchanged. Any access to the cpu map that bypasses
the API must handle both versions, which is done via the RC_CHK_ACCESS
macro.

This change is intended to catch (see the sketch after this list):
 - use after put: using a cpumap after you have put it will cause a
   segv.
 - unbalanced puts: two puts for a get will result in a double free
   that can be captured and reported by tools like address sanitizer,
   including with the associated stack traces of allocation and frees.
 - missing puts: if a put is missing then the get turns into a memory
   leak that can be reported by leak sanitizer, including the stack
   trace at the point the get occurs.
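
For instance, a misuse along these lines (an illustrative sketch, not
code from the tree) is reported once REFCNT_CHECKING is defined and the
sanitizers are enabled:

  #include <perf/cpumap.h>

  void buggy_use(void)
  {
          struct perf_cpu_map *cpus = perf_cpu_map__new("0-3"); /* implicit get */
          struct perf_cpu_map *extra = perf_cpu_map__get(cpus);

          perf_cpu_map__put(cpus);
          /* Second put of the same handle: reported by address sanitizer
           * as a double free / use-after-free of the indirection. */
          perf_cpu_map__put(cpus);

          /* 'extra' is never put: the get above is reported as a leak by
           * leak sanitizer, with its allocation stack trace. */
          (void)extra;
  }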

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/lib/perf/Makefile                  |  2 +-
 tools/lib/perf/cpumap.c                  | 94 +++++++++++++-----------
 tools/lib/perf/include/internal/cpumap.h |  4 +-
 tools/perf/tests/cpumap.c                |  4 +-
 tools/perf/util/cpumap.c                 | 40 +++++-----
 tools/perf/util/pmu.c                    |  8 +-
 6 files changed, 81 insertions(+), 71 deletions(-)

diff --git a/tools/lib/perf/Makefile b/tools/lib/perf/Makefile
index d8cad124e4c5..3a9b2140aa04 100644
--- a/tools/lib/perf/Makefile
+++ b/tools/lib/perf/Makefile
@@ -188,7 +188,7 @@ install_lib: libs
 		cp -fpR $(LIBPERF_ALL) $(DESTDIR)$(libdir_SQ)
 
 HDRS := bpf_perf.h core.h cpumap.h threadmap.h evlist.h evsel.h event.h mmap.h
-INTERNAL_HDRS := cpumap.h evlist.h evsel.h lib.h mmap.h threadmap.h xyarray.h
+INTERNAL_HDRS := cpumap.h evlist.h evsel.h lib.h mmap.h rc_check.h threadmap.h xyarray.h
 
 INSTALL_HDRS_PFX := $(DESTDIR)$(prefix)/include/perf
 INSTALL_HDRS := $(addprefix $(INSTALL_HDRS_PFX)/, $(HDRS))
diff --git a/tools/lib/perf/cpumap.c b/tools/lib/perf/cpumap.c
index 6cd0be7c1bb4..56eed1ac80d9 100644
--- a/tools/lib/perf/cpumap.c
+++ b/tools/lib/perf/cpumap.c
@@ -10,16 +10,16 @@
 #include <ctype.h>
 #include <limits.h>
 
-static struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus)
+struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus)
 {
-	struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
-
-	if (cpus != NULL) {
+	struct perf_cpu_map *result;
+	RC_STRUCT(perf_cpu_map) *cpus =
+		malloc(sizeof(*cpus) + sizeof(struct perf_cpu) * nr_cpus);
+	if (ADD_RC_CHK(result, cpus)) {
 		cpus->nr = nr_cpus;
 		refcount_set(&cpus->refcnt, 1);
-
 	}
-	return cpus;
+	return result;
 }
 
 struct perf_cpu_map *perf_cpu_map__dummy_new(void)
@@ -27,7 +27,7 @@ struct perf_cpu_map *perf_cpu_map__dummy_new(void)
 	struct perf_cpu_map *cpus = perf_cpu_map__alloc(1);
 
 	if (cpus)
-		cpus->map[0].cpu = -1;
+		RC_CHK_ACCESS(cpus)->map[0].cpu = -1;
 
 	return cpus;
 }
@@ -35,23 +35,30 @@ struct perf_cpu_map *perf_cpu_map__dummy_new(void)
 static void cpu_map__delete(struct perf_cpu_map *map)
 {
 	if (map) {
-		WARN_ONCE(refcount_read(&map->refcnt) != 0,
+		WARN_ONCE(refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 0,
 			  "cpu_map refcnt unbalanced\n");
-		free(map);
+		RC_CHK_FREE(map);
 	}
 }
 
 struct perf_cpu_map *perf_cpu_map__get(struct perf_cpu_map *map)
 {
-	if (map)
-		refcount_inc(&map->refcnt);
-	return map;
+	struct perf_cpu_map *result;
+
+	if (RC_CHK_GET(result, map))
+		refcount_inc(&RC_CHK_ACCESS(map)->refcnt);
+
+	return result;
 }
 
 void perf_cpu_map__put(struct perf_cpu_map *map)
 {
-	if (map && refcount_dec_and_test(&map->refcnt))
-		cpu_map__delete(map);
+	if (map) {
+		if (refcount_dec_and_test(&RC_CHK_ACCESS(map)->refcnt))
+			cpu_map__delete(map);
+		else
+			RC_CHK_PUT(map);
+	}
 }
 
 static struct perf_cpu_map *cpu_map__default_new(void)
@@ -68,7 +75,7 @@ static struct perf_cpu_map *cpu_map__default_new(void)
 		int i;
 
 		for (i = 0; i < nr_cpus; ++i)
-			cpus->map[i].cpu = i;
+			RC_CHK_ACCESS(cpus)->map[i].cpu = i;
 	}
 
 	return cpus;
@@ -94,15 +101,16 @@ static struct perf_cpu_map *cpu_map__trim_new(int nr_cpus, const struct perf_cpu
 	int i, j;
 
 	if (cpus != NULL) {
-		memcpy(cpus->map, tmp_cpus, payload_size);
-		qsort(cpus->map, nr_cpus, sizeof(struct perf_cpu), cmp_cpu);
+		memcpy(RC_CHK_ACCESS(cpus)->map, tmp_cpus, payload_size);
+		qsort(RC_CHK_ACCESS(cpus)->map, nr_cpus, sizeof(struct perf_cpu), cmp_cpu);
 		/* Remove dups */
 		j = 0;
 		for (i = 0; i < nr_cpus; i++) {
-			if (i == 0 || cpus->map[i].cpu != cpus->map[i - 1].cpu)
-				cpus->map[j++].cpu = cpus->map[i].cpu;
+			if (i == 0 ||
+			    RC_CHK_ACCESS(cpus)->map[i].cpu != RC_CHK_ACCESS(cpus)->map[i - 1].cpu)
+				RC_CHK_ACCESS(cpus)->map[j++].cpu = RC_CHK_ACCESS(cpus)->map[i].cpu;
 		}
-		cpus->nr = j;
+		RC_CHK_ACCESS(cpus)->nr = j;
 		assert(j <= nr_cpus);
 	}
 	return cpus;
@@ -263,20 +271,20 @@ struct perf_cpu perf_cpu_map__cpu(const struct perf_cpu_map *cpus, int idx)
 		.cpu = -1
 	};
 
-	if (cpus && idx < cpus->nr)
-		return cpus->map[idx];
+	if (cpus && idx < RC_CHK_ACCESS(cpus)->nr)
+		return RC_CHK_ACCESS(cpus)->map[idx];
 
 	return result;
 }
 
 int perf_cpu_map__nr(const struct perf_cpu_map *cpus)
 {
-	return cpus ? cpus->nr : 1;
+	return cpus ? RC_CHK_ACCESS(cpus)->nr : 1;
 }
 
 bool perf_cpu_map__empty(const struct perf_cpu_map *map)
 {
-	return map ? map->map[0].cpu == -1 : true;
+	return map ? RC_CHK_ACCESS(map)->map[0].cpu == -1 : true;
 }
 
 int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu)
@@ -287,10 +295,10 @@ int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu)
 		return -1;
 
 	low = 0;
-	high = cpus->nr;
+	high = RC_CHK_ACCESS(cpus)->nr;
 	while (low < high) {
 		int idx = (low + high) / 2;
-		struct perf_cpu cpu_at_idx = cpus->map[idx];
+		struct perf_cpu cpu_at_idx = RC_CHK_ACCESS(cpus)->map[idx];
 
 		if (cpu_at_idx.cpu == cpu.cpu)
 			return idx;
@@ -316,7 +324,9 @@ struct perf_cpu perf_cpu_map__max(const struct perf_cpu_map *map)
 	};
 
 	// cpu_map__trim_new() qsort()s it, cpu_map__default_new() sorts it as well.
-	return map->nr > 0 ? map->map[map->nr - 1] : result;
+	return RC_CHK_ACCESS(map)->nr > 0
+		? RC_CHK_ACCESS(map)->map[RC_CHK_ACCESS(map)->nr - 1]
+		: result;
 }
 
 /** Is 'b' a subset of 'a'. */
@@ -324,15 +334,15 @@ bool perf_cpu_map__is_subset(const struct perf_cpu_map *a, const struct perf_cpu
 {
 	if (a == b || !b)
 		return true;
-	if (!a || b->nr > a->nr)
+	if (!a || RC_CHK_ACCESS(b)->nr > RC_CHK_ACCESS(a)->nr)
 		return false;
 
-	for (int i = 0, j = 0; i < a->nr; i++) {
-		if (a->map[i].cpu > b->map[j].cpu)
+	for (int i = 0, j = 0; i < RC_CHK_ACCESS(a)->nr; i++) {
+		if (RC_CHK_ACCESS(a)->map[i].cpu > RC_CHK_ACCESS(b)->map[j].cpu)
 			return false;
-		if (a->map[i].cpu == b->map[j].cpu) {
+		if (RC_CHK_ACCESS(a)->map[i].cpu == RC_CHK_ACCESS(b)->map[j].cpu) {
 			j++;
-			if (j == b->nr)
+			if (j == RC_CHK_ACCESS(b)->nr)
 				return true;
 		}
 	}
@@ -362,27 +372,27 @@ struct perf_cpu_map *perf_cpu_map__merge(struct perf_cpu_map *orig,
 		return perf_cpu_map__get(other);
 	}
 
-	tmp_len = orig->nr + other->nr;
+	tmp_len = RC_CHK_ACCESS(orig)->nr + RC_CHK_ACCESS(other)->nr;
 	tmp_cpus = malloc(tmp_len * sizeof(struct perf_cpu));
 	if (!tmp_cpus)
 		return NULL;
 
 	/* Standard merge algorithm from wikipedia */
 	i = j = k = 0;
-	while (i < orig->nr && j < other->nr) {
-		if (orig->map[i].cpu <= other->map[j].cpu) {
-			if (orig->map[i].cpu == other->map[j].cpu)
+	while (i < RC_CHK_ACCESS(orig)->nr && j < RC_CHK_ACCESS(other)->nr) {
+		if (RC_CHK_ACCESS(orig)->map[i].cpu <= RC_CHK_ACCESS(other)->map[j].cpu) {
+			if (RC_CHK_ACCESS(orig)->map[i].cpu == RC_CHK_ACCESS(other)->map[j].cpu)
 				j++;
-			tmp_cpus[k++] = orig->map[i++];
+			tmp_cpus[k++] = RC_CHK_ACCESS(orig)->map[i++];
 		} else
-			tmp_cpus[k++] = other->map[j++];
+			tmp_cpus[k++] = RC_CHK_ACCESS(other)->map[j++];
 	}
 
-	while (i < orig->nr)
-		tmp_cpus[k++] = orig->map[i++];
+	while (i < RC_CHK_ACCESS(orig)->nr)
+		tmp_cpus[k++] = RC_CHK_ACCESS(orig)->map[i++];
 
-	while (j < other->nr)
-		tmp_cpus[k++] = other->map[j++];
+	while (j < RC_CHK_ACCESS(other)->nr)
+		tmp_cpus[k++] = RC_CHK_ACCESS(other)->map[j++];
 	assert(k <= tmp_len);
 
 	merged = cpu_map__trim_new(k, tmp_cpus);
diff --git a/tools/lib/perf/include/internal/cpumap.h b/tools/lib/perf/include/internal/cpumap.h
index 35dd29642296..6c01bee4d048 100644
--- a/tools/lib/perf/include/internal/cpumap.h
+++ b/tools/lib/perf/include/internal/cpumap.h
@@ -4,6 +4,7 @@
 
 #include <linux/refcount.h>
 #include <perf/cpumap.h>
+#include <internal/rc_check.h>
 
 /**
  * A sized, reference counted, sorted array of integers representing CPU
@@ -12,7 +13,7 @@
  * gaps if CPU numbers were used. For events associated with a pid, rather than
  * a CPU, a single dummy map with an entry of -1 is used.
  */
-struct perf_cpu_map {
+DECLARE_RC_STRUCT(perf_cpu_map) {
 	refcount_t	refcnt;
 	/** Length of the map array. */
 	int		nr;
@@ -24,6 +25,7 @@ struct perf_cpu_map {
 #define MAX_NR_CPUS	2048
 #endif
 
+struct perf_cpu_map *perf_cpu_map__alloc(int nr_cpus);
 int perf_cpu_map__idx(const struct perf_cpu_map *cpus, struct perf_cpu cpu);
 bool perf_cpu_map__is_subset(const struct perf_cpu_map *a, const struct perf_cpu_map *b);
 
diff --git a/tools/perf/tests/cpumap.c b/tools/perf/tests/cpumap.c
index 3150fc1fed6f..d6f77b676d11 100644
--- a/tools/perf/tests/cpumap.c
+++ b/tools/perf/tests/cpumap.c
@@ -68,7 +68,7 @@ static int process_event_cpus(struct perf_tool *tool __maybe_unused,
 	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 2);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 0).cpu == 1);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 1).cpu == 256);
-	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&map->refcnt) == 1);
+	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(map)->refcnt) == 1);
 	perf_cpu_map__put(map);
 	return 0;
 }
@@ -94,7 +94,7 @@ static int process_event_range_cpus(struct perf_tool *tool __maybe_unused,
 	TEST_ASSERT_VAL("wrong nr",  perf_cpu_map__nr(map) == 256);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__cpu(map, 0).cpu == 1);
 	TEST_ASSERT_VAL("wrong cpu", perf_cpu_map__max(map).cpu == 256);
-	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&map->refcnt) == 1);
+	TEST_ASSERT_VAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(map)->refcnt) == 1);
 	perf_cpu_map__put(map);
 	return 0;
 }
diff --git a/tools/perf/util/cpumap.c b/tools/perf/util/cpumap.c
index 5e564974fba4..22453893105f 100644
--- a/tools/perf/util/cpumap.c
+++ b/tools/perf/util/cpumap.c
@@ -77,9 +77,9 @@ static struct perf_cpu_map *cpu_map__from_entries(const struct perf_record_cpu_m
 			 * otherwise it would become 65535.
 			 */
 			if (data->cpus_data.cpu[i] == (u16) -1)
-				map->map[i].cpu = -1;
+				RC_CHK_ACCESS(map)->map[i].cpu = -1;
 			else
-				map->map[i].cpu = (int) data->cpus_data.cpu[i];
+				RC_CHK_ACCESS(map)->map[i].cpu = (int) data->cpus_data.cpu[i];
 		}
 	}
 
@@ -107,7 +107,7 @@ static struct perf_cpu_map *cpu_map__from_mask(const struct perf_record_cpu_map_
 
 		perf_record_cpu_map_data__read_one_mask(data, i, local_copy);
 		for_each_set_bit(cpu, local_copy, 64)
-			map->map[j++].cpu = cpu + cpus_per_i;
+		        RC_CHK_ACCESS(map)->map[j++].cpu = cpu + cpus_per_i;
 	}
 	return map;
 
@@ -124,11 +124,11 @@ static struct perf_cpu_map *cpu_map__from_range(const struct perf_record_cpu_map
 		return NULL;
 
 	if (data->range_cpu_data.any_cpu)
-		map->map[i++].cpu = -1;
+		RC_CHK_ACCESS(map)->map[i++].cpu = -1;
 
 	for (int cpu = data->range_cpu_data.start_cpu; cpu <= data->range_cpu_data.end_cpu;
 	     i++, cpu++)
-		map->map[i].cpu = cpu;
+		RC_CHK_ACCESS(map)->map[i].cpu = cpu;
 
 	return map;
 }
@@ -160,16 +160,13 @@ size_t cpu_map__fprintf(struct perf_cpu_map *map, FILE *fp)
 
 struct perf_cpu_map *perf_cpu_map__empty_new(int nr)
 {
-	struct perf_cpu_map *cpus = malloc(sizeof(*cpus) + sizeof(int) * nr);
+	struct perf_cpu_map *cpus = perf_cpu_map__alloc(nr);
 
 	if (cpus != NULL) {
 		int i;
 
-		cpus->nr = nr;
 		for (i = 0; i < nr; i++)
-			cpus->map[i].cpu = -1;
-
-		refcount_set(&cpus->refcnt, 1);
+			RC_CHK_ACCESS(cpus)->map[i].cpu = -1;
 	}
 
 	return cpus;
@@ -239,7 +236,7 @@ struct cpu_aggr_map *cpu_aggr_map__new(const struct perf_cpu_map *cpus,
 {
 	int idx;
 	struct perf_cpu cpu;
-	struct cpu_aggr_map *c = cpu_aggr_map__empty_new(cpus->nr);
+	struct cpu_aggr_map *c = cpu_aggr_map__empty_new(perf_cpu_map__nr(cpus));
 
 	if (!c)
 		return NULL;
@@ -263,7 +260,7 @@ struct cpu_aggr_map *cpu_aggr_map__new(const struct perf_cpu_map *cpus,
 		}
 	}
 	/* Trim. */
-	if (c->nr != cpus->nr) {
+	if (c->nr != perf_cpu_map__nr(cpus)) {
 		struct cpu_aggr_map *trimmed_c =
 			realloc(c,
 				sizeof(struct cpu_aggr_map) + sizeof(struct aggr_cpu_id) * c->nr);
@@ -582,31 +579,32 @@ size_t cpu_map__snprint(struct perf_cpu_map *map, char *buf, size_t size)
 
 #define COMMA first ? "" : ","
 
-	for (i = 0; i < map->nr + 1; i++) {
+	for (i = 0; i < perf_cpu_map__nr(map) + 1; i++) {
 		struct perf_cpu cpu = { .cpu = INT_MAX };
-		bool last = i == map->nr;
+		bool last = i == perf_cpu_map__nr(map);
 
 		if (!last)
-			cpu = map->map[i];
+			cpu = perf_cpu_map__cpu(map, i);
 
 		if (start == -1) {
 			start = i;
 			if (last) {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d", COMMA,
-						map->map[i].cpu);
+						perf_cpu_map__cpu(map, i).cpu);
 			}
-		} else if (((i - start) != (cpu.cpu - map->map[start].cpu)) || last) {
+		} else if (((i - start) != (cpu.cpu - perf_cpu_map__cpu(map, start).cpu)) || last) {
 			int end = i - 1;
 
 			if (start == end) {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d", COMMA,
-						map->map[start].cpu);
+						perf_cpu_map__cpu(map, start).cpu);
 			} else {
 				ret += snprintf(buf + ret, size - ret,
 						"%s%d-%d", COMMA,
-						map->map[start].cpu, map->map[end].cpu);
+						perf_cpu_map__cpu(map, start).cpu,
+						perf_cpu_map__cpu(map, end).cpu);
 			}
 			first = false;
 			start = i;
@@ -633,7 +631,7 @@ size_t cpu_map__snprint_mask(struct perf_cpu_map *map, char *buf, size_t size)
 	int i, cpu;
 	char *ptr = buf;
 	unsigned char *bitmap;
-	struct perf_cpu last_cpu = perf_cpu_map__cpu(map, map->nr - 1);
+	struct perf_cpu last_cpu = perf_cpu_map__cpu(map, perf_cpu_map__nr(map) - 1);
 
 	if (buf == NULL)
 		return 0;
@@ -644,7 +642,7 @@ size_t cpu_map__snprint_mask(struct perf_cpu_map *map, char *buf, size_t size)
 		return 0;
 	}
 
-	for (i = 0; i < map->nr; i++) {
+	for (i = 0; i < perf_cpu_map__nr(map); i++) {
 		cpu = perf_cpu_map__cpu(map, i).cpu;
 		bitmap[cpu / 8] |= 1 << (cpu % 8);
 	}
diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
index 45d9b8e28e16..25bb52e8c147 100644
--- a/tools/perf/util/pmu.c
+++ b/tools/perf/util/pmu.c
@@ -1885,13 +1885,13 @@ int perf_pmu__cpus_match(struct perf_pmu *pmu, struct perf_cpu_map *cpus,
 
 	perf_cpu_map__for_each_cpu(cpu, i, cpus) {
 		if (!perf_cpu_map__has(pmu_cpus, cpu))
-			unmatched_cpus->map[unmatched_nr++] = cpu;
+			RC_CHK_ACCESS(unmatched_cpus)->map[unmatched_nr++] = cpu;
 		else
-			matched_cpus->map[matched_nr++] = cpu;
+			RC_CHK_ACCESS(matched_cpus)->map[matched_nr++] = cpu;
 	}
 
-	unmatched_cpus->nr = unmatched_nr;
-	matched_cpus->nr = matched_nr;
+	RC_CHK_ACCESS(unmatched_cpus)->nr = unmatched_nr;
+	RC_CHK_ACCESS(matched_cpus)->nr = matched_nr;
 	*mcpus_ptr = matched_cpus;
 	*ucpus_ptr = unmatched_cpus;
 	return 0;
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 15/17] perf namespaces: Add reference count checking
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (13 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 14/17] perf cpumap: Add reference count checking Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 16/17] perf maps: " Ian Rogers
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Add reference count checking controlled by the REFCNT_CHECKING ifdef.
The reference count checking interposes an allocated pointer between
the caller and the reference counted struct on a get, and frees the
pointer on a put. Accesses after a put cause use-after-free faults,
missed puts are caught as leaks and double puts as double frees.

This checking helped resolve a memory leak and use after free:
https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/builtin-inject.c  |   2 +-
 tools/perf/util/annotate.c   |   2 +-
 tools/perf/util/dso.c        |   2 +-
 tools/perf/util/dsos.c       |   2 +-
 tools/perf/util/namespaces.c | 132 ++++++++++++++++++++---------------
 tools/perf/util/namespaces.h |   3 +-
 tools/perf/util/symbol.c     |   2 +-
 7 files changed, 83 insertions(+), 62 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index fd2b38458a5d..fe6ddcf7fb1e 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -632,7 +632,7 @@ static int dso__read_build_id(struct dso *dso)
 	else if (dso->nsinfo) {
 		char *new_name;
 
-		new_name = filename_with_chroot(dso->nsinfo->pid,
+		new_name = filename_with_chroot(RC_CHK_ACCESS(dso->nsinfo)->pid,
 						dso->long_name);
 		if (new_name && filename__read_build_id(new_name, &dso->bid) > 0)
 			dso->has_build_id = true;
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index e8570b7cc36f..199f6cd5ad1e 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1701,7 +1701,7 @@ static int dso__disassemble_filename(struct dso *dso, char *filename, size_t fil
 
 		mutex_lock(&dso->lock);
 		if (access(filename, R_OK) && errno == ENOENT && dso->nsinfo) {
-			char *new_name = filename_with_chroot(dso->nsinfo->pid,
+			char *new_name = filename_with_chroot(RC_CHK_ACCESS(dso->nsinfo)->pid,
 							      filename);
 			if (new_name) {
 				strlcpy(filename, new_name, filename_size);
diff --git a/tools/perf/util/dso.c b/tools/perf/util/dso.c
index e36b418df2c6..6c4129598f5d 100644
--- a/tools/perf/util/dso.c
+++ b/tools/perf/util/dso.c
@@ -515,7 +515,7 @@ static int __open_dso(struct dso *dso, struct machine *machine)
 		if (errno != ENOENT || dso->nsinfo == NULL)
 			goto out;
 
-		new_name = filename_with_chroot(dso->nsinfo->pid, name);
+		new_name = filename_with_chroot(RC_CHK_ACCESS(dso->nsinfo)->pid, name);
 		if (!new_name)
 			goto out;
 
diff --git a/tools/perf/util/dsos.c b/tools/perf/util/dsos.c
index 2bd23e4cf19e..53b989072ec5 100644
--- a/tools/perf/util/dsos.c
+++ b/tools/perf/util/dsos.c
@@ -91,7 +91,7 @@ bool __dsos__read_build_ids(struct list_head *head, bool with_hits)
 			have_build_id	  = true;
 			pos->has_build_id = true;
 		} else if (errno == ENOENT && pos->nsinfo) {
-			char *new_name = filename_with_chroot(pos->nsinfo->pid,
+			char *new_name = filename_with_chroot(RC_CHK_ACCESS(pos->nsinfo)->pid,
 							      pos->long_name);
 
 			if (new_name && filename__read_build_id(new_name,
diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
index dd536220cdb9..8a3b7bd27b19 100644
--- a/tools/perf/util/namespaces.c
+++ b/tools/perf/util/namespaces.c
@@ -60,7 +60,7 @@ void namespaces__free(struct namespaces *namespaces)
 	free(namespaces);
 }
 
-static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
+static int nsinfo__get_nspid(pid_t *tgid, pid_t *nstgid, bool *in_pidns, const char *path)
 {
 	FILE *f = NULL;
 	char *statln = NULL;
@@ -74,19 +74,18 @@ static int nsinfo__get_nspid(struct nsinfo *nsi, const char *path)
 	while (getline(&statln, &linesz, f) != -1) {
 		/* Use tgid if CONFIG_PID_NS is not defined. */
 		if (strstr(statln, "Tgid:") != NULL) {
-			nsi->tgid = (pid_t)strtol(strrchr(statln, '\t'),
-						     NULL, 10);
-			nsi->nstgid = nsinfo__tgid(nsi);
+			*tgid = (pid_t)strtol(strrchr(statln, '\t'), NULL, 10);
+			*nstgid = *tgid;
 		}
 
 		if (strstr(statln, "NStgid:") != NULL) {
 			nspid = strrchr(statln, '\t');
-			nsi->nstgid = (pid_t)strtol(nspid, NULL, 10);
+			*nstgid = (pid_t)strtol(nspid, NULL, 10);
 			/*
 			 * If innermost tgid is not the first, process is in a different
 			 * PID namespace.
 			 */
-			nsi->in_pidns = (statln + sizeof("NStgid:") - 1) != nspid;
+			*in_pidns = (statln + sizeof("NStgid:") - 1) != nspid;
 			break;
 		}
 	}
@@ -121,8 +120,8 @@ int nsinfo__init(struct nsinfo *nsi)
 	 * want to switch as part of looking up dso/map data.
 	 */
 	if (old_stat.st_ino != new_stat.st_ino) {
-		nsi->need_setns = true;
-		nsi->mntns_path = newns;
+		RC_CHK_ACCESS(nsi)->need_setns = true;
+		RC_CHK_ACCESS(nsi)->mntns_path = newns;
 		newns = NULL;
 	}
 
@@ -132,13 +131,26 @@ int nsinfo__init(struct nsinfo *nsi)
 	if (snprintf(spath, PATH_MAX, "/proc/%d/status", nsinfo__pid(nsi)) >= PATH_MAX)
 		goto out;
 
-	rv = nsinfo__get_nspid(nsi, spath);
+	rv = nsinfo__get_nspid(&RC_CHK_ACCESS(nsi)->tgid, &RC_CHK_ACCESS(nsi)->nstgid,
+			       &RC_CHK_ACCESS(nsi)->in_pidns, spath);
 
 out:
 	free(newns);
 	return rv;
 }
 
+static struct nsinfo *nsinfo__alloc(void)
+{
+	struct nsinfo *res;
+	RC_STRUCT(nsinfo) *nsi;
+
+	nsi = calloc(1, sizeof(*nsi));
+	if (ADD_RC_CHK(res, nsi))
+		refcount_set(&nsi->refcnt, 1);
+
+	return res;
+}
+
 struct nsinfo *nsinfo__new(pid_t pid)
 {
 	struct nsinfo *nsi;
@@ -146,22 +158,21 @@ struct nsinfo *nsinfo__new(pid_t pid)
 	if (pid == 0)
 		return NULL;
 
-	nsi = calloc(1, sizeof(*nsi));
-	if (nsi != NULL) {
-		nsi->pid = pid;
-		nsi->tgid = pid;
-		nsi->nstgid = pid;
-		nsi->need_setns = false;
-		nsi->in_pidns = false;
-		/* Init may fail if the process exits while we're trying to look
-		 * at its proc information.  In that case, save the pid but
-		 * don't try to enter the namespace.
-		 */
-		if (nsinfo__init(nsi) == -1)
-			nsi->need_setns = false;
+	nsi = nsinfo__alloc();
+	if (!nsi)
+		return NULL;
 
-		refcount_set(&nsi->refcnt, 1);
-	}
+	RC_CHK_ACCESS(nsi)->pid = pid;
+	RC_CHK_ACCESS(nsi)->tgid = pid;
+	RC_CHK_ACCESS(nsi)->nstgid = pid;
+	RC_CHK_ACCESS(nsi)->need_setns = false;
+	RC_CHK_ACCESS(nsi)->in_pidns = false;
+	/* Init may fail if the process exits while we're trying to look at its
+	 * proc information. In that case, save the pid but don't try to enter
+	 * the namespace.
+	 */
+	if (nsinfo__init(nsi) == -1)
+		RC_CHK_ACCESS(nsi)->need_setns = false;
 
 	return nsi;
 }
@@ -173,21 +184,21 @@ struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
 	if (nsi == NULL)
 		return NULL;
 
-	nnsi = calloc(1, sizeof(*nnsi));
-	if (nnsi != NULL) {
-		nnsi->pid = nsinfo__pid(nsi);
-		nnsi->tgid = nsinfo__tgid(nsi);
-		nnsi->nstgid = nsinfo__nstgid(nsi);
-		nnsi->need_setns = nsinfo__need_setns(nsi);
-		nnsi->in_pidns = nsinfo__in_pidns(nsi);
-		if (nsi->mntns_path) {
-			nnsi->mntns_path = strdup(nsi->mntns_path);
-			if (!nnsi->mntns_path) {
-				free(nnsi);
-				return NULL;
-			}
+	nnsi = nsinfo__alloc();
+	if (!nnsi)
+		return NULL;
+
+	RC_CHK_ACCESS(nnsi)->pid = nsinfo__pid(nsi);
+	RC_CHK_ACCESS(nnsi)->tgid = nsinfo__tgid(nsi);
+	RC_CHK_ACCESS(nnsi)->nstgid = nsinfo__nstgid(nsi);
+	RC_CHK_ACCESS(nnsi)->need_setns = nsinfo__need_setns(nsi);
+	RC_CHK_ACCESS(nnsi)->in_pidns = nsinfo__in_pidns(nsi);
+	if (RC_CHK_ACCESS(nsi)->mntns_path) {
+		RC_CHK_ACCESS(nnsi)->mntns_path = strdup(RC_CHK_ACCESS(nsi)->mntns_path);
+		if (!RC_CHK_ACCESS(nnsi)->mntns_path) {
+			nsinfo__put(nnsi);
+			return NULL;
 		}
-		refcount_set(&nnsi->refcnt, 1);
 	}
 
 	return nnsi;
@@ -195,51 +206,60 @@ struct nsinfo *nsinfo__copy(const struct nsinfo *nsi)
 
 static void nsinfo__delete(struct nsinfo *nsi)
 {
-	zfree(&nsi->mntns_path);
-	free(nsi);
+	if (nsi) {
+		WARN_ONCE(refcount_read(&RC_CHK_ACCESS(nsi)->refcnt) != 0,
+			"nsinfo refcnt unbalanced\n");
+		zfree(&RC_CHK_ACCESS(nsi)->mntns_path);
+		RC_CHK_FREE(nsi);
+	}
 }
 
 struct nsinfo *nsinfo__get(struct nsinfo *nsi)
 {
-	if (nsi)
-		refcount_inc(&nsi->refcnt);
-	return nsi;
+	struct nsinfo *result;
+
+	if (RC_CHK_GET(result, nsi))
+		refcount_inc(&RC_CHK_ACCESS(nsi)->refcnt);
+
+	return result;
 }
 
 void nsinfo__put(struct nsinfo *nsi)
 {
-	if (nsi && refcount_dec_and_test(&nsi->refcnt))
+	if (nsi && refcount_dec_and_test(&RC_CHK_ACCESS(nsi)->refcnt))
 		nsinfo__delete(nsi);
+	else
+		RC_CHK_PUT(nsi);
 }
 
 bool nsinfo__need_setns(const struct nsinfo *nsi)
 {
-        return nsi->need_setns;
+	return RC_CHK_ACCESS(nsi)->need_setns;
 }
 
 void nsinfo__clear_need_setns(struct nsinfo *nsi)
 {
-        nsi->need_setns = false;
+	RC_CHK_ACCESS(nsi)->need_setns = false;
 }
 
 pid_t nsinfo__tgid(const struct nsinfo  *nsi)
 {
-        return nsi->tgid;
+	return RC_CHK_ACCESS(nsi)->tgid;
 }
 
 pid_t nsinfo__nstgid(const struct nsinfo  *nsi)
 {
-        return nsi->nstgid;
+	return RC_CHK_ACCESS(nsi)->nstgid;
 }
 
 pid_t nsinfo__pid(const struct nsinfo  *nsi)
 {
-        return nsi->pid;
+	return RC_CHK_ACCESS(nsi)->pid;
 }
 
 pid_t nsinfo__in_pidns(const struct nsinfo  *nsi)
 {
-        return nsi->in_pidns;
+	return RC_CHK_ACCESS(nsi)->in_pidns;
 }
 
 void nsinfo__mountns_enter(struct nsinfo *nsi,
@@ -256,7 +276,7 @@ void nsinfo__mountns_enter(struct nsinfo *nsi,
 	nc->oldns = -1;
 	nc->newns = -1;
 
-	if (!nsi || !nsi->need_setns)
+	if (!nsi || !RC_CHK_ACCESS(nsi)->need_setns)
 		return;
 
 	if (snprintf(curpath, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX)
@@ -270,7 +290,7 @@ void nsinfo__mountns_enter(struct nsinfo *nsi,
 	if (oldns < 0)
 		goto errout;
 
-	newns = open(nsi->mntns_path, O_RDONLY);
+	newns = open(RC_CHK_ACCESS(nsi)->mntns_path, O_RDONLY);
 	if (newns < 0)
 		goto errout;
 
@@ -339,9 +359,9 @@ int nsinfo__stat(const char *filename, struct stat *st, struct nsinfo *nsi)
 
 bool nsinfo__is_in_root_namespace(void)
 {
-	struct nsinfo nsi;
+	pid_t tgid = 0, nstgid = 0;
+	bool in_pidns = false;
 
-	memset(&nsi, 0x0, sizeof(nsi));
-	nsinfo__get_nspid(&nsi, "/proc/self/status");
-	return !nsi.in_pidns;
+	nsinfo__get_nspid(&tgid, &nstgid, &in_pidns, "/proc/self/status");
+	return !in_pidns;
 }
diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
index 567829262c42..8c0731c6cbb7 100644
--- a/tools/perf/util/namespaces.h
+++ b/tools/perf/util/namespaces.h
@@ -13,6 +13,7 @@
 #include <linux/perf_event.h>
 #include <linux/refcount.h>
 #include <linux/types.h>
+#include <internal/rc_check.h>
 
 #ifndef HAVE_SETNS_SUPPORT
 int setns(int fd, int nstype);
@@ -29,7 +30,7 @@ struct namespaces {
 struct namespaces *namespaces__new(struct perf_record_namespaces *event);
 void namespaces__free(struct namespaces *namespaces);
 
-struct nsinfo {
+DECLARE_RC_STRUCT(nsinfo) {
 	pid_t			pid;
 	pid_t			tgid;
 	pid_t			nstgid;
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 7904bfff7d0e..f5432c7add09 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1962,7 +1962,7 @@ int dso__load(struct dso *dso, struct map *map)
 
 		is_reg = is_regular_file(name);
 		if (!is_reg && errno == ENOENT && dso->nsinfo) {
-			char *new_name = filename_with_chroot(dso->nsinfo->pid,
+			char *new_name = filename_with_chroot(RC_CHK_ACCESS(dso->nsinfo)->pid,
 							      name);
 			if (new_name) {
 				is_reg = is_regular_file(new_name);
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 16/17] perf maps: Add reference count checking.
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (14 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 15/17] perf namespaces: " Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-03-20 21:22 ` [PATCH v5 17/17] perf map: " Ian Rogers
  2023-04-04 15:58 ` [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

Add reference count checking to make sure gets and puts are used
correctly and in pairs. Add and use accessors to reduce RC_CHK clutter.

The only significant issue was in tests/thread-maps-share.c where
references were released in the same order they were acquired rather
than the reverse, leading to a use after put. This was fixed by
reversing the put order.
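
A simplified sketch of the failing pattern (the local 'maps' aliases the
leader's reference, as in the test diff below):

        struct maps *maps = leader->maps;       /* borrowed, no maps__get() */
        ...
        thread__put(leader);
        /* With REFCNT_CHECKING, dropping the leader's reference frees the
         * indirection that 'maps' still points at, even though t1/t2/t3
         * keep the underlying maps alive, so the following check is a use
         * after put: */
        TEST_ASSERT_EQUAL("wrong refcnt",
                          refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 3);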

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/tests/thread-maps-share.c     | 29 ++++++-------
 tools/perf/util/machine.c                |  2 +-
 tools/perf/util/maps.c                   | 53 +++++++++++++-----------
 tools/perf/util/maps.h                   | 17 ++++----
 tools/perf/util/symbol.c                 | 13 +++---
 tools/perf/util/unwind-libdw.c           |  2 +-
 tools/perf/util/unwind-libunwind-local.c |  2 +-
 tools/perf/util/unwind-libunwind.c       |  2 +-
 8 files changed, 64 insertions(+), 56 deletions(-)

diff --git a/tools/perf/tests/thread-maps-share.c b/tools/perf/tests/thread-maps-share.c
index 84edd82c519e..dfe51b21bd7d 100644
--- a/tools/perf/tests/thread-maps-share.c
+++ b/tools/perf/tests/thread-maps-share.c
@@ -43,12 +43,12 @@ static int test__thread_maps_share(struct test_suite *test __maybe_unused, int s
 			leader && t1 && t2 && t3 && other);
 
 	maps = leader->maps;
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 4);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 4);
 
 	/* test the maps pointer is shared */
-	TEST_ASSERT_VAL("maps don't match", maps == t1->maps);
-	TEST_ASSERT_VAL("maps don't match", maps == t2->maps);
-	TEST_ASSERT_VAL("maps don't match", maps == t3->maps);
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t1->maps));
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t2->maps));
+	TEST_ASSERT_VAL("maps don't match", RC_CHK_ACCESS(maps) == RC_CHK_ACCESS(t3->maps));
 
 	/*
 	 * Verify the other leader was created by previous call.
@@ -71,25 +71,26 @@ static int test__thread_maps_share(struct test_suite *test __maybe_unused, int s
 	machine__remove_thread(machine, other_leader);
 
 	other_maps = other->maps;
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&other_maps->refcnt), 2);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(other_maps)->refcnt), 2);
 
-	TEST_ASSERT_VAL("maps don't match", other_maps == other_leader->maps);
+	TEST_ASSERT_VAL("maps don't match",
+			RC_CHK_ACCESS(other_maps) == RC_CHK_ACCESS(other_leader->maps));
 
 	/* release thread group */
-	thread__put(leader);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 3);
-
-	thread__put(t1);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 2);
+	thread__put(t3);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 3);
 
 	thread__put(t2);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&maps->refcnt), 1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 2);
 
-	thread__put(t3);
+	thread__put(t1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(maps)->refcnt), 1);
+
+	thread__put(leader);
 
 	/* release other group  */
 	thread__put(other_leader);
-	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&other_maps->refcnt), 1);
+	TEST_ASSERT_EQUAL("wrong refcnt", refcount_read(&RC_CHK_ACCESS(other_maps)->refcnt), 1);
 
 	thread__put(other);
 
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 502e97010a3c..cfbced348335 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -434,7 +434,7 @@ static struct thread *findnew_guest_code(struct machine *machine,
 		return NULL;
 
 	/* Assume maps are set up if there are any */
-	if (thread->maps->nr_maps)
+	if (RC_CHK_ACCESS(thread->maps)->nr_maps)
 		return thread;
 
 	host_thread = machine__find_thread(host_machine, -1, pid);
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 74e3133f5007..3c8bbcb2c204 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -12,13 +12,13 @@
 
 static void maps__init(struct maps *maps, struct machine *machine)
 {
-	maps->entries = RB_ROOT;
+	RC_CHK_ACCESS(maps)->entries = RB_ROOT;
 	init_rwsem(maps__lock(maps));
-	maps->machine = machine;
-	maps->last_search_by_name = NULL;
-	maps->nr_maps = 0;
-	maps->maps_by_name = NULL;
-	refcount_set(&maps->refcnt, 1);
+	RC_CHK_ACCESS(maps)->machine = machine;
+	RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
+	RC_CHK_ACCESS(maps)->nr_maps = 0;
+	RC_CHK_ACCESS(maps)->maps_by_name = NULL;
+	refcount_set(&RC_CHK_ACCESS(maps)->refcnt, 1);
 }
 
 static void __maps__free_maps_by_name(struct maps *maps)
@@ -29,8 +29,8 @@ static void __maps__free_maps_by_name(struct maps *maps)
 	for (unsigned int i = 0; i < maps__nr_maps(maps); i++)
 		map__put(maps__maps_by_name(maps)[i]);
 
-	zfree(&maps->maps_by_name);
-	maps->nr_maps_allocated = 0;
+	zfree(&RC_CHK_ACCESS(maps)->maps_by_name);
+	RC_CHK_ACCESS(maps)->nr_maps_allocated = 0;
 }
 
 static int __maps__insert(struct maps *maps, struct map *map)
@@ -71,7 +71,7 @@ int maps__insert(struct maps *maps, struct map *map)
 	if (err)
 		goto out;
 
-	++maps->nr_maps;
+	++RC_CHK_ACCESS(maps)->nr_maps;
 
 	if (dso && dso->kernel) {
 		struct kmap *kmap = map__kmap(map);
@@ -88,7 +88,7 @@ int maps__insert(struct maps *maps, struct map *map)
 	 * inserted map and resort.
 	 */
 	if (maps__maps_by_name(maps)) {
-		if (maps__nr_maps(maps) > maps->nr_maps_allocated) {
+		if (maps__nr_maps(maps) > RC_CHK_ACCESS(maps)->nr_maps_allocated) {
 			int nr_allocate = maps__nr_maps(maps) * 2;
 			struct map **maps_by_name = realloc(maps__maps_by_name(maps),
 							    nr_allocate * sizeof(map));
@@ -99,8 +99,8 @@ int maps__insert(struct maps *maps, struct map *map)
 				goto out;
 			}
 
-			maps->maps_by_name = maps_by_name;
-			maps->nr_maps_allocated = nr_allocate;
+			RC_CHK_ACCESS(maps)->maps_by_name = maps_by_name;
+			RC_CHK_ACCESS(maps)->nr_maps_allocated = nr_allocate;
 		}
 		maps__maps_by_name(maps)[maps__nr_maps(maps) - 1] = map__get(map);
 		__maps__sort_by_name(maps);
@@ -122,15 +122,15 @@ void maps__remove(struct maps *maps, struct map *map)
 	struct map_rb_node *rb_node;
 
 	down_write(maps__lock(maps));
-	if (maps->last_search_by_name == map)
-		maps->last_search_by_name = NULL;
+	if (RC_CHK_ACCESS(maps)->last_search_by_name == map)
+		RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
 
 	rb_node = maps__find_node(maps, map);
 	assert(rb_node->map == map);
 	__maps__remove(maps, rb_node);
 	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
-	--maps->nr_maps;
+	--RC_CHK_ACCESS(maps)->nr_maps;
 	up_write(maps__lock(maps));
 }
 
@@ -162,33 +162,38 @@ bool maps__empty(struct maps *maps)
 
 struct maps *maps__new(struct machine *machine)
 {
-	struct maps *maps = zalloc(sizeof(*maps));
+	struct maps *res;
+	RC_STRUCT(maps) *maps = zalloc(sizeof(*maps));
 
-	if (maps != NULL)
-		maps__init(maps, machine);
+	if (ADD_RC_CHK(res, maps))
+		maps__init(res, machine);
 
-	return maps;
+	return res;
 }
 
 void maps__delete(struct maps *maps)
 {
 	maps__exit(maps);
 	unwind__finish_access(maps);
-	free(maps);
+	RC_CHK_FREE(maps);
 }
 
 struct maps *maps__get(struct maps *maps)
 {
-	if (maps)
-		refcount_inc(&maps->refcnt);
+	struct maps *result;
 
-	return maps;
+	if (RC_CHK_GET(result, maps))
+		refcount_inc(&RC_CHK_ACCESS(maps)->refcnt);
+
+	return result;
 }
 
 void maps__put(struct maps *maps)
 {
-	if (maps && refcount_dec_and_test(&maps->refcnt))
+	if (maps && refcount_dec_and_test(&RC_CHK_ACCESS(maps)->refcnt))
 		maps__delete(maps);
+	else
+		RC_CHK_PUT(maps);
 }
 
 struct symbol *maps__find_symbol(struct maps *maps, u64 addr, struct map **mapp)
diff --git a/tools/perf/util/maps.h b/tools/perf/util/maps.h
index bde3390c7096..0af4b7e42fca 100644
--- a/tools/perf/util/maps.h
+++ b/tools/perf/util/maps.h
@@ -8,6 +8,7 @@
 #include <stdbool.h>
 #include <linux/types.h>
 #include "rwsem.h"
+#include <internal/rc_check.h>
 
 struct ref_reloc_sym;
 struct machine;
@@ -32,7 +33,7 @@ struct map *maps__find(struct maps *maps, u64 addr);
 	for (map = maps__first(maps), next = map_rb_node__next(map); map; \
 	     map = next, next = map_rb_node__next(map))
 
-struct maps {
+DECLARE_RC_STRUCT(maps) {
 	struct rb_root      entries;
 	struct rw_semaphore lock;
 	struct machine	 *machine;
@@ -65,38 +66,38 @@ void maps__put(struct maps *maps);
 
 static inline struct rb_root *maps__entries(struct maps *maps)
 {
-	return &maps->entries;
+	return &RC_CHK_ACCESS(maps)->entries;
 }
 
 static inline struct machine *maps__machine(struct maps *maps)
 {
-	return maps->machine;
+	return RC_CHK_ACCESS(maps)->machine;
 }
 
 static inline struct rw_semaphore *maps__lock(struct maps *maps)
 {
-	return &maps->lock;
+	return &RC_CHK_ACCESS(maps)->lock;
 }
 
 static inline struct map **maps__maps_by_name(struct maps *maps)
 {
-	return maps->maps_by_name;
+	return RC_CHK_ACCESS(maps)->maps_by_name;
 }
 
 static inline unsigned int maps__nr_maps(const struct maps *maps)
 {
-	return maps->nr_maps;
+	return RC_CHK_ACCESS(maps)->nr_maps;
 }
 
 #ifdef HAVE_LIBUNWIND_SUPPORT
 static inline void *maps__addr_space(struct maps *maps)
 {
-	return maps->addr_space;
+	return RC_CHK_ACCESS(maps)->addr_space;
 }
 
 static inline const struct unwind_libunwind_ops *maps__unwind_libunwind_ops(const struct maps *maps)
 {
-	return maps->unwind_libunwind_ops;
+	return RC_CHK_ACCESS(maps)->unwind_libunwind_ops;
 }
 #endif
 
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index f5432c7add09..d99c8e1bb4bf 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -2096,8 +2096,8 @@ static int map__groups__sort_by_name_from_rbtree(struct maps *maps)
 	up_read(maps__lock(maps));
 	down_write(maps__lock(maps));
 
-	maps->maps_by_name = maps_by_name;
-	maps->nr_maps_allocated = maps__nr_maps(maps);
+	RC_CHK_ACCESS(maps)->maps_by_name = maps_by_name;
+	RC_CHK_ACCESS(maps)->nr_maps_allocated = maps__nr_maps(maps);
 
 	maps__for_each_entry(maps, rb_node)
 		maps_by_name[i++] = map__get(rb_node->map);
@@ -2132,11 +2132,12 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 
 	down_read(maps__lock(maps));
 
-	if (maps->last_search_by_name) {
-		const struct dso *dso = map__dso(maps->last_search_by_name);
+
+	if (RC_CHK_ACCESS(maps)->last_search_by_name) {
+		const struct dso *dso = map__dso(RC_CHK_ACCESS(maps)->last_search_by_name);
 
 		if (strcmp(dso->short_name, name) == 0) {
-			map = maps->last_search_by_name;
+			map = RC_CHK_ACCESS(maps)->last_search_by_name;
 			goto out_unlock;
 		}
 	}
@@ -2156,7 +2157,7 @@ struct map *maps__find_by_name(struct maps *maps, const char *name)
 		map = rb_node->map;
 		dso = map__dso(map);
 		if (strcmp(dso->short_name, name) == 0) {
-			maps->last_search_by_name = map;
+			RC_CHK_ACCESS(maps)->last_search_by_name = map;
 			goto out_unlock;
 		}
 	}
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index 9565f9906e5d..bdccfc511b7e 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -230,7 +230,7 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
 	struct unwind_info *ui, ui_buf = {
 		.sample		= data,
 		.thread		= thread,
-		.machine	= thread->maps->machine,
+		.machine	= RC_CHK_ACCESS(thread->maps)->machine,
 		.cb		= cb,
 		.arg		= arg,
 		.max_stack	= max_stack,
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 952c5ee66fe7..2947c210576e 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -667,7 +667,7 @@ static int _unwind__prepare_access(struct maps *maps)
 {
 	void *addr_space = unw_create_addr_space(&accessors, 0);
 
-	maps->addr_space = addr_space;
+	RC_CHK_ACCESS(maps)->addr_space = addr_space;
 	if (!addr_space) {
 		pr_err("unwind: Can't create unwind address space.\n");
 		return -ENOMEM;
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index c14f04082377..48a7aeb3f9ec 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -14,7 +14,7 @@ struct unwind_libunwind_ops __weak *arm64_unwind_libunwind_ops;
 
 static void unwind__register_ops(struct maps *maps, struct unwind_libunwind_ops *ops)
 {
-	maps->unwind_libunwind_ops = ops;
+	RC_CHK_ACCESS(maps)->unwind_libunwind_ops = ops;
 }
 
 int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized)
-- 
2.40.0.rc1.284.g88254d51c5-goog



* [PATCH v5 17/17] perf map: Add reference count checking
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (15 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 16/17] perf maps: " Ian Rogers
@ 2023-03-20 21:22 ` Ian Rogers
  2023-04-04 15:58 ` [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
  17 siblings, 0 replies; 33+ messages in thread
From: Ian Rogers @ 2023-03-20 21:22 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian, Ian Rogers

There's no strict get/put policy for struct map, which leads to leaks
and use after free. Reference count checking identifies whether gets
and puts are correctly paired.
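
A minimal sketch of the discipline the checker enforces (hypothetical
helper; it relies on the map API as refactored earlier in the series,
where maps__find() does not take a reference and callers add their own
map__get()/map__put()):

  #include "util/map.h"
  #include "util/maps.h"

  struct symbol *find_symbol_at(struct maps *maps, u64 addr)
  {
          /* Take a reference; the result of map__get() must be used, since
           * with REFCNT_CHECKING it returns a new indirection pointer
           * rather than its argument. */
          struct map *map = map__get(maps__find(maps, addr));
          struct symbol *sym = NULL;

          if (map)
                  sym = map__find_symbol(map, map__map_ip(map, addr));

          map__put(map);  /* release exactly once; the symbol stays owned
                           * by the map's dso */
          return sym;
  }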

Signed-off-by: Ian Rogers <irogers@google.com>
---
 tools/perf/arch/s390/annotate/instructions.c |  5 +-
 tools/perf/builtin-top.c                     |  4 +-
 tools/perf/tests/hists_link.c                |  2 +-
 tools/perf/tests/maps.c                      | 20 +++---
 tools/perf/tests/vmlinux-kallsyms.c          |  4 +-
 tools/perf/util/annotate.c                   | 10 +--
 tools/perf/util/machine.c                    | 25 +++----
 tools/perf/util/map.c                        | 69 +++++++++++---------
 tools/perf/util/map.h                        | 32 +++++----
 tools/perf/util/maps.c                       | 13 ++--
 tools/perf/util/symbol-elf.c                 | 26 ++++----
 tools/perf/util/symbol.c                     | 40 +++++++-----
 12 files changed, 136 insertions(+), 114 deletions(-)

diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
index 6548933e8dc0..9953d510f7c1 100644
--- a/tools/perf/arch/s390/annotate/instructions.c
+++ b/tools/perf/arch/s390/annotate/instructions.c
@@ -39,8 +39,9 @@ static int s390_call__parse(struct arch *arch, struct ins_operands *ops,
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
-	    ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
+			     ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index b45565f718f4..2605040e6788 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -191,7 +191,7 @@ static void ui__warn_map_erange(struct map *map, struct symbol *sym, u64 ip)
 	if (use_browser <= 0)
 		sleep(5);
 
-	map->erange_warned = true;
+	RC_CHK_ACCESS(map)->erange_warned = true;
 }
 
 static void perf_top__record_precise_ip(struct perf_top *top,
@@ -225,7 +225,7 @@ static void perf_top__record_precise_ip(struct perf_top *top,
 		 */
 		mutex_unlock(&he->hists->lock);
 
-		if (err == -ERANGE && !he->ms.map->erange_warned)
+		if (err == -ERANGE && !RC_CHK_ACCESS(he->ms.map)->erange_warned)
 			ui__warn_map_erange(he->ms.map, sym, ip);
 		else if (err == -ENOMEM) {
 			pr_err("Not enough memory for annotating '%s' symbol!\n",
diff --git a/tools/perf/tests/hists_link.c b/tools/perf/tests/hists_link.c
index 64ce8097889c..141e2972e34f 100644
--- a/tools/perf/tests/hists_link.c
+++ b/tools/perf/tests/hists_link.c
@@ -145,7 +145,7 @@ static int find_sample(struct sample *samples, size_t nr_samples,
 {
 	while (nr_samples--) {
 		if (samples->thread == t &&
-		    samples->map == m &&
+		    RC_CHK_ACCESS(samples->map) == RC_CHK_ACCESS(m) &&
 		    samples->sym == s)
 			return 1;
 		samples++;
diff --git a/tools/perf/tests/maps.c b/tools/perf/tests/maps.c
index 1c7293476aca..b8dab6278bca 100644
--- a/tools/perf/tests/maps.c
+++ b/tools/perf/tests/maps.c
@@ -30,7 +30,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 			if (map__start(map) != merged[i].start ||
 			    map__end(map) != merged[i].end ||
 			    strcmp(map__dso(map)->name, merged[i].name) ||
-			    refcount_read(&map->refcnt) != 1) {
+			    refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 1) {
 				failed = true;
 			}
 			i++;
@@ -50,7 +50,7 @@ static int check_maps(struct map_def *merged, unsigned int size, struct maps *ma
 				map__start(map),
 				map__end(map),
 				map__dso(map)->name,
-				refcount_read(&map->refcnt));
+				refcount_read(&RC_CHK_ACCESS(map)->refcnt));
 		}
 	}
 	return failed ? TEST_FAIL : TEST_OK;
@@ -95,8 +95,8 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 		map = dso__new_map(bpf_progs[i].name);
 		TEST_ASSERT_VAL("failed to create map", map);
 
-		map->start = bpf_progs[i].start;
-		map->end   = bpf_progs[i].end;
+		RC_CHK_ACCESS(map)->start = bpf_progs[i].start;
+		RC_CHK_ACCESS(map)->end   = bpf_progs[i].end;
 		TEST_ASSERT_VAL("failed to insert map", maps__insert(maps, map) == 0);
 		map__put(map);
 	}
@@ -111,16 +111,16 @@ static int test__maps__merge_in(struct test_suite *t __maybe_unused, int subtest
 	TEST_ASSERT_VAL("failed to create map", map_kcore3);
 
 	/* kcore1 map overlaps over all bpf maps */
-	map_kcore1->start = 100;
-	map_kcore1->end   = 1000;
+	RC_CHK_ACCESS(map_kcore1)->start = 100;
+	RC_CHK_ACCESS(map_kcore1)->end   = 1000;
 
 	/* kcore2 map hides behind bpf_prog_2 */
-	map_kcore2->start = 550;
-	map_kcore2->end   = 570;
+	RC_CHK_ACCESS(map_kcore2)->start = 550;
+	RC_CHK_ACCESS(map_kcore2)->end   = 570;
 
 	/* kcore3 map hides behind bpf_prog_3, kcore1 and adds new map */
-	map_kcore3->start = 880;
-	map_kcore3->end   = 1100;
+	RC_CHK_ACCESS(map_kcore3)->start = 880;
+	RC_CHK_ACCESS(map_kcore3)->end   = 1100;
 
 	ret = maps__merge_in(maps, map_kcore1);
 	TEST_ASSERT_VAL("failed to merge map", !ret);
diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c
index af511233c764..a087b24463ff 100644
--- a/tools/perf/tests/vmlinux-kallsyms.c
+++ b/tools/perf/tests/vmlinux-kallsyms.c
@@ -304,7 +304,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 								dso->short_name :
 								dso->name));
 		if (pair) {
-			pair->priv = 1;
+			RC_CHK_ACCESS(pair)->priv = 1;
 		} else {
 			if (!header_printed) {
 				pr_info("WARN: Maps only in vmlinux:\n");
@@ -340,7 +340,7 @@ static int test__vmlinux_matches_kallsyms(struct test_suite *test __maybe_unused
 				pr_info(":\nWARN: *%" PRIx64 "-%" PRIx64 " %" PRIx64,
 					map__start(pair), map__end(pair), map__pgoff(pair));
 			pr_info(" %s\n", dso->name);
-			pair->priv = 1;
+			RC_CHK_ACCESS(pair)->priv = 1;
 		}
 	}
 
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 199f6cd5ad1e..8bae9d78d8c4 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -280,8 +280,9 @@ static int call__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	target.addr = map__objdump_2mem(map, ops->target.addr);
 
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
-	    ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
+			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	return 0;
@@ -409,8 +410,9 @@ static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_s
 	 * the symbol searching and disassembly should be done.
 	 */
 	if (maps__find_ams(ms->maps, &target) == 0 &&
-	    map__rip_2objdump(target.ms.map, map->map_ip(target.ms.map, target.addr)) ==
-	    ops->target.addr)
+	    map__rip_2objdump(target.ms.map,
+			      RC_CHK_ACCESS(map)->map_ip(target.ms.map, target.addr)
+			      ) == ops->target.addr)
 		ops->target.sym = target.ms.sym;
 
 	if (!ops->target.outside) {
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index cfbced348335..6310d74f6d6d 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -909,8 +909,8 @@ static int machine__process_ksymbol_register(struct machine *machine,
 			dso__set_loaded(dso);
 		}
 
-		map->start = event->ksymbol.addr;
-		map->end = map__start(map) + event->ksymbol.len;
+		RC_CHK_ACCESS(map)->start = event->ksymbol.addr;
+		RC_CHK_ACCESS(map)->end = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
 		if (err) {
 			err = -ENOMEM;
@@ -952,7 +952,7 @@ static int machine__process_ksymbol_unregister(struct machine *machine,
 	if (!map)
 		return 0;
 
-	if (map != machine->vmlinux_map)
+	if (RC_CHK_ACCESS(map) != RC_CHK_ACCESS(machine->vmlinux_map))
 		maps__remove(machine__kernel_maps(machine), map);
 	else {
 		struct dso *dso = map__dso(map);
@@ -1217,8 +1217,8 @@ int machine__create_extra_kernel_map(struct machine *machine,
 	if (!map)
 		return -ENOMEM;
 
-	map->end   = xm->end;
-	map->pgoff = xm->pgoff;
+	RC_CHK_ACCESS(map)->end   = xm->end;
+	RC_CHK_ACCESS(map)->pgoff = xm->pgoff;
 
 	kmap = map__kmap(map);
 
@@ -1290,7 +1290,7 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 
 		dest_map = maps__find(kmaps, map__pgoff(map));
 		if (dest_map != map)
-			map->pgoff = map__map_ip(dest_map, map__pgoff(map));
+			RC_CHK_ACCESS(map)->pgoff = map__map_ip(dest_map, map__pgoff(map));
 		found = true;
 	}
 	if (found || machine->trampolines_mapped)
@@ -1341,7 +1341,8 @@ __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 	if (machine->vmlinux_map == NULL)
 		return -ENOMEM;
 
-	machine->vmlinux_map->map_ip = machine->vmlinux_map->unmap_ip = identity__map_ip;
+	RC_CHK_ACCESS(machine->vmlinux_map)->map_ip = identity__map_ip;
+	RC_CHK_ACCESS(machine->vmlinux_map)->unmap_ip = identity__map_ip;
 	return maps__insert(machine__kernel_maps(machine), machine->vmlinux_map);
 }
 
@@ -1622,7 +1623,7 @@ static int machine__create_module(void *arg, const char *name, u64 start,
 	map = machine__addnew_module_map(machine, start, name);
 	if (map == NULL)
 		return -1;
-	map->end = start + size;
+	RC_CHK_ACCESS(map)->end = start + size;
 
 	dso__kernel_module_get_build_id(map__dso(map), machine->root_dir);
 	map__put(map);
@@ -1658,14 +1659,14 @@ static int machine__create_modules(struct machine *machine)
 static void machine__set_kernel_mmap(struct machine *machine,
 				     u64 start, u64 end)
 {
-	machine->vmlinux_map->start = start;
-	machine->vmlinux_map->end   = end;
+	RC_CHK_ACCESS(machine->vmlinux_map)->start = start;
+	RC_CHK_ACCESS(machine->vmlinux_map)->end   = end;
 	/*
 	 * Be a bit paranoid here, some perf.data file came with
 	 * a zero sized synthesized MMAP event for the kernel.
 	 */
 	if (start == 0 && end == 0)
-		machine->vmlinux_map->end = ~0ULL;
+		RC_CHK_ACCESS(machine->vmlinux_map)->end = ~0ULL;
 }
 
 static int machine__update_kernel_mmap(struct machine *machine,
@@ -1809,7 +1810,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 		if (map == NULL)
 			goto out_problem;
 
-		map->end = map__start(map) + xm->end - xm->start;
+		RC_CHK_ACCESS(map)->end = map__start(map) + xm->end - xm->start;
 
 		if (build_id__is_defined(bid))
 			dso__set_build_id(map__dso(map), bid);
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index acbc37359e06..9ac5c909ea9e 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -104,15 +104,15 @@ static inline bool replace_android_lib(const char *filename, char *newfilename)
 
 void map__init(struct map *map, u64 start, u64 end, u64 pgoff, struct dso *dso)
 {
-	map->start    = start;
-	map->end      = end;
-	map->pgoff    = pgoff;
-	map->reloc    = 0;
-	map->dso      = dso__get(dso);
-	map->map_ip   = map__dso_map_ip;
-	map->unmap_ip = map__dso_unmap_ip;
-	map->erange_warned = false;
-	refcount_set(&map->refcnt, 1);
+	RC_CHK_ACCESS(map)->start    = start;
+	RC_CHK_ACCESS(map)->end      = end;
+	RC_CHK_ACCESS(map)->pgoff    = pgoff;
+	RC_CHK_ACCESS(map)->reloc    = 0;
+	RC_CHK_ACCESS(map)->dso      = dso__get(dso);
+	RC_CHK_ACCESS(map)->map_ip   = map__dso_map_ip;
+	RC_CHK_ACCESS(map)->unmap_ip = map__dso_unmap_ip;
+	RC_CHK_ACCESS(map)->erange_warned = false;
+	refcount_set(&RC_CHK_ACCESS(map)->refcnt, 1);
 }
 
 struct map *map__new(struct machine *machine, u64 start, u64 len,
@@ -120,11 +120,13 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		     u32 prot, u32 flags, struct build_id *bid,
 		     char *filename, struct thread *thread)
 {
-	struct map *map = malloc(sizeof(*map));
+	struct map *res;
+	RC_STRUCT(map) *map;
 	struct nsinfo *nsi = NULL;
 	struct nsinfo *nnsi;
 
-	if (map != NULL) {
+	map = malloc(sizeof(*map));
+	if (ADD_RC_CHK(res, map)) {
 		char newfilename[PATH_MAX];
 		struct dso *dso, *header_bid_dso;
 		int anon, no_dso, vdso, android;
@@ -167,7 +169,7 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		if (dso == NULL)
 			goto out_delete;
 
-		map__init(map, start, start + len, pgoff, dso);
+		map__init(res, start, start + len, pgoff, dso);
 
 		if (anon || no_dso) {
 			map->map_ip = map->unmap_ip = identity__map_ip;
@@ -204,10 +206,10 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
 		}
 		dso__put(dso);
 	}
-	return map;
+	return res;
 out_delete:
 	nsinfo__put(nsi);
-	free(map);
+	RC_CHK_FREE(res);
 	return NULL;
 }
 
@@ -218,16 +220,18 @@ struct map *map__new(struct machine *machine, u64 start, u64 len,
  */
 struct map *map__new2(u64 start, struct dso *dso)
 {
-	struct map *map = calloc(1, (sizeof(*map) +
-				     (dso->kernel ? sizeof(struct kmap) : 0)));
-	if (map != NULL) {
+	struct map *res;
+	RC_STRUCT(map) *map;
+
+	map = calloc(1, sizeof(*map) + (dso->kernel ? sizeof(struct kmap) : 0));
+	if (ADD_RC_CHK(res, map)) {
 		/*
 		 * ->end will be filled after we load all the symbols
 		 */
-		map__init(map, start, 0, 0, dso);
+		map__init(res, start, 0, 0, dso);
 	}
 
-	return map;
+	return res;
 }
 
 bool __map__is_kernel(const struct map *map)
@@ -292,20 +296,22 @@ bool map__has_symbols(const struct map *map)
 
 static void map__exit(struct map *map)
 {
-	BUG_ON(refcount_read(&map->refcnt) != 0);
-	dso__zput(map->dso);
+	BUG_ON(refcount_read(&RC_CHK_ACCESS(map)->refcnt) != 0);
+	dso__zput(RC_CHK_ACCESS(map)->dso);
 }
 
 void map__delete(struct map *map)
 {
 	map__exit(map);
-	free(map);
+	RC_CHK_FREE(map);
 }
 
 void map__put(struct map *map)
 {
-	if (map && refcount_dec_and_test(&map->refcnt))
+	if (map && refcount_dec_and_test(&RC_CHK_ACCESS(map)->refcnt))
 		map__delete(map);
+	else
+		RC_CHK_PUT(map);
 }
 
 void map__fixup_start(struct map *map)
@@ -317,7 +323,7 @@ void map__fixup_start(struct map *map)
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
 
-		map->start = sym->start;
+		RC_CHK_ACCESS(map)->start = sym->start;
 	}
 }
 
@@ -329,7 +335,7 @@ void map__fixup_end(struct map *map)
 
 	if (nd != NULL) {
 		struct symbol *sym = rb_entry(nd, struct symbol, rb_node);
-		map->end = sym->end;
+		RC_CHK_ACCESS(map)->end = sym->end;
 	}
 }
 
@@ -400,20 +406,21 @@ struct symbol *map__find_symbol_by_name(struct map *map, const char *name)
 
 struct map *map__clone(struct map *from)
 {
-	size_t size = sizeof(struct map);
-	struct map *map;
+	struct map *res;
+	RC_STRUCT(map) *map;
+	size_t size = sizeof(RC_STRUCT(map));
 	struct dso *dso = map__dso(from);
 
 	if (dso && dso->kernel)
 		size += sizeof(struct kmap);
 
-	map = memdup(from, size);
-	if (map != NULL) {
+	map = memdup(RC_CHK_ACCESS(from), size);
+	if (ADD_RC_CHK(res, map)) {
 		refcount_set(&map->refcnt, 1);
 		map->dso = dso__get(dso);
 	}
 
-	return map;
+	return res;
 }
 
 size_t map__fprintf(struct map *map, FILE *fp)
@@ -567,7 +574,7 @@ struct kmap *__map__kmap(struct map *map)
 
 	if (!dso || !dso->kernel)
 		return NULL;
-	return (struct kmap *)(map + 1);
+	return (struct kmap *)(&RC_CHK_ACCESS(map)[1]);
 }
 
 struct kmap *map__kmap(struct map *map)
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 102485699aa8..55d047e818e7 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -10,12 +10,13 @@
 #include <string.h>
 #include <stdbool.h>
 #include <linux/types.h>
+#include <internal/rc_check.h>
 
 struct dso;
 struct maps;
 struct machine;
 
-struct map {
+DECLARE_RC_STRUCT(map) {
 	u64			start;
 	u64			end;
 	bool			erange_warned:1;
@@ -49,52 +50,52 @@ u64 identity__map_ip(const struct map *map __maybe_unused, u64 ip);
 
 static inline struct dso *map__dso(const struct map *map)
 {
-	return map->dso;
+	return RC_CHK_ACCESS(map)->dso;
 }
 
 static inline u64 map__map_ip(const struct map *map, u64 ip)
 {
-	return map->map_ip(map, ip);
+	return RC_CHK_ACCESS(map)->map_ip(map, ip);
 }
 
 static inline u64 map__unmap_ip(const struct map *map, u64 ip)
 {
-	return map->unmap_ip(map, ip);
+	return RC_CHK_ACCESS(map)->unmap_ip(map, ip);
 }
 
 static inline u64 map__start(const struct map *map)
 {
-	return map->start;
+	return RC_CHK_ACCESS(map)->start;
 }
 
 static inline u64 map__end(const struct map *map)
 {
-	return map->end;
+	return RC_CHK_ACCESS(map)->end;
 }
 
 static inline u64 map__pgoff(const struct map *map)
 {
-	return map->pgoff;
+	return RC_CHK_ACCESS(map)->pgoff;
 }
 
 static inline u64 map__reloc(const struct map *map)
 {
-	return map->reloc;
+	return RC_CHK_ACCESS(map)->reloc;
 }
 
 static inline u32 map__flags(const struct map *map)
 {
-	return map->flags;
+	return RC_CHK_ACCESS(map)->flags;
 }
 
 static inline u32 map__prot(const struct map *map)
 {
-	return map->prot;
+	return RC_CHK_ACCESS(map)->prot;
 }
 
 static inline bool map__priv(const struct map *map)
 {
-	return map->priv;
+	return RC_CHK_ACCESS(map)->priv;
 }
 
 static inline size_t map__size(const struct map *map)
@@ -153,9 +154,12 @@ struct map *map__clone(struct map *map);
 
 static inline struct map *map__get(struct map *map)
 {
-	if (map)
-		refcount_inc(&map->refcnt);
-	return map;
+	struct map *result;
+
+	if (RC_CHK_GET(result, map))
+		refcount_inc(&RC_CHK_ACCESS(map)->refcnt);
+
+	return result;
 }
 
 void map__put(struct map *map);
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 3c8bbcb2c204..a33ae321c65a 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -126,7 +126,7 @@ void maps__remove(struct maps *maps, struct map *map)
 		RC_CHK_ACCESS(maps)->last_search_by_name = NULL;
 
 	rb_node = maps__find_node(maps, map);
-	assert(rb_node->map == map);
+	assert(rb_node->RC_CHK_ACCESS(map) == RC_CHK_ACCESS(map));
 	__maps__remove(maps, rb_node);
 	if (maps__maps_by_name(maps))
 		__maps__free_maps_by_name(maps);
@@ -339,7 +339,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			before->end = map__start(map);
+			RC_CHK_ACCESS(before)->end = map__start(map);
 			err = __maps__insert(maps, before);
 			if (err) {
 				map__put(before);
@@ -351,7 +351,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			map__put(before);
 		}
 
-		if (map->end < map__end(pos->map)) {
+		if (map__end(map) < map__end(pos->map)) {
 			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
@@ -359,8 +359,9 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 				goto put_map;
 			}
 
-			after->start = map__end(map);
-			after->pgoff += map__end(map) - map__start(pos->map);
+			RC_CHK_ACCESS(after)->start = map__end(map);
+			RC_CHK_ACCESS(after)->pgoff +=
+				map__end(map) - map__start(pos->map);
 			assert(map__map_ip(pos->map, map__end(map)) ==
 				map__map_ip(after, map__end(map)));
 			err = __maps__insert(maps, after);
@@ -420,7 +421,7 @@ struct map_rb_node *maps__find_node(struct maps *maps, struct map *map)
 	struct map_rb_node *rb_node;
 
 	maps__for_each_entry(maps, rb_node) {
-		if (rb_node->map == map)
+		if (rb_node->RC_CHK_ACCESS(map) == RC_CHK_ACCESS(map))
 			return rb_node;
 	}
 	return NULL;
diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 93ae3f22fd03..aa6113008627 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1348,11 +1348,11 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 */
 		if (*remap_kernel && dso->kernel && !kmodule) {
 			*remap_kernel = false;
-			map->start = shdr->sh_addr + ref_reloc(kmap);
-			map->end = map__start(map) + shdr->sh_size;
-			map->pgoff = shdr->sh_offset;
-			map->map_ip = map__dso_map_ip;
-			map->unmap_ip = map__dso_unmap_ip;
+			RC_CHK_ACCESS(map)->start = shdr->sh_addr + ref_reloc(kmap);
+			RC_CHK_ACCESS(map)->end = map__start(map) + shdr->sh_size;
+			RC_CHK_ACCESS(map)->pgoff = shdr->sh_offset;
+			RC_CHK_ACCESS(map)->map_ip = map__dso_map_ip;
+			RC_CHK_ACCESS(map)->unmap_ip = map__dso_unmap_ip;
 			/* Ensure maps are correctly ordered */
 			if (kmaps) {
 				int err;
@@ -1373,7 +1373,7 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 		 */
 		if (*remap_kernel && kmodule) {
 			*remap_kernel = false;
-			map->pgoff = shdr->sh_offset;
+			RC_CHK_ACCESS(map)->pgoff = shdr->sh_offset;
 		}
 
 		*curr_mapp = map;
@@ -1408,11 +1408,13 @@ static int dso__process_kernel_symbol(struct dso *dso, struct map *map,
 			map__kmap(curr_map)->kmaps = kmaps;
 
 		if (adjust_kernel_syms) {
-			curr_map->start  = shdr->sh_addr + ref_reloc(kmap);
-			curr_map->end	 = map__start(curr_map) + shdr->sh_size;
-			curr_map->pgoff	 = shdr->sh_offset;
+			RC_CHK_ACCESS(curr_map)->start  = shdr->sh_addr + ref_reloc(kmap);
+			RC_CHK_ACCESS(curr_map)->end	= map__start(curr_map) +
+							  shdr->sh_size;
+			RC_CHK_ACCESS(curr_map)->pgoff	= shdr->sh_offset;
 		} else {
-			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
+			RC_CHK_ACCESS(curr_map)->map_ip = identity__map_ip;
+			RC_CHK_ACCESS(curr_map)->unmap_ip = identity__map_ip;
 		}
 		curr_dso->symtab_type = dso->symtab_type;
 		if (maps__insert(kmaps, curr_map))
@@ -1519,7 +1521,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 			if (strcmp(elf_name, kmap->ref_reloc_sym->name))
 				continue;
 			kmap->ref_reloc_sym->unrelocated_addr = sym.st_value;
-			map->reloc = kmap->ref_reloc_sym->addr -
+			RC_CHK_ACCESS(map)->reloc = kmap->ref_reloc_sym->addr -
 				     kmap->ref_reloc_sym->unrelocated_addr;
 			break;
 		}
@@ -1530,7 +1532,7 @@ dso__load_sym_internal(struct dso *dso, struct map *map, struct symsrc *syms_ss,
 	 * attempted to prelink vdso to its virtual address.
 	 */
 	if (dso__is_vdso(dso))
-		map->reloc = map__start(map) - dso->text_offset;
+		RC_CHK_ACCESS(map)->reloc = map__start(map) - dso->text_offset;
 
 	dso->adjust_symbols = runtime_ss->adjust_symbols || ref_reloc(kmap);
 	/*
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index d99c8e1bb4bf..38b2890e4273 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -279,7 +279,7 @@ void maps__fixup_end(struct maps *maps)
 
 	maps__for_each_entry(maps, curr) {
 		if (prev != NULL && !map__end(prev->map))
-			prev->map->end = map__start(curr->map);
+			RC_CHK_ACCESS(prev->map)->end = map__start(curr->map);
 
 		prev = curr;
 	}
@@ -289,7 +289,7 @@ void maps__fixup_end(struct maps *maps)
 	 * last map final address.
 	 */
 	if (curr && !map__end(curr->map))
-		curr->map->end = ~0ULL;
+		RC_CHK_ACCESS(curr->map)->end = ~0ULL;
 
 	up_write(maps__lock(maps));
 }
@@ -865,7 +865,7 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 			*module++ = '\0';
 			curr_map_dso = map__dso(curr_map);
 			if (strcmp(curr_map_dso->short_name, module)) {
-				if (curr_map != initial_map &&
+				if (RC_CHK_ACCESS(curr_map) != RC_CHK_ACCESS(initial_map) &&
 				    dso->kernel == DSO_SPACE__KERNEL_GUEST &&
 				    machine__is_default_guest(machine)) {
 					/*
@@ -944,7 +944,8 @@ static int maps__split_kallsyms(struct maps *kmaps, struct dso *dso, u64 delta,
 				return -1;
 			}
 
-			curr_map->map_ip = curr_map->unmap_ip = identity__map_ip;
+			RC_CHK_ACCESS(curr_map)->map_ip = identity__map_ip;
+			RC_CHK_ACCESS(curr_map)->unmap_ip = identity__map_ip;
 			if (maps__insert(kmaps, curr_map)) {
 				dso__put(ndso);
 				return -1;
@@ -1250,8 +1251,8 @@ static int kcore_mapfn(u64 start, u64 len, u64 pgoff, void *data)
 		return -ENOMEM;
 	}
 
-	list_node->map->end = map__start(list_node->map) + len;
-	list_node->map->pgoff = pgoff;
+	list_node->RC_CHK_ACCESS(map)->end = map__start(list_node->map) + len;
+	list_node->RC_CHK_ACCESS(map)->pgoff = pgoff;
 
 	list_add(&list_node->node, &md->maps);
 
@@ -1286,7 +1287,7 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 * |new......|     -> |new..|
 				 *       |old....| ->       |old....|
 				 */
-				new_map->end = map__start(old_map);
+				RC_CHK_ACCESS(new_map)->end = map__start(old_map);
 			} else {
 				/*
 				 * |new.............| -> |new..|       |new..|
@@ -1306,10 +1307,12 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 					goto out;
 				}
 
-				m->map->end = map__start(old_map);
+
+				RC_CHK_ACCESS(m->map)->end = map__start(old_map);
 				list_add_tail(&m->node, &merged);
-				new_map->pgoff += map__end(old_map) - map__start(new_map);
-				new_map->start = map__end(old_map);
+				RC_CHK_ACCESS(new_map)->pgoff +=
+					map__end(old_map) - map__start(new_map);
+				RC_CHK_ACCESS(new_map)->start = map__end(old_map);
 			}
 		} else {
 			/*
@@ -1329,8 +1332,9 @@ int maps__merge_in(struct maps *kmaps, struct map *new_map)
 				 *      |new......| ->         |new...|
 				 * |old....|        -> |old....|
 				 */
-				new_map->pgoff += map__end(old_map) - map__start(new_map);
-				new_map->start = map__end(old_map);
+				RC_CHK_ACCESS(new_map)->pgoff +=
+					map__end(old_map) - map__start(new_map);
+				RC_CHK_ACCESS(new_map)->start = map__end(old_map);
 			}
 		}
 	}
@@ -1455,12 +1459,12 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 
 		new_node = list_entry(md.maps.next, struct map_list_node, node);
 		list_del_init(&new_node->node);
-		if (new_node->map == replacement_map) {
-			map->start	= map__start(new_node->map);
-			map->end	= map__end(new_node->map);
-			map->pgoff	= map__pgoff(new_node->map);
-			map->map_ip	= new_node->map->map_ip;
-			map->unmap_ip	= new_node->map->unmap_ip;
+		if (RC_CHK_ACCESS(new_node->map) == RC_CHK_ACCESS(replacement_map)) {
+			RC_CHK_ACCESS(map)->start = map__start(new_node->map);
+			RC_CHK_ACCESS(map)->end   = map__end(new_node->map);
+			RC_CHK_ACCESS(map)->pgoff = map__pgoff(new_node->map);
+			RC_CHK_ACCESS(map)->map_ip = RC_CHK_ACCESS(new_node->map)->map_ip;
+			RC_CHK_ACCESS(map)->unmap_ip = RC_CHK_ACCESS(new_node->map)->unmap_ip;
 			/* Ensure maps are correctly ordered */
 			map__get(map);
 			maps__remove(kmaps, map);
-- 
2.40.0.rc1.284.g88254d51c5-goog


^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
                   ` (16 preceding siblings ...)
  2023-03-20 21:22 ` [PATCH v5 17/17] perf map: " Ian Rogers
@ 2023-04-04 15:58 ` Ian Rogers
  2023-04-04 17:02   ` Arnaldo Carvalho de Melo
  2023-04-04 17:25   ` Adrian Hunter
  17 siblings, 2 replies; 33+ messages in thread
From: Ian Rogers @ 2023-04-04 15:58 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Adrian Hunter, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian

Ping. It would be nice to have this landed or at least the first 10
patches that refactor the map API and are the bulk of the
lines-of-code changed. Having those landed would make it easier to
rebase in the future, but I also think the whole series is ready to
go.

Thanks,
Ian

On Mon, Mar 20, 2023 at 2:23 PM Ian Rogers <irogers@google.com> wrote:
>
> The perf tool has a class of memory problems where reference counts
> are used incorrectly. Memory/address sanitizers and valgrind don't
> provide useful ways to debug these problems, you see a memory leak
> where the only pertinent information is the original allocation
> site. What would be more useful is knowing where a get fails to have a
> corresponding put, where there are double puts, etc.
>
> This work was motivated by the roll-back of:
> https://lore.kernel.org/linux-perf-users/20211118193714.2293728-1-irogers@google.com/
> where fixing a missed put resulted in a use-after-free in a different
> context. There was a sense in fixing the issue that a game of
> wac-a-mole had been embarked upon in adding missed gets and puts.
>
> The basic approach of the change is to add a level of indirection at
> the get and put calls. Get allocates a level of indirection that, if
> no corresponding put is called, becomes a memory leak (and associated
> stack trace) that leak sanitizer can report. Similarly if two puts are
> called for the same get, then a double free can be detected by address
> sanitizer. This can also detect the use after put, which should also
> yield a segv without a sanitizer.
>
> Adding reference count checking to cpu map was done as a proof of
> concept, it yielded little other than a location where the use of get
> could be cleaner by using its result. Reference count checking on
> nsinfo identified a double free of the indirection layer and the
> related threads, thereby identifying a data race as discussed here:
>  https://lore.kernel.org/linux-perf-users/CAP-5=fWZH20L4kv-BwVtGLwR=Em3AOOT+Q4QGivvQuYn5AsPRg@mail.gmail.com/
> Accordingly the dso->lock was extended and use to cover the race.
>
> The v3 version addresses problems in v2, in particular using macros to
> avoid #ifdefs. The v3 version applies the reference count checking
> approach to two more data structures, maps and map. While maps was
> straightforward, struct map showed a problem where reference counted
> thing can be on lists and rb-trees that are oblivious to the
> reference count. To sanitize this, struct map is changed so that it is
> referenced by either a list or rb-tree node and not part of it. This
> simplifies the reference count and the patches have caught and fixed a
> number of missed or mismatched reference counts relating to struct
> map.
>
> The patches are arranged so that API refactors and bug fixes appear
> first, then the reference count checker itself appears. This allows
> for the refactor and fixes to be applied upstream first, as has
> already happened with cpumap.
>
> A wider discussion of the approach is on the mailing list:
>  https://lore.kernel.org/linux-perf-users/YffqnynWcc5oFkI5@kernel.org/T/#mf25ccd7a2e03de92cec29d36e2999a8ab5ec7f88
> Comparing it to a past approach:
>  https://lore.kernel.org/all/20151209021047.10245.8918.stgit@localhost.localdomain/
> and to ref_tracker:
>  https://lwn.net/Articles/877603/
>
> v5. rebase removing 5 merged changes. Add map_list_node__new to the
>     1st patch (perf map: Move map list node into symbol) as suggested
>     by Arnaldo. Remove unnecessary map__puts from patch 12 (perf map:
>     Changes to reference counting) as suggested by Adrian. A summary
>     of the sizes of the remaining patches is:
> 74fd7ffafdd0 perf map: Add reference count checking
>  12 files changed, 136 insertions(+), 114 deletions(-)
> 4719196db8d3 perf maps: Add reference count checking.
>  8 files changed, 64 insertions(+), 56 deletions(-)
> 03943e7594cf perf namespaces: Add reference count checking
>  7 files changed, 83 insertions(+), 62 deletions(-)
> 0bb382cc52d7 perf cpumap: Add reference count checking
>  6 files changed, 81 insertions(+), 71 deletions(-)
> ef39f550c40d libperf: Add reference count checking macros.
>  1 file changed, 94 insertions(+)
> d9ac37c750e0 perf map: Changes to reference counting
>  11 files changed, 112 insertions(+), 44 deletions(-)
> 476014bc9b55 perf maps: Modify maps_by_name to hold a reference to a map
>  2 files changed, 33 insertions(+), 18 deletions(-)
> 91384676fddd perf test: Add extra diagnostics to maps test
>  1 file changed, 36 insertions(+), 15 deletions(-)
> fdc30434f826 perf map: Add accessors for pgoff and reloc
>  9 files changed, 33 insertions(+), 23 deletions(-)
> 368fe015adb2 perf map: Add accessors for prot, priv and flags
>  6 files changed, 28 insertions(+), 12 deletions(-)
> 2c6a8169826a perf map: Add helper for map_ip and unmap_ip
>  23 files changed, 80 insertions(+), 65 deletions(-)
> 929e59d49f4b perf map: Rename map_ip and unmap_ip
>  6 files changed, 13 insertions(+), 13 deletions(-)
> 4a38194aaaf5 perf map: Add accessor for start and end
>  24 files changed, 114 insertions(+), 103 deletions(-)
> 02b63e5c415e perf map: Add accessor for dso
>  48 files changed, 404 insertions(+), 293 deletions(-)
> 9324af6ccf42 perf maps: Add functions to access maps
>  20 files changed, 175 insertions(+), 111 deletions(-)
> 5c590d36a308 perf maps: Remove rb_node from struct map
>  16 files changed, 291 insertions(+), 184 deletions(-)
> af1d142eb777 perf map: Move map list node into symbol
>  2 files changed, 63 insertions(+), 35 deletions(-)
>
> v4. rebases on to acme's perf-tools-next, fixes more issues with
>     map/maps and breaks apart the accessor functions to reduce
>     individual patch sizes. The accessor functions are mechanical
>     changes where the single biggest one is refactoring use of
>     map->dso to be map__dso(map).
>
> The v3 change is available here:
> https://lore.kernel.org/lkml/20220211103415.2737789-1-irogers@google.com/
>
> Ian Rogers (17):
>   perf map: Move map list node into symbol
>   perf maps: Remove rb_node from struct map
>   perf maps: Add functions to access maps
>   perf map: Add accessor for dso
>   perf map: Add accessor for start and end
>   perf map: Rename map_ip and unmap_ip
>   perf map: Add helper for map_ip and unmap_ip
>   perf map: Add accessors for prot, priv and flags
>   perf map: Add accessors for pgoff and reloc
>   perf test: Add extra diagnostics to maps test
>   perf maps: Modify maps_by_name to hold a reference to a map
>   perf map: Changes to reference counting
>   libperf: Add reference count checking macros.
>   perf cpumap: Add reference count checking
>   perf namespaces: Add reference count checking
>   perf maps: Add reference count checking.
>   perf map: Add reference count checking
>
>  tools/lib/perf/Makefile                       |   2 +-
>  tools/lib/perf/cpumap.c                       |  94 ++---
>  tools/lib/perf/include/internal/cpumap.h      |   4 +-
>  tools/lib/perf/include/internal/rc_check.h    |  94 +++++
>  tools/perf/arch/s390/annotate/instructions.c  |   4 +-
>  tools/perf/arch/x86/tests/dwarf-unwind.c      |   2 +-
>  tools/perf/arch/x86/util/event.c              |  13 +-
>  tools/perf/builtin-annotate.c                 |  11 +-
>  tools/perf/builtin-buildid-list.c             |   4 +-
>  tools/perf/builtin-inject.c                   |  12 +-
>  tools/perf/builtin-kallsyms.c                 |   6 +-
>  tools/perf/builtin-kmem.c                     |   4 +-
>  tools/perf/builtin-lock.c                     |   4 +-
>  tools/perf/builtin-mem.c                      |  10 +-
>  tools/perf/builtin-report.c                   |  26 +-
>  tools/perf/builtin-script.c                   |  27 +-
>  tools/perf/builtin-top.c                      |  17 +-
>  tools/perf/builtin-trace.c                    |   2 +-
>  .../scripts/python/Perf-Trace-Util/Context.c  |  13 +-
>  tools/perf/tests/code-reading.c               |  37 +-
>  tools/perf/tests/cpumap.c                     |   4 +-
>  tools/perf/tests/hists_common.c               |   8 +-
>  tools/perf/tests/hists_cumulate.c             |  14 +-
>  tools/perf/tests/hists_filter.c               |  14 +-
>  tools/perf/tests/hists_link.c                 |  18 +-
>  tools/perf/tests/hists_output.c               |  12 +-
>  tools/perf/tests/maps.c                       |  69 ++--
>  tools/perf/tests/mmap-thread-lookup.c         |   3 +-
>  tools/perf/tests/symbols.c                    |   6 +-
>  tools/perf/tests/thread-maps-share.c          |  29 +-
>  tools/perf/tests/vmlinux-kallsyms.c           |  54 +--
>  tools/perf/ui/browsers/annotate.c             |   9 +-
>  tools/perf/ui/browsers/hists.c                |  19 +-
>  tools/perf/ui/browsers/map.c                  |   4 +-
>  tools/perf/util/annotate.c                    |  40 ++-
>  tools/perf/util/auxtrace.c                    |   2 +-
>  tools/perf/util/block-info.c                  |   4 +-
>  tools/perf/util/bpf-event.c                   |  10 +-
>  tools/perf/util/bpf_lock_contention.c         |   6 +-
>  tools/perf/util/build-id.c                    |   2 +-
>  tools/perf/util/callchain.c                   |  24 +-
>  tools/perf/util/cpumap.c                      |  40 ++-
>  tools/perf/util/data-convert-json.c           |  10 +-
>  tools/perf/util/db-export.c                   |  16 +-
>  tools/perf/util/dlfilter.c                    |  28 +-
>  tools/perf/util/dso.c                         |   8 +-
>  tools/perf/util/dsos.c                        |   2 +-
>  tools/perf/util/event.c                       |  27 +-
>  tools/perf/util/evsel_fprintf.c               |   4 +-
>  tools/perf/util/hist.c                        |  22 +-
>  tools/perf/util/intel-pt.c                    |  63 ++--
>  tools/perf/util/machine.c                     | 252 ++++++++------
>  tools/perf/util/map.c                         | 217 ++++++------
>  tools/perf/util/map.h                         |  74 +++-
>  tools/perf/util/maps.c                        | 318 ++++++++++-------
>  tools/perf/util/maps.h                        |  67 +++-
>  tools/perf/util/namespaces.c                  | 132 +++++---
>  tools/perf/util/namespaces.h                  |   3 +-
>  tools/perf/util/pmu.c                         |   8 +-
>  tools/perf/util/probe-event.c                 |  62 ++--
>  .../util/scripting-engines/trace-event-perl.c |  10 +-
>  .../scripting-engines/trace-event-python.c    |  26 +-
>  tools/perf/util/sort.c                        |  67 ++--
>  tools/perf/util/symbol-elf.c                  |  41 ++-
>  tools/perf/util/symbol.c                      | 320 +++++++++++-------
>  tools/perf/util/symbol_fprintf.c              |   2 +-
>  tools/perf/util/synthetic-events.c            |  34 +-
>  tools/perf/util/thread-stack.c                |   4 +-
>  tools/perf/util/thread.c                      |  39 +--
>  tools/perf/util/unwind-libdw.c                |  20 +-
>  tools/perf/util/unwind-libunwind-local.c      |  16 +-
>  tools/perf/util/unwind-libunwind.c            |  33 +-
>  tools/perf/util/vdso.c                        |   7 +-
>  73 files changed, 1665 insertions(+), 1044 deletions(-)
>  create mode 100644 tools/lib/perf/include/internal/rc_check.h
>
> --
> 2.40.0.rc1.284.g88254d51c5-goog
>

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 15:58 ` [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
@ 2023-04-04 17:02   ` Arnaldo Carvalho de Melo
  2023-04-04 17:07     ` Arnaldo Carvalho de Melo
  2023-04-04 17:25   ` Adrian Hunter
  1 sibling, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 17:02 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Adrian Hunter,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On Tue, Apr 04, 2023 at 08:58:55AM -0700, Ian Rogers wrote:
> Ping. It would be nice to have this landed or at least the first 10
> patches that refactor the map API and are the bulk of the
> lines-of-code changed. Having those landed would make it easier to
> rebase in the future, but I also think the whole series is ready to
> go.

I'm trying to get it to compile:

  CC      /tmp/build/perf-tools-next/util/bpf-event.o
In file included from util/machine.h:7,
                 from util/session.h:8,
                 from util/unwind-libunwind-local.c:35:
util/unwind-libunwind-local.c: In function ‘read_unwind_spec_eh_frame’:
util/maps.h:29:18: error: assignment to ‘struct map *’ from incompatible pointer type ‘struct map_rb_node *’ [-Werror=incompatible-pointer-types]
   29 |         for (map = maps__first(maps); map; map = map_rb_node__next(map))
      |                  ^
util/unwind-libunwind-local.c:328:9: note: in expansion of macro ‘maps__for_each_entry’
  328 |         maps__for_each_entry(ui->thread->maps, map) {
      |         ^~~~~~~~~~~~~~~~~~~~
util/unwind-libunwind-local.c:328:48: error: passing argument 1 of ‘map_rb_node__next’ from incompatible pointer type [-Werror=incompatible-pointer-types]
  328 |         maps__for_each_entry(ui->thread->maps, map) {
      |                                                ^~~
      |                                                |
      |                                                struct map *
util/maps.h:29:68: note: in definition of macro ‘maps__for_each_entry’
   29 |         for (map = maps__first(maps); map; map = map_rb_node__next(map))
      |                                                                    ^~~
util/maps.h:24:59: note: expected ‘struct map_rb_node *’ but argument is of type ‘struct map *’
   24 | struct map_rb_node *map_rb_node__next(struct map_rb_node *node);
      |                                       ~~~~~~~~~~~~~~~~~~~~^~~~
util/maps.h:29:48: error: assignment to ‘struct map *’ from incompatible pointer type ‘struct map_rb_node *’ [-Werror=incompatible-pointer-types]
   29 |         for (map = maps__first(maps); map; map = map_rb_node__next(map))
      |                                                ^
util/unwind-libunwind-local.c:328:9: note: in expansion of macro ‘maps__for_each_entry’
  328 |         maps__for_each_entry(ui->thread->maps, map) {
      |         ^~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[4]: *** [/var/home/acme/git/perf-tools-next/tools/build/Makefile.build:97: /tmp/build/perf-tools-next/util/unwind-libunwind-local.o] Error 1
make[4]: *** Waiting for unfinished jobs....
  LD      /tmp/build/perf-tools-next/util/scripting-engines/perf-in.o
make[3]: *** [/var/home/acme/git/perf-tools-next/tools/build/Makefile.build:140: util] Error 2
make[2]: *** [Makefile.perf:676: /tmp/build/perf-tools-next/perf-in.o] Error 2
make[1]: *** [Makefile.perf:236: sub-make] Error 2
make: *** [Makefile:113: install-bin] Error 2
make: Leaving directory '/var/home/acme/git/perf-tools-next/tools/perf'

 Performance counter stats for 'make -k BUILD_BPF_SKEL=1 CORESIGHT=1 PYTHON=python3 O=/tmp/build/perf-tools-next -C tools/perf install-bin':

      260622279301      cycles:u
      285362743453      instructions:u                   #    1.09  insn per cycle

       6.001315366 seconds time elapsed

      62.979105000 seconds user
      13.088797000 seconds sys


⬢[acme@toolbox perf-tools-next]$ git log --oneline -1
51a0f26e88c893ac (HEAD) perf maps: Remove rb_node from struct map
⬢[acme@toolbox perf-tools-next]$

I'm also making some changes to reduce the number of patch lines and
preserve the usefulness of the project's 'git blame', without changing
the logic in your patches.

- Arnaldo
 
> Thanks,
> Ian
> 
> On Mon, Mar 20, 2023 at 2:23 PM Ian Rogers <irogers@google.com> wrote:
> >
> > [full cover letter quote snipped; identical to the quote in the previous message]

-- 

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 17:02   ` Arnaldo Carvalho de Melo
@ 2023-04-04 17:07     ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 17:07 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Adrian Hunter,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On Tue, Apr 04, 2023 at 02:02:36PM -0300, Arnaldo Carvalho de Melo wrote:
> On Tue, Apr 04, 2023 at 08:58:55AM -0700, Ian Rogers wrote:
> > Ping. It would be nice to have this landed or at least the first 10
> > patches that refactor the map API and are the bulk of the
> > lines-of-code changed. Having those landed would make it easier to
> > rebase in the future, but I also think the whole series is ready to
> > go.
> 
> I'm trying to get it to compile:
> 
>   CC      /tmp/build/perf-tools-next/util/bpf-event.o
> In file included from util/machine.h:7,
>                  from util/session.h:8,
>                  from util/unwind-libunwind-local.c:35:
> util/unwind-libunwind-local.c: In function ‘read_unwind_spec_eh_frame’:
> util/maps.h:29:18: error: assignment to ‘struct map *’ from incompatible pointer type ‘struct map_rb_node *’ [-Werror=incompatible-pointer-types]
>    29 |         for (map = maps__first(maps); map; map = map_rb_node__next(map))
>       |                  ^
> util/unwind-libunwind-local.c:328:9: note: in expansion of macro ‘maps__for_each_entry’
> 
> ⬢[acme@toolbox perf-tools-next]$ git log --oneline -1
> 51a0f26e88c893ac (HEAD) perf maps: Remove rb_node from struct map
> ⬢[acme@toolbox perf-tools-next]$
> 
> I'm also making some changes to reduce the number of patch lines and
> preserve the usefulness of the project's 'git blame', without changing
> the logic in your patches.

The fix for the above problem demonstrates the changes I made to this
patch; see the

  struct map *map = map_node->map;

line, added to avoid touching the logic right after it.
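
For illustration, a sketch of the loop shape those errors point at,
assuming the maps__for_each_entry() macro and struct map_rb_node from
util/maps.h shown in the build output above (the surrounding
read_unwind_spec_eh_frame() logic is omitted):

  struct map_rb_node *map_node;   /* iterate the rb-node wrapper, not struct map */

  maps__for_each_entry(ui->thread->maps, map_node) {
          struct map *map = map_node->map;  /* keep the loop body using 'map' */

          /* ... the existing code that dereferences 'map' stays untouched ... */
  }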

Now I'm working on this other error:

  CC      /tmp/build/perf-tools-next/util/jitdump.o
  CC      /tmp/build/perf-tools-next/util/bpf-event.o
util/unwind-libunwind.c: In function ‘unwind__get_entries’:
util/unwind-libunwind.c:95:24: error: too few arguments to function ‘ops->get_entries’
   95 |                 return ops->get_entries(cb, arg, thread, data, max_stack);
      |                        ^~~
util/unwind-libunwind.c:90:31: error: unused parameter ‘best_effort’ [-Werror=unused-parameter]
   90 |                          bool best_effort)
      |                               ^
cc1: all warnings being treated as errors
make[4]: *** [/var/home/acme/git/perf-tools-next/tools/build/Makefile.build:97: /tmp/build/perf-tools-next/util/unwind-libunwind.o] Error 1
make[4]: *** Waiting for unfinished jobs....
  LD      /tmp/build/perf-tools-next/ui/browsers/perf-in.o
  LD      /tmp/build/perf-tools-next/ui/perf-in.o
  LD      /tmp/build/perf-tools-next/util/scripting-engines/perf-in.o
make[3]: *** [/var/home/acme/git/perf-tools-next/tools/build/Makefile.build:140: util] Error 2
make[2]: *** [Makefile.perf:676: /tmp/build/perf-tools-next/perf-in.o] Error 2
make[1]: *** [Makefile.perf:236: sub-make] Error 2
make: *** [Makefile:113: install-bin] Error 2
make: Leaving directory '/var/home/acme/git/perf-tools-next/tools/perf'

 Performance counter stats for 'make -k BUILD_BPF_SKEL=1 CORESIGHT=1 PYTHON=python3 O=/tmp/build/perf-tools-next -C tools/perf install-bin':

      162599516548      cycles:u
      194726899066      instructions:u                   #    1.20  insn per cycle

       4.991056085 seconds time elapsed

      39.350659000 seconds user
       8.413527000 seconds sys


⬢[acme@toolbox perf-tools-next]$ git log --oneline -1
a95f2d0f62bfd750 (HEAD) perf maps: Add functions to access maps
⬢[acme@toolbox perf-tools-next]$

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 15:58 ` [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
  2023-04-04 17:02   ` Arnaldo Carvalho de Melo
@ 2023-04-04 17:25   ` Adrian Hunter
  2023-04-04 17:35     ` Ian Rogers
  2023-04-04 18:41     ` Arnaldo Carvalho de Melo
  1 sibling, 2 replies; 33+ messages in thread
From: Adrian Hunter @ 2023-04-04 17:25 UTC (permalink / raw)
  To: Ian Rogers, Peter Zijlstra, Ingo Molnar,
	Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin,
	Jiri Olsa, Namhyung Kim, Thomas Gleixner, Darren Hart,
	Davidlohr Bueso, James Clark, John Garry, Riccardo Mancini,
	Yury Norov, Andy Shevchenko, Andrew Morton, Leo Yan, Andi Kleen,
	Thomas Richter, Kan Liang, Madhavan Srinivasan,
	Shunsuke Nakamura, Song Liu, Masami Hiramatsu, Steven Rostedt,
	Miaoqian Lin, Stephen Brennan, Kajol Jain, Alexey Bayduraev,
	German Gomez, linux-perf-users, linux-kernel, Eric Dumazet,
	Dmitry Vyukov, Hao Luo
  Cc: Stephane Eranian

On 4/04/23 18:58, Ian Rogers wrote:
> Ping. It would be nice to have this landed or at least the first 10
> patches that refactor the map API and are the bulk of the
> lines-of-code changed. Having those landed would make it easier to
> rebase in the future, but I also think the whole series is ready to
> go.

I was wondering if the handling of dynamic data like struct map makes
any sense at present.  Perhaps someone can reassure me.

A struct map can be updated when an MMAP event is processed.  So it
seems like anything racing with event processing is already broken, and
reference counting / locking cannot help - unless there is also
copy-on-write (which there isn't at present)?

For struct maps, referencing it while simultaneously processing
events seems to make even less sense?


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 17:25   ` Adrian Hunter
@ 2023-04-04 17:35     ` Ian Rogers
  2023-04-04 18:37       ` Adrian Hunter
  2023-04-04 19:22       ` Arnaldo Carvalho de Melo
  2023-04-04 18:41     ` Arnaldo Carvalho de Melo
  1 sibling, 2 replies; 33+ messages in thread
From: Ian Rogers @ 2023-04-04 17:35 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On Tue, Apr 4, 2023 at 10:26 AM Adrian Hunter <adrian.hunter@intel.com> wrote:
>
> On 4/04/23 18:58, Ian Rogers wrote:
> > Ping. It would be nice to have this landed or at least the first 10
> > patches that refactor the map API and are the bulk of the
> > lines-of-code changed. Having those landed would make it easier to
> > rebase in the future, but I also think the whole series is ready to
> > go.
>
> I was wondering if the handling of dynamic data like struct map makes
> any sense at present.  Perhaps someone can reassure me.
>
> A struct map can be updated when an MMAP event is processed.  So it
> seems like anything racing with event processing is already broken, and
> reference counting / locking cannot help - unless there is also
> copy-on-write (which there isn't at present)?
>
> For struct maps, referencing it while simultaneously processing
> events seems to make even less sense?

Agreed. The point of this work isn't to reimplement the maps/map APIs
but to add a layer of reference count checking. A refactor to change
the implementation without reference counts can delete the reference
count checking, and I think that is great! I'm trying to get the code
base, in its current shape, to be more correct, guided by sanitizers.
Unfortunately the sanitizers come from a C++ RAII world where
maintaining reference counts is somewhat trivial; we have to work
harder, as is done here.
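
To give a feel for the idea (a rough sketch only, with made-up names,
not the actual macros from the libperf patch in this series): a get
allocates a small per-reference wrapper, so a missed put becomes a leak
that leak sanitizer reports at the get site, and a double put becomes a
double free that address sanitizer flags:

	#include <stdlib.h>

	/* Rough sketch with made-up names; the real checker hides behind macros. */
	struct checked_map {
		struct map *map;            /* the real, reference counted object */
	};

	static inline struct checked_map *checked_map__get(struct map *map)
	{
		struct checked_map *cm = malloc(sizeof(*cm));

		if (cm)
			cm->map = map__get(map); /* a missed put leaks 'cm' and its get site */
		return cm;
	}

	static inline void checked_map__put(struct checked_map *cm)
	{
		if (!cm)
			return;
		map__put(cm->map);
		free(cm);                   /* a second put is a double free ASan can flag */
	}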

A similar refactor to the maps one would be changing symbol. The rb_node
there accounts for 3*8 bytes of pointers, but is only there to sort the
symbols by address. A sorted array would suffice, complexity-wise,
freeing 16 bytes per symbol, and this is already done for symbols
sorted by name.
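
Something like this would do for the by-address lookup (a rough sketch;
the plain array, symbols__sort_by_addr() and symbols__find_by_addr() are
made up for illustration, and perf's u64/struct symbol types are
assumed):

	#include <stdlib.h>

	/* Sketch: keep the symbols in a plain array sorted by start address
	 * instead of hanging an rb_node off every struct symbol. */
	static int symbol_addr_cmp(const void *pa, const void *pb)
	{
		const struct symbol *a = *(const struct symbol * const *)pa;
		const struct symbol *b = *(const struct symbol * const *)pb;

		if (a->start != b->start)
			return a->start < b->start ? -1 : 1;
		return 0;
	}

	static void symbols__sort_by_addr(struct symbol **syms, size_t nr_syms)
	{
		qsort(syms, nr_syms, sizeof(*syms), symbol_addr_cmp);
	}

	/* Binary search for the symbol containing addr, assuming half-open
	 * [start, end) ranges and no overlaps. */
	static struct symbol *symbols__find_by_addr(struct symbol **syms, size_t nr_syms, u64 addr)
	{
		size_t lo = 0, hi = nr_syms;

		while (lo < hi) {
			size_t mid = lo + (hi - lo) / 2;

			if (addr < syms[mid]->start)
				hi = mid;
			else if (addr >= syms[mid]->end)
				lo = mid + 1;
			else
				return syms[mid];
		}
		return NULL;
	}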

Thanks,
Ian

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 17:35     ` Ian Rogers
@ 2023-04-04 18:37       ` Adrian Hunter
  2023-04-04 19:22       ` Arnaldo Carvalho de Melo
  1 sibling, 0 replies; 33+ messages in thread
From: Adrian Hunter @ 2023-04-04 18:37 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
	Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
	Thomas Gleixner, Darren Hart, Davidlohr Bueso, James Clark,
	John Garry, Riccardo Mancini, Yury Norov, Andy Shevchenko,
	Andrew Morton, Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On 4/04/23 20:35, Ian Rogers wrote:
> On Tue, Apr 4, 2023 at 10:26 AM Adrian Hunter <adrian.hunter@intel.com> wrote:
>>
>> On 4/04/23 18:58, Ian Rogers wrote:
>>> Ping. It would be nice to have this landed or at least the first 10
>>> patches that refactor the map API and are the bulk of the
>>> lines-of-code changed. Having those landed would make it easier to
>>> rebase in the future, but I also think the whole series is ready to
>>> go.
>>
>> I was wondering if the handling of dynamic data like struct map makes
>> any sense at present.  Perhaps someone can reassure me.
>>
>> A struct map can be updated when an MMAP event is processed.  So it
>> seems like anything racing with event processing is already broken, and
>> reference counting / locking cannot help - unless there is also
>> copy-on-write (which there isn't at present)?
>>
>> For struct maps, referencing it while simultaneously processing
>> events seems to make even less sense?
> 
> Agreed. The point of this work isn't to reimplement the maps/map APIs
> but to add a layer of reference count checking. A refactor to change
> the implementation without reference counts can delete the reference
> count checking and I think that is great! I'm trying to get the code
> base, in its current shape, to be more correct guided by sanitizers.
> Unfortunately the sanitizers come from a C++ RAII world where
> maintaining reference counts is somewhat trivial, we have to work
> harder as done here.
> 
> A similar thing to refactoring maps is changing symbol. The rb_node
> there accounts for 3*8 bytes of pointer, but is just to sort the
> symbol by address. A sorted array would suffice as well complexity
> wise, freeing 16-bytes per symbol, and is already done for symbols
> sorted by name.

Ok, just stuff to keep in mind.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 17:25   ` Adrian Hunter
  2023-04-04 17:35     ` Ian Rogers
@ 2023-04-04 18:41     ` Arnaldo Carvalho de Melo
  2023-04-04 18:54       ` Arnaldo Carvalho de Melo
  1 sibling, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 18:41 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
> On 4/04/23 18:58, Ian Rogers wrote:
> > Ping. It would be nice to have this landed or at least the first 10
> > patches that refactor the map API and are the bulk of the
> > lines-of-code changed. Having those landed would make it easier to
> > rebase in the future, but I also think the whole series is ready to
> > go.
> 
> I was wondering if the handling of dynamic data like struct map makes
> any sense at present.  Perhaps someone can reassure me.
> 
> A struct map can be updated when an MMAP event is processed.  So it

Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?

So:

	perf_event__process_mmap()
	  machine__process_mmap2_event()
	    map__new() + thread__insert_map(thread, map)
	    	maps__fixup_overlappings()
			maps__insert(thread->maps, map);

Ok, from this point on new samples on ] map->start .. map->end ] will
grab a refcount to this new map in its hist_entry, right?

When we want to sort by dso we will look at hist_entry->map->dso, etc.
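
(In code that is roughly the following when the entry is created; a
sketch only, the helper name is made up and the real hists code differs
in detail:)

	/* Sketch: the hist_entry takes its own reference on the map the sample
	 * resolved to, so the map stays valid for sorting/annotation even if
	 * thread->maps later replaces it. */
	static void hist_entry__ref_map(struct hist_entry *he, struct addr_location *al)
	{
		he->ms.map = map__get(al->map);
		he->ms.sym = al->sym;
	}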

> seems like anything racing with event processing is already broken, and
> reference counting / locking cannot help - unless there is also
> copy-on-write (which there isn't at present)?

> For struct maps, referencing it while simultaneously processing
> events seems to make even less sense?

Can you elaborate some more?

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 18:41     ` Arnaldo Carvalho de Melo
@ 2023-04-04 18:54       ` Arnaldo Carvalho de Melo
  2023-04-05  8:47         ` Adrian Hunter
  0 siblings, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 18:54 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Em Tue, Apr 04, 2023 at 03:41:38PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
> > On 4/04/23 18:58, Ian Rogers wrote:
> > > Ping. It would be nice to have this landed or at least the first 10
> > > patches that refactor the map API and are the bulk of the
> > > lines-of-code changed. Having those landed would make it easier to
> > > rebase in the future, but I also think the whole series is ready to
> > > go.
> > 
> > I was wondering if the handling of dynamic data like struct map makes
> > any sense at present.  Perhaps someone can reassure me.
> > 
> > A struct map can be updated when an MMAP event is processed.  So it
> 
> Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?
> 
> So:
> 
> 	perf_event__process_mmap()
> 	  machine__process_mmap2_event()
> 	    map__new() + thread__insert_map(thread, map)
> 	    	maps__fixup_overlappings()
> 			maps__insert(thread->maps, map);
> 
> Ok, from this point on new samples on ] map->start .. map->end ] will
> grab a refcount to this new map in its hist_entry, right?
> 
> When we want to sort by dso we will look at hist_entry->map->dso, etc.

And in 'perf top' we keep decaying hist entries; when we delete a
hist_entry, we drop the reference counts to the things it holds, which
are then finally deleted when no more hist_entries point to them.
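
(Sketch of the release side, again with a made-up helper name:)

	/* When a decayed hist_entry is deleted it drops its references; the
	 * last map__put() frees the map. */
	static void hist_entry__unref_map(struct hist_entry *he)
	{
		map__put(he->ms.map);
		he->ms.map = NULL;
	}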

> > seems like anything racing with event processing is already broken, and
> > reference counting / locking cannot help - unless there is also
> > copy-on-write (which there isn't at present)?
> 
> > For struct maps, referencing it while simultaneously processing
> > events seems to make even less sense?
> 
> Can you elaborate some more?

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 17:35     ` Ian Rogers
  2023-04-04 18:37       ` Adrian Hunter
@ 2023-04-04 19:22       ` Arnaldo Carvalho de Melo
  2023-04-04 19:53         ` Arnaldo Carvalho de Melo
  1 sibling, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 19:22 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Adrian Hunter, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Applied to:

perf map: Add accessor for dso


diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
index 20cd6244863b1a09..fe0e4530673c6661 100644
--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
@@ -255,7 +255,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
 	thread__find_symbol(thread, PERF_RECORD_MISC_USER, ip, &al);
 
 	if (al.map)
-		dso = al.map->dso;
+		dso = map__dso(al.map);
 
 	if (!dso) {
 		pr_debug("%" PRIx64 " dso is NULL\n", ip);
diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
index 0856b32f9e08a1f5..9f99fc88dbff9056 100644
--- a/tools/perf/arch/powerpc/util/sym-handling.c
+++ b/tools/perf/arch/powerpc/util/sym-handling.c
@@ -104,7 +104,7 @@ void arch__fix_tev_from_maps(struct perf_probe_event *pev,
 
 	lep_offset = PPC64_LOCAL_ENTRY_OFFSET(sym->arch_sym);
 
-	if (map->dso->symtab_type == DSO_BINARY_TYPE__KALLSYMS)
+	if (map__dso(map)->symtab_type == DSO_BINARY_TYPE__KALLSYMS)
 		tev->point.offset += PPC64LE_LEP_OFFSET;
 	else if (lep_offset) {
 		if (pev->uprobes)
diff --git a/tools/perf/ui/gtk/annotate.c b/tools/perf/ui/gtk/annotate.c
index a1c021a6d3c1f0f1..2effac77ca8c6742 100644
--- a/tools/perf/ui/gtk/annotate.c
+++ b/tools/perf/ui/gtk/annotate.c
@@ -165,6 +165,7 @@ static int symbol__gtk_annotate(struct map_symbol *ms, struct evsel *evsel,
 				struct annotation_options *options,
 				struct hist_browser_timer *hbt)
 {
+	struct dso *dso = map__dso(ms->map);
 	struct symbol *sym = ms->sym;
 	GtkWidget *window;
 	GtkWidget *notebook;
@@ -172,13 +173,13 @@ static int symbol__gtk_annotate(struct map_symbol *ms, struct evsel *evsel,
 	GtkWidget *tab_label;
 	int err;
 
-	if (ms->map->dso->annotate_warned)
+	if (dso->annotate_warned)
 		return -1;
 
 	err = symbol__annotate(ms, evsel, options, NULL);
 	if (err) {
 		char msg[BUFSIZ];
-		ms->map->dso->annotate_warned = true;
+		dso->annotate_warned = true;
 		symbol__strerror_disassemble(ms, err, msg, sizeof(msg));
 		ui__error("Couldn't annotate %s: %s\n", sym->name, msg);
 		return -1;
diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
index 94e2d02009eb9f72..528a7fb066cfc9ec 100644
--- a/tools/perf/util/cs-etm.c
+++ b/tools/perf/util/cs-etm.c
@@ -865,6 +865,7 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
 	struct thread *thread;
 	struct machine *machine;
 	struct addr_location al;
+	struct dso *dso;
 	struct cs_etm_traceid_queue *tidq;
 
 	if (!etmq)
@@ -883,27 +884,29 @@ static u32 cs_etm__mem_access(struct cs_etm_queue *etmq, u8 trace_chan_id,
 		thread = etmq->etm->unknown_thread;
 	}
 
-	if (!thread__find_map(thread, cpumode, address, &al) || !al.map->dso)
+	dso = map__dso(al.map);
+
+	if (!thread__find_map(thread, cpumode, address, &al) || !dso)
 		return 0;
 
-	if (al.map->dso->data.status == DSO_DATA_STATUS_ERROR &&
-	    dso__data_status_seen(al.map->dso, DSO_DATA_STATUS_SEEN_ITRACE))
+	if (dso->data.status == DSO_DATA_STATUS_ERROR &&
+	    dso__data_status_seen(dso, DSO_DATA_STATUS_SEEN_ITRACE))
 		return 0;
 
 	offset = al.map->map_ip(al.map, address);
 
 	map__load(al.map);
 
-	len = dso__data_read_offset(al.map->dso, machine, offset, buffer, size);
+	len = dso__data_read_offset(dso, machine, offset, buffer, size);
 
 	if (len <= 0) {
 		ui__warning_once("CS ETM Trace: Missing DSO. Use 'perf archive' or debuginfod to export data from the traced system.\n"
 				 "              Enable CONFIG_PROC_KCORE or use option '-k /path/to/vmlinux' for kernel symbols.\n");
-		if (!al.map->dso->auxtrace_warned) {
+		if (!dso->auxtrace_warned) {
 			pr_err("CS ETM Trace: Debug data not found for address %#"PRIx64" in %s\n",
 				    address,
-				    al.map->dso->long_name ? al.map->dso->long_name : "Unknown");
-			al.map->dso->auxtrace_warned = true;
+				    dso->long_name ? dso->long_name : "Unknown");
+			dso->auxtrace_warned = true;
 		}
 		return 0;
 	}
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index c487a249b33c62d4..108f7b1697a73465 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -328,7 +328,7 @@ static int read_unwind_spec_eh_frame(struct dso *dso, struct unwind_info *ui,
 	maps__for_each_entry(ui->thread->maps, map_node) {
 		struct map *map = map_node->map;
 
-		if (map->dso == dso && map->start < base_addr)
+		if (map__dso(map) == dso && map->start < base_addr)
 			base_addr = map->start;
 	}
 	base_addr -= dso->data.elf_base_addr;
@@ -424,19 +424,23 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 {
 	struct unwind_info *ui = arg;
 	struct map *map;
+	struct dso *dso;
 	unw_dyn_info_t di;
 	u64 table_data, segbase, fde_count;
 	int ret = -EINVAL;
 
 	map = find_map(ip, ui);
-	if (!map || !map->dso)
+	if (!map)
 		return -EINVAL;
 
-	pr_debug("unwind: find_proc_info dso %s\n", map->dso->name);
+	dso = map__dso(map);
+	if (!dso)
+		return -EINVAL;
+
+	pr_debug("unwind: find_proc_info dso %s\n", dso->name);
 
 	/* Check the .eh_frame section for unwinding info */
-	if (!read_unwind_spec_eh_frame(map->dso, ui,
-				       &table_data, &segbase, &fde_count)) {
+	if (!read_unwind_spec_eh_frame(dso, ui, &table_data, &segbase, &fde_count)) {
 		memset(&di, 0, sizeof(di));
 		di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
 		di.start_ip = map->start;
@@ -452,16 +456,16 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 #ifndef NO_LIBUNWIND_DEBUG_FRAME
 	/* Check the .debug_frame section for unwinding info */
 	if (ret < 0 &&
-	    !read_unwind_spec_debug_frame(map->dso, ui->machine, &segbase)) {
-		int fd = dso__data_get_fd(map->dso, ui->machine);
-		int is_exec = elf_is_exec(fd, map->dso->name);
+	    !read_unwind_spec_debug_frame(dso, ui->machine, &segbase)) {
+		int fd = dso__data_get_fd(dso, ui->machine);
+		int is_exec = elf_is_exec(fd, dso->name);
 		unw_word_t base = is_exec ? 0 : map->start;
 		const char *symfile;
 
 		if (fd >= 0)
-			dso__data_put_fd(map->dso);
+			dso__data_put_fd(dso);
 
-		symfile = map->dso->symsrc_filename ?: map->dso->name;
+		symfile = dso->symsrc_filename ?: dso->name;
 
 		memset(&di, 0, sizeof(di));
 		if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
@@ -513,6 +517,7 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
 			  unw_word_t *data)
 {
 	struct map *map;
+	struct dso *dso;
 	ssize_t size;
 
 	map = find_map(addr, ui);
@@ -521,10 +526,12 @@ static int access_dso_mem(struct unwind_info *ui, unw_word_t addr,
 		return -1;
 	}
 
-	if (!map->dso)
+	dso = map__dso(map);
+
+	if (!dso)
 		return -1;
 
-	size = dso__data_read_addr(map->dso, map, ui->machine,
+	size = dso__data_read_addr(dso, map, ui->machine,
 				   addr, (u8 *) data, sizeof(*data));
 
 	return !(size == sizeof(*data));
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index 42528ade513e4975..4378daaafcd3b875 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -22,6 +22,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 	const char *arch;
 	enum dso_type dso_type;
 	struct unwind_libunwind_ops *ops = local_unwind_libunwind_ops;
+	struct dso *dso = map__dso(map);
 	struct machine *machine;
 	int err;
 
@@ -29,8 +30,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 		return 0;
 
 	if (maps__addr_space(maps)) {
-		pr_debug("unwind: thread map already set, dso=%s\n",
-			 map->dso->name);
+		pr_debug("unwind: thread map already set, dso=%s\n", dso->name);
 		if (initialized)
 			*initialized = true;
 		return 0;
@@ -41,7 +41,7 @@ int unwind__prepare_access(struct maps *maps, struct map *map, bool *initialized
 	if (!machine->env || !machine->env->arch)
 		goto out_register;
 
-	dso_type = dso__type(map->dso, machine);
+	dso_type = dso__type(dso, machine);
 	if (dso_type == DSO__TYPE_UNKNOWN)
 		return 0;
 

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 19:22       ` Arnaldo Carvalho de Melo
@ 2023-04-04 19:53         ` Arnaldo Carvalho de Melo
  2023-04-04 19:54           ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 19:53 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Adrian Hunter, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Applied to:

perf map: Add accessor for start and end

diff --git a/tools/perf/arch/arm/tests/dwarf-unwind.c b/tools/perf/arch/arm/tests/dwarf-unwind.c
index ccfa87055c4a3b9d..566fb6c0eae737c6 100644
--- a/tools/perf/arch/arm/tests/dwarf-unwind.c
+++ b/tools/perf/arch/arm/tests/dwarf-unwind.c
@@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
 		return -1;
 	}
 
-	stack_size = map->end - sp;
+	stack_size = map__end(map) - sp;
 	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
 
 	memcpy(buf, (void *) sp, stack_size);
diff --git a/tools/perf/arch/arm64/tests/dwarf-unwind.c b/tools/perf/arch/arm64/tests/dwarf-unwind.c
index 46147a483049615d..90a7ef293ce76879 100644
--- a/tools/perf/arch/arm64/tests/dwarf-unwind.c
+++ b/tools/perf/arch/arm64/tests/dwarf-unwind.c
@@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
 		return -1;
 	}
 
-	stack_size = map->end - sp;
+	stack_size = map__end(map) - sp;
 	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
 
 	memcpy(buf, (void *) sp, stack_size);
diff --git a/tools/perf/arch/powerpc/tests/dwarf-unwind.c b/tools/perf/arch/powerpc/tests/dwarf-unwind.c
index c9cb4b059392f6cf..32fffb593fbf0236 100644
--- a/tools/perf/arch/powerpc/tests/dwarf-unwind.c
+++ b/tools/perf/arch/powerpc/tests/dwarf-unwind.c
@@ -33,7 +33,7 @@ static int sample_ustack(struct perf_sample *sample,
 		return -1;
 	}
 
-	stack_size = map->end - sp;
+	stack_size = map__end(map) - sp;
 	stack_size = stack_size > STACK_SIZE ? STACK_SIZE : stack_size;
 
 	memcpy(buf, (void *) sp, stack_size);
diff --git a/tools/perf/arch/powerpc/util/skip-callchain-idx.c b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
index fe0e4530673c6661..b7223feec770dc33 100644
--- a/tools/perf/arch/powerpc/util/skip-callchain-idx.c
+++ b/tools/perf/arch/powerpc/util/skip-callchain-idx.c
@@ -262,7 +262,7 @@ int arch_skip_callchain_idx(struct thread *thread, struct ip_callchain *chain)
 		return skip_slot;
 	}
 
-	rc = check_return_addr(dso, al.map->start, ip);
+	rc = check_return_addr(dso, map__start(al.map), ip);
 
 	pr_debug("[DSO %s, sym %s, ip 0x%" PRIx64 "] rc %d\n",
 				dso->long_name, al.sym->name, ip, rc);
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 7852b97da10aa336..1cc6f338728f5499 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -903,7 +903,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
 		}
 
 		map->start = event->ksymbol.addr;
-		map->end = map__start(map) + event->ksymbol.len;
+		map__end(map) = map__start(map) + event->ksymbol.len;
 		err = maps__insert(machine__kernel_maps(machine), map);
 		map__put(map);
 		if (err)
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 1fd57db7222678ad..21010a2b8e16cc2e 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -339,7 +339,7 @@ int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
 			map__put(before);
 		}
 
-		if (map->end < map__end(pos->map)) {
+		if (map__end(map) < map__end(pos->map)) {
 			struct map *after = map__clone(pos->map);
 
 			if (after == NULL) {
diff --git a/tools/perf/util/unwind-libunwind-local.c b/tools/perf/util/unwind-libunwind-local.c
index 108f7b1697a73465..1c13f43e7d22c84c 100644
--- a/tools/perf/util/unwind-libunwind-local.c
+++ b/tools/perf/util/unwind-libunwind-local.c
@@ -327,9 +327,10 @@ static int read_unwind_spec_eh_frame(struct dso *dso, struct unwind_info *ui,
 
 	maps__for_each_entry(ui->thread->maps, map_node) {
 		struct map *map = map_node->map;
+		u64 start = map__start(map);
 
-		if (map__dso(map) == dso && map->start < base_addr)
-			base_addr = map->start;
+		if (map__dso(map) == dso && start < base_addr)
+			base_addr = start;
 	}
 	base_addr -= dso->data.elf_base_addr;
 	/* Address of .eh_frame_hdr */
@@ -443,8 +444,8 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 	if (!read_unwind_spec_eh_frame(dso, ui, &table_data, &segbase, &fde_count)) {
 		memset(&di, 0, sizeof(di));
 		di.format   = UNW_INFO_FORMAT_REMOTE_TABLE;
-		di.start_ip = map->start;
-		di.end_ip   = map->end;
+		di.start_ip = map__start(map);
+		di.end_ip   = map__end(map);
 		di.u.rti.segbase    = segbase;
 		di.u.rti.table_data = table_data;
 		di.u.rti.table_len  = fde_count * sizeof(struct table_entry)
@@ -459,7 +460,8 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 	    !read_unwind_spec_debug_frame(dso, ui->machine, &segbase)) {
 		int fd = dso__data_get_fd(dso, ui->machine);
 		int is_exec = elf_is_exec(fd, dso->name);
-		unw_word_t base = is_exec ? 0 : map->start;
+		u64 start = map__start(map);
+		unw_word_t base = is_exec ? 0 : start;
 		const char *symfile;
 
 		if (fd >= 0)
@@ -468,8 +470,7 @@ find_proc_info(unw_addr_space_t as, unw_word_t ip, unw_proc_info_t *pi,
 		symfile = dso->symsrc_filename ?: dso->name;
 
 		memset(&di, 0, sizeof(di));
-		if (dwarf_find_debug_frame(0, &di, ip, base, symfile,
-					   map->start, map->end))
+		if (dwarf_find_debug_frame(0, &di, ip, base, symfile, start, map__end(map)))
 			return dwarf_search_unwind_table(as, ip, &di, pi,
 							 need_unwind_info, arg);
 	}

^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 19:53         ` Arnaldo Carvalho de Melo
@ 2023-04-04 19:54           ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-04 19:54 UTC (permalink / raw)
  To: Ian Rogers
  Cc: Adrian Hunter, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Em Tue, Apr 04, 2023 at 04:53:10PM -0300, Arnaldo Carvalho de Melo escreveu:
> Applied to:
> 
> perf map: Add accessor for start and end

> +++ b/tools/perf/util/machine.c
> @@ -903,7 +903,7 @@ static int machine__process_ksymbol_register(struct machine *machine,
>  		}
>  
>  		map->start = event->ksymbol.addr;
> -		map->end = map__start(map) + event->ksymbol.len;
> +		map__end(map) = map__start(map) + event->ksymbol.len;

Ditch this one, duh.

>  		err = maps__insert(machine__kernel_maps(machine), map);
>  		map__put(map);
>  		if (err)

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-04 18:54       ` Arnaldo Carvalho de Melo
@ 2023-04-05  8:47         ` Adrian Hunter
  2023-04-05 13:20           ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 33+ messages in thread
From: Adrian Hunter @ 2023-04-05  8:47 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On 4/04/23 21:54, Arnaldo Carvalho de Melo wrote:
> Em Tue, Apr 04, 2023 at 03:41:38PM -0300, Arnaldo Carvalho de Melo escreveu:
>> Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
>>> On 4/04/23 18:58, Ian Rogers wrote:
>>>> Ping. It would be nice to have this landed or at least the first 10
>>>> patches that refactor the map API and are the bulk of the
>>>> lines-of-code changed. Having those landed would make it easier to
>>>> rebase in the future, but I also think the whole series is ready to
>>>> go.
>>>
>>> I was wondering if the handling of dynamic data like struct map makes
>>> any sense at present.  Perhaps someone can reassure me.
>>>
>>> A struct map can be updated when an MMAP event is processed.  So it
>>
>> Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?
>>
>> So:
>>
>> 	perf_event__process_mmap()
>> 	  machine__process_mmap2_event()
>> 	    map__new() + thread__insert_map(thread, map)
>> 	    	maps__fixup_overlappings()
>> 			maps__insert(thread->maps, map);
>>
>> Ok, from this point on new samples on ] map->start .. map->end ] will
>> grab a refcount to this new map in its hist_entry, right?
>>
>> When we want to sort by dso we will look at hist_entry->map->dso, etc.
> 
> And in 'perf top' we go decaying hist entries, when we delete the
> hist_entry, drop the reference count to things it holds, that will then
> be finally deleted when no more hist_entries point to it.
> 
>>> seems like anything racing with event processing is already broken, and
>>> reference counting / locking cannot help - unless there is also
>>> copy-on-write (which there isn't at present)?

So I checked, and struct map *is* copy-on-write in
maps__fixup_overlappings(), so that should not be a problem.
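
(Roughly the pattern of the maps__fixup_overlappings() hunk quoted
earlier in this thread; a simplified sketch, the helper name is made up
and the details are approximate:)

	#include <errno.h>

	/* The overlapped map is not mutated in place: a clone is trimmed and
	 * inserted, so existing reference holders keep seeing pos_map unchanged. */
	static int maps__keep_overlap_tail(struct maps *maps, struct map *new, struct map *pos_map)
	{
		if (map__end(new) < map__end(pos_map)) {
			struct map *after = map__clone(pos_map);

			if (after == NULL)
				return -ENOMEM;

			after->start = map__end(new);	/* keep only the non-overlapping tail */
			maps__insert(maps, after);
			map__put(after);		/* the tree now holds its own reference */
		}
		return 0;
	}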

>>
>>> For struct maps, referencing it while simultaneously processing
>>> events seems to make even less sense?
>>
>> Can you elaborate some more?

Only that the maps are not necessarily stable, e.g. the map that you
need has been replaced in the meantime.

But upon investigation, the only user at the moment is
maps__find_ams().  If we kept the removed maps (we used to),
it might be possible to make maps__find_ams() work correctly
in any case.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-05  8:47         ` Adrian Hunter
@ 2023-04-05 13:20           ` Arnaldo Carvalho de Melo
  2023-04-05 16:25             ` Adrian Hunter
  0 siblings, 1 reply; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-05 13:20 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Em Wed, Apr 05, 2023 at 11:47:26AM +0300, Adrian Hunter escreveu:
> On 4/04/23 21:54, Arnaldo Carvalho de Melo wrote:
> > Em Tue, Apr 04, 2023 at 03:41:38PM -0300, Arnaldo Carvalho de Melo escreveu:
> >> Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
> >>> On 4/04/23 18:58, Ian Rogers wrote:
> >>>> Ping. It would be nice to have this landed or at least the first 10
> >>>> patches that refactor the map API and are the bulk of the
> >>>> lines-of-code changed. Having those landed would make it easier to
> >>>> rebase in the future, but I also think the whole series is ready to
> >>>> go.
> >>>
> >>> I was wondering if the handling of dynamic data like struct map makes
> >>> any sense at present.  Perhaps someone can reassure me.
> >>>
> >>> A struct map can be updated when an MMAP event is processed.  So it
> >>
> >> Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?
> >>
> >> So:
> >>
> >> 	perf_event__process_mmap()
> >> 	  machine__process_mmap2_event()
> >> 	    map__new() + thread__insert_map(thread, map)
> >> 	    	maps__fixup_overlappings()
> >> 			maps__insert(thread->maps, map);
> >>
> >> Ok, from this point on new samples on ] map->start .. map->end ] will
> >> grab a refcount to this new map in its hist_entry, right?
> >>
> >> When we want to sort by dso we will look at hist_entry->map->dso, etc.
> > 
> > And in 'perf top' we go decaying hist entries, when we delete the
> > hist_entry, drop the reference count to things it holds, that will then
> > be finally deleted when no more hist_entries point to it.
> > 
> >>> seems like anything racing with event processing is already broken, and
> >>> reference counting / locking cannot help - unless there is also
> >>> copy-on-write (which there isn't at present)?
 
> So I checked, and struct map *is* copy-on-write in
> maps__fixup_overlappings(), so that should not be a problem.
 
> >>> For struct maps, referencing it while simultaneously processing
> >>> events seems to make even less sense?

> >> Can you elaborate some more?
 
> Only that the maps are not necessarily stable e.g. the map that you
> need has been replaced in the meantime.

Well, it may be sliced into several pieces or shrunk by new ones
overlapping it, but if it completely disappears, say a new map starts
before the one disappearing and ends after it, then it stays alive via
its reference count if there are hist_entries (or other data structures)
pointing to it, right?
 
> But upon investigation, the only user at the moment is
> maps__find_ams().  If we kept the removed maps (we used to),
> it might be possible to make maps__find_ams() work correctly
> in any case.

Humm, I think I see what you mean: maps__find_ams() is called when we
are annotating a symbol, not when we're processing a sample, so it may
be the case that, at the time of annotation, the executable being
found (it's parsing the target IP of a 'call' assembly instruction) was
replaced, is that the case?

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-05 13:20           ` Arnaldo Carvalho de Melo
@ 2023-04-05 16:25             ` Adrian Hunter
  2023-04-06 12:51               ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 33+ messages in thread
From: Adrian Hunter @ 2023-04-05 16:25 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

On 5/04/23 16:20, Arnaldo Carvalho de Melo wrote:
> Em Wed, Apr 05, 2023 at 11:47:26AM +0300, Adrian Hunter escreveu:
>> On 4/04/23 21:54, Arnaldo Carvalho de Melo wrote:
>>> Em Tue, Apr 04, 2023 at 03:41:38PM -0300, Arnaldo Carvalho de Melo escreveu:
>>>> Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
>>>>> On 4/04/23 18:58, Ian Rogers wrote:
>>>>>> Ping. It would be nice to have this landed or at least the first 10
>>>>>> patches that refactor the map API and are the bulk of the
>>>>>> lines-of-code changed. Having those landed would make it easier to
>>>>>> rebase in the future, but I also think the whole series is ready to
>>>>>> go.
>>>>>
>>>>> I was wondering if the handling of dynamic data like struct map makes
>>>>> any sense at present.  Perhaps someone can reassure me.
>>>>>
>>>>> A struct map can be updated when an MMAP event is processed.  So it
>>>>
>>>> Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?
>>>>
>>>> So:
>>>>
>>>> 	perf_event__process_mmap()
>>>> 	  machine__process_mmap2_event()
>>>> 	    map__new() + thread__insert_map(thread, map)
>>>> 	    	maps__fixup_overlappings()
>>>> 			maps__insert(thread->maps, map);
>>>>
>>>> Ok, from this point on new samples on ] map->start .. map->end ] will
>>>> grab a refcount to this new map in its hist_entry, right?
>>>>
>>>> When we want to sort by dso we will look at hist_entry->map->dso, etc.
>>>
>>> And in 'perf top' we go decaying hist entries, when we delete the
>>> hist_entry, drop the reference count to things it holds, that will then
>>> be finally deleted when no more hist_entries point to it.
>>>
>>>>> seems like anything racing with event processing is already broken, and
>>>>> reference counting / locking cannot help - unless there is also
>>>>> copy-on-write (which there isn't at present)?
>  
>> So I checked, and struct map *is* copy-on-write in
>> maps__fixup_overlappings(), so that should not be a problem.
>  
>>>>> For struct maps, referencing it while simultaneously processing
>>>>> events seems to make even less sense?
> 
>>>> Can you elaborate some more?
>  
>> Only that the maps are not necessarily stable e.g. the map that you
>> need has been replaced in the meantime.
> 
> Well, it may be sliced in several or shrunk by new ones overlapping it,
> but it if completely disappears, say a new map starts before the one
> disappearing and ends after it, then it remains with reference counts if
> there are hist_entries (or other data structure) pointing to them,
> right?
>  
>> But upon investigation, the only user at the moment is
>> maps__find_ams().  If we kept the removed maps (we used to),
>> it might be possible to make maps__find_ams() work correctly
>> in any case.
> 
> Humm, I think I see what you mean, maps__find_ams() is called when we
> are annotating a symbol, not when we're processing a sample, so it may
> be the case that at the time of annotation the executable that is being
> found (its parsing the target IP of a 'call' assembly instruction) was
> replaced, is that the case?

Yes, that is the possibility


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH v5 00/17] Reference count checker and related fixes
  2023-04-05 16:25             ` Adrian Hunter
@ 2023-04-06 12:51               ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 33+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-04-06 12:51 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Ian Rogers, Peter Zijlstra, Ingo Molnar, Mark Rutland,
	Alexander Shishkin, Jiri Olsa, Namhyung Kim, Thomas Gleixner,
	Darren Hart, Davidlohr Bueso, James Clark, John Garry,
	Riccardo Mancini, Yury Norov, Andy Shevchenko, Andrew Morton,
	Leo Yan, Andi Kleen, Thomas Richter, Kan Liang,
	Madhavan Srinivasan, Shunsuke Nakamura, Song Liu,
	Masami Hiramatsu, Steven Rostedt, Miaoqian Lin, Stephen Brennan,
	Kajol Jain, Alexey Bayduraev, German Gomez, linux-perf-users,
	linux-kernel, Eric Dumazet, Dmitry Vyukov, Hao Luo,
	Stephane Eranian

Em Wed, Apr 05, 2023 at 07:25:27PM +0300, Adrian Hunter escreveu:
> On 5/04/23 16:20, Arnaldo Carvalho de Melo wrote:
> > Em Wed, Apr 05, 2023 at 11:47:26AM +0300, Adrian Hunter escreveu:
> >> On 4/04/23 21:54, Arnaldo Carvalho de Melo wrote:
> >>> Em Tue, Apr 04, 2023 at 03:41:38PM -0300, Arnaldo Carvalho de Melo escreveu:
> >>>> Em Tue, Apr 04, 2023 at 08:25:41PM +0300, Adrian Hunter escreveu:
> >>>>> I was wondering if the handling of dynamic data like struct map makes
> >>>>> any sense at present.  Perhaps someone can reassure me.

> >>>>> A struct map can be updated when an MMAP event is processed.  So it

> >>>> Yes, it can, and the update is made via a new PERF_RECORD_MMAP, right?

> >>>> So:

> >>>> 	perf_event__process_mmap()
> >>>> 	  machine__process_mmap2_event()
> >>>> 	    map__new() + thread__insert_map(thread, map)
> >>>> 	    	maps__fixup_overlappings()
> >>>> 			maps__insert(thread->maps, map);

> >>>> Ok, from this point on new samples on ] map->start .. map->end ] will
> >>>> grab a refcount to this new map in its hist_entry, right?

> >>>> When we want to sort by dso we will look at hist_entry->map->dso, etc.

> >>> And in 'perf top' we go decaying hist entries, when we delete the
> >>> hist_entry, drop the reference count to things it holds, that will then
> >>> be finally deleted when no more hist_entries point to it.

> >>>>> seems like anything racing with event processing is already broken, and
> >>>>> reference counting / locking cannot help - unless there is also
> >>>>> copy-on-write (which there isn't at present)?
> >  
> >> So I checked, and struct map *is* copy-on-write in
> >> maps__fixup_overlappings(), so that should not be a problem.
> >  
> >>>>> For struct maps, referencing it while simultaneously processing
> >>>>> events seems to make even less sense?
> > 
> >>>> Can you elaborate some more?
> >  
> >> Only that the maps are not necessarily stable e.g. the map that you
> >> need has been replaced in the meantime.
> > 
> > Well, it may be sliced in several or shrunk by new ones overlapping it,
> > but it if completely disappears, say a new map starts before the one
> > disappearing and ends after it, then it remains with reference counts if
> > there are hist_entries (or other data structure) pointing to them,
> > right?

> >> But upon investigation, the only user at the moment is
> >> maps__find_ams().  If we kept the removed maps (we used to),
> >> it might be possible to make maps__find_ams() work correctly
> >> in any case.

> > Humm, I think I see what you mean, maps__find_ams() is called when we
> > are annotating a symbol, not when we're processing a sample, so it may
> > be the case that at the time of annotation the executable that is being
> > found (its parsing the target IP of a 'call' assembly instruction) was
> > replaced, is that the case?
 
> Yes, that is the possibility

Yeah, this one gets a bit more difficult to support: we would have to
keep a sub-bucket for each annotated instruction with the
timestamp-ordered list of maps that were at that location (but only for
places that had samples, not for all of them), and then add some visual
cue to that annotation line to mean it was patched and show the
original, then the (possibly) various patches, and say that samples up
to N units of time were for some original DSO, then for another
(overlapping executable map), then for some patching (that we would
catch with PERF_RECORD_TEXT_POKE for the kernel, right?), etc.

Seems doable, and for most cases it would be similar to what we have
right now, as self-modifying code isn't so pervasive (famous last words
;-)).
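
(One possible shape for that, purely a made-up sketch assuming perf's
list_head/u64 types:)

	/* For each annotated address that had samples, keep a timestamp-ordered
	 * list of the maps that covered it, so the annotation UI could show which
	 * code was there when the samples were taken. */
	struct addr_map_epoch {
		struct list_head	node;
		u64			first_sample_time;
		struct map		*map;		/* holds a reference */
	};

	struct annotated_addr_history {
		u64			addr;
		struct list_head	epochs;		/* of struct addr_map_epoch, oldest first */
	};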

- Arnaldo

^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2023-04-06 12:52 UTC | newest]

Thread overview: 33+ messages
2023-03-20 21:22 [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
2023-03-20 21:22 ` [PATCH v5 01/17] perf map: Move map list node into symbol Ian Rogers
2023-03-20 21:22 ` [PATCH v5 02/17] perf maps: Remove rb_node from struct map Ian Rogers
2023-03-20 21:22 ` [PATCH v5 03/17] perf maps: Add functions to access maps Ian Rogers
2023-03-20 21:22 ` [PATCH v5 04/17] perf map: Add accessor for dso Ian Rogers
2023-03-20 21:22 ` [PATCH v5 05/17] perf map: Add accessor for start and end Ian Rogers
2023-03-20 21:22 ` [PATCH v5 06/17] perf map: Rename map_ip and unmap_ip Ian Rogers
2023-03-20 21:22 ` [PATCH v5 07/17] perf map: Add helper for " Ian Rogers
2023-03-20 21:22 ` [PATCH v5 08/17] perf map: Add accessors for prot, priv and flags Ian Rogers
2023-03-20 21:22 ` [PATCH v5 09/17] perf map: Add accessors for pgoff and reloc Ian Rogers
2023-03-20 21:22 ` [PATCH v5 10/17] perf test: Add extra diagnostics to maps test Ian Rogers
2023-03-20 21:22 ` [PATCH v5 11/17] perf maps: Modify maps_by_name to hold a reference to a map Ian Rogers
2023-03-20 21:22 ` [PATCH v5 12/17] perf map: Changes to reference counting Ian Rogers
2023-03-20 21:22 ` [PATCH v5 13/17] libperf: Add reference count checking macros Ian Rogers
2023-03-20 21:22 ` [PATCH v5 14/17] perf cpumap: Add reference count checking Ian Rogers
2023-03-20 21:22 ` [PATCH v5 15/17] perf namespaces: " Ian Rogers
2023-03-20 21:22 ` [PATCH v5 16/17] perf maps: " Ian Rogers
2023-03-20 21:22 ` [PATCH v5 17/17] perf map: " Ian Rogers
2023-04-04 15:58 ` [PATCH v5 00/17] Reference count checker and related fixes Ian Rogers
2023-04-04 17:02   ` Arnaldo Carvalho de Melo
2023-04-04 17:07     ` Arnaldo Carvalho de Melo
2023-04-04 17:25   ` Adrian Hunter
2023-04-04 17:35     ` Ian Rogers
2023-04-04 18:37       ` Adrian Hunter
2023-04-04 19:22       ` Arnaldo Carvalho de Melo
2023-04-04 19:53         ` Arnaldo Carvalho de Melo
2023-04-04 19:54           ` Arnaldo Carvalho de Melo
2023-04-04 18:41     ` Arnaldo Carvalho de Melo
2023-04-04 18:54       ` Arnaldo Carvalho de Melo
2023-04-05  8:47         ` Adrian Hunter
2023-04-05 13:20           ` Arnaldo Carvalho de Melo
2023-04-05 16:25             ` Adrian Hunter
2023-04-06 12:51               ` Arnaldo Carvalho de Melo
