* [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines
@ 2018-05-09 11:43 Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod() Adrian Hunter
                   ` (18 more replies)
  0 siblings, 19 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Hi

Perf tools do not know about x86_64 KPTI entry trampolines - see example
below.  These patches add a workaround, namely "perf tools: Workaround
missing maps for x86_64 KPTI entry trampolines", which has the limitation
that it hard codes the addresses.  Note that the workaround will work for
old kernels and old perf.data files, but not for future kernels if the
trampoline addresses are ever changed.
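
For reference, the hardcoded addresses follow from the fixed layout of the x86_64 cpu_entry_area. A minimal userspace sketch of the per-CPU arithmetic (the constants are copied from the workaround patch; they are an assumption about the current layout, not ABI):

```c
#include <stdint.h>

/* Constants mirroring the hardcoded values in the workaround patch;
 * they reflect the x86_64 cpu_entry_area layout at the time of writing
 * and may change in future kernels. */
#define CPU_ENTRY_AREA_PER_CPU  0xfffffe0000000000ULL
#define CPU_ENTRY_AREA_SIZE     0x2c000ULL
#define ENTRY_TRAMPOLINE_OFFSET 0x6000ULL

/* Virtual address of a given CPU's entry trampoline page. */
static uint64_t trampoline_va(int cpu)
{
	return CPU_ENTRY_AREA_PER_CPU +
	       (uint64_t)cpu * CPU_ENTRY_AREA_SIZE +
	       ENTRY_TRAMPOLINE_OFFSET;
}
```

By this arithmetic, the unknown 0xfffffe00000e2xxx samples in the example report below fall in CPU 5's trampoline page (0xfffffe0000000000 + 5 * 0x2c000 + 0x6000 = 0xfffffe00000e2000).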

At present, perf tools use /proc/kallsyms to construct a memory map for
the kernel.  Recording such a map in the perf.data file is necessary to
deal with kernel relocation and KASLR.

While it is reasonable on its own terms to add symbols for the trampolines
to /proc/kallsyms, the motivation here is to have perf tools use them to
create memory maps in the same fashion as is done for the kernel text.

So the first 2 patches add symbols to /proc/kallsyms for the trampolines:

      kallsyms: Simplify update_iter_mod()
      kallsyms, x86: Export addresses of syscall trampolines

perf tools have the ability to use /proc/kcore (in conjunction with
/proc/kallsyms) as the kernel image. So the next 2 patches add program
headers for the trampolines to the kcore ELF:

      x86: Add entry trampolines to kcore
      x86: kcore: Give entry trampolines all the same offset in kcore

It is worth noting that, with the kcore changes alone, perf tools require
no changes to recognise the trampolines when using /proc/kcore.

Similarly, if perf tools are used with a matching kallsyms only (by denying
access to /proc/kcore or a vmlinux image), then the kallsyms patches are
sufficient to recognise the trampolines with no changes needed to the
tools.

However, in the general case, when using vmlinux or dealing with
relocations, perf tools need memory maps for the trampolines.  Because the
kernel text map is constructed as a special case, using the same approach
for the trampolines means treating them as a special case also, which
requires a number of changes to perf tools; the remaining patches deal
with that.


Example: make a program that does lots of small syscalls e.g.

	$ cat uname_x_n.c

	#include <sys/utsname.h>
	#include <stdlib.h>

	int main(int argc, char *argv[])
	{
		long n = argc > 1 ? strtol(argv[1], NULL, 0) : 0;
		struct utsname u;

		while (n--)
			uname(&u);

		return 0;
	}

and then:

	sudo perf record uname_x_n 100000
	sudo perf report --stdio

Before the changes, there are unknown symbols:

 # Overhead  Command    Shared Object     Symbol
 # ........  .........  ................  ..................................
 #
    41.91%  uname_x_n  [kernel.vmlinux]  [k] syscall_return_via_sysret
    19.22%  uname_x_n  [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
    18.70%  uname_x_n  [unknown]         [k] 0xfffffe00000e201b
     4.09%  uname_x_n  libc-2.19.so      [.] __GI___uname
     3.08%  uname_x_n  [kernel.vmlinux]  [k] do_syscall_64
     3.02%  uname_x_n  [unknown]         [k] 0xfffffe00000e2025
     2.32%  uname_x_n  [kernel.vmlinux]  [k] down_read
     2.27%  uname_x_n  ld-2.19.so        [.] _dl_start
     1.97%  uname_x_n  [unknown]         [k] 0xfffffe00000e201e
     1.25%  uname_x_n  [kernel.vmlinux]  [k] up_read
     1.02%  uname_x_n  [unknown]         [k] 0xfffffe00000e200c
     0.99%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64
     0.16%  uname_x_n  [kernel.vmlinux]  [k] flush_signal_handlers
     0.01%  perf       [kernel.vmlinux]  [k] native_sched_clock
     0.00%  perf       [kernel.vmlinux]  [k] native_write_msr

After the changes, there are no unknown symbols:

 # Overhead  Command    Shared Object     Symbol
 # ........  .........  ................  ..................................
 #
    41.91%  uname_x_n  [kernel.vmlinux]  [k] syscall_return_via_sysret
    24.70%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64_trampoline
    19.22%  uname_x_n  [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
     4.09%  uname_x_n  libc-2.19.so      [.] __GI___uname
     3.08%  uname_x_n  [kernel.vmlinux]  [k] do_syscall_64
     2.32%  uname_x_n  [kernel.vmlinux]  [k] down_read
     2.27%  uname_x_n  ld-2.19.so        [.] _dl_start
     1.25%  uname_x_n  [kernel.vmlinux]  [k] up_read
     0.99%  uname_x_n  [kernel.vmlinux]  [k] entry_SYSCALL_64
     0.16%  uname_x_n  [kernel.vmlinux]  [k] flush_signal_handlers
     0.01%  perf       [kernel.vmlinux]  [k] native_sched_clock
     0.00%  perf       [kernel.vmlinux]  [k] native_write_msr


Adrian Hunter (17):
      kallsyms: Simplify update_iter_mod()
      x86: kcore: Give entry trampolines all the same offset in kcore
      perf tools: Use the _stest symbol to identify the kernel map when loading kcore
      perf tools: Fix kernel_start for KPTI on x86_64
      perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
      perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols
      perf tools: Allow for special kernel maps
      perf tools: Create maps for x86_64 KPTI entry trampolines
      perf tools: Synthesize and process mmap events for x86_64 KPTI entry trampolines
      perf buildid-cache: kcore_copy: Keep phdr data in a list
      perf buildid-cache: kcore_copy: Keep a count of phdrs
      perf buildid-cache: kcore_copy: Calculate offset from phnum
      perf buildid-cache: kcore_copy: Layout sections
      perf buildid-cache: kcore_copy: Iterate phdrs
      perf buildid-cache: kcore_copy: Get rid of kernel_map
      perf buildid-cache: kcore_copy: Copy x86_64 entry trampoline sections
      perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text

Alexander Shishkin (2):
      kallsyms, x86: Export addresses of syscall trampolines
      x86: Add entry trampolines to kcore

 arch/x86/mm/cpu_entry_area.c |  28 +++++
 fs/proc/kcore.c              |   7 +-
 include/linux/kcore.h        |  13 ++
 kernel/kallsyms.c            |  46 ++++---
 tools/perf/util/event.c      |  92 +++++++++++++-
 tools/perf/util/machine.c    | 288 ++++++++++++++++++++++++++++++++++++++++++-
 tools/perf/util/machine.h    |   6 +
 tools/perf/util/map.c        |  22 +++-
 tools/perf/util/map.h        |  15 ++-
 tools/perf/util/symbol-elf.c | 209 ++++++++++++++++++++++++++-----
 tools/perf/util/symbol.c     |  65 +++++++---
 11 files changed, 709 insertions(+), 82 deletions(-)


Regards
Adrian

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-10 13:01   ` Jiri Olsa
  2018-05-09 11:43 ` [PATCH RFC 02/19] kallsyms, x86: Export addresses of syscall trampolines Adrian Hunter
                   ` (17 subsequent siblings)
  18 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Simplify logic in update_iter_mod().

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 kernel/kallsyms.c | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index a23e21ada81b..eda4b0222dab 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -510,23 +510,15 @@ static int update_iter_mod(struct kallsym_iter *iter, loff_t pos)
 {
 	iter->pos = pos;
 
-	if (iter->pos_ftrace_mod_end > 0 &&
-	    iter->pos_ftrace_mod_end < iter->pos)
-		return get_ksymbol_bpf(iter);
-
-	if (iter->pos_mod_end > 0 &&
-	    iter->pos_mod_end < iter->pos) {
-		if (!get_ksymbol_ftrace_mod(iter))
-			return get_ksymbol_bpf(iter);
+	if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
+	    get_ksymbol_mod(iter))
 		return 1;
-	}
 
-	if (!get_ksymbol_mod(iter)) {
-		if (!get_ksymbol_ftrace_mod(iter))
-			return get_ksymbol_bpf(iter);
-	}
+	if ((!iter->pos_ftrace_mod_end || iter->pos_ftrace_mod_end > pos) &&
+	    get_ksymbol_ftrace_mod(iter))
+		return 1;
 
-	return 1;
+	return get_ksymbol_bpf(iter);
 }
 
 /* Returns false if pos at or past end of file. */
-- 
1.9.1
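
The control flow of the rewritten function can be sketched in userspace. The getters and range ends below are purely illustrative stand-ins for module_get_kallsym() and friends, not kernel code:

```c
/* Toy model of the restructured update_iter_mod(): each symbol source is
 * tried in order, and a source is skipped once the iterator position has
 * passed the recorded end of its range. */
struct iter {
	long pos;
	long pos_mod_end;	/* 0 = end not yet reached */
	long pos_ftrace_mod_end;
};

static int get_mod(struct iter *it)
{
	return it->pos < it->pos_mod_end;	/* 1 while in range */
}

static int get_ftrace(struct iter *it)
{
	return it->pos < it->pos_ftrace_mod_end;
}

/* Returns which source served the position: 1 = module, 2 = ftrace,
 * 3 = bpf (the final fallback, as in the patch). */
static int update_iter_mod(struct iter *it, long pos)
{
	it->pos = pos;

	if ((!it->pos_mod_end || it->pos_mod_end > pos) && get_mod(it))
		return 1;

	if ((!it->pos_ftrace_mod_end || it->pos_ftrace_mod_end > pos) &&
	    get_ftrace(it))
		return 2;

	return 3;
}
```

With the module range ending at 10 and the ftrace range at 20, positions 5, 15 and 25 are served by the module, ftrace and bpf sources respectively, matching the fallthrough order of the original code.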


* [PATCH RFC 02/19] kallsyms, x86: Export addresses of syscall trampolines
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod() Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 03/19] x86: Add entry trampolines to kcore Adrian Hunter
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

From: Alexander Shishkin <alexander.shishkin@linux.intel.com>

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
---
 arch/x86/mm/cpu_entry_area.c | 23 +++++++++++++++++++++++
 kernel/kallsyms.c            | 28 +++++++++++++++++++++++++++-
 2 files changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index b45f5aaefd74..d1da5cf4b2de 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -2,6 +2,7 @@
 
 #include <linux/spinlock.h>
 #include <linux/percpu.h>
+#include <linux/kallsyms.h>
 
 #include <asm/cpu_entry_area.h>
 #include <asm/pgtable.h>
@@ -150,6 +151,28 @@ static void __init setup_cpu_entry_area(int cpu)
 	percpu_setup_debug_store(cpu);
 }
 
+#ifdef CONFIG_X86_64
+int arch_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
+		     char *name)
+{
+	unsigned int cpu, ncpu;
+
+	if (symnum >= num_possible_cpus())
+		return -EINVAL;
+
+	for (cpu = cpumask_first(cpu_possible_mask), ncpu = 0;
+	     cpu < num_possible_cpus() && ncpu < symnum;
+	     cpu = cpumask_next(cpu, cpu_possible_mask), ncpu++)
+		;
+
+	*value = (unsigned long)&get_cpu_entry_area(cpu)->entry_trampoline;
+	*type = 't';
+	strlcpy(name, "__entry_SYSCALL_64_trampoline", KSYM_NAME_LEN);
+
+	return 0;
+}
+#endif
+
 static __init void setup_cpu_entry_area_ptes(void)
 {
 #ifdef CONFIG_X86_32
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index eda4b0222dab..ebe6befac47e 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -432,6 +432,7 @@ int sprint_backtrace(char *buffer, unsigned long address)
 /* To avoid using get_symbol_offset for every symbol, we carry prefix along. */
 struct kallsym_iter {
 	loff_t pos;
+	loff_t pos_arch_end;
 	loff_t pos_mod_end;
 	loff_t pos_ftrace_mod_end;
 	unsigned long value;
@@ -443,9 +444,29 @@ struct kallsym_iter {
 	int show_value;
 };
 
+int __weak arch_get_kallsym(unsigned int symnum, unsigned long *value,
+			    char *type, char *name)
+{
+	return -EINVAL;
+}
+
+static int get_ksymbol_arch(struct kallsym_iter *iter)
+{
+	int ret = arch_get_kallsym(iter->pos - kallsyms_num_syms,
+				   &iter->value, &iter->type,
+				   iter->name);
+
+	if (ret < 0) {
+		iter->pos_arch_end = iter->pos;
+		return 0;
+	}
+
+	return 1;
+}
+
 static int get_ksymbol_mod(struct kallsym_iter *iter)
 {
-	int ret = module_get_kallsym(iter->pos - kallsyms_num_syms,
+	int ret = module_get_kallsym(iter->pos - iter->pos_arch_end,
 				     &iter->value, &iter->type,
 				     iter->name, iter->module_name,
 				     &iter->exported);
@@ -501,6 +522,7 @@ static void reset_iter(struct kallsym_iter *iter, loff_t new_pos)
 	iter->nameoff = get_symbol_offset(new_pos);
 	iter->pos = new_pos;
 	if (new_pos == 0) {
+		iter->pos_arch_end = 0;
 		iter->pos_mod_end = 0;
 		iter->pos_ftrace_mod_end = 0;
 	}
@@ -510,6 +532,10 @@ static int update_iter_mod(struct kallsym_iter *iter, loff_t pos)
 {
 	iter->pos = pos;
 
+	if ((!iter->pos_arch_end || iter->pos_arch_end > pos) &&
+	    get_ksymbol_arch(iter))
+		return 1;
+
 	if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
 	    get_ksymbol_mod(iter))
 		return 1;
-- 
1.9.1


* [PATCH RFC 03/19] x86: Add entry trampolines to kcore
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod() Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 02/19] kallsyms, x86: Export addresses of syscall trampolines Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 04/19] x86: kcore: Give entry trampolines all the same offset in kcore Adrian Hunter
                   ` (15 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

From: Alexander Shishkin <alexander.shishkin@linux.intel.com>

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
---
 arch/x86/mm/cpu_entry_area.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index d1da5cf4b2de..fb1fbc8538fa 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -3,6 +3,7 @@
 #include <linux/spinlock.h>
 #include <linux/percpu.h>
 #include <linux/kallsyms.h>
+#include <linux/kcore.h>
 
 #include <asm/cpu_entry_area.h>
 #include <asm/pgtable.h>
@@ -14,6 +15,7 @@
 #ifdef CONFIG_X86_64
 static DEFINE_PER_CPU_PAGE_ALIGNED(char, exception_stacks
 	[(N_EXCEPTION_STACKS - 1) * EXCEPTION_STKSZ + DEBUG_STKSZ]);
+static DEFINE_PER_CPU(struct kcore_list, kcore_entry_trampoline);
 #endif
 
 struct cpu_entry_area *get_cpu_entry_area(int cpu)
@@ -147,6 +149,9 @@ static void __init setup_cpu_entry_area(int cpu)
 
 	cea_set_pte(&get_cpu_entry_area(cpu)->entry_trampoline,
 		     __pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
+	kclist_add(&per_cpu(kcore_entry_trampoline, cpu),
+		   &get_cpu_entry_area(cpu)->entry_trampoline, PAGE_SIZE,
+		   KCORE_TEXT);
 #endif
 	percpu_setup_debug_store(cpu);
 }
-- 
1.9.1


* [PATCH RFC 04/19] x86: kcore: Give entry trampolines all the same offset in kcore
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (2 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 03/19] x86: Add entry trampolines to kcore Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 05/19] perf tools: Use the _stest symbol to identify the kernel map when loading kcore Adrian Hunter
                   ` (14 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Entry trampolines all map to the same page.  Represent that by giving the
corresponding program headers in kcore the same offset.

This has the benefit that, when perf tools uses /proc/kcore as a source for
kernel object code, samples from different CPU trampolines are aggregated
together.  Note that such aggregation is normal for profiling, i.e. people
want to profile the object code, not every different virtual address the
object code might be mapped to (across different processes, for example).
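
The effect on the program headers can be sketched as follows. kc_vaddr_to_offset() here is a stand-in with the one property that matters, that equal addresses give equal offsets; the concrete addresses are hypothetical:

```c
#include <stdint.h>

enum kcore_type { KCORE_TEXT, KCORE_REMAP };

/* Simplified kcore list entry: 'addr' drives the file offset, while
 * 'vaddr' (new in this patch) is what p_vaddr reports for KCORE_REMAP
 * entries. */
struct kcore_entry {
	uint64_t addr;
	uint64_t vaddr;
	enum kcore_type type;
};

/* Stand-in for kc_vaddr_to_offset(); the real macro depends on the
 * architecture's kcore layout. */
static uint64_t kc_vaddr_to_offset(uint64_t addr)
{
	return addr & 0x7fffffffffffULL;
}

static void fill_phdr(const struct kcore_entry *m,
		      uint64_t *p_offset, uint64_t *p_vaddr)
{
	*p_offset = kc_vaddr_to_offset(m->addr);
	*p_vaddr  = m->type == KCORE_REMAP ? m->vaddr : m->addr;
}
```

Two per-CPU entries that share the same underlying trampoline text page get identical p_offset but keep their own p_vaddr, so tools that key on the file offset aggregate the samples.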

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 arch/x86/mm/cpu_entry_area.c |  6 +++---
 fs/proc/kcore.c              |  7 +++++--
 include/linux/kcore.h        | 13 +++++++++++++
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
index fb1fbc8538fa..faea80e9bbd2 100644
--- a/arch/x86/mm/cpu_entry_area.c
+++ b/arch/x86/mm/cpu_entry_area.c
@@ -149,9 +149,9 @@ static void __init setup_cpu_entry_area(int cpu)
 
 	cea_set_pte(&get_cpu_entry_area(cpu)->entry_trampoline,
 		     __pa_symbol(_entry_trampoline), PAGE_KERNEL_RX);
-	kclist_add(&per_cpu(kcore_entry_trampoline, cpu),
-		   &get_cpu_entry_area(cpu)->entry_trampoline, PAGE_SIZE,
-		   KCORE_TEXT);
+	kclist_add_remap(&per_cpu(kcore_entry_trampoline, cpu),
+			 _entry_trampoline,
+			 &get_cpu_entry_area(cpu)->entry_trampoline, PAGE_SIZE);
 #endif
 	percpu_setup_debug_store(cpu);
 }
diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index d1e82761de81..b5e02b0379ca 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -374,8 +374,11 @@ static void elf_kcore_store_hdr(char *bufp, int nphdr, int dataoff)
 		phdr->p_type	= PT_LOAD;
 		phdr->p_flags	= PF_R|PF_W|PF_X;
 		phdr->p_offset	= kc_vaddr_to_offset(m->addr) + dataoff;
-		phdr->p_vaddr	= (size_t)m->addr;
-		if (m->type == KCORE_RAM || m->type == KCORE_TEXT)
+		if (m->type == KCORE_REMAP)
+			phdr->p_vaddr	= (size_t)m->vaddr;
+		else
+			phdr->p_vaddr	= (size_t)m->addr;
+		if (m->type == KCORE_RAM || m->type == KCORE_TEXT || m->type == KCORE_REMAP)
 			phdr->p_paddr	= __pa(m->addr);
 		else
 			phdr->p_paddr	= (elf_addr_t)-1;
diff --git a/include/linux/kcore.h b/include/linux/kcore.h
index 80db19d3a505..3a11ce51e137 100644
--- a/include/linux/kcore.h
+++ b/include/linux/kcore.h
@@ -12,11 +12,13 @@ enum kcore_type {
 	KCORE_VMEMMAP,
 	KCORE_USER,
 	KCORE_OTHER,
+	KCORE_REMAP,
 };
 
 struct kcore_list {
 	struct list_head list;
 	unsigned long addr;
+	unsigned long vaddr;
 	size_t size;
 	int type;
 };
@@ -30,11 +32,22 @@ struct vmcore {
 
 #ifdef CONFIG_PROC_KCORE
 extern void kclist_add(struct kcore_list *, void *, size_t, int type);
+static inline
+void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
+{
+	m->vaddr = (unsigned long)vaddr;
+	kclist_add(m, addr, sz, KCORE_REMAP);
+}
 #else
 static inline
 void kclist_add(struct kcore_list *new, void *addr, size_t size, int type)
 {
 }
+
+static inline
+void kclist_add_remap(struct kcore_list *m, void *addr, void *vaddr, size_t sz)
+{
+}
 #endif
 
 #endif /* _LINUX_KCORE_H */
-- 
1.9.1


* [PATCH RFC 05/19] perf tools: Use the _stest symbol to identify the kernel map when loading kcore
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (3 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 04/19] x86: kcore: Give entry trampolines all the same offset in kcore Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-16 18:04   ` [tip:perf/core] perf tools: Use the "_stest" " tip-bot for Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64 Adrian Hunter
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

The first symbol is not necessarily in the kernel text.  Instead of using
the first symbol, use the _stext symbol to identify the kernel map when
loading kcore.

This allows for the introduction of symbols to identify the x86_64 KPTI
entry trampolines.
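
The new selection rule can be sketched as follows (simplified to an array walk; the real code iterates the md.maps list, and the addresses below are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

struct map { uint64_t start, end; };

/* Pick the map that contains the _stext address, rather than the map
 * containing the dso's first symbol, which, once trampoline symbols
 * exist, may lie outside kernel text. */
static const struct map *find_kernel_map(const struct map *maps, size_t n,
					 uint64_t stext)
{
	for (size_t i = 0; i < n; i++) {
		if (stext >= maps[i].start && stext < maps[i].end)
			return &maps[i];
	}
	return NULL;
}
```

Given a trampoline map and a kernel text map, _stext selects the text map; the first symbol (a trampoline symbol) would have selected the wrong one.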

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index f48dc157c2bd..4a39f4d0a174 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1149,7 +1149,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	bool is_64_bit;
 	int err, fd;
 	char kcore_filename[PATH_MAX];
-	struct symbol *sym;
+	u64 stext;
 
 	if (!kmaps)
 		return -EINVAL;
@@ -1198,13 +1198,13 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		old_map = next;
 	}
 
-	/* Find the kernel map using the first symbol */
-	sym = dso__first_symbol(dso);
-	list_for_each_entry(new_map, &md.maps, node) {
-		if (sym && sym->start >= new_map->start &&
-		    sym->start < new_map->end) {
-			replacement_map = new_map;
-			break;
+	/* Find the kernel map using the '_stext' symbol */
+	if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
+		list_for_each_entry(new_map, &md.maps, node) {
+			if (stext >= new_map->start && stext < new_map->end) {
+				replacement_map = new_map;
+				break;
+			}
 		}
 	}
 
-- 
1.9.1


* [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (4 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 05/19] perf tools: Use the _stest symbol to identify the kernel map when loading kcore Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 17:08   ` Arnaldo Carvalho de Melo
  2018-05-09 11:43 ` [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines Adrian Hunter
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

On x86_64, the KPTI entry trampolines are below the start of kernel text,
but still above 2^63.  So leave kernel_start = 1ULL << 63 for x86_64.
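
A sketch of why the 2^63 cut-off still works (the concrete addresses are illustrative; KASLR moves kernel text around):

```c
#include <stdint.h>

/* The heuristic this patch preserves: treat any address at or above
 * 2^63 as a kernel address. */
static int is_kernel_addr(uint64_t addr, uint64_t kernel_start)
{
	return addr >= kernel_start;
}
```

A trampoline address such as 0xfffffe00000e201b from the cover letter is below a typical _stext, so using map->start as kernel_start would misclassify it as user space, while the 2^63 cut-off classifies it correctly.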

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/machine.c | 16 +++++++++++++++-
 tools/perf/util/machine.h |  2 ++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 72a351613d85..22047ff3cf2a 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -2296,6 +2296,15 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
 	return 0;
 }
 
+/*
+ * Compares the raw arch string. N.B. see instead perf_env__arch() if a
+ * normalized arch is needed.
+ */
+bool machine__is(struct machine *machine, const char *arch)
+{
+	return machine->env && !strcmp(machine->env->arch, arch);
+}
+
 int machine__get_kernel_start(struct machine *machine)
 {
 	struct map *map = machine__kernel_map(machine);
@@ -2312,7 +2321,12 @@ int machine__get_kernel_start(struct machine *machine)
 	machine->kernel_start = 1ULL << 63;
 	if (map) {
 		err = map__load(map);
-		if (!err)
+		/*
+		 * On x86_64, KPTI entry trampolines are less than the
+		 * start of kernel text, but still above 2^63. So leave
+		 * kernel_start = 1ULL << 63 for x86_64.
+		 */
+		if (!err && !machine__is(machine, "x86_64"))
 			machine->kernel_start = map->start;
 	}
 	return err;
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 388fb4741c54..b31d33b5aa2a 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -188,6 +188,8 @@ static inline bool machine__is_host(struct machine *machine)
 	return machine ? machine->pid == HOST_KERNEL_ID : false;
 }
 
+bool machine__is(struct machine *machine, const char *arch);
+
 struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
 struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
 
-- 
1.9.1


* [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (5 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64 Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 17:07   ` Arnaldo Carvalho de Melo
  2018-05-15 10:30   ` Jiri Olsa
  2018-05-09 11:43 ` [PATCH RFC 08/19] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
                   ` (11 subsequent siblings)
  18 siblings, 2 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

On x86_64 the KPTI entry trampolines are not in the kernel map created by
perf tools. That results in the addresses having no symbols and prevents
annotation. It also causes Intel PT to have decoding errors at the
trampoline addresses. Work around that by creating maps for the trampolines.
At present the kernel does not export information revealing where the
trampolines are. Until that happens, the addresses are hardcoded.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/machine.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/perf/util/machine.h |   3 ++
 tools/perf/util/symbol.c  |  12 +++---
 3 files changed, 114 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 22047ff3cf2a..1bf15aa0b099 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -851,6 +851,110 @@ static int machine__get_running_kernel_start(struct machine *machine,
 	return 0;
 }
 
+struct special_kernel_map {
+	u64 start;
+	u64 end;
+	u64 pgoff;
+};
+
+static int machine__create_special_kernel_map(struct machine *machine,
+					      struct dso *kernel,
+					      struct special_kernel_map *sm)
+{
+	struct kmap *kmap;
+	struct map *map;
+
+	map = map__new2(sm->start, kernel);
+	if (!map)
+		return -1;
+
+	map->end   = sm->end;
+	map->pgoff = sm->pgoff;
+
+	kmap = map__kmap(map);
+
+	kmap->kmaps = &machine->kmaps;
+
+	map_groups__insert(&machine->kmaps, map);
+
+	pr_debug2("Added special kernel map %" PRIx64 "-%" PRIx64 "\n",
+		  map->start, map->end);
+
+	map__put(map);
+
+	return 0;
+}
+
+static u64 find_entry_trampoline(struct dso *dso)
+{
+	struct {
+		const char *name;
+		u64 addr;
+	} syms[] = {
+		/* Duplicates are removed so lookup all aliases */
+		{"_entry_trampoline", 0},
+		{"__entry_trampoline_start", 0},
+		{"entry_SYSCALL_64_trampoline", 0},
+	};
+	struct symbol *sym = dso__first_symbol(dso);
+	unsigned int i;
+
+	for (; sym; sym = dso__next_symbol(sym)) {
+		if (sym->binding != STB_GLOBAL)
+			continue;
+		for (i = 0; i < ARRAY_SIZE(syms); i++) {
+			if (!strcmp(sym->name, syms[i].name))
+				syms[i].addr = sym->start;
+		}
+	}
+
+	for (i = 0; i < ARRAY_SIZE(syms); i++) {
+		if (syms[i].addr)
+			return syms[i].addr;
+	}
+
+	return 0;
+}
+
+/*
+ * These values can be used for kernels that do not have symbols for the entry
+ * trampolines in kallsyms.
+ */
+#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
+#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
+#define X86_64_ENTRY_TRAMPOLINE		0x6000
+
+/* Map x86_64 KPTI entry trampolines */
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+					  struct dso *kernel)
+{
+	u64 pgoff = find_entry_trampoline(kernel);
+	int nr_cpus_avail = 0, cpu;
+
+	if (!pgoff)
+		return 0;
+
+	if (machine->env)
+		nr_cpus_avail = machine->env->nr_cpus_avail;
+
+	/* Add a 1 page map for each CPU's entry trampoline */
+	for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
+		u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
+			 cpu * X86_64_CPU_ENTRY_AREA_SIZE +
+			 X86_64_ENTRY_TRAMPOLINE;
+		struct special_kernel_map sm = {
+			.start = va,
+			.end   = va + page_size,
+			.pgoff = pgoff,
+		};
+
+		if (machine__create_special_kernel_map(machine, kernel, &sm) < 0)
+			return -1;
+	}
+
+	return 0;
+}
+
 static int
 __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 {
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index b31d33b5aa2a..6e1c63d3a625 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -267,4 +267,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
  */
 char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
 
+int machine__map_x86_64_entry_trampolines(struct machine *machine,
+					  struct dso *kernel);
+
 #endif /* __PERF_MACHINE_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 4a39f4d0a174..c3a1a89a61cb 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map *map)
 		goto out;
 	}
 
+	if (map->groups && map->groups->machine)
+		machine = map->groups->machine;
+	else
+		machine = NULL;
+
 	if (dso->kernel) {
 		if (dso->kernel == DSO_TYPE_KERNEL)
 			ret = dso__load_kernel_sym(dso, map);
 		else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
 			ret = dso__load_guest_kernel_sym(dso, map);
 
+		if (machine && machine__is(machine, "x86_64"))
+			machine__map_x86_64_entry_trampolines(machine, dso);
 		goto out;
 	}
 
-	if (map->groups && map->groups->machine)
-		machine = map->groups->machine;
-	else
-		machine = NULL;
-
 	dso->adjust_symbols = 0;
 
 	if (perfmap) {
-- 
1.9.1


* [PATCH RFC 08/19] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (6 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 09/19] perf tools: Allow for special kernel maps Adrian Hunter
                   ` (10 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

When kernel symbols are derived from /proc/kallsyms only (not using vmlinux
or /proc/kcore), map_groups__split_kallsyms() is used. However, that
function makes assumptions that are not true of entry trampoline symbols.
For now, simply remove the entry trampoline symbols at that point, as they
are no longer needed.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/map.h    |  8 ++++++++
 tools/perf/util/symbol.c | 13 +++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index f1afe1ab6ff7..fafcc375ed37 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -8,6 +8,7 @@
 #include <linux/rbtree.h>
 #include <pthread.h>
 #include <stdio.h>
+#include <string.h>
 #include <stdbool.h>
 #include <linux/types.h>
 #include "rwsem.h"
@@ -239,4 +240,11 @@ static inline bool __map__is_kmodule(const struct map *map)
 
 bool map__has_symbols(const struct map *map);
 
+#define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"
+
+static inline bool is_entry_trampoline(const char *name)
+{
+	return !strcmp(name, ENTRY_TRAMPOLINE_NAME);
+}
+
 #endif /* __PERF_MAP_H */
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index c3a1a89a61cb..e393f37b273b 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -737,12 +737,15 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
 	struct rb_root *root = &dso->symbols;
 	struct rb_node *next = rb_first(root);
 	int kernel_range = 0;
+	bool x86_64;
 
 	if (!kmaps)
 		return -1;
 
 	machine = kmaps->machine;
 
+	x86_64 = machine && machine__is(machine, "x86_64");
+
 	while (next) {
 		char *module;
 
@@ -790,6 +793,16 @@ static int map_groups__split_kallsyms(struct map_groups *kmaps, struct dso *dso,
 			 */
 			pos->start = curr_map->map_ip(curr_map, pos->start);
 			pos->end   = curr_map->map_ip(curr_map, pos->end);
+		} else if (x86_64 && is_entry_trampoline(pos->name)) {
+			/*
+			 * These symbols are not needed anymore since the
+			 * trampoline maps refer to the text section and its
+			 * symbols instead. Avoid having to deal with
+			 * relocations, and the assumption that the first symbol
+			 * is the start of kernel text, by simply removing the
+			 * symbols at this point.
+			 */
+			goto discard_symbol;
 		} else if (curr_map != initial_map) {
 			char dso_name[PATH_MAX];
 			struct dso *ndso;
-- 
1.9.1
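The patch above drops entry trampoline symbols while splitting kallsyms, using
the new is_entry_trampoline() helper. A standalone sketch of that filtering
step (the array-compacting helper is invented for illustration; perf itself
walks an rbtree and discards nodes):

```c
#include <assert.h>
#include <string.h>

#define ENTRY_TRAMPOLINE_NAME "__entry_SYSCALL_64_trampoline"

/* Matches the helper added to tools/perf/util/map.h above. */
static int is_entry_trampoline(const char *name)
{
	return strcmp(name, ENTRY_TRAMPOLINE_NAME) == 0;
}

/*
 * Remove trampoline symbols from an array of names, compacting the
 * survivors in place; returns the new count.
 */
static int discard_trampoline_syms(const char **names, int cnt)
{
	int i, kept = 0;

	for (i = 0; i < cnt; i++) {
		if (!is_entry_trampoline(names[i]))
			names[kept++] = names[i];
	}
	return kept;
}
```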


* [PATCH RFC 09/19] perf tools: Allow for special kernel maps
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (7 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 08/19] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines Adrian Hunter
                   ` (9 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Identify special kernel maps by name so that they can be distinguished from
the kernel map and module maps.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/event.c   |  2 +-
 tools/perf/util/machine.c |  8 ++++++--
 tools/perf/util/map.c     | 22 ++++++++++++++++++----
 tools/perf/util/map.h     |  7 ++++++-
 tools/perf/util/symbol.c  |  7 +++----
 5 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 244135b5ea43..aafa9878465f 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -487,7 +487,7 @@ int perf_event__synthesize_modules(struct perf_tool *tool,
 	for (pos = maps__first(maps); pos; pos = map__next(pos)) {
 		size_t size;
 
-		if (__map__is_kernel(pos))
+		if (!__map__is_kmodule(pos))
 			continue;
 
 		size = PERF_ALIGN(pos->dso->long_name_len + 1, sizeof(u64));
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index 1bf15aa0b099..f8c8e95062d0 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -855,6 +855,7 @@ struct special_kernal_map {
 	u64 start;
 	u64 end;
 	u64 pgoff;
+	char name[KMAP_NAME_LEN];
 };
 
 static int machine__create_special_kernel_map(struct machine *machine,
@@ -874,11 +875,12 @@ static int machine__create_special_kernel_map(struct machine *machine,
 	kmap = map__kmap(map);
 
 	kmap->kmaps = &machine->kmaps;
+	strlcpy(kmap->name, sm->name, KMAP_NAME_LEN);
 
 	map_groups__insert(&machine->kmaps, map);
 
-	pr_debug2("Added special kernel map %" PRIx64 "-%" PRIx64 "\n",
-		  map->start, map->end);
+	pr_debug2("Added special kernel map %s %" PRIx64 "-%" PRIx64 "\n",
+		  kmap->name, map->start, map->end);
 
 	map__put(map);
 
@@ -948,6 +950,8 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 			.pgoff = pgoff,
 		};
 
+		strlcpy(sm.name, ENTRY_TRAMPOLINE_NAME, KMAP_NAME_LEN);
+
 		if (machine__create_special_kernel_map(machine, kernel, &sm) < 0)
 			return -1;
 	}
diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index c8fe836e4c3c..8f36c12b4223 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -252,6 +252,13 @@ bool __map__is_kernel(const struct map *map)
 	return machine__kernel_map(map->groups->machine) == map;
 }
 
+bool __map__is_special_kernel_map(const struct map *map)
+{
+	struct kmap *kmap = __map__kmap((struct map *)map);
+
+	return kmap && kmap->name[0];
+}
+
 bool map__has_symbols(const struct map *map)
 {
 	return dso__has_symbols(map->dso);
@@ -846,15 +853,22 @@ struct map *map__next(struct map *map)
 	return NULL;
 }
 
-struct kmap *map__kmap(struct map *map)
+struct kmap *__map__kmap(struct map *map)
 {
-	if (!map->dso || !map->dso->kernel) {
-		pr_err("Internal error: map__kmap with a non-kernel map\n");
+	if (!map->dso || !map->dso->kernel)
 		return NULL;
-	}
 	return (struct kmap *)(map + 1);
 }
 
+struct kmap *map__kmap(struct map *map)
+{
+	struct kmap *kmap = __map__kmap(map);
+
+	if (!kmap)
+		pr_err("Internal error: map__kmap with a non-kernel map\n");
+	return kmap;
+}
+
 struct map_groups *map__kmaps(struct map *map)
 {
 	struct kmap *kmap = map__kmap(map);
diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index fafcc375ed37..e6dd5998ebf9 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -47,9 +47,12 @@ struct map {
 	refcount_t		refcnt;
 };
 
+#define KMAP_NAME_LEN 256
+
 struct kmap {
 	struct ref_reloc_sym	*ref_reloc_sym;
 	struct map_groups	*kmaps;
+	char			name[KMAP_NAME_LEN];
 };
 
 struct maps {
@@ -76,6 +79,7 @@ static inline struct map_groups *map_groups__get(struct map_groups *mg)
 
 void map_groups__put(struct map_groups *mg);
 
+struct kmap *__map__kmap(struct map *map);
 struct kmap *map__kmap(struct map *map);
 struct map_groups *map__kmaps(struct map *map);
 
@@ -232,10 +236,11 @@ int map_groups__fixup_overlappings(struct map_groups *mg, struct map *map,
 struct map *map_groups__find_by_name(struct map_groups *mg, const char *name);
 
 bool __map__is_kernel(const struct map *map);
+bool __map__is_special_kernel_map(const struct map *map);
 
 static inline bool __map__is_kmodule(const struct map *map)
 {
-	return !__map__is_kernel(map);
+	return !__map__is_kernel(map) && !__map__is_special_kernel_map(map);
 }
 
 bool map__has_symbols(const struct map *map);
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index e393f37b273b..35a91c1b7d3e 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1030,7 +1030,7 @@ struct map *map_groups__first(struct map_groups *mg)
 	return maps__first(&mg->maps);
 }
 
-static int do_validate_kcore_modules(const char *filename, struct map *map,
+static int do_validate_kcore_modules(const char *filename,
 				  struct map_groups *kmaps)
 {
 	struct rb_root modules = RB_ROOT;
@@ -1046,8 +1046,7 @@ static int do_validate_kcore_modules(const char *filename, struct map *map,
 		struct map *next = map_groups__next(old_map);
 		struct module_info *mi;
 
-		if (old_map == map || old_map->start == map->start) {
-			/* The kernel map */
+		if (!__map__is_kmodule(old_map)) {
 			old_map = next;
 			continue;
 		}
@@ -1104,7 +1103,7 @@ static int validate_kcore_modules(const char *kallsyms_filename,
 					     kallsyms_filename))
 		return -EINVAL;
 
-	if (do_validate_kcore_modules(modules_filename, map, kmaps))
+	if (do_validate_kcore_modules(modules_filename, kmaps))
 		return -EINVAL;
 
 	return 0;
-- 
1.9.1
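With a name stored in struct kmap, the map classification above reduces to a
couple of predicates: the kernel map is the machine's designated map, a
"special" kernel map is any kernel map carrying a name, and modules are
everything else. A simplified, self-contained sketch of that logic (the
struct and field names here are invented stand-ins, not perf's types):

```c
#include <assert.h>

/* Simplified stand-in for perf's struct map + struct kmap pair. */
struct fake_map {
	int is_the_kernel_map;	/* would be machine__kernel_map(...) == map */
	char kmap_name[256];	/* empty for the main kernel map */
};

static int map_is_kernel(const struct fake_map *m)
{
	return m->is_the_kernel_map;
}

/* A "special" kernel map is identified purely by having a name. */
static int map_is_special(const struct fake_map *m)
{
	return m->kmap_name[0] != '\0';
}

/* Module maps are everything that is neither, as in __map__is_kmodule(). */
static int map_is_kmodule(const struct fake_map *m)
{
	return !map_is_kernel(m) && !map_is_special(m);
}
```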


* [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (8 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 09/19] perf tools: Allow for special kernel maps Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-14  8:32   ` Ingo Molnar
  2018-05-09 11:43 ` [PATCH RFC 11/19] perf tools: Synthesize and process mmap events " Adrian Hunter
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Create maps for x86_64 KPTI entry trampolines, based on symbols found in
kallsyms. It is also necessary to keep track of whether the trampolines
have been mapped, particularly when the kernel dso is kcore.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/machine.c | 138 ++++++++++++++++++++++++++++++++++++++++++++--
 tools/perf/util/machine.h |   1 +
 tools/perf/util/symbol.c  |  17 ++++++
 3 files changed, 150 insertions(+), 6 deletions(-)

diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index f8c8e95062d0..aa6bb493fcfa 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -930,9 +930,33 @@ static u64 find_entry_trampoline(struct dso *dso)
 int machine__map_x86_64_entry_trampolines(struct machine *machine,
 					  struct dso *kernel)
 {
-	u64 pgoff = find_entry_trampoline(kernel);
+	struct map_groups *kmaps = &machine->kmaps;
+	struct maps *maps = &kmaps->maps;
 	int nr_cpus_avail = 0, cpu;
+	bool found = false;
+	struct map *map;
+	u64 pgoff;
+
+	/*
+	 * In the vmlinux case, pgoff is a virtual address which must now be
+	 * mapped to a vmlinux offset.
+	 */
+	for (map = maps__first(maps); map; map = map__next(map)) {
+		struct kmap *kmap = __map__kmap(map);
+		struct map *dest_map;
 
+		if (!kmap || !is_entry_trampoline(kmap->name))
+			continue;
+
+		dest_map = map_groups__find(kmaps, map->pgoff);
+		if (dest_map != map)
+			map->pgoff = dest_map->map_ip(dest_map, map->pgoff);
+		found = true;
+	}
+	if (found || machine->trampolines_mapped)
+		return 0;
+
+	pgoff = find_entry_trampoline(kernel);
 	if (!pgoff)
 		return 0;
 
@@ -956,9 +980,107 @@ int machine__map_x86_64_entry_trampolines(struct machine *machine,
 			return -1;
 	}
 
+	machine->trampolines_mapped = nr_cpus_avail;
+
+	return 0;
+}
+
+#if defined(__x86_64__)
+
+struct special_kernal_map_info {
+	int cnt;
+	int max_cnt;
+	struct special_kernal_map *maps;
+	bool get_entry_trampolines;
+	u64 entry_trampoline;
+};
+
+static int add_special_kernal_map(struct special_kernal_map_info *si, u64 start,
+				  u64 end, u64 pgoff, const char *name)
+{
+	if (si->cnt >= si->max_cnt) {
+		void *buf;
+		size_t sz;
+
+		si->max_cnt = si->max_cnt ? si->max_cnt * 2 : 32;
+		sz = sizeof(struct special_kernal_map) * si->max_cnt;
+		buf = realloc(si->maps, sz);
+		if (!buf)
+			return -1;
+		si->maps = buf;
+	}
+
+	si->maps[si->cnt].start = start;
+	si->maps[si->cnt].end   = end;
+	si->maps[si->cnt].pgoff = pgoff;
+	strlcpy(si->maps[si->cnt].name, name, KMAP_NAME_LEN);
+
+	si->cnt += 1;
+
+	return 0;
+}
+
+static int find_special_kernal_maps(void *arg, const char *name, char type,
+				    u64 start)
+{
+	struct special_kernal_map_info *si = arg;
+
+	if (!si->entry_trampoline && kallsyms2elf_binding(type) == STB_GLOBAL &&
+	    !strcmp(name, "_entry_trampoline")) {
+		si->entry_trampoline = start;
+		return 0;
+	}
+
+	if (is_entry_trampoline(name)) {
+		u64 end = start + page_size;
+
+		return add_special_kernal_map(si, start, end, 0, name);
+	}
+
 	return 0;
 }
 
+static int machine__create_special_kernel_maps(struct machine *machine,
+					       struct dso *kernel)
+{
+	struct special_kernal_map_info si = {0};
+	char filename[PATH_MAX];
+	int ret;
+	int i;
+
+	machine__get_kallsyms_filename(machine, filename, PATH_MAX);
+
+	if (symbol__restricted_filename(filename, "/proc/kallsyms"))
+		return 0;
+
+	ret = kallsyms__parse(filename, &si, find_special_kernal_maps);
+	if (ret)
+		goto out_free;
+
+	if (!si.entry_trampoline)
+		goto out_free;
+
+	for (i = 0; i < si.cnt; i++) {
+		struct special_kernal_map *sm = &si.maps[i];
+
+		sm->pgoff = si.entry_trampoline;
+		ret = machine__create_special_kernel_map(machine, kernel, sm);
+		if (ret)
+			goto out_free;
+	}
+
+	machine->trampolines_mapped = si.cnt;
+out_free:
+	free(si.maps);
+	return ret;
+}
+
+#else
+
+#define machine__create_special_kernel_maps(m, k) 0
+
+#endif
+
 static int
 __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
 {
@@ -1314,9 +1436,8 @@ int machine__create_kernel_maps(struct machine *machine)
 		return -1;
 
 	ret = __machine__create_kernel_maps(machine, kernel);
-	dso__put(kernel);
 	if (ret < 0)
-		return -1;
+		goto out_put;
 
 	if (symbol_conf.use_modules && machine__create_modules(machine) < 0) {
 		if (machine__is_host(machine))
@@ -1331,7 +1452,8 @@ int machine__create_kernel_maps(struct machine *machine)
 		if (name &&
 		    map__set_kallsyms_ref_reloc_sym(machine->vmlinux_map, name, addr)) {
 			machine__destroy_kernel_maps(machine);
-			return -1;
+			ret = -1;
+			goto out_put;
 		}
 
 		/* we have a real start address now, so re-order the kmaps */
@@ -1347,12 +1469,16 @@ int machine__create_kernel_maps(struct machine *machine)
 		map__put(map);
 	}
 
+	if (machine__create_special_kernel_maps(machine, kernel))
+		pr_debug("Problems creating special kernel maps, continuing anyway...\n");
+
 	/* update end address of the kernel map using adjacent module address */
 	map = map__next(machine__kernel_map(machine));
 	if (map)
 		machine__set_kernel_mmap(machine, addr, map->start);
-
-	return 0;
+out_put:
+	dso__put(kernel);
+	return ret;
 }
 
 static bool machine__uses_kcore(struct machine *machine)
diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
index 6e1c63d3a625..da430cf57e37 100644
--- a/tools/perf/util/machine.h
+++ b/tools/perf/util/machine.h
@@ -56,6 +56,7 @@ struct machine {
 		void	  *priv;
 		u64	  db_id;
 	};
+	bool		  trampolines_mapped;
 };
 
 static inline struct threads *machine__threads(struct machine *machine, pid_t tid)
diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index 35a91c1b7d3e..927998f33e4f 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1158,6 +1158,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	struct map_groups *kmaps = map__kmaps(map);
 	struct kcore_mapfn_data md;
 	struct map *old_map, *new_map, *replacement_map = NULL;
+	struct machine *machine;
 	bool is_64_bit;
 	int err, fd;
 	char kcore_filename[PATH_MAX];
@@ -1166,6 +1167,8 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	if (!kmaps)
 		return -EINVAL;
 
+	machine = kmaps->machine;
+
 	/* This function requires that the map is the kernel map */
 	if (!__map__is_kernel(map))
 		return -EINVAL;
@@ -1209,6 +1212,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 			map_groups__remove(kmaps, old_map);
 		old_map = next;
 	}
+	machine->trampolines_mapped = false;
 
 	/* Find the kernel map using the '_stext' symbol */
 	if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
@@ -1245,6 +1249,19 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		map__put(new_map);
 	}
 
+	if (machine__is(machine, "x86_64")) {
+		u64 addr;
+
+		/*
+		 * If one of the corresponding symbols is there, assume the
+		 * entry trampoline maps are too.
+		 */
+		if (!kallsyms__get_function_start(kallsyms_filename,
+						  "entry_trampoline_cpu0",
+						  &addr))
+			machine->trampolines_mapped = true;
+	}
+
 	/*
 	 * Set the data type and long name so that kcore can be read via
 	 * dso__data_read_addr().
-- 
1.9.1
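add_special_kernal_map() above grows its array by doubling with realloc(),
starting at 32 entries. A minimal standalone version of that growth pattern
(generic value type, invented names; not the perf code itself):

```c
#include <assert.h>
#include <stdlib.h>

struct u64_vec {
	int cnt;
	int max_cnt;
	unsigned long long *vals;
};

/* Append one value, doubling capacity as needed (starting at 32),
 * mirroring the realloc() scheme in add_special_kernal_map(). */
static int u64_vec_add(struct u64_vec *v, unsigned long long val)
{
	if (v->cnt >= v->max_cnt) {
		int max_cnt = v->max_cnt ? v->max_cnt * 2 : 32;
		void *buf = realloc(v->vals, max_cnt * sizeof(*v->vals));

		if (!buf)
			return -1;
		v->vals = buf;
		v->max_cnt = max_cnt;
	}
	v->vals[v->cnt++] = val;
	return 0;
}
```

The doubling keeps the amortized cost of appends constant, at the price of up
to 2x over-allocation.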


* [PATCH RFC 11/19] perf tools: Synthesize and process mmap events for x86_64 KPTI entry trampolines
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (9 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-15 10:49   ` Jiri Olsa
  2018-05-09 11:43 ` [PATCH RFC 12/19] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

As with the kernel text, the location of the x86_64 KPTI entry trampolines
must be recorded in the perf.data file. So, as is done for the kernel,
synthesize an mmap event for the trampolines, and add processing for it.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/event.c   | 90 +++++++++++++++++++++++++++++++++++++++++++++--
 tools/perf/util/machine.c | 28 +++++++++++++++
 2 files changed, 115 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index aafa9878465f..d810ff8488b1 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -888,9 +888,80 @@ int kallsyms__get_function_start(const char *kallsyms_filename,
 	return 0;
 }
 
-int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
-				       perf_event__handler_t process,
-				       struct machine *machine)
+#if defined(__x86_64__)
+
+static int perf_event__synthesize_special_kmaps(struct perf_tool *tool,
+						perf_event__handler_t process,
+						struct machine *machine)
+{
+	int rc = 0;
+	struct map *pos;
+	struct map_groups *kmaps = &machine->kmaps;
+	struct maps *maps = &kmaps->maps;
+	union perf_event *event = zalloc(sizeof(event->mmap) +
+					 machine->id_hdr_size);
+
+	if (!event) {
+		pr_debug("Not enough memory synthesizing mmap event "
+			 "for special kernel maps\n");
+		return -1;
+	}
+
+	for (pos = maps__first(maps); pos; pos = map__next(pos)) {
+		struct kmap *kmap;
+		size_t size;
+
+		if (!__map__is_special_kernel_map(pos))
+			continue;
+
+		kmap = map__kmap(pos);
+
+		size = sizeof(event->mmap) - sizeof(event->mmap.filename) +
+		       PERF_ALIGN(strlen(kmap->name) + 1, sizeof(u64)) +
+		       machine->id_hdr_size;
+
+		memset(event, 0, size);
+
+		event->mmap.header.type = PERF_RECORD_MMAP;
+
+		/*
+		 * kernel uses 0 for user space maps, see kernel/perf_event.c
+		 * __perf_event_mmap
+		 */
+		if (machine__is_host(machine))
+			event->header.misc = PERF_RECORD_MISC_KERNEL;
+		else
+			event->header.misc = PERF_RECORD_MISC_GUEST_KERNEL;
+
+		event->mmap.header.size = size;
+
+		event->mmap.start = pos->start;
+		event->mmap.len   = pos->end - pos->start;
+		event->mmap.pgoff = pos->pgoff;
+		event->mmap.pid   = machine->pid;
+
+		strlcpy(event->mmap.filename, kmap->name, PATH_MAX);
+
+		if (perf_tool__process_synth_event(tool, event, machine,
+						   process) != 0) {
+			rc = -1;
+			break;
+		}
+	}
+
+	free(event);
+	return rc;
+}
+
+#else
+
+#define perf_event__synthesize_special_kmaps(t, p, m) 0
+
+#endif
+
+static int __perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+						perf_event__handler_t process,
+						struct machine *machine)
 {
 	size_t size;
 	struct map *map = machine__kernel_map(machine);
@@ -943,6 +1014,19 @@ int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
 	return err;
 }
 
+int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
+				       perf_event__handler_t process,
+				       struct machine *machine)
+{
+	int err;
+
+	err = __perf_event__synthesize_kernel_mmap(tool, process, machine);
+	if (err < 0)
+		return err;
+
+	return perf_event__synthesize_special_kmaps(tool, process, machine);
+}
+
 int perf_event__synthesize_thread_map2(struct perf_tool *tool,
 				      struct thread_map *threads,
 				      perf_event__handler_t process,
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
index aa6bb493fcfa..aa938f92d6d1 100644
--- a/tools/perf/util/machine.c
+++ b/tools/perf/util/machine.c
@@ -1493,6 +1493,32 @@ static bool machine__uses_kcore(struct machine *machine)
 	return false;
 }
 
+static bool perf_event__is_special_kernel_mmap(struct machine *machine,
+					       union perf_event *event)
+{
+	return machine__is(machine, "x86_64") &&
+	       is_entry_trampoline(event->mmap.filename);
+}
+
+static int machine__process_special_kernel_map(struct machine *machine,
+					       union perf_event *event)
+{
+	struct map *kernel_map = machine__kernel_map(machine);
+	struct dso *kernel = kernel_map ? kernel_map->dso : NULL;
+	struct special_kernal_map sm = {
+		.start = event->mmap.start,
+		.end   = event->mmap.start + event->mmap.len,
+		.pgoff = event->mmap.pgoff,
+	};
+
+	if (kernel == NULL)
+		return -1;
+
+	strlcpy(sm.name, event->mmap.filename, KMAP_NAME_LEN);
+
+	return machine__create_special_kernel_map(machine, kernel, &sm);
+}
+
 static int machine__process_kernel_mmap_event(struct machine *machine,
 					      union perf_event *event)
 {
@@ -1596,6 +1622,8 @@ static int machine__process_kernel_mmap_event(struct machine *machine,
 			 */
 			dso__load(kernel, machine__kernel_map(machine));
 		}
+	} else if (perf_event__is_special_kernel_mmap(machine, event)) {
+		return machine__process_special_kernel_map(machine, event);
 	}
 	return 0;
 out_problem:
-- 
1.9.1
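The synthesized event size above pads the filename length to a u64 boundary
with PERF_ALIGN before adding it to the fixed part of the record. A sketch of
that alignment arithmetic (ALIGN_UP and mmap_filename_size are illustrative
names; perf's PERF_ALIGN lives in the tools headers and behaves the same way
for power-of-two alignments):

```c
#include <assert.h>
#include <string.h>

/* Round x up to the next multiple of a (a must be a power of 2). */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Bytes the filename (plus its NUL) contributes to a synthesized
 * mmap event, padded to a u64 boundary as in the patch above. */
static unsigned long mmap_filename_size(const char *name)
{
	return ALIGN_UP(strlen(name) + 1, sizeof(unsigned long long));
}
```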


* [PATCH RFC 12/19] perf buildid-cache: kcore_copy: Keep phdr data in a list
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (10 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 11/19] perf tools: Synthesize and process mmap events " Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 13/19] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
                   ` (6 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation to add more program headers, keep phdr data in a list.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 48943b834f11..7faca21141a2 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1388,6 +1388,7 @@ struct phdr_data {
 	off_t offset;
 	u64 addr;
 	u64 len;
+	struct list_head list;
 };
 
 struct kcore_copy_info {
@@ -1399,6 +1400,7 @@ struct kcore_copy_info {
 	u64 last_module_symbol;
 	struct phdr_data kernel_map;
 	struct phdr_data modules_map;
+	struct list_head phdrs;
 };
 
 static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
@@ -1510,6 +1512,11 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
 	if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
 		return -1;
 
+	if (kci->kernel_map.len)
+		list_add_tail(&kci->kernel_map.list, &kci->phdrs);
+	if (kci->modules_map.len)
+		list_add_tail(&kci->modules_map.list, &kci->phdrs);
+
 	return 0;
 }
 
@@ -1678,6 +1685,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	char kcore_filename[PATH_MAX];
 	char extract_filename[PATH_MAX];
 
+	INIT_LIST_HEAD(&kci.phdrs);
+
 	if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
 		return -1;
 
-- 
1.9.1
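The patch links each struct phdr_data into a kernel-style list_head so later
patches can walk an arbitrary number of program headers in order. A minimal
tail-append list with the same insertion-order property (plain C with invented
names, rather than the kernel's list.h API):

```c
#include <assert.h>
#include <stddef.h>

struct phdr_node {
	unsigned long long addr, len;
	struct phdr_node *next;
};

struct phdr_list {
	struct phdr_node *head, **tailp;
};

static void phdr_list_init(struct phdr_list *l)
{
	l->head = NULL;
	l->tailp = &l->head;
}

/* Append at the tail, preserving insertion order like list_add_tail(). */
static void phdr_list_add_tail(struct phdr_list *l, struct phdr_node *n)
{
	n->next = NULL;
	*l->tailp = n;
	l->tailp = &n->next;
}
```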


* [PATCH RFC 13/19] perf buildid-cache: kcore_copy: Keep a count of phdrs
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (11 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 12/19] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 14/19] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
                   ` (5 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation to add more program headers, keep a count of phdrs.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 7faca21141a2..3a177c245683 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1398,6 +1398,7 @@ struct kcore_copy_info {
 	u64 last_symbol;
 	u64 first_module;
 	u64 last_module_symbol;
+	size_t phnum;
 	struct phdr_data kernel_map;
 	struct phdr_data modules_map;
 	struct list_head phdrs;
@@ -1517,6 +1518,8 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
 	if (kci->modules_map.len)
 		list_add_tail(&kci->modules_map.list, &kci->phdrs);
 
+	kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
+
 	return 0;
 }
 
@@ -1678,7 +1681,6 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 {
 	struct kcore kcore;
 	struct kcore extract;
-	size_t count = 2;
 	int idx = 0, err = -1;
 	off_t offset = page_size, sz, modules_offset = 0;
 	struct kcore_copy_info kci = { .stext = 0, };
@@ -1705,10 +1707,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	if (kcore__init(&extract, extract_filename, kcore.elfclass, false))
 		goto out_kcore_close;
 
-	if (!kci.modules_map.addr)
-		count -= 1;
-
-	if (kcore__copy_hdr(&kcore, &extract, count))
+	if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
 		goto out_extract_close;
 
 	if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
-- 
1.9.1


* [PATCH RFC 14/19] perf buildid-cache: kcore_copy: Calculate offset from phnum
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (12 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 13/19] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 15/19] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
                   ` (4 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation to add more program headers, calculate offset from the
number of phdrs.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 3a177c245683..2e8d89d64166 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1682,7 +1682,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	struct kcore kcore;
 	struct kcore extract;
 	int idx = 0, err = -1;
-	off_t offset = page_size, sz, modules_offset = 0;
+	off_t offset, sz, modules_offset = 0;
 	struct kcore_copy_info kci = { .stext = 0, };
 	char kcore_filename[PATH_MAX];
 	char extract_filename[PATH_MAX];
@@ -1710,6 +1710,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	if (kcore__copy_hdr(&kcore, &extract, kci.phnum))
 		goto out_extract_close;
 
+	offset = gelf_fsize(extract.elf, ELF_T_EHDR, 1, EV_CURRENT) +
+		 gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
+	offset = round_up(offset, page_size);
+
 	if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
 			    kci.kernel_map.len))
 		goto out_extract_close;
-- 
1.9.1
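The offset computed above is the ELF header plus phnum program headers,
rounded up to the page size. For ELFCLASS64 those sizes are 64 and 56 bytes
respectively (gelf_fsize() returns the same values for ELF_T_EHDR and
ELF_T_PHDR). A sketch of that calculation with hard-coded ELF64 sizes
(illustrative only; the real code stays class-agnostic via gelf_fsize()):

```c
#include <assert.h>

/* Round up to a multiple of "to" (a power of 2), like tools' round_up(). */
#define ROUND_UP(x, to) (((x) + (to) - 1) & ~((unsigned long)(to) - 1))

/*
 * File offset of the first payload byte in the extracted kcore:
 * ELF64 Ehdr (64 bytes) plus phnum ELF64 Phdrs (56 bytes each),
 * rounded up to the page size.
 */
static unsigned long payload_offset(unsigned long phnum,
				    unsigned long page_size)
{
	return ROUND_UP(64 + 56 * phnum, page_size);
}
```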


* [PATCH RFC 15/19] perf buildid-cache: kcore_copy: Layout sections
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (13 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 14/19] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 16/19] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
                   ` (3 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation to add more program headers, layout the relative offset of
each section.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 2e8d89d64166..9ecd83418bb5 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1386,6 +1386,7 @@ static off_t kcore__write(struct kcore *kcore)
 
 struct phdr_data {
 	off_t offset;
+	off_t rel;
 	u64 addr;
 	u64 len;
 	struct list_head list;
@@ -1404,6 +1405,9 @@ struct kcore_copy_info {
 	struct list_head phdrs;
 };
 
+#define kcore_copy__for_each_phdr(k, p) \
+	list_for_each_entry((p), &(k)->phdrs, list)
+
 static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
 					u64 start)
 {
@@ -1518,11 +1522,21 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
 	if (kci->modules_map.len)
 		list_add_tail(&kci->modules_map.list, &kci->phdrs);
 
-	kci->phnum = !!kci->kernel_map.len + !!kci->modules_map.len;
-
 	return 0;
 }
 
+static void kcore_copy__layout(struct kcore_copy_info *kci)
+{
+	struct phdr_data *p;
+	off_t rel = 0;
+
+	kcore_copy__for_each_phdr(kci, p) {
+		p->rel = rel;
+		rel += p->len;
+		kci->phnum += 1;
+	}
+}
+
 static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
 				 Elf *elf)
 {
@@ -1558,7 +1572,12 @@ static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
 	if (kci->first_module && !kci->last_module_symbol)
 		return -1;
 
-	return kcore_copy__read_maps(kci, elf);
+	if (kcore_copy__read_maps(kci, elf))
+		return -1;
+
+	kcore_copy__layout(kci);
+
+	return 0;
 }
 
 static int kcore_copy__copy_file(const char *from_dir, const char *to_dir,
-- 
1.9.1
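kcore_copy__layout() above packs the segments back to back by assigning each
one a cumulative relative offset; patch 16 then adds the common base offset
when emitting the phdrs. The core of that layout pass, as a standalone sketch
over a plain array (struct seg and layout_segments are invented names):

```c
#include <assert.h>

struct seg {
	unsigned long long len;
	unsigned long long rel;	/* offset relative to start of payload */
};

/* Assign each segment a relative file offset, packing them back to
 * back in order, as kcore_copy__layout() does over its phdr list.
 * Returns the number of segments laid out. */
static int layout_segments(struct seg *segs, int cnt)
{
	unsigned long long rel = 0;
	int i;

	for (i = 0; i < cnt; i++) {
		segs[i].rel = rel;
		rel += segs[i].len;
	}
	return cnt;
}
```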


* [PATCH RFC 16/19] perf buildid-cache: kcore_copy: Iterate phdrs
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (14 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 15/19] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 17/19] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation for adding more program headers, iterate over the phdrs
instead of assuming there is only one for the kernel text and one for the
modules.
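
[Editor's note] The per-phdr file offset the new loops compute (offs = p->rel + offset) is just the common data start plus each segment's relative position. A trivial hypothetical helper, with made-up values:

```c
#include <assert.h>

/* offs = data_start + rel: the copied segment data begins at
 * "data_start" (after the ELF header and program headers, rounded up
 * to a page), and each segment sits at its relative offset "rel"
 * within that region. */
static long long phdr_file_offset(long long rel, long long data_start)
{
	return data_start + rel;
}
```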

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 25 ++++++++++---------------
 1 file changed, 10 insertions(+), 15 deletions(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 9ecd83418bb5..161d12fb246c 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1701,10 +1701,11 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	struct kcore kcore;
 	struct kcore extract;
 	int idx = 0, err = -1;
-	off_t offset, sz, modules_offset = 0;
+	off_t offset, sz;
 	struct kcore_copy_info kci = { .stext = 0, };
 	char kcore_filename[PATH_MAX];
 	char extract_filename[PATH_MAX];
+	struct phdr_data *p;
 
 	INIT_LIST_HEAD(&kci.phdrs);
 
@@ -1733,14 +1734,10 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 		 gelf_fsize(extract.elf, ELF_T_PHDR, kci.phnum, EV_CURRENT);
 	offset = round_up(offset, page_size);
 
-	if (kcore__add_phdr(&extract, idx++, offset, kci.kernel_map.addr,
-			    kci.kernel_map.len))
-		goto out_extract_close;
+	kcore_copy__for_each_phdr(&kci, p) {
+		off_t offs = p->rel + offset;
 
-	if (kci.modules_map.addr) {
-		modules_offset = offset + kci.kernel_map.len;
-		if (kcore__add_phdr(&extract, idx, modules_offset,
-				    kci.modules_map.addr, kci.modules_map.len))
+		if (kcore__add_phdr(&extract, idx++, offs, p->addr, p->len))
 			goto out_extract_close;
 	}
 
@@ -1748,14 +1745,12 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	if (sz < 0 || sz > offset)
 		goto out_extract_close;
 
-	if (copy_bytes(kcore.fd, kci.kernel_map.offset, extract.fd, offset,
-		       kci.kernel_map.len))
-		goto out_extract_close;
+	kcore_copy__for_each_phdr(&kci, p) {
+		off_t offs = p->rel + offset;
 
-	if (modules_offset && copy_bytes(kcore.fd, kci.modules_map.offset,
-					 extract.fd, modules_offset,
-					 kci.modules_map.len))
-		goto out_extract_close;
+		if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
+			goto out_extract_close;
+	}
 
 	if (kcore_copy__compare_file(from_dir, to_dir, "modules"))
 		goto out_extract_close;
-- 
1.9.1

* [PATCH RFC 17/19] perf buildid-cache: kcore_copy: Get rid of kernel_map
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (15 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 16/19] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 18/19] perf buildid-cache: kcore_copy: Copy x86_64 entry trampoline sections Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 19/19] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

In preparation for adding more program headers, get rid of the fixed
kernel_map and modules_map members in favour of dynamically allocated
phdr_data structures.
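
[Editor's note] The zalloc-and-append / free-all pattern this patch introduces (kcore_copy__new_phdr() and kcore_copy__free_phdrs()) can be sketched without the kernel list API. This is a hypothetical simplified version using a plain singly linked list with a tail pointer:

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	unsigned long long addr, len;
	struct node *next;
};

struct node_list {
	struct node *head, **tail;
};

/* Like kcore_copy__new_phdr(): allocate zeroed, fill in, append at
 * the tail (the analogue of list_add_tail()), return NULL on OOM. */
static struct node *list_new_node(struct node_list *l,
				  unsigned long long addr,
				  unsigned long long len)
{
	struct node *n = calloc(1, sizeof(*n));

	if (n) {
		n->addr = addr;
		n->len = len;
		*l->tail = n;
		l->tail = &n->next;
	}
	return n;
}

/* Like kcore_copy__free_phdrs(): unlink and free every node. */
static size_t list_free_all(struct node_list *l)
{
	size_t freed = 0;

	while (l->head) {
		struct node *n = l->head;

		l->head = n->next;
		free(n);
		freed++;
	}
	l->tail = &l->head;
	return freed;
}
```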

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 60 +++++++++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 18 deletions(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 161d12fb246c..6c6c3481477e 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1400,14 +1400,37 @@ struct kcore_copy_info {
 	u64 first_module;
 	u64 last_module_symbol;
 	size_t phnum;
-	struct phdr_data kernel_map;
-	struct phdr_data modules_map;
 	struct list_head phdrs;
 };
 
 #define kcore_copy__for_each_phdr(k, p) \
 	list_for_each_entry((p), &(k)->phdrs, list)
 
+static struct phdr_data *kcore_copy__new_phdr(struct kcore_copy_info *kci,
+					      u64 addr, u64 len, off_t offset)
+{
+	struct phdr_data *p = zalloc(sizeof(*p));
+
+	if (p) {
+		p->addr   = addr;
+		p->len    = len;
+		p->offset = offset;
+		list_add_tail(&p->list, &kci->phdrs);
+	}
+
+	return p;
+}
+
+static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
+{
+	struct phdr_data *p, *tmp;
+
+	list_for_each_entry_safe(p, tmp, &kci->phdrs, list) {
+		list_del(&p->list);
+		free(p);
+	}
+}
+
 static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
 					u64 start)
 {
@@ -1487,15 +1510,18 @@ static int kcore_copy__parse_modules(struct kcore_copy_info *kci,
 	return 0;
 }
 
-static void kcore_copy__map(struct phdr_data *p, u64 start, u64 end, u64 pgoff,
-			    u64 s, u64 e)
+static int kcore_copy__map(struct kcore_copy_info *kci, u64 start, u64 end,
+			   u64 pgoff, u64 s, u64 e)
 {
-	if (p->addr || s < start || s >= end)
-		return;
+	u64 len, offset;
+
+	if (s < start || s >= end)
+		return 0;
+
+	offset = (s - start) + pgoff;
+	len = e < end ? e - s : end - s;
 
-	p->addr = s;
-	p->offset = (s - start) + pgoff;
-	p->len = e < end ? e - s : end - s;
+	return kcore_copy__new_phdr(kci, s, len, offset) ? 0 : -1;
 }
 
 static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
@@ -1503,11 +1529,12 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
 	struct kcore_copy_info *kci = data;
 	u64 end = start + len;
 
-	kcore_copy__map(&kci->kernel_map, start, end, pgoff, kci->stext,
-			kci->etext);
+	if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
+		return -1;
 
-	kcore_copy__map(&kci->modules_map, start, end, pgoff, kci->first_module,
-			kci->last_module_symbol);
+	if (kcore_copy__map(kci, start, end, pgoff, kci->first_module,
+			    kci->last_module_symbol))
+		return -1;
 
 	return 0;
 }
@@ -1517,11 +1544,6 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
 	if (elf_read_maps(elf, true, kcore_copy__read_map, kci) < 0)
 		return -1;
 
-	if (kci->kernel_map.len)
-		list_add_tail(&kci->kernel_map.list, &kci->phdrs);
-	if (kci->modules_map.len)
-		list_add_tail(&kci->modules_map.list, &kci->phdrs);
-
 	return 0;
 }
 
@@ -1773,6 +1795,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	if (err)
 		kcore_copy__unlink(to_dir, "kallsyms");
 
+	kcore_copy__free_phdrs(&kci);
+
 	return err;
 }
 
-- 
1.9.1

* [PATCH RFC 18/19] perf buildid-cache: kcore_copy: Copy x86_64 entry trampoline sections
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (16 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 17/19] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  2018-05-09 11:43 ` [PATCH RFC 19/19] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

Identify and copy any sections for x86_64 KPTI entry trampolines.
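
[Editor's note] Each trampoline symbol's address is rounded down to a page boundary before being mapped (round_down(sdat->addr, page_size) in the patch below). A sketch of that helper for power-of-two sizes, with hypothetical addresses:

```c
#include <assert.h>

/* Round addr down to a multiple of size; size must be a power of two,
 * which page_size always is. */
static unsigned long long round_down_pow2(unsigned long long addr,
					  unsigned long long size)
{
	return addr & ~(size - 1);
}
```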

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 6c6c3481477e..152e56ae9cba 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1392,6 +1392,11 @@ struct phdr_data {
 	struct list_head list;
 };
 
+struct sym_data {
+	u64 addr;
+	struct list_head list;
+};
+
 struct kcore_copy_info {
 	u64 stext;
 	u64 etext;
@@ -1401,6 +1406,7 @@ struct kcore_copy_info {
 	u64 last_module_symbol;
 	size_t phnum;
 	struct list_head phdrs;
+	struct list_head syms;
 };
 
 #define kcore_copy__for_each_phdr(k, p) \
@@ -1431,6 +1437,29 @@ static void kcore_copy__free_phdrs(struct kcore_copy_info *kci)
 	}
 }
 
+static struct sym_data *kcore_copy__new_sym(struct kcore_copy_info *kci,
+					    u64 addr)
+{
+	struct sym_data *s = zalloc(sizeof(*s));
+
+	if (s) {
+		s->addr = addr;
+		list_add_tail(&s->list, &kci->syms);
+	}
+
+	return s;
+}
+
+static void kcore_copy__free_syms(struct kcore_copy_info *kci)
+{
+	struct sym_data *s, *tmp;
+
+	list_for_each_entry_safe(s, tmp, &kci->syms, list) {
+		list_del(&s->list);
+		free(s);
+	}
+}
+
 static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
 					u64 start)
 {
@@ -1461,6 +1490,9 @@ static int kcore_copy__process_kallsyms(void *arg, const char *name, char type,
 		return 0;
 	}
 
+	if (is_entry_trampoline(name) && !kcore_copy__new_sym(kci, start))
+		return -1;
+
 	return 0;
 }
 
@@ -1528,6 +1560,7 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
 {
 	struct kcore_copy_info *kci = data;
 	u64 end = start + len;
+	struct sym_data *sdat;
 
 	if (kcore_copy__map(kci, start, end, pgoff, kci->stext, kci->etext))
 		return -1;
@@ -1536,6 +1569,13 @@ static int kcore_copy__read_map(u64 start, u64 len, u64 pgoff, void *data)
 			    kci->last_module_symbol))
 		return -1;
 
+	list_for_each_entry(sdat, &kci->syms, list) {
+		u64 s = round_down(sdat->addr, page_size);
+
+		if (kcore_copy__map(kci, start, end, pgoff, s, s + len))
+			return -1;
+	}
+
 	return 0;
 }
 
@@ -1730,6 +1770,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	struct phdr_data *p;
 
 	INIT_LIST_HEAD(&kci.phdrs);
+	INIT_LIST_HEAD(&kci.syms);
 
 	if (kcore_copy__copy_file(from_dir, to_dir, "kallsyms"))
 		return -1;
@@ -1796,6 +1837,7 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 		kcore_copy__unlink(to_dir, "kallsyms");
 
 	kcore_copy__free_phdrs(&kci);
+	kcore_copy__free_syms(&kci);
 
 	return err;
 }
-- 
1.9.1

* [PATCH RFC 19/19] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text
  2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
                   ` (17 preceding siblings ...)
  2018-05-09 11:43 ` [PATCH RFC 18/19] perf buildid-cache: kcore_copy: Copy x86_64 entry trampoline sections Adrian Hunter
@ 2018-05-09 11:43 ` Adrian Hunter
  18 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-09 11:43 UTC (permalink / raw)
  To: Thomas Gleixner, Arnaldo Carvalho de Melo
  Cc: Ingo Molnar, Peter Zijlstra, Andy Lutomirski, H. Peter Anvin,
	Andi Kleen, Alexander Shishkin, Dave Hansen, Joerg Roedel,
	Jiri Olsa, linux-kernel, x86

x86_64 entry trampolines all map to the same physical page. If that is
reflected in the program headers of /proc/kcore, then do the same for the
copy of kcore.
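
[Editor's note] The offset amendment in kcore_copy__layout() below gives a remapping phdr no space of its own in the copy; its rel is derived from its position inside the kernel phdr it remaps (p->rel = p->offset - k->offset + k->rel). A sketch of that arithmetic, with hypothetical values:

```c
#include <assert.h>

/* p_offset: source file offset of the remapping phdr;
 * k_offset:  source file offset of the kernel phdr it remaps;
 * k_rel:     where the kernel phdr's data was laid out in the copy. */
static long long remap_rel(long long p_offset, long long k_offset,
			   long long k_rel)
{
	return p_offset - k_offset + k_rel;
}
```

E.g. a trampoline phdr whose data sits 0x6000 bytes into the kernel phdr ends up at the kernel phdr's rel plus 0x6000.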

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
---
 tools/perf/util/symbol-elf.c | 53 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 51 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
index 152e56ae9cba..0f784dcaef0f 100644
--- a/tools/perf/util/symbol-elf.c
+++ b/tools/perf/util/symbol-elf.c
@@ -1390,6 +1390,7 @@ struct phdr_data {
 	u64 addr;
 	u64 len;
 	struct list_head list;
+	struct phdr_data *remaps;
 };
 
 struct sym_data {
@@ -1587,16 +1588,62 @@ static int kcore_copy__read_maps(struct kcore_copy_info *kci, Elf *elf)
 	return 0;
 }
 
+static void kcore_copy__find_remaps(struct kcore_copy_info *kci)
+{
+	struct phdr_data *p, *k = NULL;
+	u64 kend;
+
+	if (!kci->stext)
+		return;
+
+	/* Find phdr that corresponds to the kernel map (contains stext) */
+	kcore_copy__for_each_phdr(kci, p) {
+		u64 pend = p->addr + p->len - 1;
+
+		if (p->addr <= kci->stext && pend >= kci->stext) {
+			k = p;
+			break;
+		}
+	}
+
+	if (!k)
+		return;
+
+	kend = k->offset + k->len;
+
+	/* Find phdrs that remap the kernel */
+	kcore_copy__for_each_phdr(kci, p) {
+		u64 pend = p->offset + p->len;
+
+		if (p == k)
+			continue;
+
+		if (p->offset >= k->offset && pend <= kend)
+			p->remaps = k;
+	}
+}
+
 static void kcore_copy__layout(struct kcore_copy_info *kci)
 {
 	struct phdr_data *p;
 	off_t rel = 0;
 
+	kcore_copy__find_remaps(kci);
+
 	kcore_copy__for_each_phdr(kci, p) {
-		p->rel = rel;
-		rel += p->len;
+		if (!p->remaps) {
+			p->rel = rel;
+			rel += p->len;
+		}
 		kci->phnum += 1;
 	}
+
+	kcore_copy__for_each_phdr(kci, p) {
+		struct phdr_data *k = p->remaps;
+
+		if (k)
+			p->rel = p->offset - k->offset + k->rel;
+	}
 }
 
 static int kcore_copy__calc_maps(struct kcore_copy_info *kci, const char *dir,
@@ -1811,6 +1858,8 @@ int kcore_copy(const char *from_dir, const char *to_dir)
 	kcore_copy__for_each_phdr(&kci, p) {
 		off_t offs = p->rel + offset;
 
+		if (p->remaps)
+			continue;
 		if (copy_bytes(kcore.fd, p->offset, extract.fd, offs, p->len))
 			goto out_extract_close;
 	}
-- 
1.9.1

* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-09 11:43 ` [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines Adrian Hunter
@ 2018-05-09 17:07   ` Arnaldo Carvalho de Melo
  2018-05-10 19:08     ` Hunter, Adrian
  2018-05-15 10:30   ` Jiri Olsa
  1 sibling, 1 reply; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-09 17:07 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter escreveu:
> On x86_64 the KPTI entry trampolines are not in the kernel map created by
> perf tools. That results in the addresses having no symbols and prevents
> annotation. It also causes Intel PT to have decoding errors at the
> trampoline addresses. Workaround that by creating maps for the trampolines.
> At present the kernel does not export information revealing where the
> trampolines are. Until that happens, the addresses are hardcoded.
> 
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  tools/perf/util/machine.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++
>  tools/perf/util/machine.h |   3 ++
>  tools/perf/util/symbol.c  |  12 +++---
>  3 files changed, 114 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 22047ff3cf2a..1bf15aa0b099 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -851,6 +851,110 @@ static int machine__get_running_kernel_start(struct machine *machine,
>  	return 0;
>  }
>  
> +struct special_kernal_map {

s/kernal/kernel/

And "special"?

> +	u64 start;
> +	u64 end;
> +	u64 pgoff;
> +};
> +
> +static int machine__create_special_kernel_map(struct machine *machine,
> +					      struct dso *kernel,
> +					      struct special_kernal_map *sm)
> +{
> +	struct kmap *kmap;
> +	struct map *map;
> +
> +	map = map__new2(sm->start, kernel);
> +	if (!map)
> +		return -1;
> +
> +	map->end   = sm->end;
> +	map->pgoff = sm->pgoff;
> +
> +	kmap = map__kmap(map);
> +
> +	kmap->kmaps = &machine->kmaps;
> +
> +	map_groups__insert(&machine->kmaps, map);
> +
> +	pr_debug2("Added special kernel map %" PRIx64 "-%" PRIx64 "\n",
> +		  map->start, map->end);
> +
> +	map__put(map);
> +
> +	return 0;
> +}
> +
> +static u64 find_entry_trampoline(struct dso *dso)
> +{
> +	struct {
> +		const char *name;
> +		u64 addr;
> +	} syms[] = {
> +		/* Duplicates are removed so lookup all aliases */
> +		{"_entry_trampoline", 0},
> +		{"__entry_trampoline_start", 0},
> +		{"entry_SYSCALL_64_trampoline", 0},

We've been using named initializers consistently, so please change this
to:

	struct {
		const char *name;
		u64	   addr;
	} syms[] = {
		{ .name = "_entry_trampoline", },
		{ .name = "__entry_trampoline_start", },
		{ .name = "entry_SYSCALL_64_trampoline", },
	};

Also, why do you have to look up all of them and then use just the first
one found? I.e. you say they are aliases, so why not return the first
symbol found? Then the above would be reduced to:

	const char *syms[] = {
		"_entry_trampoline",
		"__entry_trampoline_start",
		"entry_SYSCALL_64_trampoline",
	};

And then:

	struct symbol *sym = dso__first_symbol(dso);
	unsigned int i;

	for (; sym; sym = dso__next_symbol(sym)) {
		if (sym->binding != STB_GLOBAL)
			continue;
		for (i = 0; i < ARRAY_SIZE(syms); i++) {
			if (!strcmp(sym->name, syms[i].name))
				return sym->start;
		}
	}

	return 0;

> +	};
> +	struct symbol *sym = dso__first_symbol(dso);
> +	unsigned int i;
> +
> +	for (; sym; sym = dso__next_symbol(sym)) {
> +		if (sym->binding != STB_GLOBAL)
> +			continue;
> +		for (i = 0; i < ARRAY_SIZE(syms); i++) {
> +			if (!strcmp(sym->name, syms[i].name))
> +				syms[i].addr = sym->start;
> +		}
> +	}
> +
> +	for (i = 0; i < ARRAY_SIZE(syms); i++) {
> +		if (syms[i].addr)
> +			return syms[i].addr;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * These values can be used for kernels that do not have symbols for the entry
> + * trampolines in kallsyms.
> + */
> +#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
> +#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
> +#define X86_64_ENTRY_TRAMPOLINE		0x6000
> +
> +/* Map x86_64 KPTI entry trampolines */
> +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> +					  struct dso *kernel)
> +{
> +	u64 pgoff = find_entry_trampoline(kernel);
> +	int nr_cpus_avail = 0, cpu;
> +
> +	if (!pgoff)
> +		return 0;
> +
> +	if (machine->env)
> +		nr_cpus_avail = machine->env->nr_cpus_avail;
> +
> +	/* Add a 1 page map for each CPU's entry trampoline */
> +	for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
> +		u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
> +			 cpu * X86_64_CPU_ENTRY_AREA_SIZE +
> +			 X86_64_ENTRY_TRAMPOLINE;
> +		struct special_kernal_map sm = {
> +			.start = va,
> +			.end   = va + page_size,
> +			.pgoff = pgoff,
> +		};
> +
> +		if (machine__create_special_kernel_map(machine, kernel, &sm) < 0)
> +			return -1;
> +	}
> +
> +	return 0;
> +}
> +
>  static int
>  __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
>  {
> diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> index b31d33b5aa2a..6e1c63d3a625 100644
> --- a/tools/perf/util/machine.h
> +++ b/tools/perf/util/machine.h
> @@ -267,4 +267,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
>   */
>  char *machine__resolve_kernel_addr(void *vmachine, unsigned long long *addrp, char **modp);
>  
> +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> +					  struct dso *kernel);
> +
>  #endif /* __PERF_MACHINE_H */
> diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> index 4a39f4d0a174..c3a1a89a61cb 100644
> --- a/tools/perf/util/symbol.c
> +++ b/tools/perf/util/symbol.c
> @@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map *map)
>  		goto out;
>  	}
>  
> +	if (map->groups && map->groups->machine)
> +		machine = map->groups->machine;
> +	else
> +		machine = NULL;
> +
>  	if (dso->kernel) {
>  		if (dso->kernel == DSO_TYPE_KERNEL)
>  			ret = dso__load_kernel_sym(dso, map);
>  		else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
>  			ret = dso__load_guest_kernel_sym(dso, map);
>  
> +		if (machine && machine__is(machine, "x86_64"))
> +			machine__map_x86_64_entry_trampolines(machine, dso);
>  		goto out;
>  	}
>  
> -	if (map->groups && map->groups->machine)
> -		machine = map->groups->machine;
> -	else
> -		machine = NULL;
> -
>  	dso->adjust_symbols = 0;
>  
>  	if (perfmap) {
> -- 
> 1.9.1

* Re: [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64
  2018-05-09 11:43 ` [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64 Adrian Hunter
@ 2018-05-09 17:08   ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-09 17:08 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Wed, May 09, 2018 at 02:43:35PM +0300, Adrian Hunter escreveu:
> On x86_64, KPTI entry trampolines are less than the start of kernel text,
> but still above 2^63. So leave kernel_start = 1ULL << 63 for x86_64.
> 
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  tools/perf/util/machine.c | 16 +++++++++++++++-
>  tools/perf/util/machine.h |  2 ++
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> index 72a351613d85..22047ff3cf2a 100644
> --- a/tools/perf/util/machine.c
> +++ b/tools/perf/util/machine.c
> @@ -2296,6 +2296,15 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
>  	return 0;
>  }
>  
> +/*
> + * Compares the raw arch string. N.B. see instead perf_env__arch() if a
> + * normalized arch is needed.
> + */
> +bool machine__is(struct machine *machine, const char *arch)
> +{
> +	return machine->env && !strcmp(machine->env->arch, arch);
> +}
> +

Please make machine__is(NULL) return false, to reduce boilerplate in
callers:

bool machine__is(struct machine *machine, const char *arch)
{
	return machine && machine->env && !strcmp(machine->env->arch, arch);
}


>  int machine__get_kernel_start(struct machine *machine)
>  {
>  	struct map *map = machine__kernel_map(machine);
> @@ -2312,7 +2321,12 @@ int machine__get_kernel_start(struct machine *machine)
>  	machine->kernel_start = 1ULL << 63;
>  	if (map) {
>  		err = map__load(map);
> -		if (!err)
> +		/*
> +		 * On x86_64, KPTI entry trampolines are less than the
> +		 * start of kernel text, but still above 2^63. So leave
> +		 * kernel_start = 1ULL << 63 for x86_64.
> +		 */
> +		if (!err && !machine__is(machine, "x86_64"))
>  			machine->kernel_start = map->start;
>  	}
>  	return err;
> diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> index 388fb4741c54..b31d33b5aa2a 100644
> --- a/tools/perf/util/machine.h
> +++ b/tools/perf/util/machine.h
> @@ -188,6 +188,8 @@ static inline bool machine__is_host(struct machine *machine)
>  	return machine ? machine->pid == HOST_KERNEL_ID : false;
>  }
>  
> +bool machine__is(struct machine *machine, const char *arch);
> +
>  struct thread *__machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
>  struct thread *machine__findnew_thread(struct machine *machine, pid_t pid, pid_t tid);
>  
> -- 
> 1.9.1

* Re: [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
  2018-05-09 11:43 ` [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod() Adrian Hunter
@ 2018-05-10 13:01   ` Jiri Olsa
  2018-05-10 17:02     ` Hunter, Adrian
  0 siblings, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2018-05-10 13:01 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, linux-kernel, x86

On Wed, May 09, 2018 at 02:43:30PM +0300, Adrian Hunter wrote:
> Simplify logic in update_iter_mod().
> 
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  kernel/kallsyms.c | 20 ++++++--------------
>  1 file changed, 6 insertions(+), 14 deletions(-)
> 
> diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
> index a23e21ada81b..eda4b0222dab 100644
> --- a/kernel/kallsyms.c
> +++ b/kernel/kallsyms.c
> @@ -510,23 +510,15 @@ static int update_iter_mod(struct kallsym_iter *iter, loff_t pos)
>  {
>  	iter->pos = pos;
>  
> -	if (iter->pos_ftrace_mod_end > 0 &&
> -	    iter->pos_ftrace_mod_end < iter->pos)
> -		return get_ksymbol_bpf(iter);
> -
> -	if (iter->pos_mod_end > 0 &&
> -	    iter->pos_mod_end < iter->pos) {
> -		if (!get_ksymbol_ftrace_mod(iter))
> -			return get_ksymbol_bpf(iter);
> +	if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
> +	    get_ksymbol_mod(iter))
>  		return 1;

hum, should that be iter->pos_mod_end >= pos ?


> -	}
>  
> -	if (!get_ksymbol_mod(iter)) {
> -		if (!get_ksymbol_ftrace_mod(iter))
> -			return get_ksymbol_bpf(iter);
> -	}
> +	if ((!iter->pos_ftrace_mod_end || iter->pos_ftrace_mod_end > pos) &&
> +	    get_ksymbol_ftrace_mod(iter))

same here? iter->pos_ftrace_mod_end >= pos

jirka

> +		return 1;
>  
> -	return 1;
> +	return get_ksymbol_bpf(iter);
>  }
>  
>  /* Returns false if pos at or past end of file. */
> -- 
> 1.9.1
> 

* RE: [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
  2018-05-10 13:01   ` Jiri Olsa
@ 2018-05-10 17:02     ` Hunter, Adrian
  2018-05-14 17:55       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 38+ messages in thread
From: Hunter, Adrian @ 2018-05-10 17:02 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, linux-kernel, x86

> -----Original Message-----
> From: Jiri Olsa [mailto:jolsa@redhat.com]
> Sent: Thursday, May 10, 2018 4:02 PM
> To: Hunter, Adrian <adrian.hunter@intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Arnaldo Carvalho de Melo
> <acme@kernel.org>; Ingo Molnar <mingo@redhat.com>; Peter Zijlstra
> <peterz@infradead.org>; Andy Lutomirski <luto@kernel.org>; H. Peter
> Anvin <hpa@zytor.com>; Andi Kleen <ak@linux.intel.com>; Alexander
> Shishkin <alexander.shishkin@linux.intel.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; linux-
> kernel@vger.kernel.org; x86@kernel.org
> Subject: Re: [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
> 
> On Wed, May 09, 2018 at 02:43:30PM +0300, Adrian Hunter wrote:
> > Simplify logic in update_iter_mod().
> >
> > Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> > ---
> >  kernel/kallsyms.c | 20 ++++++--------------
> >  1 file changed, 6 insertions(+), 14 deletions(-)
> >
> > diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c index
> > a23e21ada81b..eda4b0222dab 100644
> > --- a/kernel/kallsyms.c
> > +++ b/kernel/kallsyms.c
> > @@ -510,23 +510,15 @@ static int update_iter_mod(struct kallsym_iter
> > *iter, loff_t pos)  {
> >  	iter->pos = pos;
> >
> > -	if (iter->pos_ftrace_mod_end > 0 &&
> > -	    iter->pos_ftrace_mod_end < iter->pos)
> > -		return get_ksymbol_bpf(iter);
> > -
> > -	if (iter->pos_mod_end > 0 &&
> > -	    iter->pos_mod_end < iter->pos) {
> > -		if (!get_ksymbol_ftrace_mod(iter))
> > -			return get_ksymbol_bpf(iter);
> > +	if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
> > +	    get_ksymbol_mod(iter))
> >  		return 1;
> 
> hum, should that be iter-> pos_mod_end >= pos ?

But module_get_kallsym() returned -1 when pos_mod_end was set to pos.

> 
> 
> > -	}
> >
> > -	if (!get_ksymbol_mod(iter)) {
> > -		if (!get_ksymbol_ftrace_mod(iter))
> > -			return get_ksymbol_bpf(iter);
> > -	}
> > +	if ((!iter->pos_ftrace_mod_end || iter->pos_ftrace_mod_end >
> pos) &&
> > +	    get_ksymbol_ftrace_mod(iter))
> 
> same here? iter->pos_ftrace_mod_end >= pos
> 
> jirka
> 
> > +		return 1;
> >
> > -	return 1;
> > +	return get_ksymbol_bpf(iter);
> >  }
> >
> >  /* Returns false if pos at or past end of file. */
> > --
> > 1.9.1
> >

* RE: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-09 17:07   ` Arnaldo Carvalho de Melo
@ 2018-05-10 19:08     ` Hunter, Adrian
  2018-05-10 20:15       ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 38+ messages in thread
From: Hunter, Adrian @ 2018-05-10 19:08 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

> -----Original Message-----
> From: Arnaldo Carvalho de Melo [mailto:acme@kernel.org]
> Sent: Wednesday, May 9, 2018 8:07 PM
> To: Hunter, Adrian <adrian.hunter@intel.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar
> <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Andy
> Lutomirski <luto@kernel.org>; H. Peter Anvin <hpa@zytor.com>; Andi Kleen
> <ak@linux.intel.com>; Alexander Shishkin
> <alexander.shishkin@linux.intel.com>; Dave Hansen
> <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; Jiri Olsa
> <jolsa@redhat.com>; linux-kernel@vger.kernel.org; x86@kernel.org
> Subject: Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for
> x86_64 KPTI entry trampolines
> 
> Em Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter escreveu:
> > On x86_64 the KPTI entry trampolines are not in the kernel map created
> > by perf tools. That results in the addresses having no symbols and
> > prevents annotation. It also causes Intel PT to have decoding errors
> > at the trampoline addresses. Workaround that by creating maps for the
> trampolines.
> > At present the kernel does not export information revealing where the
> > trampolines are. Until that happens, the addresses are hardcoded.
> >
> > Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> > ---
> >  tools/perf/util/machine.c | 104
> ++++++++++++++++++++++++++++++++++++++++++++++
> >  tools/perf/util/machine.h |   3 ++
> >  tools/perf/util/symbol.c  |  12 +++---
> >  3 files changed, 114 insertions(+), 5 deletions(-)
> >
> > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > index 22047ff3cf2a..1bf15aa0b099 100644
> > --- a/tools/perf/util/machine.c
> > +++ b/tools/perf/util/machine.c
> > @@ -851,6 +851,110 @@ static int
> machine__get_running_kernel_start(struct machine *machine,
> >  	return 0;
> >  }
> >
> > +struct special_kernal_map {
> 
> s/kernal/kernel/
> 
> And "special"?

I have added this comment:

/* Kernel-space maps that are not the main kernel map nor a module map */

And I have fixed the "kernal" typo, and changed machine__is() as requested.

Revised patch set is here:

	http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti

which is the perf-tools-kpti branch of:

	git://git.infradead.org/users/ahunter/linux-perf.git

Let me know if you want me to post the workaround patches separately,
otherwise I will wait a bit before sending the patches again.

> 
> > +	u64 start;
> > +	u64 end;
> > +	u64 pgoff;
> > +};
> > +
> > +static int machine__create_special_kernel_map(struct machine
> *machine,
> > +					      struct dso *kernel,
> > +					      struct special_kernal_map *sm) {
> > +	struct kmap *kmap;
> > +	struct map *map;
> > +
> > +	map = map__new2(sm->start, kernel);
> > +	if (!map)
> > +		return -1;
> > +
> > +	map->end   = sm->end;
> > +	map->pgoff = sm->pgoff;
> > +
> > +	kmap = map__kmap(map);
> > +
> > +	kmap->kmaps = &machine->kmaps;
> > +
> > +	map_groups__insert(&machine->kmaps, map);
> > +
> > +	pr_debug2("Added special kernel map %" PRIx64 "-%" PRIx64 "\n",
> > +		  map->start, map->end);
> > +
> > +	map__put(map);
> > +
> > +	return 0;
> > +}
> > +
> > +static u64 find_entry_trampoline(struct dso *dso) {
> > +	struct {
> > +		const char *name;
> > +		u64 addr;
> > +	} syms[] = {
> > +		/* Duplicates are removed so lookup all aliases */
> > +		{"_entry_trampoline", 0},
> > +		{"__entry_trampoline_start", 0},
> > +		{"entry_SYSCALL_64_trampoline", 0},
> 
> We've been using named initializers consistently, so please change this
> to:
> 
> 	struct {
> 		const char *name;
> 		u64	   addr;
> 	} syms[] = {
> 		{ .name = "_entry_trampoline", },
> 		{ .name = "__entry_trampoline_start", },
> 		{ .name = "entry_SYSCALL_64_trampoline", },
> 	},
> 
> Also, why do you have to look up all of them only to then use just the
> first one found? I.e. you say they are aliases, so why not return the
> first symbol found? The above would be reduced to:
> 
> 	const char *syms[] = {
> 		"_entry_trampoline",
> 		"__entry_trampoline_start",
> 		"entry_SYSCALL_64_trampoline",
> 	},
> 
> And then:
> 
> 	struct symbol *sym = dso__first_symbol(dso);
> 	unsigned int i;
> 
> 	for (; sym; sym = dso__next_symbol(sym)) {
> 		if (sym->binding != STB_GLOBAL)
> 			continue;
> 		for (i = 0; i < ARRAY_SIZE(syms); i++) {
> 			if (!strcmp(sym->name, syms[i].name))
> 				return sym->start;
> 		}
> 	}
> 
> 	return 0;
> 
> > +	};
> > +	struct symbol *sym = dso__first_symbol(dso);
> > +	unsigned int i;
> > +
> > +	for (; sym; sym = dso__next_symbol(sym)) {
> > +		if (sym->binding != STB_GLOBAL)
> > +			continue;
> > +		for (i = 0; i < ARRAY_SIZE(syms); i++) {
> > +			if (!strcmp(sym->name, syms[i].name))
> > +				syms[i].addr = sym->start;
> > +		}
> > +	}
> > +
> > +	for (i = 0; i < ARRAY_SIZE(syms); i++) {
> > +		if (syms[i].addr)
> > +			return syms[i].addr;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * These values can be used for kernels that do not have symbols for
> > +the entry
> > + * trampolines in kallsyms.
> > + */
> > +#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
> > +#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
> > +#define X86_64_ENTRY_TRAMPOLINE		0x6000
> > +
> > +/* Map x86_64 KPTI entry trampolines */
> > +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> > +					  struct dso *kernel)
> > +{
> > +	u64 pgoff = find_entry_trampoline(kernel);
> > +	int nr_cpus_avail = 0, cpu;
> > +
> > +	if (!pgoff)
> > +		return 0;
> > +
> > +	if (machine->env)
> > +		nr_cpus_avail = machine->env->nr_cpus_avail;
> > +
> > +	/* Add a 1 page map for each CPU's entry trampoline */
> > +	for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
> > +		u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
> > +			 cpu * X86_64_CPU_ENTRY_AREA_SIZE +
> > +			 X86_64_ENTRY_TRAMPOLINE;
> > +		struct special_kernal_map sm = {
> > +			.start = va,
> > +			.end   = va + page_size,
> > +			.pgoff = pgoff,
> > +		};
> > +
> > +		if (machine__create_special_kernel_map(machine, kernel, &sm) < 0)
> > +			return -1;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static int
> >  __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> >  {
> > diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> > index b31d33b5aa2a..6e1c63d3a625 100644
> > --- a/tools/perf/util/machine.h
> > +++ b/tools/perf/util/machine.h
> > @@ -267,4 +267,7 @@ int machine__set_current_tid(struct machine *machine, int cpu, pid_t pid,
> >   */
> >  char *machine__resolve_kernel_addr(void *vmachine, unsigned long long
> > *addrp, char **modp);
> >
> > +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> > +					  struct dso *kernel);
> > +
> >  #endif /* __PERF_MACHINE_H */
> > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
> > index 4a39f4d0a174..c3a1a89a61cb 100644
> > --- a/tools/perf/util/symbol.c
> > +++ b/tools/perf/util/symbol.c
> > @@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map *map)
> >  		goto out;
> >  	}
> >
> > +	if (map->groups && map->groups->machine)
> > +		machine = map->groups->machine;
> > +	else
> > +		machine = NULL;
> > +
> >  	if (dso->kernel) {
> >  		if (dso->kernel == DSO_TYPE_KERNEL)
> >  			ret = dso__load_kernel_sym(dso, map);
> >  		else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
> >  			ret = dso__load_guest_kernel_sym(dso, map);
> >
> > +		if (machine && machine__is(machine, "x86_64"))
> > +			machine__map_x86_64_entry_trampolines(machine, dso);
> >  		goto out;
> >  	}
> >
> > -	if (map->groups && map->groups->machine)
> > -		machine = map->groups->machine;
> > -	else
> > -		machine = NULL;
> > -
> >  	dso->adjust_symbols = 0;
> >
> >  	if (perfmap) {
> > --
> > 1.9.1

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-10 19:08     ` Hunter, Adrian
@ 2018-05-10 20:15       ` Arnaldo Carvalho de Melo
  2018-05-10 20:19         ` Arnaldo Carvalho de Melo
  2018-05-11 11:15         ` Adrian Hunter
  0 siblings, 2 replies; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-10 20:15 UTC (permalink / raw)
  To: Hunter, Adrian
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
> > -----Original Message-----
> > From: Arnaldo Carvalho de Melo [mailto:acme@kernel.org]
> > Sent: Wednesday, May 9, 2018 8:07 PM
> > To: Hunter, Adrian <adrian.hunter@intel.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar
> > <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Andy
> > Lutomirski <luto@kernel.org>; H. Peter Anvin <hpa@zytor.com>; Andi Kleen
> > <ak@linux.intel.com>; Alexander Shishkin
> > <alexander.shishkin@linux.intel.com>; Dave Hansen
> > <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; Jiri Olsa
> > <jolsa@redhat.com>; linux-kernel@vger.kernel.org; x86@kernel.org
> > Subject: Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for
> > x86_64 KPTI entry trampolines
> > 
> > Em Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter escreveu:
> > > On x86_64 the KPTI entry trampolines are not in the kernel map created
> > > by perf tools. That results in the addresses having no symbols and
> > > prevents annotation. It also causes Intel PT to have decoding errors
> > > at the trampoline addresses. Workaround that by creating maps for the
> > trampolines.
> > > At present the kernel does not export information revealing where the
> > > trampolines are. Until that happens, the addresses are hardcoded.
> > >
> > > Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> > > ---
> > >  tools/perf/util/machine.c | 104
> > ++++++++++++++++++++++++++++++++++++++++++++++
> > >  tools/perf/util/machine.h |   3 ++
> > >  tools/perf/util/symbol.c  |  12 +++---
> > >  3 files changed, 114 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > > index 22047ff3cf2a..1bf15aa0b099 100644
> > > --- a/tools/perf/util/machine.c
> > > +++ b/tools/perf/util/machine.c
> > > @@ -851,6 +851,110 @@ static int
> > machine__get_running_kernel_start(struct machine *machine,
> > >  	return 0;
> > >  }
> > >
> > > +struct special_kernal_map {
> > 
> > s/kernal/kernel/
> > 
> > And "special"?
> 
> I have added comment:
> 
> /* Kernel-space maps that are not the main kernel map nor a module map */

Perhaps:

  /* Kernel-space maps for symbols that are outside the main kernel map and module maps */

  struct extra_kernel_map;

What do you think?

> And fixed kernal, and changed machine__is()

Thanks
 
> Revised patch set is here:
> 
> 	http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti
> 
> which is the perf-tools-kpti branch of:
> 
> 	git://git.infradead.org/users/ahunter/linux-perf.git
> 
> Let me know if you want me to post the workaround patches separately,
> otherwise I will wait a bit before sending the patches again.

I'll see if I went thru all of the patches already...

- Arnaldo
 
> > 
> > > +	u64 start;
> > > +	u64 end;
> > > +	u64 pgoff;
> > > +};
> > > +
> > > +static int machine__create_special_kernel_map(struct machine *machine,
> > > +					      struct dso *kernel,
> > > +					      struct special_kernal_map *sm)
> > > +{
> > > +	struct kmap *kmap;
> > > +	struct map *map;
> > > +
> > > +	map = map__new2(sm->start, kernel);
> > > +	if (!map)
> > > +		return -1;
> > > +
> > > +	map->end   = sm->end;
> > > +	map->pgoff = sm->pgoff;
> > > +
> > > +	kmap = map__kmap(map);
> > > +
> > > +	kmap->kmaps = &machine->kmaps;
> > > +
> > > +	map_groups__insert(&machine->kmaps, map);
> > > +
> > > +	pr_debug2("Added special kernel map %" PRIx64 "-%" PRIx64 "\n",
> > > +		  map->start, map->end);
> > > +
> > > +	map__put(map);
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +static u64 find_entry_trampoline(struct dso *dso)
> > > +{
> > > +	struct {
> > > +		const char *name;
> > > +		u64 addr;
> > > +	} syms[] = {
> > > +		/* Duplicates are removed so lookup all aliases */
> > > +		{"_entry_trampoline", 0},
> > > +		{"__entry_trampoline_start", 0},
> > > +		{"entry_SYSCALL_64_trampoline", 0},
> > 
> > We've been using named initializers consistently, so please change this
> > to:
> > 
> > 	struct {
> > 		const char *name;
> > 		u64	   addr;
> > 	} syms[] = {
> > 		{ .name = "_entry_trampoline", },
> > 		{ .name = "__entry_trampoline_start", },
> > 		{ .name = "entry_SYSCALL_64_trampoline", },
> > 	},
> > 
> > Also, why do you have to look up all of them only to then use just the
> > first one found? I.e. you say they are aliases, so why not return the
> > first symbol found? The above would be reduced to:
> > 
> > 	const char *syms[] = {
> > 		"_entry_trampoline",
> > 		"__entry_trampoline_start",
> > 		"entry_SYSCALL_64_trampoline",
> > 	},
> > 
> > And then:
> > 
> > 	struct symbol *sym = dso__first_symbol(dso);
> > 	unsigned int i;
> > 
> > 	for (; sym; sym = dso__next_symbol(sym)) {
> > 		if (sym->binding != STB_GLOBAL)
> > 			continue;
> > 		for (i = 0; i < ARRAY_SIZE(syms); i++) {
> > 			if (!strcmp(sym->name, syms[i].name))
> > 				return sym->start;
> > 		}
> > 	}
> > 
> > 	return 0;
> > 
> > > +	};
> > > +	struct symbol *sym = dso__first_symbol(dso);
> > > +	unsigned int i;
> > > +
> > > +	for (; sym; sym = dso__next_symbol(sym)) {
> > > +		if (sym->binding != STB_GLOBAL)
> > > +			continue;
> > > +		for (i = 0; i < ARRAY_SIZE(syms); i++) {
> > > +			if (!strcmp(sym->name, syms[i].name))
> > > +				syms[i].addr = sym->start;
> > > +		}
> > > +	}
> > > +
> > > +	for (i = 0; i < ARRAY_SIZE(syms); i++) {
> > > +		if (syms[i].addr)
> > > +			return syms[i].addr;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > > +/*
> > > + * These values can be used for kernels that do not have symbols for
> > > +the entry
> > > + * trampolines in kallsyms.
> > > + */
> > > +#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
> > > +#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
> > > +#define X86_64_ENTRY_TRAMPOLINE		0x6000
> > > +
> > > +/* Map x86_64 KPTI entry trampolines */
> > > +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> > > +					  struct dso *kernel)
> > > +{
> > > +	u64 pgoff = find_entry_trampoline(kernel);
> > > +	int nr_cpus_avail = 0, cpu;
> > > +
> > > +	if (!pgoff)
> > > +		return 0;
> > > +
> > > +	if (machine->env)
> > > +		nr_cpus_avail = machine->env->nr_cpus_avail;
> > > +
> > > +	/* Add a 1 page map for each CPU's entry trampoline */
> > > +	for (cpu = 0; cpu < nr_cpus_avail; cpu++) {
> > > +		u64 va = X86_64_CPU_ENTRY_AREA_PER_CPU +
> > > +			 cpu * X86_64_CPU_ENTRY_AREA_SIZE +
> > > +			 X86_64_ENTRY_TRAMPOLINE;
> > > +		struct special_kernal_map sm = {
> > > +			.start = va,
> > > +			.end   = va + page_size,
> > > +			.pgoff = pgoff,
> > > +		};
> > > +
> > > +		if (machine__create_special_kernel_map(machine, kernel, &sm) < 0)
> > > +			return -1;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static int
> > >  __machine__create_kernel_maps(struct machine *machine, struct dso *kernel)
> > >  {
> > > diff --git a/tools/perf/util/machine.h b/tools/perf/util/machine.h
> > > index b31d33b5aa2a..6e1c63d3a625 100644
> > > --- a/tools/perf/util/machine.h
> > > +++ b/tools/perf/util/machine.h
> > > @@ -267,4 +267,7 @@ int machine__set_current_tid(struct machine
> > *machine, int cpu, pid_t pid,
> > >   */
> > >  char *machine__resolve_kernel_addr(void *vmachine, unsigned long long
> > > *addrp, char **modp);
> > >
> > > +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> > > +					  struct dso *kernel);
> > > +
> > >  #endif /* __PERF_MACHINE_H */
> > > diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c index
> > > 4a39f4d0a174..c3a1a89a61cb 100644
> > > --- a/tools/perf/util/symbol.c
> > > +++ b/tools/perf/util/symbol.c
> > > @@ -1490,20 +1490,22 @@ int dso__load(struct dso *dso, struct map
> > *map)
> > >  		goto out;
> > >  	}
> > >
> > > +	if (map->groups && map->groups->machine)
> > > +		machine = map->groups->machine;
> > > +	else
> > > +		machine = NULL;
> > > +
> > >  	if (dso->kernel) {
> > >  		if (dso->kernel == DSO_TYPE_KERNEL)
> > >  			ret = dso__load_kernel_sym(dso, map);
> > >  		else if (dso->kernel == DSO_TYPE_GUEST_KERNEL)
> > >  			ret = dso__load_guest_kernel_sym(dso, map);
> > >
> > > +		if (machine && machine__is(machine, "x86_64"))
> > > +			machine__map_x86_64_entry_trampolines(machine, dso);
> > >  		goto out;
> > >  	}
> > >
> > > -	if (map->groups && map->groups->machine)
> > > -		machine = map->groups->machine;
> > > -	else
> > > -		machine = NULL;
> > > -
> > >  	dso->adjust_symbols = 0;
> > >
> > >  	if (perfmap) {
> > > --
> > > 1.9.1


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-10 20:15       ` Arnaldo Carvalho de Melo
@ 2018-05-10 20:19         ` Arnaldo Carvalho de Melo
  2018-05-10 20:47           ` Arnaldo Carvalho de Melo
  2018-05-11 11:15         ` Adrian Hunter
  1 sibling, 1 reply; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-10 20:19 UTC (permalink / raw)
  To: Hunter, Adrian
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Thu, May 10, 2018 at 05:15:42PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
> > > -----Original Message-----
> > > From: Arnaldo Carvalho de Melo [mailto:acme@kernel.org]
> > > Sent: Wednesday, May 9, 2018 8:07 PM
> > > To: Hunter, Adrian <adrian.hunter@intel.com>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar
> > > <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Andy
> > > Lutomirski <luto@kernel.org>; H. Peter Anvin <hpa@zytor.com>; Andi Kleen
> > > <ak@linux.intel.com>; Alexander Shishkin
> > > <alexander.shishkin@linux.intel.com>; Dave Hansen
> > > <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; Jiri Olsa
> > > <jolsa@redhat.com>; linux-kernel@vger.kernel.org; x86@kernel.org
> > > Subject: Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for
> > > x86_64 KPTI entry trampolines
> > > 
> > > Em Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter escreveu:
> > > > On x86_64 the KPTI entry trampolines are not in the kernel map created
> > > > by perf tools. That results in the addresses having no symbols and
> > > > prevents annotation. It also causes Intel PT to have decoding errors
> > > > at the trampoline addresses. Workaround that by creating maps for the
> > > trampolines.
> > > > At present the kernel does not export information revealing where the
> > > > trampolines are. Until that happens, the addresses are hardcoded.
> > > >
> > > > Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> > > > ---
> > > >  tools/perf/util/machine.c | 104
> > > ++++++++++++++++++++++++++++++++++++++++++++++
> > > >  tools/perf/util/machine.h |   3 ++
> > > >  tools/perf/util/symbol.c  |  12 +++---
> > > >  3 files changed, 114 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
> > > > index 22047ff3cf2a..1bf15aa0b099 100644
> > > > --- a/tools/perf/util/machine.c
> > > > +++ b/tools/perf/util/machine.c
> > > > @@ -851,6 +851,110 @@ static int machine__get_running_kernel_start(struct machine *machine,
> > > >  	return 0;
> > > >  }
> > > >
> > > > +struct special_kernal_map {
> > > 
> > > s/kernal/kernel/
> > > 
> > > And "special"?
> > 
> > I have added comment:
> > 
> > /* Kernel-space maps that are not the main kernel map nor a module map */
> 
> Perhaps:
> 
>   /* Kernel-space maps for symbols that are outside the main kernel map and module maps */
> 
>   struct extra_kernel_map;
> 
> What do you think?
> 
> > And fixed kernal, and changed machine__is()
> 
> Thanks
>  
> > Revised patch set is here:
> > 
> > 	http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti
> > 
> > which is the perf-tools-kpti branch of:
> > 
> > 	git://git.infradead.org/users/ahunter/linux-perf.git
> > 
> > Let me know if you want me to post the workaround patches separately,
> > otherwise I will wait a bit before sending the patches again.
> 
> I'll see if I went thru all of the patches already...

So I looked at the patches posted and one comment is about the terse
commit logs for some of the kcore_copy patches, for instance:

--------------------
 In preparation to add more program headers, get rid of kernel_map and
modules_map.
--------------------

Can't this be made a bit more verbose? Lemme re-read the patch...

- Arnaldo


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-10 20:19         ` Arnaldo Carvalho de Melo
@ 2018-05-10 20:47           ` Arnaldo Carvalho de Melo
  2018-05-11 11:18             ` Adrian Hunter
  0 siblings, 1 reply; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-10 20:47 UTC (permalink / raw)
  To: Hunter, Adrian
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Thu, May 10, 2018 at 05:19:22PM -0300, Arnaldo Carvalho de Melo escreveu:
> Em Thu, May 10, 2018 at 05:15:42PM -0300, Arnaldo Carvalho de Melo escreveu:
> > Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
> > > Let me know if you want me to post the workaround patches separately,
> > > otherwise I will wait a bit before sending the patches again.

> > I'll see if I went thru all of the patches already...
 
> So I looked at the patches posted and one comment is about the terse
> commit logs for some of the kcore_copy patches, for instance:
 
> --------------------
>  In preparation to add more program headers, get rid of kernel_map and
> modules_map.
> --------------------
 
> Can't this be made a bit more verbose? Lemme re-read the patch...

So you had just one pointer each to the kernel map and the modules map,
and this is replaced by kcore_copy__map(), which, instead of populating
those fields that are being removed:

-       struct phdr_data kernel_map;
-       struct phdr_data modules_map;

will allocate and add "struct phdr_data" instances to the
kcore_copy_info->phdrs list. So, to follow the convention used
elsewhere in tools/perf/, I propose that you rename kcore_copy__map() to

  kcore_copy_info__addnew(kci, fields)

I would do it as:

struct phdr_data *phdr_data__new(fields)
{
	return  zalloc() + init fields;
}

struct phdr_data *kcore_copy_info__addnew(kci, fields)
{
	struct phdr_data *pd = phdr_data__new(fields);

	if (pd)
		list_add(&pd->list, &kci->phdrs)
}

Also please rename pd->list to pd->node, to clarify that it is a node in
some list, not a list.

The commit log list then could reflect that somehow, with something
around:

----------------------

Move ->kernel_map and ->modules_map to newly allocated entries in the
->phdrs list.

----------------------

wdyt?

- Arnaldo


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-10 20:15       ` Arnaldo Carvalho de Melo
  2018-05-10 20:19         ` Arnaldo Carvalho de Melo
@ 2018-05-11 11:15         ` Adrian Hunter
  1 sibling, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-11 11:15 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

On 10/05/18 23:15, Arnaldo Carvalho de Melo wrote:
> Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
>>> -----Original Message-----
>>> From: Arnaldo Carvalho de Melo [mailto:acme@kernel.org]
>>> Sent: Wednesday, May 9, 2018 8:07 PM
>>> To: Hunter, Adrian <adrian.hunter@intel.com>
>>> Cc: Thomas Gleixner <tglx@linutronix.de>; Ingo Molnar
>>> <mingo@redhat.com>; Peter Zijlstra <peterz@infradead.org>; Andy
>>> Lutomirski <luto@kernel.org>; H. Peter Anvin <hpa@zytor.com>; Andi Kleen
>>> <ak@linux.intel.com>; Alexander Shishkin
>>> <alexander.shishkin@linux.intel.com>; Dave Hansen
>>> <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; Jiri Olsa
>>> <jolsa@redhat.com>; linux-kernel@vger.kernel.org; x86@kernel.org
>>> Subject: Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for
>>> x86_64 KPTI entry trampolines
>>>
>>> Em Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter escreveu:
>>>> On x86_64 the KPTI entry trampolines are not in the kernel map created
>>>> by perf tools. That results in the addresses having no symbols and
>>>> prevents annotation. It also causes Intel PT to have decoding errors
>>>> at the trampoline addresses. Workaround that by creating maps for the
>>> trampolines.
>>>> At present the kernel does not export information revealing where the
>>>> trampolines are. Until that happens, the addresses are hardcoded.
>>>>
>>>> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
>>>> ---
>>>>  tools/perf/util/machine.c | 104
>>> ++++++++++++++++++++++++++++++++++++++++++++++
>>>>  tools/perf/util/machine.h |   3 ++
>>>>  tools/perf/util/symbol.c  |  12 +++---
>>>>  3 files changed, 114 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
>>>> index 22047ff3cf2a..1bf15aa0b099 100644
>>>> --- a/tools/perf/util/machine.c
>>>> +++ b/tools/perf/util/machine.c
>>>> @@ -851,6 +851,110 @@ static int machine__get_running_kernel_start(struct machine *machine,
>>>>  	return 0;
>>>>  }
>>>>
>>>> +struct special_kernal_map {
>>>
>>> s/kernal/kernel/
>>>
>>> And "special"?
>>
>> I have added comment:
>>
>> /* Kernel-space maps that are not the main kernel map nor a module map */
> 
> Perhaps:
> 
>   /* Kernel-space maps for symbols that are outside the main kernel map and module maps */
> 
>   struct extra_kernel_map;
> 
> What do you think?

I have done the re-naming and comment change and pushed the changes to the
same branch.


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-10 20:47           ` Arnaldo Carvalho de Melo
@ 2018-05-11 11:18             ` Adrian Hunter
  2018-05-11 14:45               ` Arnaldo Carvalho de Melo
  0 siblings, 1 reply; 38+ messages in thread
From: Adrian Hunter @ 2018-05-11 11:18 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

On 10/05/18 23:47, Arnaldo Carvalho de Melo wrote:
> Em Thu, May 10, 2018 at 05:19:22PM -0300, Arnaldo Carvalho de Melo escreveu:
>> Em Thu, May 10, 2018 at 05:15:42PM -0300, Arnaldo Carvalho de Melo escreveu:
>>> Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
>>>> Let me know if you want me to post the workaround patches separately,
>>>> otherwise I will wait a bit before sending the patches again.
> 
>>> I'll see if I went thru all of the patches already...
>  
>> So I looked at the patches posted and one comment is about the terse
>> commit logs for some of the kcore_copy patches, for instance:
>  
>> --------------------
>>  In preparation to add more program headers, get rid of kernel_map and
>> modules_map.
>> --------------------
>  
>> Can't this be made a bit more verbose? Lemme re-read the patch...
> 
> So you had just one pointer each to the kernel map and the modules map,
> and this is replaced by kcore_copy__map(), which, instead of populating
> those fields that are being removed:
> 
> -       struct phdr_data kernel_map;
> -       struct phdr_data modules_map;
> 
> will allocate and add "struct phdr_data" instances to the
> kcore_copy_info->phdrs list. So, to follow the convention used
> elsewhere in tools/perf/, I propose that you rename kcore_copy__map() to
> 
>   kcore_copy_info__addnew(kci, fields)
> 
> I would do it as:
> 
> struct phdr_data *phdr_data__new(fields)
> {
> 	return  zalloc() + init fields;
> }
> 
> struct phdr_data *kcore_copy_info__addnew(kci, fields)
> {
> 	struct phdr_data *pd = phdr_data__new(fields);
> 
> 	if (pd)
> 		list_add(&pd->list, &kci->phdrs)
> }
> 
> Also please rename pd->list to pd->node, to clarify that it is a node in
> some list, not a list.
> 
> The commit log list then could reflect that somehow, with something
> around:
> 
> ----------------------
> 
> Move ->kernel_map and ->modules_map to newly allocated entries in the
> ->phdrs list.
> 
> ----------------------
> 
> wdyt?

I have done the changes but still have kcore_copy__map() calling
kcore_copy_info__addnew().  The changes have been pushed to the same branch.


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-11 11:18             ` Adrian Hunter
@ 2018-05-11 14:45               ` Arnaldo Carvalho de Melo
  2018-05-14 13:02                 ` Adrian Hunter
  0 siblings, 1 reply; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-11 14:45 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

Em Fri, May 11, 2018 at 02:18:01PM +0300, Adrian Hunter escreveu:
> On 10/05/18 23:47, Arnaldo Carvalho de Melo wrote:
> > Em Thu, May 10, 2018 at 05:19:22PM -0300, Arnaldo Carvalho de Melo escreveu:
> >> Em Thu, May 10, 2018 at 05:15:42PM -0300, Arnaldo Carvalho de Melo escreveu:
> >>> Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
> >>>> Let me know if you want me to post the workaround patches separately,
> >>>> otherwise I will wait a bit before sending the patches again.
> > 
> >>> I'll see if I went thru all of the patches already...
> >  
> >> So I looked at the patches posted and one comment is about the terse
> >> commit logs for some of the kcore_copy patches, for instance:
> >  
> >> --------------------
> >>  In preparation to add more program headers, get rid of kernel_map and
> >> modules_map.
> >> --------------------
> >  
> >> Can't this be made a bit more verbose? Lemme re-read the patch...
> > 
> > So you had just one pointer each to the kernel map and the modules map,
> > and this is replaced by kcore_copy__map(), which, instead of populating
> > those fields that are being removed:
> > 
> > -       struct phdr_data kernel_map;
> > -       struct phdr_data modules_map;
> > 
> > will allocate and add "struct phdr_data" instances to the
> > kcore_copy_info->phdrs list. So, to follow the convention used
> > elsewhere in tools/perf/, I propose that you rename kcore_copy__map() to
> > 
> >   kcore_copy_info__addnew(kci, fields)
> > 
> > I would do it as:
> > 
> > struct phdr_data *phdr_data__new(fields)
> > {
> > 	return  zalloc() + init fields;
> > }
> > 
> > struct phdr_data *kcore_copy_info__addnew(kci, fields)
> > {
> > 	struct phdr_data *pd = phdr_data__new(fields);
> > 
> > 	if (pd)
> > 		list_add(&pd->list, &kci->phdrs)
> > }
> > 
> > Also please rename pd->list to pd->node, to clarify that it is a node in
> > some list, not a list.
> > 
> > The commit log list then could reflect that somehow, with something
> > around:
> > 
> > ----------------------
> > 
> > Move ->kernel_map and ->modules_map to newly allocated entries in the
> > ->phdrs list.
> > 
> > ----------------------
> > 
> > wdyt?
> 
> I have done the changes but still have kcore_copy__map() calling
> kcore_copy_info__addnew().  The changes have been pushed to the same branch.

Thanks! I'll process a perf/urgent round and then go over your updated
tree to get it tested on a skylake machine and merged to perf/core,

- arnaldo


* Re: [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines
  2018-05-09 11:43 ` [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines Adrian Hunter
@ 2018-05-14  8:32   ` Ingo Molnar
  0 siblings, 0 replies; 38+ messages in thread
From: Ingo Molnar @ 2018-05-14  8:32 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, Jiri Olsa,
	linux-kernel, x86


* Adrian Hunter <adrian.hunter@intel.com> wrote:

> Create maps for x86_64 KPTI entry trampolines, based on symbols found in
> kallsyms. It is also necessary to keep track of whether the trampolines
> have been mapped particularly when the kernel dso is kcore.
> 
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  tools/perf/util/machine.c | 138 ++++++++++++++++++++++++++++++++++++++++++++--
>  tools/perf/util/machine.h |   1 +
>  tools/perf/util/symbol.c  |  17 ++++++
>  3 files changed, 150 insertions(+), 6 deletions(-)

Minor detail, please call it 'x86 PTI'.

Thanks,

	Ingo


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-11 14:45               ` Arnaldo Carvalho de Melo
@ 2018-05-14 13:02                 ` Adrian Hunter
  0 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-14 13:02 UTC (permalink / raw)
  To: Arnaldo Carvalho de Melo
  Cc: Thomas Gleixner, Ingo Molnar, Peter Zijlstra, Andy Lutomirski,
	H. Peter Anvin, Andi Kleen, Alexander Shishkin, Dave Hansen,
	Joerg Roedel, Jiri Olsa, linux-kernel, x86

On 11/05/18 17:45, Arnaldo Carvalho de Melo wrote:
> Em Fri, May 11, 2018 at 02:18:01PM +0300, Adrian Hunter escreveu:
>> On 10/05/18 23:47, Arnaldo Carvalho de Melo wrote:
>>> Em Thu, May 10, 2018 at 05:19:22PM -0300, Arnaldo Carvalho de Melo escreveu:
>>>> Em Thu, May 10, 2018 at 05:15:42PM -0300, Arnaldo Carvalho de Melo escreveu:
>>>>> Em Thu, May 10, 2018 at 07:08:37PM +0000, Hunter, Adrian escreveu:
>>>>>> Let me know if you want me to post the workaround patches separately,
>>>>>> otherwise I will wait a bit before sending the patches again.
>>>
>>>>> I'll see if I went thru all of the patches already...
>>>  
>>>> So I looked at the patches posted and one comment is about the terse
>>>> commit logs for some of the kcore_copy patches, for instance:
>>>  
>>>> --------------------
>>>>  In preparation to add more program headers, get rid of kernel_map and
>>>> modules_map.
>>>> --------------------
>>>  
>>>> Can't this be made a bit more verbose? Lemme re-read the patch...
>>>
>>> So you had just one pointer each to the kernel map and the modules map,
>>> and this is replaced by kcore_copy__map(), which, instead of populating
>>> those fields that are being removed:
>>>
>>> -       struct phdr_data kernel_map;
>>> -       struct phdr_data modules_map;
>>>
>>> Will allocate and add "struct phdr_data" instances to the
>>> kcore_copy_info->phdrs list, so I propose, to follow the convention used
>>> elsewhere in tools/perf/, that you rename kcore_copy__map() to
>>>
>>>   kcore_copy_info__addnew(kci, fields)
>>>
>>> I would do it as:
>>>
>>> struct phdr_data *phdr_data__new(fields)
>>> {
>>> 	return zalloc() + init fields;
>>> }
>>>
>>> struct phdr_data *kcore_copy_info__addnew(kci, fields)
>>> {
>>> 	struct phdr_data *pd = phdr_data__new(fields);
>>>
>>> 	if (pd)
>>> 		list_add(&pd->list, &kci->phdrs);
>>>
>>> 	return pd;
>>> }
>>>
>>> Also please rename pd->list to pd->node, to clarify that it is a node in
>>> some list, not a list.
>>>
>>> The commit log list then could reflect that somehow, with something
>>> around:
>>>
>>> ----------------------
>>>
>>> Move ->kernel_map and ->modules_map to newly allocated entries in the
>>> ->phdrs list.
>>>
>>> ----------------------
>>>
>>> wdyt?
>>
>> I have done the changes but still have kcore_copy__map() calling
>> kcore_copy_info__addnew().  The changes have been pushed to the same branch.
> 
> Thanks! I'll process a perf/urgent round and then go over your updated
> tree to get it tested on a Skylake machine and merged to perf/core.

I changed some terminology 'x86_64 KPTI' -> 'x86 PTI' as requested by Ingo,
and pushed to a new branch perf-tools-kpti-v0.

	http://git.infradead.org/users/ahunter/linux-perf.git/shortlog/refs/heads/perf-tools-kpti-v0


* Re: [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
  2018-05-10 17:02     ` Hunter, Adrian
@ 2018-05-14 17:55       ` Arnaldo Carvalho de Melo
  0 siblings, 0 replies; 38+ messages in thread
From: Arnaldo Carvalho de Melo @ 2018-05-14 17:55 UTC (permalink / raw)
  To: Hunter, Adrian
  Cc: Jiri Olsa, Thomas Gleixner, Ingo Molnar, Peter Zijlstra,
	Andy Lutomirski, H. Peter Anvin, Andi Kleen, Alexander Shishkin,
	Dave Hansen, Joerg Roedel, linux-kernel, x86

Em Thu, May 10, 2018 at 05:02:18PM +0000, Hunter, Adrian escreveu:
> > -----Original Message-----
> > From: Jiri Olsa [mailto:jolsa@redhat.com]
> > Sent: Thursday, May 10, 2018 4:02 PM
> > To: Hunter, Adrian <adrian.hunter@intel.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>; Arnaldo Carvalho de Melo
> > <acme@kernel.org>; Ingo Molnar <mingo@redhat.com>; Peter Zijlstra
> > <peterz@infradead.org>; Andy Lutomirski <luto@kernel.org>; H. Peter
> > Anvin <hpa@zytor.com>; Andi Kleen <ak@linux.intel.com>; Alexander
> > Shishkin <alexander.shishkin@linux.intel.com>; Dave Hansen
> > <dave.hansen@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; linux-
> > kernel@vger.kernel.org; x86@kernel.org
> > Subject: Re: [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod()
> > 
> > On Wed, May 09, 2018 at 02:43:30PM +0300, Adrian Hunter wrote:
> > > Simplify logic in update_iter_mod().
> > >
> > > Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> > > ---
> > >  kernel/kallsyms.c | 20 ++++++--------------
> > >  1 file changed, 6 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c index
> > > a23e21ada81b..eda4b0222dab 100644
> > > --- a/kernel/kallsyms.c
> > > +++ b/kernel/kallsyms.c
> > > @@ -510,23 +510,15 @@ static int update_iter_mod(struct kallsym_iter
> > > *iter, loff_t pos)  {
> > >  	iter->pos = pos;
> > >
> > > -	if (iter->pos_ftrace_mod_end > 0 &&
> > > -	    iter->pos_ftrace_mod_end < iter->pos)
> > > -		return get_ksymbol_bpf(iter);
> > > -
> > > -	if (iter->pos_mod_end > 0 &&
> > > -	    iter->pos_mod_end < iter->pos) {
> > > -		if (!get_ksymbol_ftrace_mod(iter))
> > > -			return get_ksymbol_bpf(iter);
> > > +	if ((!iter->pos_mod_end || iter->pos_mod_end > pos) &&
> > > +	    get_ksymbol_mod(iter))
> > >  		return 1;
> > 
> > hum, should that be iter-> pos_mod_end >= pos ?
> 
> But module_get_kallsym() returned -1 when pos_mod_end was set to pos.

After looking at reset_iter(), etc., i.e. the lifetime of reading
/proc/kallsyms, I _think_ this is ok, but if it works already, why
simplify it? Because it paves the way for later changes to this function
in this patchset? If so, please state that; spelling out why this is
better than before also helps with reviewing, so please expand this
cset comment log.

- Arnaldo
 
> > 
> > 
> > > -	}
> > >
> > > -	if (!get_ksymbol_mod(iter)) {
> > > -		if (!get_ksymbol_ftrace_mod(iter))
> > > -			return get_ksymbol_bpf(iter);
> > > -	}
> > > +	if ((!iter->pos_ftrace_mod_end || iter->pos_ftrace_mod_end >
> > pos) &&
> > > +	    get_ksymbol_ftrace_mod(iter))
> > 
> > same here? iter->pos_ftrace_mod_end >= pos
> > 
> > jirka
> > 
> > > +		return 1;
> > >
> > > -	return 1;
> > > +	return get_ksymbol_bpf(iter);
> > >  }
> > >
> > >  /* Returns false if pos at or past end of file. */
> > > --
> > > 1.9.1
> > >
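
As a toy check of the boundary question above: modelling pos_mod_end and pos_ftrace_mod_end as exclusive ends — consistent with Adrian's note that module_get_kallsym() returned -1 when pos_mod_end was set to pos — the strict '>' picks the right source at each boundary. This is a simplified model only, not the kernel code: the real function also falls through to the next getter when one fails.

```c
#include <assert.h>

/* Toy model of the symbol sources walked by update_iter_mod():
 * module symbols occupy pos [0, mod_end), ftrace module symbols
 * [mod_end, ftrace_mod_end), and bpf symbols from ftrace_mod_end on.
 * An end of 0 means "end not reached yet", as in the patch. */
enum ksym_src { SRC_MOD, SRC_FTRACE_MOD, SRC_BPF };

static enum ksym_src pick_source(long pos, long pos_mod_end,
				 long pos_ftrace_mod_end)
{
	/* Mirrors: (!iter->pos_mod_end || iter->pos_mod_end > pos) */
	if (!pos_mod_end || pos_mod_end > pos)
		return SRC_MOD;
	if (!pos_ftrace_mod_end || pos_ftrace_mod_end > pos)
		return SRC_FTRACE_MOD;
	return SRC_BPF;
}
```

At pos == pos_mod_end the module getter has already returned -1, so the strict comparison correctly moves on; '>=' would retry the exhausted source once more.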


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-09 11:43 ` [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines Adrian Hunter
  2018-05-09 17:07   ` Arnaldo Carvalho de Melo
@ 2018-05-15 10:30   ` Jiri Olsa
  2018-05-15 10:40     ` Adrian Hunter
  1 sibling, 1 reply; 38+ messages in thread
From: Jiri Olsa @ 2018-05-15 10:30 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, linux-kernel, x86

On Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter wrote:

SNIP

> +
> +	for (i = 0; i < ARRAY_SIZE(syms); i++) {
> +		if (syms[i].addr)
> +			return syms[i].addr;
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * These values can be used for kernels that do not have symbols for the entry
> + * trampolines in kallsyms.
> + */
> +#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
> +#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
> +#define X86_64_ENTRY_TRAMPOLINE		0x6000
> +
> +/* Map x86_64 KPTI entry trampolines */
> +int machine__map_x86_64_entry_trampolines(struct machine *machine,
> +					  struct dso *kernel)
> +{

would it make sense to put all this under arch/x86/util/machine.c ?

jirka


* Re: [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines
  2018-05-15 10:30   ` Jiri Olsa
@ 2018-05-15 10:40     ` Adrian Hunter
  0 siblings, 0 replies; 38+ messages in thread
From: Adrian Hunter @ 2018-05-15 10:40 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, linux-kernel, x86

On 15/05/18 13:30, Jiri Olsa wrote:
> On Wed, May 09, 2018 at 02:43:36PM +0300, Adrian Hunter wrote:
> 
> SNIP
> 
>> +
>> +	for (i = 0; i < ARRAY_SIZE(syms); i++) {
>> +		if (syms[i].addr)
>> +			return syms[i].addr;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +/*
>> + * These values can be used for kernels that do not have symbols for the entry
>> + * trampolines in kallsyms.
>> + */
>> +#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
>> +#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
>> +#define X86_64_ENTRY_TRAMPOLINE		0x6000
>> +
>> +/* Map x86_64 KPTI entry trampolines */
>> +int machine__map_x86_64_entry_trampolines(struct machine *machine,
>> +					  struct dso *kernel)
>> +{
> 
> would it make sense to put all this under arch/x86/util/machine.c ?

In this case, machine__map_x86_64_entry_trampolines() is specific to the
arch in the perf.data file, not the arch perf is currently running on.
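
For reference, the constants quoted above combine per CPU into the trampoline's virtual address. A sketch of just that computation, assuming (as the macros and the workaround imply) one fixed-size entry area per CPU with the trampoline at a fixed offset inside it:

```c
#include <assert.h>
#include <stdint.h>

#define X86_64_CPU_ENTRY_AREA_PER_CPU	0xfffffe0000000000ULL
#define X86_64_CPU_ENTRY_AREA_SIZE	0x2c000
#define X86_64_ENTRY_TRAMPOLINE		0x6000

/* Virtual address of the entry trampoline in a given CPU's entry area. */
static uint64_t trampoline_va(unsigned int cpu)
{
	return X86_64_CPU_ENTRY_AREA_PER_CPU +
	       (uint64_t)cpu * X86_64_CPU_ENTRY_AREA_SIZE +
	       X86_64_ENTRY_TRAMPOLINE;
}
```

This is also why the workaround is brittle for future kernels: the base address, area size, and trampoline offset are all hard coded.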


* Re: [PATCH RFC 11/19] perf tools: Synthesize and process mmap events for x86_64 KPTI entry trampolines
  2018-05-09 11:43 ` [PATCH RFC 11/19] perf tools: Synthesize and process mmap events " Adrian Hunter
@ 2018-05-15 10:49   ` Jiri Olsa
  0 siblings, 0 replies; 38+ messages in thread
From: Jiri Olsa @ 2018-05-15 10:49 UTC (permalink / raw)
  To: Adrian Hunter
  Cc: Thomas Gleixner, Arnaldo Carvalho de Melo, Ingo Molnar,
	Peter Zijlstra, Andy Lutomirski, H. Peter Anvin, Andi Kleen,
	Alexander Shishkin, Dave Hansen, Joerg Roedel, linux-kernel, x86

On Wed, May 09, 2018 at 02:43:40PM +0300, Adrian Hunter wrote:
> Like the kernel text, the location of the x86_64 KPTI entry trampolines must
> be recorded in the perf.data file. As is done for the kernel, synthesize an
> mmap event for them, and add processing for it.
> 
> Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
> ---
>  tools/perf/util/event.c   | 90 +++++++++++++++++++++++++++++++++++++++++++++--
>  tools/perf/util/machine.c | 28 +++++++++++++++
>  2 files changed, 115 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
> index aafa9878465f..d810ff8488b1 100644
> --- a/tools/perf/util/event.c
> +++ b/tools/perf/util/event.c
> @@ -888,9 +888,80 @@ int kallsyms__get_function_start(const char *kallsyms_filename,
>  	return 0;
>  }
>  
> -int perf_event__synthesize_kernel_mmap(struct perf_tool *tool,
> -				       perf_event__handler_t process,
> -				       struct machine *machine)
> +#if defined(__x86_64__)

also this could go under arch/x86/

jirka

> +
> +static int perf_event__synthesize_special_kmaps(struct perf_tool *tool,
> +						perf_event__handler_t process,
> +						struct machine *machine)
> +{
> +	int rc = 0;
> +	struct map *pos;
> +	struct map_groups *kmaps = &machine->kmaps;
> +	struct maps *maps = &kmaps->maps;
> +	union perf_event *event = zalloc(sizeof(event->mmap) +
> +					 machine->id_hdr_size);
> +
> +	if (!event) {
> +		pr_debug("Not enough memory synthesizing mmap event "
> +			 "for special kernel maps\n");
> +		return -1;
> +	}
> +

SNIP


* [tip:perf/core] perf tools: Use the "_stext" symbol to identify the kernel map when loading kcore
  2018-05-09 11:43 ` [PATCH RFC 05/19] perf tools: Use the _stext symbol to identify the kernel map when loading kcore Adrian Hunter
@ 2018-05-16 18:04   ` tip-bot for Adrian Hunter
  0 siblings, 0 replies; 38+ messages in thread
From: tip-bot for Adrian Hunter @ 2018-05-16 18:04 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jolsa, dave.hansen, ak, hpa, mingo, linux-kernel, peterz, tglx,
	alexander.shishkin, adrian.hunter, joro, acme, luto

Commit-ID:  5654997838c2ac9b1950d633fc97f354cc4180e7
Gitweb:     https://git.kernel.org/tip/5654997838c2ac9b1950d633fc97f354cc4180e7
Author:     Adrian Hunter <adrian.hunter@intel.com>
AuthorDate: Wed, 9 May 2018 14:43:34 +0300
Committer:  Arnaldo Carvalho de Melo <acme@redhat.com>
CommitDate: Tue, 15 May 2018 14:31:25 -0300

perf tools: Use the "_stext" symbol to identify the kernel map when loading kcore

The first symbol is not necessarily in the kernel text.  Instead of
using the first symbol, use the _stext symbol to identify the kernel map
when loading kcore.

This allows for the introduction of symbols to identify the x86_64 PTI
entry trampolines.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86@kernel.org
Link: http://lkml.kernel.org/r/1525866228-30321-6-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
 tools/perf/util/symbol.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
index f48dc157c2bd..4a39f4d0a174 100644
--- a/tools/perf/util/symbol.c
+++ b/tools/perf/util/symbol.c
@@ -1149,7 +1149,7 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 	bool is_64_bit;
 	int err, fd;
 	char kcore_filename[PATH_MAX];
-	struct symbol *sym;
+	u64 stext;
 
 	if (!kmaps)
 		return -EINVAL;
@@ -1198,13 +1198,13 @@ static int dso__load_kcore(struct dso *dso, struct map *map,
 		old_map = next;
 	}
 
-	/* Find the kernel map using the first symbol */
-	sym = dso__first_symbol(dso);
-	list_for_each_entry(new_map, &md.maps, node) {
-		if (sym && sym->start >= new_map->start &&
-		    sym->start < new_map->end) {
-			replacement_map = new_map;
-			break;
+	/* Find the kernel map using the '_stext' symbol */
+	if (!kallsyms__get_function_start(kallsyms_filename, "_stext", &stext)) {
+		list_for_each_entry(new_map, &md.maps, node) {
+			if (stext >= new_map->start && stext < new_map->end) {
+				replacement_map = new_map;
+				break;
+			}
 		}
 	}
 

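The new lookup boils down to: resolve _stext via kallsyms, then pick the map whose half-open [start, end) range contains it. A minimal self-contained sketch of that containment test, with an array standing in for the md.maps list (names and addresses here are illustrative only):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct toy_map { uint64_t start, end; };	/* half-open [start, end) */

/* Return the first map containing addr, or NULL if none does
 * (cf. replacement_map remaining NULL in dso__load_kcore()). */
static const struct toy_map *find_map(const struct toy_map *maps, size_t n,
				      uint64_t addr)
{
	for (size_t i = 0; i < n; i++) {
		if (addr >= maps[i].start && addr < maps[i].end)
			return &maps[i];
	}
	return NULL;
}
```

Keying this off _stext rather than the first symbol matters precisely because the trampoline symbols being introduced can sort before the kernel text.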

end of thread, other threads:[~2018-05-16 18:05 UTC | newest]

Thread overview: 38+ messages
2018-05-09 11:43 [PATCH RFC 00/19] perf tools and x86_64 KPTI entry trampolines Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 01/19] kallsyms: Simplify update_iter_mod() Adrian Hunter
2018-05-10 13:01   ` Jiri Olsa
2018-05-10 17:02     ` Hunter, Adrian
2018-05-14 17:55       ` Arnaldo Carvalho de Melo
2018-05-09 11:43 ` [PATCH RFC 02/19] kallsyms, x86: Export addresses of syscall trampolines Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 03/19] x86: Add entry trampolines to kcore Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 04/19] x86: kcore: Give entry trampolines all the same offset in kcore Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 05/19] perf tools: Use the _stext symbol to identify the kernel map when loading kcore Adrian Hunter
2018-05-16 18:04   ` [tip:perf/core] perf tools: Use the "_stext" " tip-bot for Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 06/19] perf tools: Fix kernel_start for KPTI on x86_64 Adrian Hunter
2018-05-09 17:08   ` Arnaldo Carvalho de Melo
2018-05-09 11:43 ` [PATCH RFC 07/19] perf tools: Workaround missing maps for x86_64 KPTI entry trampolines Adrian Hunter
2018-05-09 17:07   ` Arnaldo Carvalho de Melo
2018-05-10 19:08     ` Hunter, Adrian
2018-05-10 20:15       ` Arnaldo Carvalho de Melo
2018-05-10 20:19         ` Arnaldo Carvalho de Melo
2018-05-10 20:47           ` Arnaldo Carvalho de Melo
2018-05-11 11:18             ` Adrian Hunter
2018-05-11 14:45               ` Arnaldo Carvalho de Melo
2018-05-14 13:02                 ` Adrian Hunter
2018-05-11 11:15         ` Adrian Hunter
2018-05-15 10:30   ` Jiri Olsa
2018-05-15 10:40     ` Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 08/19] perf tools: Fix map_groups__split_kallsyms() for entry trampoline symbols Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 09/19] perf tools: Allow for special kernel maps Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 10/19] perf tools: Create maps for x86_64 KPTI entry trampolines Adrian Hunter
2018-05-14  8:32   ` Ingo Molnar
2018-05-09 11:43 ` [PATCH RFC 11/19] perf tools: Synthesize and process mmap events " Adrian Hunter
2018-05-15 10:49   ` Jiri Olsa
2018-05-09 11:43 ` [PATCH RFC 12/19] perf buildid-cache: kcore_copy: Keep phdr data in a list Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 13/19] perf buildid-cache: kcore_copy: Keep a count of phdrs Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 14/19] perf buildid-cache: kcore_copy: Calculate offset from phnum Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 15/19] perf buildid-cache: kcore_copy: Layout sections Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 16/19] perf buildid-cache: kcore_copy: Iterate phdrs Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 17/19] perf buildid-cache: kcore_copy: Get rid of kernel_map Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 18/19] perf buildid-cache: kcore_copy: Copy x86_64 entry trampoline sections Adrian Hunter
2018-05-09 11:43 ` [PATCH RFC 19/19] perf buildid-cache: kcore_copy: Amend the offset of sections that remap kernel text Adrian Hunter
